There are a lot more degrees of freedom in world models.
LLMs are fundamentally capped because they only learn from static text -- human communications about the world -- rather than from the world itself, which is why they can remix existing ideas but find it all but impossible to produce genuinely novel discoveries or inventions. A well-funded and well-run startup building physical world models (grounded in spatiotemporal understanding, not just language patterns) would be attacking what I see as the actual bottleneck to AGI. Even if they succeed only partially, they may unlock the kind of generalization and creative spark that current LLMs structurally can't reach.
A few years ago I made this simple thought experiment to convince myself that LLMs won't achieve a superhuman level (in the sense of being better than all human experts):
Imagine that we made an LLM out of all dolphin songs ever recorded. Would such an LLM ever reach human-level intelligence? Obviously and intuitively, the answer is NO.
Your comment actually extended this observation for me, sparking hope that systems consuming the natural world as input might avoid this trap. But then I realized that tool use & learning can in fact be all that's needed for singularity, while consuming raw data streams most of the time might actually be counterproductive.
I don't understand this view. As I see it, the fundamental bottleneck to AGI is continual learning and backpropagation. Models today are static, and human brains don't learn or adapt themselves with anything close to backpropagation. World models don't solve any of these problems; they are fundamentally the same kind of deep learning architectures we are used to working with. Heck, if you think learning from the world itself is the bottleneck, you can just put a vision-action LLM in a reinforcement learning loop in a robotic/simulated body.
> I don't understand this view. As I see it, the fundamental bottleneck to AGI is continual learning and backpropagation. Models today are static, and human brains don't learn or adapt themselves with anything close to backpropagation.
Even with continuous backpropagation and "learning" enriching the training data (so-called online learning), the limitations will not disappear. LLMs will not be able to conclude things about the world based on fact and deduction; they only consider what is likely given their training data. They will not foresee or anticipate events that are unlikely or non-existent in their training data but are bound to happen due to real-world circumstances. They are not intelligent in that way.
Whether humans always apply that much effort to conclude these things is another question. The point is that humans are fundamentally capable of doing so, while LLMs structurally are not.
The problems are structural/architectural. I think it will take another 2-3 major leaps in architecture before these AI models reach human-level general intelligence, if they ever do. So far they can often "merely" fake it, when things are statistically common in their training data.
> Even with continuous backpropagation and "learning"
That's what I said: backpropagation cannot be enough; that's not how neurons work in the slightest. When you put biological neurons in a Pong environment, they learn to play not through some kind of loss or reward function; they self-organize to avoid unpredictable stimulation. As far as I know, no artificial architecture learns in such an unsupervised way.
Forgive me for being ignorant, but 'loss' in a supervised-learning ML context encodes how unlikely (high loss) or likely (low loss) the network considered the observed output, given the input.
That sounds very similar to what you say neurons do (avoid unpredictable stimulation).
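For what it's worth, the parallel is fairly direct at the definition level. The standard cross-entropy loss is exactly the "surprisal" of the observed output (a textbook identity, not specific to any particular architecture):

```latex
\mathcal{L}(x, y) = -\log p_\theta(y \mid x)
```

High loss means the network assigned low probability to what actually happened, so minimizing it is literally minimizing surprise.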
So, I have been thinking about this for a little while. Imagine a model f that takes a world state x and makes a prediction y. At a high level, a traditional supervised model is trained like this:
f(x)=y' => loss(y',y) => how good was my prediction? Train f through backprop with that error.
A model trained with reinforcement learning looks more like this, where m(y) is the resulting world state after taking an action y that the model predicted:
f(x)=y' => m(y')=z => reward(z) => how good was the state I was in based on my actions? Train f with an algorithm like REINFORCE with the reward, as the world m is a non-differentiable black-box.
A group of neurons, meanwhile, is more like predicting the resulting world state of taking my action, g(x,y), and learning by tuning both g and the action taken, f(x):
f(x)=y' => m(y')=z => g(x,y')=z' => loss(z,z') => how predictable were the results of my actions? Train g normally with backprop, and train f with an algorithm like REINFORCE, with negative surprise as the reward.
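A minimal runnable sketch of that last scheme (toy PyTorch; the dimensions, networks, and environment are all made up for illustration). g trains by ordinary backprop on its prediction error, while f trains REINFORCE-style with negative surprise as the reward:

```python
import torch
import torch.nn as nn

obs_dim, n_actions = 8, 4
f = nn.Sequential(nn.Linear(obs_dim, 32), nn.Tanh(), nn.Linear(32, n_actions))           # policy f(x)
g = nn.Sequential(nn.Linear(obs_dim + n_actions, 32), nn.Tanh(), nn.Linear(32, obs_dim)) # predictor g(x, y)
opt_f = torch.optim.Adam(f.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(g.parameters(), lr=1e-3)

def m(x, a):
    # stand-in for the non-differentiable black-box world m(y')
    return x + 0.1 * torch.randn_like(x)

x = torch.randn(1, obs_dim)
for _ in range(1000):
    dist = torch.distributions.Categorical(logits=f(x))
    a_idx = dist.sample()
    a = nn.functional.one_hot(a_idx, n_actions).float()
    z = m(x, a)                              # actual next world state
    z_pred = g(torch.cat([x, a], dim=-1))    # g(x, y') = z'
    surprise = ((z_pred - z) ** 2).mean()    # loss(z, z')
    opt_g.zero_grad()
    surprise.backward()                      # train g by backprop
    opt_g.step()
    reward = -surprise.detach()              # negative surprise as reward
    policy_loss = -reward * dist.log_prob(a_idx).sum()
    opt_f.zero_grad()
    policy_loss.backward()                   # REINFORCE update for f
    opt_f.step()
    x = z.detach()
```

(Real curiosity-driven agents add reward baselines, batching, and a learned embedding space so f isn't rewarded, or in the curiosity case penalized, for seeking pure noise.)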
After talking with GPT5.2 for a little while, it seems like Curiosity-driven Exploration by Self-supervised Prediction[1] might be an architecture similar to the one I described for neurons? But with the twist that f is rewarded by making the prediction error bigger (not smaller!) as a proxy of "curiosity".
Humans are notoriously bad at formal logic. The Wason selection task is the classic example: most people fail a simple conditional reasoning problem unless it’s dressed up in familiar social context, like catching cheaters. That looks a lot more like pattern matching than rule application.
Kahneman’s whole framework points the same direction. Most of what people call “reasoning” is fast, associative, pattern-based. The slow, deliberate, step-by-step stuff is effortful and error-prone, and people avoid it when they can. And even when they do engage it, they’re often confabulating a logical-sounding justification for a conclusion they already reached by other means.
So maybe the honest answer is: the gap between what LLMs do and what most humans do most of the time might be smaller than people assume. The story that humans have access to some pure deductive engine and LLMs are just faking it with statistics might be flattering to humans more than it’s accurate.
Where I’d still flag a possible difference is something like adaptability. A person can learn a totally new formal system and start applying its rules, even if clumsily. Whether LLMs can genuinely do that outside their training distribution or just interpolate convincingly is still an open question. But then again, how often do humans actually reason outside their own “training distribution”? Most human insight happens within well-practiced domains.
> The Wason selection task is the classic example: most people fail a simple conditional reasoning problem unless it’s dressed up in familiar social context, like catching cheaters.
I've never heard about the Wason selection task, looked it up, and could tell the right answer right away. But I can also tell you why: because I have some familiarity with formal logic and can, in your words, pattern-match the gotcha that "if x then y" is distinct from "if not x then not y".
In contrast to you, this doesn't make me believe that people are bad at logic or don't really think. It tells me that people are unfamiliar with "gotcha" formalities introduced by logicians that don't match the everyday use of language. If you added a simple additional note to the problem, such as "Note that in this context, 'if' only means that...", most people would almost certainly answer it correctly.
Mind you, I'm not arguing that human thinking is necessarily more profound than what LLMs could ever do. However, judging from the output, LLMs have a tenuous grasp on reality, so I don't think that reductionist arguments along the lines of "humans are just as dumb" are fair. There's a difference that we don't really know how to overcome.
I think people MOSTLY foresee and anticipate events in OUR training data, which mostly comprises information collected by our senses.
Our training data is a lot more diverse than an LLM's. We also leverage our senses as a carrier for communicating abstract ideas, using audio and visual channels that may or may not be grounded in reality. We have TV shows, video games, programming languages, and all sorts of rich and interesting things we can engage with that do not reflect our fundamental reality.
Like LLMs, we can hallucinate while we sleep or we can delude ourselves with untethered ideas, but UNLIKE LLMs, we can steer our own learning corpus. We can train ourselves with our own untethered “hallucinations” or we can render them in art and share them with others so they can include it in their training corpus.
Our hallucinations are often just erroneous models of the world. When we render it into something that has aesthetic appeal, we might call it art.
If the hallucination helps us understand some aspect of something, we call it a conjecture or hypothesis.
We live in a rich world filled with rich training data. We don’t magically anticipate events not in our training data, but we’re also not void of creativity (“hallucinations”) either.
Most of us are stochastic parrots most of the time. We’ve only gotten this far because there are so many of us and we’ve been on this earth for many generations.
Most of us are dazzled and instinctively driven to mimic the ideas that a small minority of people “hallucinate”.
There is no shame in mimicking or being a stochastic parrot. These are critical features that helped our ancestors survive.
> They will not foresee or anticipate events that are unlikely or non-existent in their training data but are bound to happen due to real-world circumstances. They are not intelligent in that way.
Can you be a bit more specific at all? Maybe via an example?
> Models today are static, and human brains don't learn or adapt themselves with anything close to backpropagation.
While I suspect the latter is a real problem (because all mammal brains* are much more example-efficient than any ML), the former is more about productisation than anything fundamental: models can already be continuously updated, but that makes it hard to deal with regressions. You kinda want an artefact with a version stamp that doesn't change itself before you release the update, especially as this isn't like normal software, where specific features can be toggled on or off in isolation from everything else.
* I think. Also, I'm saying "mammal" because of an absence of evidence (to my *totally amateur* skill level) not evidence of absence.
The fact that models aren't continually updating seems more like a feature. I want to know the model is exactly the same as it was the last time I used it. Any new information it needs can be stored in its context window, or stored in a file to read the next time it needs to access it.
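Something like this pattern is in fact what agent harnesses already do (the file name and helper functions below are hypothetical, just to illustrate the idea):

```python
from pathlib import Path

NOTES = Path("model_memory.md")  # external memory instead of weight updates

def remember(fact: str) -> None:
    # append new information; the model itself stays frozen
    with NOTES.open("a", encoding="utf-8") as fh:
        fh.write(f"- {fact}\n")

def build_prompt(user_message: str) -> str:
    # prepend remembered notes so the frozen model still "knows" them
    notes = NOTES.read_text(encoding="utf-8") if NOTES.exists() else ""
    return f"Known facts:\n{notes}\nUser: {user_message}"

remember("The staging server moved to port 8443.")
print(build_prompt("Why can't I reach staging?"))
```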
> The fact that models aren't continually updating seems more like a feature.
I think this is true to some extent: we like our tools to be predictable. But we’ve already made one jump by going from deterministic programs to stochastic models. I am sure the moment a self-evolutive AI shows up that clears the "useful enough" threshold we’ll make that jump as well.
Stochastic and unpredictable aren't exactly the same thing. I would claim current LLMs are generally predictable, even if not as predictable as a deterministic program.
Unless you run your own local models, you don't even know when OpenAI or Anthropic have tweaked the model, and by how much. One week it's version x, next week it's version y. Just like your operating system, it's continuously evolving, from small patches to specific apps up to a whole new kernel version and OS release.
There is still a huge gap between a model continuously updating itself and weekly patches by a specialist team. The former would make things unpredictable.
You could have continual learning on text and still be stuck in the same "remixing baseline human communications" trap. It's a nasty one, very hard to avoid, possibly even structurally unavoidable.
As for the "just put a vision LLM in a robot body" suggestion: People are trying this (e.g. Physical Intelligence) and it looks like it's extraordinarily hard! The results so far suggest that bolting perception and embodiment onto a language-model core doesn't produce any kind of causal understanding. The architecture behind the integration of sensory streams, persistent object representations, and modeling time and causality is critically important... and that's where world models come in.
I don't understand why online learning is that necessary. If you took Einstein at 40 and surgically removed his hippocampus so he can't learn anything he didn't already know (meaning no online learning), that's still a very useful AGI. A hippocampus is a nice upgrade to that, but not super obviously on the critical path.
> If you took Einstein at 40 and surgically removed his hippocampus so he can't learn anything he didn't already know (meaning no online learning), that's still a very useful AGI.
I like how people are accepting this dubious assertion that Einstein would be "useful" if you surgically removed his hippocampus and engaging with this.
It also calls this Einstein an AGI rather than a disabled human???
I guess the sheer amount and variety of information you would need to pre-encode to get an Einstein at 40 is huge: the daily stream of high-resolution video, the actions and consequences, and every thought and idea he had until the age of 40, for every single moment. That includes social interactions: a conversation, the other person's expressions, what was said, and background knowledge about that person. Even a single conversation amounts to a huge quantity of data.
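A rough back-of-envelope for the visual stream alone (every number below is my own assumption, not a measurement):

```python
# 40 years of waking life, 16 hours/day, as compressed 1080p video at 5 Mbit/s
seconds_awake = 40 * 365 * 16 * 3600      # ~8.4e8 seconds
video_bytes = seconds_awake * 5e6 / 8     # bits/s -> bytes
print(f"{video_bytes / 1e15:.2f} PB")     # ~0.53 PB, before audio, actions, or thoughts
```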
But one might say that the brain is not lossless ... True, good point. But in what way is it lossy? Can that be simulated well enough to learn an Einstein? What gives events significance is very subjective.
"Reading, after a certain age, diverts the mind too much from its creative pursuits. Any man who reads too much and uses his own brain too little falls into lazy habits of thinking".
That's true. Though would that hippocampus-less Einstein be able to keep making novel, complex discoveries from that point forward? It seems difficult. He would rapidly reach the limits of his short-term memory (the same way current models rapidly reach the limits of their context windows).
Putting stuff you have learned into a markdown file is a very "shallow" version of continual learning. It can remember facts, yes, but I doubt a model can master new out-of-distribution tasks this way. If anything, I think that Google's Titans[1] and Hope[2] architectures are more aligned with true continual learning (without being actual continual learning still, which is why they call it "test-time memorization").
The sum of human knowledge is more than enough to come up with innovative ideas, and not every field works directly with the physical world. Still, I would say there's enough information in written history to create a virtual simulation of a 3D world with all physical laws applying (to a certain degree, because computation is limited).
What current LLMs lack is inner motivation to create something on their own without being prompted. To think in their free time (whatever that means for batch, on demand processing), to reflect and learn, eventually to self modify.
I have a simple brain, limited knowledge, limited attention span, limited context memory. Yet I create stuff based on what I see and read online. Nothing special; sometimes based more on someone else's project, sometimes on my own ideas, which I have no doubt aren't that unique among 8 billion other people. Yet consulting with AI provides me with more ideas applicable to my current vision of what I want to achieve. Sure, it's mostly based on generally known (though not always known to me) good practices. But my thoughts work the same way, only more limited by what I have slowly learned so far in my life.
Virtual simulations are not a substitute for the physical world. They are fundamentally different theory problems with almost no overlap in applicability. You could in principle create a simulation with the same mathematical properties as the physical world, but no one has ever done that. I'm not sure we even know how.
Physical world dynamics are metastable and non-linear at every resolution. The models we do build are created from sparse irregular samples with large error rates; you often have to do complex inference to know if a piece of data even represents something real. All of this largely breaks the assumptions of our tidy sampling theorems in mathematics. The problem of physical world inference has been studied for a couple decades in the defense and mapping industries; we already have a pretty good understanding of why LLM-style AI is uniquely bad at inference in this domain, and it mostly comes down to the architectural inability to represent it.
Grounded estimates of the minimum quantity of training data required to build a reliable model of physical world dynamics, given the above properties, run to many exabytes. This data exists, so that is not the problem. The models will be orders of magnitude larger than current LLMs. Even if you solve the computer science and theory problems around representation, so that learning and inference are efficient, few people are prepared for the scale of it.
(source: many years doing frontier R&D on these problems)
I guess you need two things to make that happen. First, more specialization among models and an ability to evolve, else you get all instances thinking roughly the same thing, or deer in the headlights where they don't know what of the millions of options they should think about. Second, fewer guardrails; there's only so much you can do by pure thought.
The problem is, idk if we're ready to have millions of distinct, evolving, self-executing models running wild without guardrails. It seems like a contradiction: you can't achieve true cognition from a machine while artificially restricting its boundaries, and you can't lift the boundaries without impacting safety.
I have a pet peeve with the concept of "a genuinely novel discovery or invention", what do you imagine this to be? Can you point me towards a discovery or invention that was "genuinely novel", ever?
I don't think it makes sense conceptually unless you're literally referring to discovering new physical things like elements or something.
Humans are remixers of ideas. That's all we do all the time. Our thoughts and actions are dictated by our environment and memories; everything must necessarily be built up from pre-existing parts.
W. Brian Arthur's book "The Nature of Technology" provides a framework for classifying new technology as elemental vs. innovative that I find helpful. For example, the Hunt-McIlroy diff operates on the phenomenon that ordered correspondence survives editing. That was an invention (the discovery of a natural phenomenon and a means to harness it). Myers diff improves performance by exploiting the fact that text changes are sparse. That's innovation. A Python app using libdiff? That's engineering.
And then you might say in terms of "descendants": invention > innovation > engineering. But it's just a perspective.
Suno is transformer-based; in a way it's a heavily modified LLM.
You can't get Suno to do anything that's not in its training data. It is physically incapable of inventing a new musical genre. No matter how detailed the instructions you give it, and even if you cheat and provide it with actual MP3 examples of what you want it to create, it is impossible.
The same goes for LLMs and invention generally, which is why they've made no important scientific discoveries.
Einstein’s theory of relativity springs to mind, which is deeply counter-intuitive and relies on the interaction of forces unknowable to our basic Newtonian senses.
There’s an argument that it’s all turtles (someone told him about universes, he read about gravity, etc), but there are novel maths and novel types of math that arise around and for such theories which would indicate an objective positive expansion of understanding and concept volume.
Sure, but don't conflate the representation format with the structure of what's being represented.
Everything is bits to a computer, but text training data captures the flattened, after-the-fact residue of baseline human thought: Someone's written description of how something works. (At best!)
A world model would need to capture the underlying causal, spatial, and temporal structure of reality itself -- the thing itself, that which generates those descriptions.
You can tokenize an image just as easily as a sentence, sure, but a pile of images and text won't give you a relation between the system and the world. A world model, in theory, can. I mean, we ought to be sufficient proof of this, in a sense...
In the last stage of training LLMs, reinforcement learning from verifiable rewards, LLMs are trained to maximize the probability of solving problems using their own output, driven by a reward signal akin to winning at Go. It's not just imitating human-written text.
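For illustration, a toy of what "verifiable" means here (the problem format is made up): the reward comes from programmatically checking the model's own sampled output, not from matching reference text:

```python
def verifiable_reward(problem: dict, completion: str) -> float:
    # win/lose signal, akin to a game result, rather than token-level imitation
    try:
        return 1.0 if int(completion.strip()) == problem["answer"] else 0.0
    except ValueError:
        return 0.0  # unparseable output earns nothing

print(verifiable_reward({"question": "17 + 25 = ?", "answer": 42}, " 42 "))  # 1.0
```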
Fwiw, I agree that world models, and some kind of learning from interacting with physical reality rather than from massive amounts of digitized gym environments, are likely necessary for a breakthrough to AGI.
> One major critique LeCun raises is that LLMs operate only in the realm of language, which is a simple, discrete space compared to the continuous, complex physical world we live in. LLMs can solve math problems or answer trivia because such tasks reduce to pattern completion on text, but they lack any meaningful grounding in physical reality. LeCun points out a striking paradox: we now have language models that can pass the bar exam, solve equations, and compute integrals, yet “where is our domestic robot? Where is a robot that’s as good as a cat in the physical world?” Even a house cat effortlessly navigates the 3D world and manipulates objects — abilities that current AI notably lacks. As LeCun observes, “We don’t think the tasks that a cat can accomplish are smart, but in fact, they are.”
It’s an interesting observation, but I think you have it backwards. The examples you give are all using discrete symbols to represent something real and communicating this description to other entities. I would argue that all your examples are languages.
What's the first L stand for? That's not just vestigial: their model of the world is formed almost exclusively from language, rather than from a range of inputs each contributing significantly, as for humans.
The biggest thing that's missing is actual feedback on their decisions. They have no idea of that, because transformers and embeddings don't model it yet. And language descriptions and image representations of feedback aren't enough; they are too disjointed. It needs more.
How is a linear stream of symbols able to capture the relationships of the real world?
It's like the people who are so hyped up about voice-controlled computers. A linear stream of symbols is a huge downgrade in signal, right? I don't want computer interaction to be simplified and worsened even further.
Compare with domain experts who do real, complicated work with computers: animators, 3D modelers, CAD users, etc. Give them a mouse with six degrees of freedom, strong training in hotkeys to command actions and modes, and a good mental model of how everything works, and these people are dramatically more productive at manipulating data than anyone else.
Imagine trying to talk a computer through nudging a bunch of vertices through 3D space while flexibly managing modes of "drag" on connected vertices. It would be terrible. And no, you would not replace that with a sentence like "Bot, I want you to nudge out the elbow of that model", because that does NOT do the same thing at all. An expert being able to fluidly make their idea reality in real time is not even remotely close to the "project manager/mediocre implementer" relationship you get instead when prompting any sort of generative model. The models aren't even built to contain a specific "style", so they certainly won't be opinionated enough to have artistic vision, a strong understanding of what does and does not work in the right context, or a way to navigate "my boss wants something stupid that doesn't work, and he's a dumb person, so how do I convince him to drop the dumb idea and make him think that was his idea?"
I really hate the "world model" terminology, but the actual low-level gripe LeCun has with autoregressive LLMs as they stand now is the fact that the loss function needs to reconstruct the entirety of the input. Anything less than pixel-perfect reconstruction on images is penalized. Token-by-token reconstruction is likewise biased towards that same level of granularity.
The density of information in the spatiotemporal world is very, very great, and a technique is needed to compress it down effectively. JEPAs are a promising technique in that direction, but if you're not reconstructing text or images, it's a bit harder for humans to immediately grok whether the model is learning something effectively.
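A minimal sketch of the contrast (toy PyTorch; real JEPAs use masking strategies, an EMA target encoder, and anti-collapse machinery that are all omitted here):

```python
import torch
import torch.nn as nn

enc_ctx = nn.Linear(784, 64)    # context encoder
enc_tgt = nn.Linear(784, 64)    # target encoder (an EMA copy in practice)
decoder = nn.Linear(64, 784)    # needed only for the reconstruction baseline
predictor = nn.Linear(64, 64)   # JEPA predictor

x_context = torch.randn(32, 784)  # visible part of the input
x_target = torch.randn(32, 784)   # masked part to be predicted

# 1) Reconstruction-style loss: every pixel of the target is penalized,
# so unpredictable detail must be modeled too.
recon = decoder(enc_ctx(x_context))
loss_pixels = ((recon - x_target) ** 2).mean()

# 2) JEPA-style loss: predict only the *representation* of the target,
# letting the encoder discard detail that isn't worth predicting.
with torch.no_grad():
    z_target = enc_tgt(x_target)
z_pred = predictor(enc_ctx(x_context))
loss_latent = ((z_pred - z_target) ** 2).mean()
```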
I think that very soon we will see JEPA-based language models, but their key domain may very well be robotics, where machines really need to experience and reason about the physical world differently than a purely text-based world allows.
Isn't the Sora video model a ViT with spatiotemporal inputs (so they have found a way to compress that down), yet at the same time LeCun wouldn't consider it a world model?
There will be no "unlocking of AGI" until we develop a new science capable of artificial comprehension. Comprehension is the cornucopia that produces everything we are: given raw stimulus, an entire communicating universe is generated, with a plethora of highly advanced predator/prey characters in an infinitely complex dynamic. Human science and technology have no lead on how to artificially make sense of that as a simultaneous, unifying whole. That's comprehension.
> LLMs are fundamentally capped because they only learn from static text -- human communications about the world -- rather than from the world itself, which is why they can remix existing ideas but find it all but impossible to produce genuinely novel discoveries or inventions.
No hate, but this is just your opinion.
The definition of "text" here is extremely broad – an SVG is text, but it's also an image format. It's not incomprehensible to imagine how an AI model trained on lots of SVG "text" might build internal models to help it "visualise" SVGs in the same way you might visualise objects in your mind when you read a description of them.
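For example, the string below is ordinary text to a tokenizer, yet it is also, equally, a picture of a red circle; any browser will render it:

```python
svg_source = (
    '<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">'
    '<circle cx="50" cy="50" r="40" fill="red"/>'
    '</svg>'
)
with open("circle.svg", "w") as fh:
    fh.write(svg_source)  # the same bytes, now viewable as an image
```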
The human brain only has electrical signals for IO, yet we can learn and reason about the world just fine. I don't see why the same wouldn't be possible with textual IO.
There is absolutely no doubt about Yann's impact on AI/ML, but he had access to many more resources in Meta, and we didn't see anything.
It could be a management issue, though, and I sincerely wish we will see more competition, but from what I quoted above, it does not seem like it.
Understanding the world through videos (mentioned in the article) is just what video models have already done, and they are getting pretty good (see Seedance, Kling, Sora, etc.). So I'm not quite sure how what he proposes would work.
"and we didn't see anything" is not justified at all.
Meta absolutely has (or at least had) a world-class industry AI lab and has published a ton of great work and open-source models (granted, their LLM open-source stuff failed to keep up with Chinese models in 2024/2025; their other open-source work on things like segmentation doesn't get enough credit, though). Yann's main role was Chief AI Scientist, not any sort of product role, and as far as I can tell he did a great job building up and leading a research group within Meta.
He deserves a lot of credit for pushing Meta to be very open about publishing research and open-sourcing models trained on large-scale data.
Just as one example, Meta (together with NYU) just published "Beyond Language Modeling: An Exploration of Multimodal Pretraining" (https://arxiv.org/pdf/2603.03276) which has a ton of large-experiment backed insights.
Yann did seem to end up with a bit of an inflated ego, but I still consider him a great research lead. Context: I did a PhD focused on AI, and Meta's group had a similar pedigree as Google AI/Deepmind as far as places to go do an internship or go to after graduation.
>> but he had access to many more resources in Meta, and we didn't see anything
> I wasn't criticising his scientific contribution at all, that's why I started my comment by appraising what he did.
You were criticising his output at Facebook, though; but he was in the research group at Facebook, not a product group, so it seems like we did actually see lots of things?
> There is absolutely no doubt about Yann's impact on AI/ML, but he had access to many more resources in Meta, and we didn't see anything.
That's true for 99% of scientists, but dismissing their opinion based on them not having done world-shattering / groundbreaking research is probably not the way to go.
> I sincerely wish we will see more competition
I really wish we don't, science isn't markets.
> Understanding the world through videos
The word "understanding" is doing a lot of heavy lifting here. I find myself prompting again and again for corrections on an image or a summary and "it" still does not "understand" and keeps doing the same thing over and over again.
Do not keep bad results in context. You have to purge them to prevent them from affecting the next output. LLMs are deceptively capable, but they don't respond like a person. You can't count on implicit context. You can't count on parts of the implicit context having more weight than others.
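In practice that means editing the conversation rather than appending corrections. A sketch (the message format follows the common chat-API shape; the details are hypothetical):

```python
messages = [
    {"role": "user", "content": "Summarize this report."},
    {"role": "assistant", "content": "(a bad summary)"},
    {"role": "user", "content": "No, you missed the main point."},
    {"role": "assistant", "content": "(another bad summary)"},
]

# Instead of piling on corrections, rewind to the original request and fold
# what you learned into it, so the bad outputs never re-enter the context.
original = messages[0]["content"]
messages = [{
    "role": "user",
    "content": original + "\nConstraints: focus on the findings, not the methodology.",
}]
```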
Most folks get paid a lot more in a corporate job than tinkering at home; using the 'follow the money' logic, it would make sense that they would produce their most inspired works as 9-5 full-stack engineers.
But passion and the freedom to explore are often more important than resources.
Llama models pushed the envelope for a while, and having them "open-weight" allowed a lot of tinkering. I would say that most fine-tuned models evolved from work on top of Llama models.
For a hot minute Meta had a top-3 LLM and open-sourced the whole thing, even with LeCun's reservations about the technology.
At the same time Meta spat out huge breakthroughs in:
- 3d model generation
- Self-supervised label-free training (DINO). Remember Alexandr Wang built a multibillion dollar company just around having people in third world countries label data, so this is a huge breakthrough.
- A whole new class of world modeling techniques (JEPAs)
> - Self-supervised label-free training (DINO). Remember Alexandr Wang built a multibillion dollar company just around having people in third world countries label data, so this is a huge breakthrough.
If it was a breakthrough, why did Meta acquire Wang and his company? I'm genuinely curious.
Is it a troll? Even if we just ignore Llama, Meta invented and released so much foundational research and open-source code. I would say the computer vision field would be years behind if Meta hadn't published core research like DETR or MAE.
I can’t reconcile this dichotomy: most of the landmark deep learning papers were developed with what, by today’s standards, were almost ridiculously small training budgets — from Transformers to dropout, and so on.
So I keep wondering: if his idea is really that good — and I genuinely hope it is — why hasn’t it led to anything truly groundbreaking yet? It can’t just be a matter of needing more data or more researchers. You tell me :-D
It's a matter of needing more time, which is a resource even SV VCs are scared to throw around. Look at the timeline of all these advancements and how long they took:
LeCun introduced backprop for deep learning back in 1989
Hinton published about contrastive divergence for next-token prediction in 2002
Alexnet was 2012
Word2vec was 2013
Seq2seq was 2014
AiAYN was 2017
UnicornAI was 2019
Instructgpt was 2022
This makes a lot of people think that things are just accelerating and they can simply be along for the ride. But it's the years and years of foundational research that allow this to happen. That toll has to be paid for the successors of LLMs to be able to reason properly and operate in the world the way humans do. The sowing won't happen as fast as the reaping did. LeCun wants to plant those seeds; the others, who only want to eat the fruit, don't get that they have to wait.
If his ideas had real substance, we would have seen substantial results by now.
He introduced I-JEPA in 2023, so almost three years ago at this point.
If he still hasn’t produced anything truly meaningful after all these years at Meta, when is that supposed to happen? Yann LeCun has been at Facebook/Meta since December 2013.
Your chronological sequence is interesting, but it refers to a time when the number of researchers and the amount of compute available were a tiny fraction of what they are today.
This couldn't have happened sooner, for 2 reasons.
1) the world has become a bit too focused on LLMs (although I agree that the benefits & new horizons that LLMs bring are real). We need research on other types of models to continue.
2) I almost wrote "Europe needs some aces". Although I'm European, my attitude is not at all one of competition. This is not a card game. What Europe DOES need is an ATTRACTIVE WORKPLACE, so that talent that is useful for AI can also find a place to work here, not only overseas!
Regardless of your opinion of Yann or his views on auto regressive models being "sufficient" for what most would describe as AGI or ASI, this is probably a good thing for Europe. We need more well capitalized labs that aren't US or China centric and while I do like Mistral, they just haven't been keeping up on the frontier of model performance and seem like they've sort of pivoted into being integration specialists and consultants for EU corporations. That's fine and they've got to make money, but fully ceding the research front is not a good way to keep the EU competitive.
LeCun's technical approach with AMI will likely be based on JEPA, which is also a very different approach than most US-based or Chinese AI labs are taking.
If you're looking to learn about JEPA, LeCun's vision document "A Path Towards Autonomous Machine Intelligence" is long but sketches out a very comprehensive vision of AI research:
https://openreview.net/pdf?id=BZ5a1r-kVsf
Training JEPA models is within reach, even for startups. For example, we're a 3-person startup and we trained a health time-series JEPA. There are JEPA models for computer vision and (even) for LLMs.
You don't need a $1B seed round to do interesting things here. We need more interesting, orthogonal ideas in AI. So I think it's good we're going to have a heavyweight lab in Europe alongside the US and China.
BTW, I went to your website looking for this, but didn't find your blog. I do now see that it's linked in the footer, but I was looking for it in the hamburger menu.
Thanks! We need to re-do the top navigation / hamburger menu -- we've added a bunch of new things in the past few months, and it badly needs to be re-organized.
This is very cool work! I have a quick follow-up: in the biomarker prediction task, what horizon (i.e. how far into the future) did you set for the predictions? Prediction is hard beyond an hour, so it'd be impressive if your model handles that.
The prediction task is set up as predicting the next measured biomarkers based on a week of wearable data. So it's not necessarily predicting into the future, but predicting dataset Y given dataset X.
The specific biomarkers being predicted are the ones most relevant to heart health, like cholesterol or HbA1c. These tend to be more stable from hour to hour -- they may vary on a timescale of weeks as you modify your diet or take medications.
Very interesting. I am keenly interested in this space and coincidentally had my blood drawn this morning.
That said, have you considered that “Measure 100+ biomarkers with a single blood draw” combined with "heart health is a solved problem” reads a lot like Theranos?
FWIW, the single blood draw is 6-8 vials -- so we're not claiming to get 100 biomarkers from a single drop. The point of that is mostly that it just takes one appointment / is convenient.
Appreciate your work! Healthcare is a regulated industry. Everything (research, proposals, FDA submissions, compliance docs, accreditation standards, etc.) is documented and follows a process, which means there's a lot of paperwork. You can't sneak in anything unverified or unreliable. Why does healthcare need a JEPA/world model?
Regulation is quickly catching up to modern AI techniques; for the most part, the approach is to verify outputs rather than process. For example, Utah's pilot to let AI prescribe medications has doctors check the first N prescriptions of each medication. Medicare is starting to pay for AI-enabled care, but tying payment to whether objective biomarkers like cholesterol or blood pressure actually got better.
Hm, Singapore looks more like one of their bases; they will have offices in Paris, Montréal, Singapore, and New York (according to both this article and the interview Yann LeCun gave this morning on France Inter, the most listened-to radio station in France).
Of course, each relevant newspaper in those areas highlights that it's coming to their place, but it really seems to be distributed.
Which would be a good thing, speaking as a European. I'd hate to see the investment go to waste on taxes that are spent on stupid shit anyway. It should go into R&D, not into fighting bureaucracy.
For such companies, France also offers generous R&D tax credits (Crédit Impôt Recherche): companies can recover roughly 30% of eligible R&D expenses incurred in France as a tax credit, which can eventually be refunded (in cash) if the company has no taxable profit.
While I’d love there to be a European frontier model, I do very much enjoy mistral. For the price and speed it outperforms any other model for my use cases (language learning related formatting, non-code non-research).
Partner in a fund that wrote a small check into this — I have no private knowledge of the deal - while I agree that one’s opinion on auto regressive models doesn’t matter, I think the fact of whether or not the auto regressive models work matters a lot, and particularly so in LeCun’s case.
What’s different about investing in this than investing in say a young researcher’s startup, or Ilya’s superintelligence? In both those cases, if a model architecture isn’t working out, I believe they will pivot. In YL’s case, I’m not sure that is true.
In that light, this bet is a bet on YL’s current view of the world. If his view is accurate, this is very good for Europe. If inaccurate, then this is sort of a nothing-burger; company will likely exit for roughly the investment amount - that money would not have gone to smaller European startups anyway - it’s a wash.
FWIW, I don’t think the original complaint about auto-regression “errors exist, errors always multiply under sequential token choice, ergo errors are endemic and this architecture sucks” is intellectually that compelling. Here: “world model errors exist, world model errors will always multiply under sequential token choice, ergo world model errors are endemic and this architecture sucks.” See what I did there?
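For reference, the compounding argument being parodied here, stated with the independence assumption that is its weak point:

```latex
P(\text{all } n \text{ steps correct}) = \prod_{t=1}^{n} (1 - \epsilon_t) \approx (1-\epsilon)^n \longrightarrow 0 \quad \text{as } n \to \infty
```

The arithmetic applies to any sequential decision process, world models included, unless per-step errors are correlated or can be detected and corrected, which is exactly why the original complaint proves too much.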
On the other hand, we have a lot of unused training tokens in videos, I’d like very much to talk to a model with excellent ‘world’ knowledge and frontier textual capabilities, and I hope this goes well. Either way, as you say, Europe needs a frontier model company and this could be it.
I don't think it's "regardless", your opinion on LeCun being right should be highly correlated to your opinion on whether this is good for Europe.
If you think that LLMs are sufficient and RSI is imminent (<1 year), this is horrible for Europe. It is a distracting boondoggle exactly at the wrong time.
It's sufficient to think that there is a chance that they will not be, however, for there to be a non-zero value to fund other approaches.
And even if you think the chance is zero, unless you also think there is a zero chance they will be capable of pivoting quickly, it might still be beneficial.
I think his views are largely flawed, but chances are there will still be lots of useful science coming out of it as well. Even if current architectures can achieve AGI, it does not mean there can't also be better, cheaper, more effective ways of doing the same things, and so exploring the space more broadly can still be of significant value.
I think LeCun has been so consistently wrong and boneheaded for basically all of the AI boom, that this is much, much more likely to be bad than good for Europe. Probably one of the worst people to give that much money to that can even raise it in the field.
LeCun was stubbornly 'wrong and boneheaded' in the 80s, but turned out to be right. His contention now is that LLMs don't truly understand the physical world - I don't think we know enough yet to say whether he is wrong.
Whenever I see claims about AGI being reachable through large language models, it reminds me of the miasma theory of disease. Many respectable medical professionals were convinced this was true, and they viewed the entire world through this lens. They interpreted data in ways that aligned with a miasmatic view.
Of course now we know this was delusional and it seems almost funny in retrospect. I feel the same way when I hear that 'just scale language models' suddenly created something that's true AGI, indistinguishable from human intelligence.
The miasma theory of disease, though wrong, made lots of predictions that proved useful and productive. Swamps smell bad, so drain them; malaria decreases. Excrement in the street smells bad, so build sewage systems; cholera decreases. Florence Nightingale implemented sanitary improvements in hospitals inspired by miasma theory that improved outcomes.
It was empirical and, though ultimately wrong, useful. Apply as you will to theories of learning.
> Whenever I see claims about AGI being reachable through large language models, it reminds me of the miasma theory of disease.
Whenever I see people think the model architecture matters much, I think they have a magical view of AI. Progress comes from high-quality data; the models are good as they are now. Of course you can still improve the models, but you get much more upside from data, or even better - from interactive environments. The path to AGI is not based on pure thinking; it's based on scaling interaction.
To stay within the same miasma-theory-of-disease analogy: if you think architecture is the key, then look at how humans dealt with pandemics. The Black Death in the 14th century killed half of Europe, and no one could think of the germ theory of disease. Think about it: it was as desperate a situation as it gets, and no one had the simple spark to keep up hygiene.
The fact is we are also not smart from the brain alone, we are smart from our experience. Interaction and environment are the scaffolds of intelligence, not the model. For example, 1B users do more for an AI company than a better model: they act like human-in-the-loop curators of LLM work.
It's unintuitive to me that architecture doesn't matter - deep learning models, for all their impressive capabilities, are still deficient compared to human learners as far as generalisation, online learning, representational simplicity and data efficiency are concerned.
Just because RNNs and Transformers both work with enormous datasets doesn't mean that architecture/algorithm is irrelevant, it just suggests that they share underlying primitives. But those primitives may not be the right ones for 'AGI'.
If I'm understanding you, it seems like you're struck by hindsight bias. No one knew the miasma theory was wrong... it could have been right! Only with hindsight can we say it was wrong. Seems like we're in the same situation with LLMs and AGI.
The miasma theory of disease was "not even wrong" in the sense that it was formulated before we even had the modern scientific method to define the criteria for a theory in the first place. And it was sort of accidentally correct in that some non-infectious diseases are caused by airborne toxins.
It really depends what you mean by 'we'. Laymen? Maybe. But people said it was wrong at the time with perfectly good reasoning. It might not have been accessible to the average person, but that's hardly to say that only hindsight could reveal the correct answer.
Luck. RNNs can do it just as well (Mamba, S4, etc.) for a given budget of compute and data. The larger the model, the less the architecture makes a difference. It will learn in any of the 10,000 variations that have been tried and come within about 10-15% of the best. What you need is a data loop, or a data source of exceptional quality and size; data has more leverage. Architecture games mostly affect efficiency: one method can be 10x more efficient than another.
That's not how I read the transformer stuff around the time it was coming out: they had concrete hypotheses that made sense, not just random attempts at striking it lucky. In other words, they called their shots in advance.
I'm not aware that we had notably different data sources before versus after transformers, so what confounding event are you suggesting transformers 'lucked' into being contemporaneous with?
Also, why are we seeing diminishing returns if only the data matters? Are we running out of data?
The premise is wrong, we are not seeing diminishing returns. By basically any metric that has a ratio scale, AI progress is accelerating, not slowing down.
> Of course you can still improve the models, but you get much more upside from data, or even better - from interactive environments.
I, on the contrary, believe that the hunt for better data is an attempt to climb the local hill, only to be stuck there without reaching the global maximum. Interactive environments are good, they can help, but they are just one possible way to learn about causality. Is it the best way? I don't think so; it is the easiest way: just throw money at the problem and eventually you'll get something that you'll claim to be the goal you chased all this time. And yes, it will have something in it you will be able to call "causal inference" in your marketing.
But current models are notoriously difficult to teach. They eat an enormous amount of training data; a human needs much less. They eat an enormous amount of energy to train; a human needs much less. It means that the very approach is deficient. It should be possible to do the same with a tiny fraction of the data and money.
> The fact is we are also not smart from the brain alone, we are smart from our experience. Interaction and environment are the scaffolds of intelligence, not the model.
Well, I learned English almost all the way to B2 by reading books. I was too lazy to use a dictionary most of the time, so it was not interactive: I didn't interact even with a dictionary, I was just reading books. How many books did I read to get to B2? ~10 or so. Well, I also read a lot of English on the Internet and watched some movies, so let's multiply those 10 books by 10. Strictly speaking it was not B2: I was almost completely unable to produce English, and my pronunciation was not just bad, it was worse than that. Even now I sometimes stumble on words I cannot pronounce: I know the word and have mentally constructed a sentence with it, but I cannot say it, because I don't know how. So to pass B2 I spent some time practicing speech, listening and writing, and learning some stupid topics like "travel" to have the vocabulary to talk about them at length.
How many books does an LLM need to consume to get to B2 in a language unknown to it? How many audio recordings does it need to consume? A lifetime wouldn't be enough for me to read and/or listen to that much.
If there were a human who needed to consume as much information as an LLM to learn, they would be the stupidest person in all the history of humanity.
Are you asking how many books a large language model would need to read to learn a new language if it was only trained on a different language? Probably just one (the dictionary).
Just because you raise 1 billion dollars to do X doesn't mean you can't pivot and do Y if it is in the best interest of your mission.
I won't comment on Yann LeCun or his current technical strategy, but if you can avoid sunk cost fallacy and pivot nimbly I don't think it is bad for Europe at all. It is "1 billion dollars for an AI research lab", not "1 billion dollars to do X".
It's been 6 months away for 5 years now. In that time we've seen relatively mild incremental changes, not any qualitative ones. It's probably not 6 months away.
Yeah. I feel like, as with many projects, the last 20% takes 80% of the time, and IMHO we are not in the last 20%.
Sure, LLMs are getting better and better, and at least for me more and more useful and more and more correct. They're arguably better than humans at many tasks, yet terribly lagging behind at some others.
Coding-wise (one of the things it does "best") it still has many issues. For me the biggest are still the lack of initiative and the lack of reliable memory. When I use it to write code, the first manifests as sticking to a suboptimal yet overly complex approach quite often. And the lack of memory in that I have to keep reminding it of edge cases (else it often breaks functionality), or to stop reinventing the wheel instead of using functions/classes already implemented in the project.
All that can be mitigated by careful prompting, but no matter the claims about information-recall accuracy, I still find that even with that information in the prompt it is quite unreliable.
And more generally, the simple fact that when you talk to one, the only way to "store" these memories is externally (i.e. not by updating the weights), is kind of like dealing with someone who can't retain memories and has to keep writing things down to have even a small chance of coping. I get that updating the weights is possible in theory, just not practical, still.
I think we, as of the last few months, are very close to, if not already at, the point where "coding" is solved. That doesn't mean that software design or software engineering is solved, but it does mean that a SOTA model like GPT 5.4 or Opus 4.6 has a good chance of being able to code up a working version of whatever you specify, within reason.
What's still missing is the general reasoning ability to plan what to build or how to attack novel problems - how to assess the consequences of deciding to build something a given way. I doubt that auto-regressively trained LLMs are the way to get there, but there is a huge swathe of apps that are so boilerplate in nature that this isn't the limitation.
I think LeCun is on the right track to AGI with JEPA - hardly a unique insight, but it is significant to now have a well-funded lab pursuing this approach. Whether they are successful, or timely, will depend on whether this startup executes as a blue-skies research lab or in more of an urgent engineering mode. I think at this point most of the things needed for AGI are engineering challenges rather than what I'd consider research problems.
Sure, Claude and other SOTA LLMs do generate about 90% of my code, but I feel like we are no closer to solving the last 10% than we were a year ago in the days of Claude 3.7. It can pretty reliably get 90% of the way there, and then I can either keep prompting it to finish or just do it manually, which is quite often faster.
LLMs produce slop far too often to say they are in any way better than cold fusion in terms of usable results. "AI" is kind of the cold fusion of tech: we've always been 5 or 10 years away from "AGI" and likely always will be.
> fully ceding the research front is not a good way to keep the EU competitive
Tech is ultimately a red herring as far as what's needed to keep the EU competitive. The EU has a trillion-dollar hole[0] to fill if it wants to replace US military presence, and it currently net-imports over 50% of its energy. Unfortunately the current situation in Iran is not helping either of these, as it constrains energy further and risks requiring military intervention.
Hard disagree: military might isn't going to secure anybody's future. Modern society and our economies will only get more vulnerable as time goes on, and large wars or engagements will just push economies closer to collapse. And without a solid modern economy to back it up, a modern military will fall apart.
Europe doesn't want to be reliant (understandably) on the US military for defense, because if they are, as Trump has demonstrated, they will be pressured to make concessions not in their interests.
The need for a military is tightly coupled with the EU's need for energy. You can see this in the immediate impact that the war in Iran has had on Germany's natural gas prices [0]. Already unable to defend itself from Russia, the EU's countries are in a tough spot: they can't really afford to expend military resources defending their energy needs, yet they also don't have the energy independence to ignore these military engagements without risk. Meanwhile Russia has spent the last 4 years transitioning to a wartime economy and is getting hungry for expanded resource acquisition.
The world hasn't fundamentally changed since the Stone Age: humans need resources to survive, and if there aren't enough resources for the people who want them, violence will decide who gets access to them.
> Regardless of your opinion of Yann or his views on auto regressive models being "sufficient" for what most would describe as AGI or ASI
My main concern with LeCun is the number of times he has told people software is open source when its license directly violates the open source definition.
Is it good? This will almost certainly fail. Not because of Yann or Europe, but because these sorts of hyper-hyped projects fail. SSI and Thinking Machines haven't lived up to the hype.
To be fair to SSI, they were very explicit about their plan: "we are going to take money and not release anything until we one-shot superintelligence."
If you invested in that you knew what you were getting yourself into!
I didn't really know who he was, so I went and found his Wikipedia page, which reads like either he wrote it himself to stroke his ego, or someone who likes him wrote it to stroke his ego:
> He is the Jacob T. Schwartz Professor of Computer Science at the Courant Institute of Mathematical Sciences at New York University. He served as Chief AI Scientist at Meta Platforms before leaving to work on his own startup company.
That entire sentence before the remark about his service at Meta could have been axed; it's weird to me when people compare themselves to someone else who is well known. It's the most Kanye West thing you can do. Mind you, the more I read about him, the more I discovered he is in fact egotistical. Good luck running a serious engineering team with someone who is egotistical.
You underestimate academia. Any academic who reads these two sentences focuses only on the first one: he has a named chair at Courant. In Germany, being a Prof is added to your ID card/passport and becomes part of your official name, like a knighthood in other countries.
It's not comparing him to anyone. He has an endowed professorship. This is standard in academia, and you give the name because a) it's prestigious for the recipient and b) it strokes the ego of the donor.
That’s not a comparison to another person. That’s his job title. It is not uncommon for universities to have distinguished chairs within departments named after a notable person—in this case, the founder of NYU’s Department of Computer Science.
I feel like I'm the only one not getting the world models hype. We've been talking about them for decades now, and all of it is still theoretical. Meanwhile LLMs and text foundation models showed up, proved to be insanely effective, took over the industry, and people are still going "nah LLMs aren't it, world models will be the gold standard, just wait."
I bet LLMs and world models will merge. World models essentially try to predict the future, with or without actions taken. LLMs with tokenized image input can also be made to predict the future image tokens. It's a very valuable supervised learning signal aside from pre-training and various forms of RL.
I think "world models" is the wrong thing to focus on when contrasting the "animal intelligence" approach (which is what LeCun is striving for) with LLMs, especially since "world model" means different things to different people. Some people would call the internal abstractions/representations that an LLM learns during training a "world model" (of sorts).
The fundamental problem with today's LLMs that will prevent them from achieving human level intelligence, and creativity, is that they are trained to predict training set continuations, which creates two very major limitations:
1) They are fundamentally a COPYING technology, not a learning or creative one. Of course, as we can see, copying in this fashion will get you an extremely long way, especially since it's deep patterns (not surface level text) being copied and recombined in novel ways. But, not all the way to AGI.
2) They are not grounded, therefore they are going to hallucinate.
The animal intelligence approach, the path to AGI, is also predictive, but what you predict is the external world, the future, not training set continuations. When your predictions are wrong (per perceptual feedback) you take this as a learning signal to update your predictions to do better next time a similar situation arises. This is fundamentally a LEARNING architecture, not a COPYING one. You are learning about the real world, not auto-regressively copying the actions that someone else took (training set continuations).
Since the animal is also acting in the external world that it is predicting, and learning about, this means that it is learning the external effects of its own actions, i.e. it is learning how to DO things - how to achieve given outcomes. When put together with reasoning/planning, this allows it to plan a sequence of actions that should achieve a given external result ("goal").
Since the animal is predicting the real world, based on perceptual inputs from the real world, this means that its predictions are grounded in reality, which is necessary to prevent hallucinations.
So, to come back to "world models", yes an animal intelligence/AGI built this way will learn a model of how the world works - how it evolves, and how it reacts (how to control it), but this behavioral model has little in common with the internal generative abstractions that an LLM will have learnt, and it is confusing to use the same name "world model" to refer to them both.
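As a toy illustration of that predict-compare-update loop, here is a minimal sketch in PyTorch (the environment and model here are entirely my own stand-ins, not anything LeCun has proposed):

    import torch
    import torch.nn as nn

    world_model = nn.Linear(4, 4)        # predicts the next observation from the current one
    opt = torch.optim.SGD(world_model.parameters(), lr=0.01)

    def world_step(obs):                 # stand-in for the external world's actual dynamics
        return obs.roll(1) + 0.05 * torch.randn_like(obs)

    obs = torch.randn(4)
    for t in range(1000):
        predicted = world_model(obs)     # predict the future
        actual = world_step(obs)         # observe what really happened
        surprise = nn.functional.mse_loss(predicted, actual)
        opt.zero_grad(); surprise.backward(); opt.step()   # learn from the prediction error
        obs = actual.detach()

The learning signal here is the gap between prediction and observed reality, not the likelihood of a training-set continuation.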
RL on LLMs has changed things. LLMs are not stuck in continuation predicting territory any more.
Models build up this big knowledge base by predicting continuations. But then their RL stage gives rewards for completing problems successfully. This requires learning and generalisation to do well, and indeed RL marked a turning point in LLM performance.
A year after RL was made to work, LLMs can now operate in agent harnesses over 100s of tool calls to complete non-trivial tasks. They can recover from their own mistakes. They can write 1000s of lines of code that works. I think it’s no longer fair to categorise LLMs as just continuation-predictors.
Thanks for saying this. It never ceases to amaze me how many people still talk about LLMs like it’s 2023, completely ignoring the RLVR revolution that gave us models like Opus that can one-shot huge chunks of works-first-time code for novel use cases. Modern LLMs aren’t just trained to guess the next token, they are trained to solve tasks.
I attended a talk from Yann LeCun, and he has always had strong opinions about auto-regressive models. It's nice to see someone doing more research rather than just chasing hype.
AI is developing backwards. The simplest organisms eat and find food. More complex ones can smell and sense tremors. After several steps in evolution comes vision and complex thought.
AIs that can't smell, can't feel hunger, can't desire -- I do not think they can understand the world the way organic life does.
The giant seed round proves investors were willing to fund Mira Murati, not that the company had built anything durable.
Within months, it had already lost cofounder Andrew Tulloch to Meta, then cofounders Barret Zoph and Luke Metz plus researcher Sam Schoenholz to OpenAI; WIRED also reported that at least three other researchers left. At that point, citing it as evidence of real competitive momentum feels weak.
As someone in the tech Twitter sphere: this is Yann and his ideas performing a suplex on LLM-based companies. It is completely unfathomable: start an AI research company… only sell off 20%, and have 1 billion for screwing around for a few years.
Why world model? To emulate how we became sentient?
A "world" is just senses. In a way the context is one sense. A digital only world is still a world.
I think more success is in a model having high level needs and aspirations that are borne from lower level needs. Model architecture also needs to shift to multiple autonomous systems that interact, in the same ways our brains work - there's a lot under the surface inside our heads, it's not just "us" in there.
We only interact with our environment because of our low level needs, which are primarily: food, water. Secondary: mating. Tertiary: social/tribal credit (which can enable food, water and mating).
I have no faith in anyone doing AI to accomplish anything (especially relative to how much money they spend) except John Carmack. People should be trying to throw money at him
Wasn't there some recent argument that world models won't achieve AGI either, because they overlook the normative framework, can't recover the fundamental symmetries of the world purely from data, and collapse in multi-step reasoning? JEPA sacrifices fidelity for abstract representation, yet how does that help in the real world, where fidelity is the most important point? It's like relying on differential equations, only to soon find out they cover a minuscule fraction of real-world problems and almost all the interesting ones are unsolvable by them.
A fair amount of negative comments here, but Yann might very well be the person who brings the Bell Labs culture back to life. It’s been badly missing, and not just in Europe.
That's between 1 and 10 training runs on a large foundational model, depending on pricing discounts and how much they manage to optimize it. I priced this out last night on AWS, which is admittedly expensive, but models have also gotten larger.
He couldn't achieve even parity with LLMs during his days at Meta (most probably with billions in resources at his disposal), but he'll succeed now? What is the pitch?
The pitch isn't to try to squeeze money out of a product like Altman does. It's to lay the groundwork for the next evolution in AI. LLMs were built on decades of work, and they've hit their limits. We'll need to invest a lot of time building foundations, without any tangible yield, for the next step to work. Get too greedy and you'll be stuck.
What use is it to understand the physical world if all investments are misallocated to the virtual world? Perhaps the AI will detect that there is a housing shortage and politicians will finally believe it because AI said so?
Does anyone have a sense of how funding like this is typically allocated?
How much tends to go toward compute/training versus researchers, infrastructure, and general operations?
There have been a few very interesting JEPA publications from LeCun recently, particularly the LeJEPA paper, which claims to simplify a lot of training headaches for that class of models.
JEPAs also strike me as being a bit more akin to human intelligence, where for example, most children are very capable of locomotion and making basic drawings, but unable to make pixel level reconstructions of mental images (!!).
One thing I want to point out is that the very LeCun-style techniques demonstrating label-free training - JEAs like DINO, and JEPAs - have been converging on the performance of models that require large amounts of labeled data.
Alexandr Wang is a billionaire who made his wealth through a data labeling company and basically kicked LeCun out.
Overall this will be good for AI and good for open source.
Yann LeCun has said a number of things that are very dubious, like that autoregressive LLMs are a dead end, that LLMs do not have an internal world model, and, this morning (in French), that an AI cannot find a strategy to preserve itself against the will of its creator: https://www.youtube.com/watch?v=AFi1TPiB058
As a French person, I wish him good luck anyway; I'm all for exploring different avenues toward AGI.
Looks like they'll be hiring in Montreal in addition to Paris (and NYC and Singapore): https://jobs.ashbyhq.com/ami
I hope they grow that office like crazy. This would be really good for Canada. We have (or have had) the AI talent here (though maybe less so overall in Montreal than in Toronto/Waterloo and Vancouver and Edmonton).
And I hope Carney is promoting the crap out of this and making it worth their while to build that office out.
I don't really do Python or large scale learning etc, so don't see a path for myself to apply there but I hope this sparks some employment growth here in Canada. Smart choice to go with bilingual Montreal.
I'm still just so surprised any time I encounter people who think AI will be overall good for humanity
I pretty strongly think it will only benefit the rich and powerful while further oppressing and devaluing everyone else. I tend to think this is an obvious outcome and it would be obviously very bad (for most of us)
So I wonder if you just think you will be one of the few who benefit at the expense of others, or do you truly believe AI will benefit all of humanity?
> So I wonder if you just think you will be one of the few who benefit at the expense of others
It's not a zero sum game, IMO. It will benefit some, be neutral for others, negative for others.
For instance, improved productivity could be good (and doesn't have to result in layoffs; Jevons' paradox will come into play, IMO, with increased demand). Easier/better/faster scientific research could be good too. Not everyone would benefit from those, but not everyone has to for it to be generally good.
Autonomous AI-powered drone swarms could be bad, or could result in a Mutually Assured Destruction stalemate.
> improved productivity could be good (and doesn't have to result in layoffs
It already has resulted in layoffs and one of the weakest job markets we've seen in ages
Executives could not have used it as an excuse for layoffs faster, they practically tripped over themselves trying to use it as an excuse to lay people off
No, a zero-sum game would require the "winners" to take from the "losers" out of a fixed pot. If expansion produces a majority of winners, some neutral and some negative, that is not a zero-sum game.
If, for even a second, they get into a position that threatens Big Tech AI (mostly if not entirely US-based) in any way, they will be raided by international finance: dismantled and aggressively poached via massive US "investment funds" (which look more and more like weaponized international finance!). Only China is largely immune to international finance. Those funds command tens of trillions of dollars; in a world of money, there is near-zero resistance.
Don’t think that’s a fair interpretation of what I said.
Liquid money rich? No.
Can get pulled for big tech packages? Also no, for most of the employees.
AFAIK, big tech didn't aggressively poach OpenAI-like talent; they did offer $10M+ pay packages, but only for a select few research scientists. Some folks left and some came, but it mostly boiled down to culture.
Once again, US companies and VCs are in this seed round. Just like Mistral with their seed round.
Europe again missing out, until AMI reaches a much higher valuation with an obvious use case in robotics.
Either AMI reaches a $100B+ valuation (likely), or it becomes another Thinking Machines Lab with investors questioning its valuation (very unlikely, since world models have a use case in vision and robotics).
I can't read the article, but if American investors are investing in European companies, isn't the US the one missing out here? Or does "Europe" "win" when European investors invest in US companies? How does that work in your head?
Personally I don't believe anyone is missing out on anything here.
But rvz earlier claimed that Europe is missing out because US investors are investing in a European company. That's kind of surprising to me, so I'm asking whether they also believe the US is "missing out" whenever European investors invest in US companies, or if that sentiment only goes one way.
Here you can see why it is so hard for a European startup to compete with US startups: abysmal access to money. An investment of 1B USD in Europe is glorified as the largest seed round ever, but in the USA it is just another Tuesday.
For a foundation AI lab with a world famous AI researcher at the helm though, it's not so impressive. Won't even touch the sides of the hardware costs they'd need to be anywhere near competitive
Europeans have free healthcare and retirement. They prefer putting their money toward long-term benefits, not becoming CEO on Tuesday and declaring bankruptcy on Wednesday.
Retirement is the worst.
You are basically forced to pay into an unsustainable system (at least in Germany).
It already has to be subsidized by taxes.
Exactly. State retirement in Europe is neither free nor great. We pay extra in taxes for it, and it's only great for present-day retirees, not for those paying into the system right now who will retire in the future. It's the same as US Social Security; it's not some extra perk that Europeans have over Americans.
Top tier scientists aren't gonna be swayed by European state retirement systems.
It is a universal system but definitely not free.
In Germany you pay on average 17.5% of your salary for health insurance and 18.6% for retirement.
However, contribution caps exist: 70k for healthcare and 100k for retirement.
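Taking those quoted rates and caps at face value: on a 120k salary you'd pay about 0.175 × 70k ≈ 12,250 toward healthcare and 0.186 × 100k ≈ 18,600 toward retirement per year (employer and employee shares combined).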
It adds up: we are seeing a clear exodus of both capital and talent from the US - given the current US administration's shift toward cronyism - and the EU stands as the most compelling alternative, with a uniform market of 500 million people and the last major federation truly committed to the rule of law.
That's a bonfire of capital poured into a gaping hole in the ground, with zero chance, outside of "military pork" and "overcharging the taxpayer", of ever making the money back.
The brain capital loss here is what's going to spook investors.
Justifiable.
This sounds very similar to me to what those neurons do (avoid unpredictable stimulation).
So, I have been thinking about this for a little while. Imagine a model f that takes a world state x and makes a prediction y. At a high level, a traditional supervised model is trained like this:
f(x)=y' => loss(y',y) => how good was my prediction? Train f through backprop with that error.
While a model trained with reinforcement learning looks more like the following, where m(y) is the world state that results from taking the action y the model predicted:
f(x)=y' => m(y')=z => reward(z) => how good was the state I was in based on my actions? Train f with an algorithm like REINFORCE with the reward, as the world m is a non-differentiable black-box.
A group of neurons, though, is more like predicting the world state that results from my own action, g(x,y), and learning by tuning both g and the action-taker f(x):
f(x)=y' => m(y')=z => g(x,y)=z' => loss(z,z') => how predictable were the results of my actions? Train g normally with backprop, and train f with an algorithm like REINFORCE, with negative surprise as the reward.
After talking with GPT5.2 for a little while, it seems like Curiosity-driven Exploration by Self-supervised Prediction [1] might be an architecture similar to the one I described for neurons, but with the twist that f is rewarded for making the prediction error bigger (not smaller!) as a proxy for "curiosity".
[1] https://arxiv.org/pdf/1705.05363
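To make the f/g split concrete, here is a rough PyTorch sketch of that last scheme (all module shapes and names are my own assumptions; the curiosity paper's actual architecture predicts in a learned feature space and differs in many details):

    import torch
    import torch.nn as nn

    obs_dim, act_dim = 8, 4
    f = nn.Sequential(nn.Linear(obs_dim, 32), nn.Tanh(), nn.Linear(32, act_dim))            # policy: x -> action logits
    g = nn.Sequential(nn.Linear(obs_dim + act_dim, 32), nn.Tanh(), nn.Linear(32, obs_dim))  # forward model: (x, y) -> z'
    opt_f = torch.optim.Adam(f.parameters(), lr=1e-3)
    opt_g = torch.optim.Adam(g.parameters(), lr=1e-3)

    def m(x, a):  # the black-box, non-differentiable world
        return x.roll(1) + 0.1 * torch.randn_like(x)

    x = torch.randn(obs_dim)
    for step in range(1000):
        dist = torch.distributions.Categorical(logits=f(x))
        a = dist.sample()
        a_onehot = nn.functional.one_hot(a, act_dim).float()
        z = m(x, a)                                   # m(y') = z
        z_pred = g(torch.cat([x, a_onehot]))          # g(x, y') = z'
        surprise = nn.functional.mse_loss(z_pred, z)  # loss(z, z')
        opt_g.zero_grad(); surprise.backward(); opt_g.step()   # g: plain backprop on prediction error
        reward = -surprise.detach()                   # f: REINFORCE, negative surprise as reward
        policy_loss = -dist.log_prob(a) * reward      # (flip the reward sign for the curiosity variant)
        opt_f.zero_grad(); policy_loss.backward(); opt_f.step()
        x = z.detach()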
Humans are notoriously bad at formal logic. The Wason selection task is the classic example: most people fail a simple conditional reasoning problem unless it’s dressed up in familiar social context, like catching cheaters. That looks a lot more like pattern matching than rule application.
Kahneman’s whole framework points the same direction. Most of what people call “reasoning” is fast, associative, pattern-based. The slow, deliberate, step-by-step stuff is effortful and error-prone, and people avoid it when they can. And even when they do engage it, they’re often confabulating a logical-sounding justification for a conclusion they already reached by other means.
So maybe the honest answer is: the gap between what LLMs do and what most humans do most of the time might be smaller than people assume. The story that humans have access to some pure deductive engine and LLMs are just faking it with statistics might be flattering to humans more than it’s accurate.
Where I’d still flag a possible difference is something like adaptability. A person can learn a totally new formal system and start applying its rules, even if clumsily. Whether LLMs can genuinely do that outside their training distribution or just interpolate convincingly is still an open question. But then again, how often do humans actually reason outside their own “training distribution”? Most human insight happens within well-practiced domains.
> The Wason selection task is the classic example: most people fail a simple conditional reasoning problem unless it’s dressed up in familiar social context, like catching cheaters.
I'd never heard of the Wason selection task, looked it up, and could tell the right answer right away. But I can also tell you why: because I have some familiarity with formal logic and can, in your words, pattern-match the gotcha that "if x then y" is distinct from "if not x then not y".
In contrast to you, this doesn't make me believe that people are bad at logic or don't really think. It tells me that people are unfamiliar with "gotcha" formalities introduced by logicians that don't match the everyday use of language. If you added a simple clarification to the problem, such as "Note that in this context, 'if' only means that...", most people would almost certainly answer it correctly.
Mind you, I'm not arguing that human thinking is necessarily more profound than what LLMs could ever do. However, judging from their output, LLMs have a tenuous grasp on reality, so I don't think reductionist arguments along the lines of "humans are just as dumb" are fair. There's a difference that we don't really know how to overcome.
I think people MOSTLY foresee and anticipate events in OUR training data, which mostly comprises information collected by our senses.
Our training data is a lot more diverse than an LLM's. We also leverage our senses as a carrier for communicating abstract ideas, using audio and visual channels that may or may not be grounded in reality. We have TV shows, video games, programming languages, and all sorts of rich and interesting things we can engage with that do not reflect our fundamental reality.
Like LLMs, we can hallucinate while we sleep or we can delude ourselves with untethered ideas, but UNLIKE LLMs, we can steer our own learning corpus. We can train ourselves with our own untethered “hallucinations” or we can render them in art and share them with others so they can include it in their training corpus.
Our hallucinations are often just erroneous models of the world. When we render it into something that has aesthetic appeal, we might call it art.
If the hallucination helps us understand some aspect of something, we call it a conjecture or hypothesis.
We live in a rich world filled with rich training data. We don’t magically anticipate events not in our training data, but we’re also not void of creativity (“hallucinations”) either.
Most of us are stochastic parrots most of the time. We’ve only gotten this far because there are so many of us and we’ve been on this earth for many generations.
Most of us are dazzled and instinctively driven to mimic the ideas that a small minority of people “hallucinate”.
There is no shame in mimicking or being a stochastic parrot. These are critical features that helped our ancestors survive.
> They will not foresee/anticipate events, that are unlikely or non-existent in their training data, but are bound to happen due to real world circumstances. They are not intelligent in that way.
Can you be a bit more specific at all bounds? Maybe via an example?
I'm sure that if a car appeared from nowhere in the middle of your living room, you would not be prepared at all.
So my question is: when is there enough training data that you can handle 99.99% of the world ?
> Models today are static, and human brains don't learn or adapt themselves with anything close to backpropagation.
While I suspect the latter is a real problem (because all mammal brains* are much more example-efficient than all ML), the former is more about productisation than anything fundamental: the models can already be updated continuously, but that makes it hard to deal with regressions. You kinda want an artefact with a version stamp that doesn't change itself before you release the update, especially as this isn't like normal software, where specific features can be toggled on or off in isolation from everything else.
* I think. Also, I'm saying "mammal" because of an absence of evidence (to my *totally amateur* skill level) not evidence of absence.
The fact that models aren't continually updating seems more like a feature. I want to know the model is exactly the same as it was the last time I used it. Any new information it needs can be stored in its context window, or in a file to read the next time it needs access to it.
> The fact that models aren't continually updating seems more like a feature.
I think this is true to some extent: we like our tools to be predictable. But we've already made one jump by going from deterministic programs to stochastic models. I am sure that the moment a self-evolving AI shows up that clears the "useful enough" threshold, we'll make that jump as well.
Stochasticity and unpredictability aren't exactly the same. I would claim current LLMs are generally predictable, even if not as predictable as a deterministic program.
Unless you use your own local models, you don't even know when OpenAI or Anthropic tweaked the model, slightly or substantially. One week it's version x, the next week it's version y. Just like your operating system, which is continuously evolving, from small patches of specific apps to a whole new kernel version and OS release.
There is still a huge gap between a model continuously updating itself and weekly patches by a specialist team. The former would make things unpredictable.
You could have continual learning on text and still be stuck in the same "remixing baseline human communications" trap. It's a nasty one, very hard to avoid, possibly even structurally unavoidable.
As for the "just put a vision LLM in a robot body" suggestion: People are trying this (e.g. Physical Intelligence) and it looks like it's extraordinarily hard! The results so far suggest that bolting perception and embodiment onto a language-model core doesn't produce any kind of causal understanding. The architecture behind the integration of sensory streams, persistent object representations, and modeling time and causality is critically important... and that's where world models come in.
I don't understand why online learning is that necessary. If you took Einstein at 40 and surgically removed his hippocampus so he can't learn anything he didn't already know (meaning no online learning), that's still a very useful AGI. A hippocampus is a nice upgrade to that, but not super obviously on the critical path.
> If you took Einstein at 40 and surgically removed his hippocampus so he can't learn anything he didn't already know (meaning no online learning), that's still a very useful AGI.
I like how people are accepting this dubious assertion that Einstein would be "useful" if you surgically removed his hippocampus and engaging with this.
It also calls this Einstein an AGI rather than a disabled human???
I guess the sheer amount and variety of information you would need to pre-encode to get an Einstein at 40 is huge: a daily stream of high-resolution video, actions, consequences, and every thought and idea he had, for every single moment until the age of 40. That includes social interactions - a conversation and the other person's expressions, in combination with what was said and background knowledge about them. Even a single conversation is a huge amount of data.
But one might say that the brain is not lossless ... True, good point. But in what way is it lossy? Can that be simulated well enough to learn an Einstein? What gives events significance is very subjective.
He basically said that himself:
"Reading, after a certain age, diverts the mind too much from its creative pursuits. Any man who reads too much and uses his own brain too little falls into lazy habits of thinking".
-- Albert Einstein
Kinda a moot point in my eyes because I very much doubt you can arrive at the same result without the same learning process.
That's true. Though would that hippocampus-less Einstein be able to keep making novel, complex discoveries from that point forward? It seems difficult. He would rapidly reach the limits of his short-term memory (the same way current models rapidly reach the limits of their context windows).
It could possibly be useful but I don't see why it would be AGI.
Where does that training data come from?
Agents have the ability to learn continually.
Putting stuff you have learned into a markdown file is a very "shallow" version of continual learning. It can remember facts, yes, but I doubt a model can master new out-of-distribution tasks this way. If anything, I think that Google's Titans[1] and Hope[2] architectures are more aligned with true continual learning (without being actual continual learning still, which is why they call it "test-time memorization").
[1] https://arxiv.org/pdf/2501.00663
[2] https://arxiv.org/pdf/2512.24695
The sum of human knowledge is more than enough to come up with innovative ideas, and not every field works directly with the physical world. Still, I would say there's enough information in written history to create a virtual simulation of a 3D world with all physical laws applying (to a certain degree, because computation is limited).
What current LLMs lack is inner motivation to create something on their own without being prompted. To think in their free time (whatever that means for batch, on demand processing), to reflect and learn, eventually to self modify.
I have a simple brain, limited knowledge, a limited attention span, limited context memory. Yet I create stuff based on what I see and read online. Nothing special - sometimes building on someone else's project, sometimes on my own ideas, which I have no doubt aren't that unique among 8 billion other people. Yet consulting with AI provides me with more ideas applicable to my current vision of what I want to achieve. Sure, it's mostly based on generally known (though not always known to me) good practices. But my own thoughts work the same way, only more limited by what I have slowly learned so far in my life.
> virtual simulation of 3d world
Virtual simulations are not substitutable for the physical world. They are fundamentally different theory problems that have almost no overlap in applicability. You could in principle create a simulation with the same mathematical properties as the physical world but no one has ever done that. I'm not sure if we even know how.
Physical world dynamics are metastable and non-linear at every resolution. The models we do build are created from sparse irregular samples with large error rates; you often have to do complex inference to know if a piece of data even represents something real. All of this largely breaks the assumptions of our tidy sampling theorems in mathematics. The problem of physical world inference has been studied for a couple decades in the defense and mapping industries; we already have a pretty good understanding of why LLM-style AI is uniquely bad at inference in this domain, and it mostly comes down to the architectural inability to represent it.
Grounded estimates of the minimum quantity of training data required to build a reliable model of physical world dynamics, given the above properties, run to many exabytes. This data exists, so that is not a problem. The models will be orders of magnitude larger than current LLMs. Even if you solve the computer science and theory problems around representation so that learning and inference are efficient, few people are prepared for the scale of it.
(source: many years doing frontier R&D on these problems)
I guess you need two things to make that happen. First, more specialization among models and an ability to evolve; otherwise you get all instances thinking roughly the same thing, or deer-in-the-headlights paralysis where they don't know which of the millions of options to think about. Second, fewer guardrails; there's only so much you can do by pure thought.
The problem is, idk if we're ready to have millions of distinct, evolving, self-executing models running wild without guardrails. It seems like a contradiction: you can't achieve true cognition from a machine while artificially restricting its boundaries, and you can't lift the boundaries without impacting safety.
I have a pet peeve with the concept of "a genuinely novel discovery or invention": what do you imagine this to be? Can you point me to a discovery or invention that was "genuinely novel", ever?
I don't think it makes sense conceptually unless you're literally referring to discovering new physical things like elements or something.
Humans are remixers of ideas. That's all we do all the time. Our thoughts and actions are dictated by our environment and memories; everything must necessarily be built up from pre-existing parts.
W. Brian Arthur's book "The Nature of Technology" provides a framework for classifying new technology as elemental vs innovative that I find helpful. For example, the Hunt-McIlroy diff operates on the phenomenon that ordered correspondence survives editing. That was an invention (the discovery of a natural phenomenon and a means to harness it). Myers diff improves the performance by exploiting the fact that text changes are sparse. That's innovation. A Python app using libdiff - that's engineering. And then you might say, in terms of "descendants": invention > innovation > engineering. But it's just one perspective.
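As a tiny illustration of the "engineering" layer in that hierarchy - harnessing an existing diff implementation rather than inventing or improving one - Python's standard difflib will do (note it uses its own matching algorithm, not Myers):

    import difflib

    old = ["the quick brown fox", "jumps over", "the lazy dog"]
    new = ["the quick brown fox", "leaps over", "the lazy dog"]

    # unified_diff exploits exactly the phenomenon described above:
    # ordered correspondence between the two texts survives the edit
    for line in difflib.unified_diff(old, new, fromfile="a", tofile="b", lineterm=""):
        print(line)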
Suno is transformer-based; in a way it's a heavily modified LLM.
You can't get Suno to do anything that's not in its training data. It is physically incapable of inventing a new musical genre. No matter how detailed the instructions you give it, and even if you cheat and provide it with actual MP3 examples of what you want it to create, it is impossible.
The same goes for LLMs and invention generally, which is why they've made no important scientific discoveries.
You can learn a lot by playing with Suno.
https://news.ycombinator.com/item?id=46094037
Genuinely novel discovery or invention?
Einstein’s theory of relativity springs to mind, which is deeply counter-intuitive and relies on the interaction of forces unknowable to our basic Newtonian senses.
There’s an argument that it’s all turtles (someone told him about universes, he read about gravity, etc), but there are novel maths and novel types of math that arise around and for such theories which would indicate an objective positive expansion of understanding and concept volume.
Einstein was heavily inspired by Mach: https://en.wikipedia.org/wiki/Mach%27s_principle
Nah - Poincare & Lorentz did quite a bit of groundwork on relativity and its implications before Einstein put it all together.
Novel things can be incremental. I don't think LLMs can do that either, at least I've never seen one do it.
Whether it is text or an image, it is just bits for a computer. A token can represent anything.
Sure, but don't conflate the representation format with the structure of what's being represented.
Everything is bits to a computer, but text training data captures the flattened, after-the-fact residue of baseline human thought: Someone's written description of how something works. (At best!)
A world model would need to capture the underlying causal, spatial, and temporal structure of reality itself -- the thing itself, that which generates those descriptions.
You can tokenize an image just as easily as a sentence, sure, but a pile of images and text won't give you a relation between the system and the world. A world model, in theory, can. I mean, we ought to be sufficient proof of this, in a sense...
It’s worth noting how our human relationship or understanding of our world model changed as our tools to inspect and describe our world advanced.
So when we think about capturing any underlying structure of reality itself, we are constrained by the tools at hand.
The capability of the tool forms the description which grants the level of understanding.
Was Alphago's move 37 original?
In the last step of training LLMs - reinforcement learning from verifiable rewards - LLMs are trained to maximize the probability of solving problems using their own output, driven by a reward signal akin to winning at Go. It's not just imitating human-written text.
Fwiw, I agree that world models and some kind of learning from interacting with physical reality, rather than massive amounts of digitized gym environments is likely necessary for a breakthrough for AGI.
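For anyone unfamiliar with the RLVR setup described above, here is a heavily simplified toy sketch (a hypothetical helper around a Hugging Face-style causal LM; production pipelines use PPO/GRPO with baselines, KL penalties, and batched rollouts):

    import torch

    def rlvr_step(model, tokenizer, prompt, verifier, opt):
        ids = tokenizer(prompt, return_tensors="pt").input_ids
        out = model.generate(ids, max_new_tokens=64, do_sample=True)
        completion = out[0, ids.shape[1]:]
        # programmatic check, e.g. 1.0 if the unit tests pass, else 0.0
        reward = verifier(tokenizer.decode(completion))
        # log-probability the model assigns to its own sampled completion
        logits = model(out).logits[0, ids.shape[1] - 1 : -1]
        logp = torch.log_softmax(logits, -1).gather(1, completion[:, None]).sum()
        loss = -(reward * logp)   # REINFORCE: make verified completions more likely
        opt.zero_grad(); loss.backward(); opt.step()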
The term LLM is confusing your point because VLMs belong to the same bin according to Yann.
Using the term autoregressive models instead might help.
why LLMs (transformers trained on multimodal token sequences, potentially containing spatiotemporal information) can't be a world model?
https://medium.com/state-of-the-art-technology/world-models-...
> One major critique LeCun raises is that LLMs operate only in the realm of language, which is a simple, discrete space compared to the continuous, complex physical world we live in. LLMs can solve math problems or answer trivia because such tasks reduce to pattern completion on text, but they lack any meaningful grounding in physical reality. LeCun points out a striking paradox: we now have language models that can pass the bar exam, solve equations, and compute integrals, yet “where is our domestic robot? Where is a robot that’s as good as a cat in the physical world?” Even a house cat effortlessly navigates the 3D world and manipulates objects — abilities that current AI notably lacks. As LeCun observes, “We don’t think the tasks that a cat can accomplish are smart, but in fact, they are.”
But they don't only operate on language? They operate on token sequences, which can be images, coordinates, time, language, etc.
It’s an interesting observation, but I think you have it backwards. The examples you give are all using discrete symbols to represent something real and communicating this description to other entities. I would argue that all your examples are languages.
What does the first L stand for? It's not just vestigial: their model of the world is formed almost exclusively from language, rather than from a wide range of inputs each contributing significantly, as with humans.
The biggest thing that's missing is actual feedback on their decisions. They have no idea of that, because transformers and embeddings don't model it yet. And language descriptions and image representations of feedback aren't enough; they are too disjointed. It needs more.
How is a linear stream of symbols able to capture the relationships of the real world?
It's like the people who are so hyped about voice-controlled computers. A linear stream of symbols is a huge downgrade in signal, right? I don't want computer interaction to be further simplified and worsened.
Compare with domain experts who do real, complicated work with computers, like animators, 3D modelers, CAD, etc. A mouse with six degrees of freedom, and a strong training in hotkeys to command actions and modes, and a good mental model of how everything is working, and these people are dramatically more productive at manipulating data than anyone else.
Imagine trying to talk a computer through nudging a bunch of vertexes through 3D space while flexibly managing modes of "drag" on connected vertexes. It would be terrible. And no, you would not replace that with a sentence of "Bot, I want you to nudge out the elbow of that model" because that does NOT do the same thing at all. An expert being able to fluidly make their idea reality in real time is just not even remotely close to the instead "Project Manager/mediocre implementer" relationship you get prompting any sort of generative model. The models aren't even built to contain specific "Style", so they certainly won't be opinionated enough to have artistic vision, and a strong understanding of what does and does not work in the right context, or how to navigate "My boss wants something stupid that doesn't work and he's a dumb person so how do I convince him to stop the dumb idea and make him think that was his idea?"
>We don’t think the tasks that a cat can accomplish are smart, but in fact, they are.
https://en.wikipedia.org/wiki/Moravec%27s_paradox
All the things we look at as "Smart" seem to be the things we struggle with, not what is objectively difficult, if that can even be defined.
I really hate the world-model terminology, but LeCun's actual low-level gripe with autoregressive LLMs as they stand now is that the loss function needs to reconstruct the entirety of the input. Anything less than pixel-perfect reconstruction on images is penalized. Token-by-token reconstruction is likewise biased toward that same level of granularity.
The density of information in the spatiotemporal world is very, very great, and a technique is needed to compress it down effectively. JEPAs are a promising technique in that direction, but if you're not reconstructing text or images, it's a bit harder for humans to immediately grok whether the model is learning anything effectively.
I think that very soon we will see JEPA-based language models, but their key domain may very well be robotics, where machines really need to experience and reason about the physical world differently than a purely text-based model can.
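For intuition, here is a bare-bones latent-prediction training step in the JEPA spirit (my own toy sketch; real JEPAs use ViT encoders, masking strategies, and anti-collapse machinery well beyond this):

    import copy
    import torch
    import torch.nn as nn

    enc = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 256))   # context encoder
    tgt = copy.deepcopy(enc)                                     # EMA target encoder
    pred = nn.Linear(256, 256)                                   # predictor
    opt = torch.optim.Adam([*enc.parameters(), *pred.parameters()], lr=1e-3)

    # two views (e.g. masked/unmasked crops) of the same batch of images
    x_ctx, x_tgt = torch.randn(16, 1, 32, 32), torch.randn(16, 1, 32, 32)

    z_pred = pred(enc(x_ctx))          # predict the target's embedding from the context
    with torch.no_grad():
        z_tgt = tgt(x_tgt)             # target embedding, no gradient flows here
    loss = nn.functional.mse_loss(z_pred, z_tgt)   # loss lives in latent space, not pixel space
    opt.zero_grad(); loss.backward(); opt.step()

    for p, q in zip(enc.parameters(), tgt.parameters()):   # slow EMA update of the target
        q.data.mul_(0.996).add_(p.data, alpha=0.004)

Nothing here ever reconstructs a pixel: the model is only asked to predict an abstract embedding, which is exactly the granularity point above.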
Isn't the Sora video model a ViT with spatiotemporal inputs (so they have found a way to compress that down)? And yet at the same time LeCun wouldn't consider that a world model?
There will be no "unlocking of AGI" until we develop a new science capable of artificial comprehension. Comprehension is the cornucopia that produces everything we are, given raw stimulus an entire communicating Universe is generated with a plethora of highly advanceds predator/prey characters in an infinitely complex dynamic, and human science and technology have no lead how to artificially make sense of that in a simultaneous unifying whole. That's comprehension.
Ironically, your comment is practically incomprehensible.
These two comments above me capture Slashdot in the early 2000s.
Honestly, how do people who know so little have this much confidence to post here?
You must be new here
A lot more justifiable than say, Thinking Machines at least. But we will "see".
World models and vision seem like a great use case for robotics, which I can imagine being the main driver of AMI.
> LLMs are fundamentally capped because they only learn from static text -- human communications about the world -- rather than from the world itself, which is why they can remix existing ideas but find it all but impossible to produce genuinely novel discoveries or inventions.
No hate, but this is just your opinion.
The definition of "text" here is extremely broad - an SVG is text, but it's also an image format. It's not inconceivable that an AI model trained on lots of SVG "text" might build internal models that help it "visualise" SVGs, in the same way you might visualise objects in your mind when you read a description of them.
The human brain only has electrical signals for IO, yet we can learn and reason about the world just fine. I don't see why the same wouldn't be possible with textual IO.
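As a trivial, hand-written illustration of the SVG point (not model output), this Python writes a valid image purely as text:

    # a complete image, expressed entirely as a string: a red circle on white
    svg = (
        '<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">'
        '<rect width="100" height="100" fill="white"/>'
        '<circle cx="50" cy="50" r="30" fill="red"/>'
        '</svg>'
    )
    with open("circle.svg", "w") as fh:
        fh.write(svg)   # opens in any browser as an actual picture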
> But this is not an applied AI company.
There is absolutely no doubt about Yann's impact on AI/ML, but he had access to many more resources in Meta, and we didn't see anything.
It could be a management issue, though, and I sincerely hope we will see more competition, but from what I quoted above, it does not seem like it.
Understanding the world through videos (mentioned in the article) is just what video models have already done, and they are getting pretty good (see Seedance, Kling, Sora, etc.). So I'm not quite sure how what he proposes would work.
"and we didn't see anything" is not justified at all.
Meta absolutely has (or at least had) a world-class industry AI lab and has published a ton of great work and open source models (granted, their open source LLM work failed to keep up with Chinese models in 2024/2025; their other open source work, on things like segmentation, doesn't get enough credit, though). Yann's main role was Chief AI Scientist, not any sort of product role, and as far as I can tell he did a great job building up and leading a research group within Meta.
He deserves a lot of credit for pushing Meta to be very open about publishing research and open-sourcing models trained on large-scale data.
Just as one example, Meta (together with NYU) just published "Beyond Language Modeling: An Exploration of Multimodal Pretraining" (https://arxiv.org/pdf/2603.03276) which has a ton of large-experiment backed insights.
Yann did seem to end up with a bit of an inflated ego, but I still consider him a great research lead. Context: I did a PhD focused on AI, and Meta's group had a similar pedigree as Google AI/Deepmind as far as places to go do an internship or go to after graduation.
I wasn't criticising his scientific contribution at all; that's why I started my comment by praising what he did.
Creating a startup has to be about a product. When you raise $1B, investors expect returns, not papers.
>> but he had access to many more resources in Meta, and we didn't see anything
> I wasn't criticising his scientific contribution at all, that's why I started my comment by appraising what he did.
You were criticising his output at Facebook, though. But he was in the research group at Facebook, not a product group, so it seems like we did actually see lots of things?
They are not expecting returns at $1B+, just for someone to pay more than they paid six months ago.
> There is absolutely no doubt about Yann's impact on AI/ML, but he had access to many more resources in Meta, and we didn't see anything.
That's true for 99% of scientists, but dismissing their opinion because they haven't done world-shattering, groundbreaking research is probably not the way to go.
> I sincerely wish we will see more competition
I really wish we don't, science isn't markets.
> Understanding world through videos
The word "understanding" is doing a lot of heavy lifting here. I find myself prompting again and again for corrections on an image or a summary and "it" still does not "understand" and keeps doing the same thing over and over again.
Do not keep bad results in context. You have to purge them to prevent them from affecting the next output. LLMs are deceptively capable, but they don't respond like a person. You can't count on implicit context. You can't count on parts of the implicit context having more weight than others.
Most folks get paid a lot more in a corporate job than tinkering at home - using the 'follow the money' logic it would make sense they would produce their most inspired works as 9-5 full stack engineers.
But passion and the freedom to explore are often more important than resources.
Llama models pushed the envelope for a while, and having them "open-weight" allowed a lot of tinkering. I would say that most fine-tuned models evolved from work on top of Llama models.
Llama wasn’t Yann LeCun’s work and he was openly critical of LLMs, so it’s not very relevant in this context.
Source: himself https://x.com/ylecun/status/1993840625142436160 (“I never worked on any Llama.”) and a million previous reports and tweets from him.
> My only contribution was to push for Llama 2 to be open sourced.
Quite a big contribution in practice.
Sure, but I don't think that's relevant for a startup with $1B of VC money either. Meta can afford to (attempt to) commoditize their complement.
That's such a terrible take.
For a hot minute Meta had a top-3 LLM and open-sourced the whole thing, even with LeCun's reservations about the technology.
At the same time Meta spat out huge breakthroughs in:
- 3d model generation
- Self-supervised label-free training (DINO). Remember Alexandr Wang built a multibillion dollar company just around having people in third world countries label data, so this is a huge breakthrough.
- A whole new class of world modeling techniques (JEPAs)
- SAM (Segment anything)
> - Self-supervised label-free training (DINO). Remember Alexandr Wang built a multibillion dollar company just around having people in third world countries label data, so this is a huge breakthrough.
If it was a breakthrough, why did Meta acquire Wang and his company? I'm genuinely curious.
> we didn't see anything.
Is this a troll? Even if we just ignore Llama, Meta invented and released a great deal of foundational research and open source code. I would say the computer vision field would be years behind if Meta hadn't published core research like DETR or MAE.
Did he work on those vision models?
You should ignore Llama because by his own admission,
>My only contribution was to push for Llama 2 to be open sourced.
He was suffocated by the corporate aspect of Meta, I suspect.
Your take is brutal but spot on
I can’t reconcile this dichotomy: most of the landmark deep learning papers were developed with what, by today’s standards, were almost ridiculously small training budgets — from Transformers to dropout, and so on.
So I keep wondering: if his idea is really that good — and I genuinely hope it is — why hasn’t it led to anything truly groundbreaking yet? It can’t just be a matter of needing more data or more researchers. You tell me :-D
It's a matter of needing more time, which is a resource even SV VCs are scared to throw around. Look at the timeline of all these advancements and how long they took:
- LeCun introduced backprop for deep learning back in 1989
- Hinton published on contrastive divergence for next-token prediction in 2002
- AlexNet was 2012
- Word2vec was 2013
- Seq2seq was 2014
- Attention Is All You Need was 2017
- UnicornAI was 2019
- InstructGPT was 2022
This makes a lot of people think that things are just accelerating and they can be along for the ride. But it's the years and years of foundational research that allow this to happen. That toll has to be paid for the successors of LLMs to be able to reason properly and operate in the world the way humans do. The sowing won't happen as fast as the reaping did. LeCun wants to plant those seeds; the others, who only want to eat the fruit, don't get that they have to wait.
If his ideas had real substance, we would have seen substantial results by now. He introduced I-JEPA in 2023, so almost three years ago at this point.
If he still hasn’t produced anything truly meaningful after all these years at Meta, when is that supposed to happen? Yann LeCun has been at Facebook/Meta since December 2013.
Your chronological sequence is interesting, but it refers to a time when the number of researchers and the amount of compute available were a tiny fraction of what they are today.
Yann LeCun seeks $5B+ valuation for world model startup AMI (Amilabs).
He has hired LeBrun to the helm as CEO.
AMI has also hired LeFunde as CFO and LeTune as head of post-training.
They’re also considering hiring LeMune as Head of Growth and LePrune to lead inference efficiency.
https://techcrunch.com/2025/12/19/yann-lecun-confirms-his-ne...
Why didn't they just call it LeLabs?
I was thinking the same: are all the people he hires LeSomething, like the workers at Bolson Construction all having -son as a suffix?
First grinding LeetCode, now having to have 'Le' in your name?
I have no chance in AI industry...
Bolson-ass hiring policy.
This couldn't have come soon enough, for 2 reasons.
1) the world has become a bit too focused on LLMs (although I agree that the benefits & new horizons that LLMs bring are real). We need research on other types of models to continue.
2) I almost wrote "Europe needs some aces". Although I'm European, my attitude is not at all one of competition. This is not a card game. What Europe DOES need is an ATTRACTIVE WORKPLACE, so that talent that is useful for AI can also find a place to work here, not only overseas!
Regardless of your opinion of Yann or his views on auto regressive models being "sufficient" for what most would describe as AGI or ASI, this is probably a good thing for Europe. We need more well capitalized labs that aren't US or China centric and while I do like Mistral, they just haven't been keeping up on the frontier of model performance and seem like they've sort of pivoted into being integration specialists and consultants for EU corporations. That's fine and they've got to make money, but fully ceding the research front is not a good way to keep the EU competitive.
LeCun's technical approach with AMI will likely be based on JEPA, which is also a very different approach than most US-based or Chinese AI labs are taking.
If you're looking to learn about JEPA, LeCun's vision document "A Path Towards Autonomous Machine Intelligence" is long but sketches out a very comprehensive vision of AI research: https://openreview.net/pdf?id=BZ5a1r-kVsf
Training JEPA models is within reach, even for startups. For example, we're a 3-person startup that trained a health time-series JEPA. There are JEPA models for computer vision and (even) for LLMs.
You don't need a $1B seed round to do interesting things here. We need more interesting, orthogonal ideas in AI. So I think it's good we're going to have a heavyweight lab in Europe alongside the US and China.
Have you published anything about your health time series model? Sounds interesting!
Sure! Here’s a description: https://www.empirical.health/blog/wearable-foundation-model-...
Thanks! This is very neat.
BTW, I went to your website looking for this, but didn't find your blog. I do now see that it's linked in the footer, but I was looking for it in the hamburger menu.
Thanks! We need to re-do the top navigation / hamburger menu -- we've added a bunch of new things in the past few months, and it badly needs to be re-organized.
This is very cool work! I have a quick follow-up: in the biomarker prediction task, what horizon (i.e., how far into the future) did you set for the predictions? Prediction is hard beyond an hour, so it'd be impressive if your model handles that.
The prediction task is set up as predicting the next measured biomarkers based on a week of wearable data. So it's not necessarily predicting into the future, but predicting dataset Y given dataset X.
The specific biomarkers being predicted are the ones most relevant to heart health, like cholesterol or HbA1c. These tend to be more stable from hour to hour -- they may vary on a timescale of weeks as you modify your diet or take medications.
oh nice, i actually used you guys for some labs a few months ago. Glad you're competing with function & superpower
Very interesting. I am keenly interested in this space and coincidentally had my blood drawn this morning.
That said, have you considered that “Measure 100+ biomarkers with a single blood draw” combined with "heart health is a solved problem” reads a lot like Theranos?
FWIW, the single blood draw is 6-8 vials -- so we're not claiming to get 100 biomarkers from a single drop. The point of that is mostly that it just takes one appointment / is convenient.
Appreciate your work! Healthcare is a regulated industry. Everything (research, proposals, FDA submissions, compliance docs, accreditation standards, etc.) is documented and follows a process, which means there's a lot of documentation. You can't sneak in anything unverified or unreliable. Why does healthcare need a JEPA/world model?
Regulation is quickly catching up to modern AI techniques; for the most part, the approach is to verify outputs rather than process. For example, Utah's pilot to let AI prescribe medications has doctors check the first N prescriptions of each medication. Medicare is also starting to pay for AI-enabled care, tying payment to whether objective biomarkers like cholesterol or blood pressure actually got better.
I've been working to understand the potential uses for JEPA. Outside of video, has anyone made a list of any type (geared towards dummies like me)?
There seem to be other news articles mentioning that they are setting up in Singapore as their base. https://www.straitstimes.com/business/ai-godfather-raises-1-...
Hm, Singapore looks more like one of their bases; they will have offices in Paris, Montréal, Singapore and New York (according to both this article and the interview Yann LeCun gave this morning on France Inter, the most listened-to radio station in France).
Of course, each newspaper in those areas highlights that it's coming to their place, but it really seems to be distributed.
All your base are belong to Yann LeCun.
Probably just a satellite office.
Might be to be close to some of Yann's collaborators like Xavier Bresson at NUS
That's a Singaporean newspaper, though; not sure if it's objectively their main base, or just one of them.
Which would be a good idea, as a European. I'd hate to see the investment go to waste on taxes that are spent on stupid shit anyway. It should go into R&D, not fighting bureaucracy.
"Show me the incentive and I will show you the outcome."
Almost certainly the IP will be held in Singapore for tax reasons.
> they are setting up in Singapore as their base
Europe in general has been tightening up its rules / taxes / laws around startups and companies, especially tech and remote.
It's been less friendly these days.
Yann LeCun literally said this morning on the radio in France that it is headquartered in Paris and will pay taxes in France. Go figure…
No, he said something like “well yes, but only on the part of the profits made in France”.
Why would it be any other way?
For such companies, France also offers generous R&D tax credits (Crédit Impôt Recherche): companies can recover roughly 30% of eligible R&D expenses incurred in France as a tax credit, which can eventually be refunded (in cash) if the company has no taxable profit.
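Back-of-envelope, using the ~30% headline rate (the actual CIR rules have expense ceilings and eligibility criteria I'm glossing over, and the numbers below are hypothetical):

    eligible_rd_spend = 10_000_000           # EUR of eligible R&D done in France
    cir_rate = 0.30                          # ~30% headline rate
    credit = eligible_rd_spend * cir_rate
    print(f"CIR credit: {credit:,.0f} EUR")  # 3,000,000 EUR, refundable in
                                             # cash if there's no taxable profit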
Is that alongside the 100% of R&D expenses that can be amortized against taxes when a company has taxable profit covering them?
Yes indeed, if the company is profitable.
Doesn’t he live in New York himself? Although I'm not sure that matters, depending on his role.
There will be no corporate taxes for a long time, so all's good.
This is a Singaporean news article from a Singaporean company[0] (had to look it up).
As such, they are more likely to talk about Singapore news and exaggerate the claims.
Singapore isn't the key location. From what I am seeing online, France is the major location.
Singapore is just one of the more satellite-like offices. They seem to have many offices around the world.
[0]: https://www.sgpbusiness.com/company/Sph-Media-Limited
While I’d love there to be a European frontier model, I do very much enjoy Mistral. For the price and speed it outperforms any other model for my use cases (language-learning-related formatting; non-code, non-research).
Partner in a fund that wrote a small check into this (I have no private knowledge of the deal): while I agree that one's opinion on auto-regressive models doesn't matter, I think the fact of whether or not auto-regressive models work matters a lot, particularly so in LeCun's case.
What’s different about investing in this than investing in say a young researcher’s startup, or Ilya’s superintelligence? In both those cases, if a model architecture isn’t working out, I believe they will pivot. In YL’s case, I’m not sure that is true.
In that light, this bet is a bet on YL's current view of the world. If his view is accurate, this is very good for Europe. If inaccurate, then this is sort of a nothing-burger: the company will likely exit for roughly the investment amount, and that money would not have gone to smaller European startups anyway, so it's a wash.
FWIW, I don’t think the original complaint about auto-regression (“errors exist, errors always multiply under sequential token choice, ergo errors are endemic and this architecture sucks”) is intellectually that compelling; a toy version of the compounding arithmetic is at the end of this comment. Here: “world model errors exist, world model errors will always multiply under sequential token choice, ergo world model errors are endemic and this architecture sucks.” See what I did there?
On the other hand, we have a lot of unused training tokens in videos, I’d like very much to talk to a model with excellent ‘world’ knowledge and frontier textual capabilities, and I hope this goes well. Either way, as you say, Europe needs a frontier model company and this could be it.
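The toy arithmetic I mean: if each step is wrong independently with probability e, the chance a length-n rollout stays error-free is (1-e)^n. The independence assumption is exactly the part critics dispute, which is why I don't find the argument compelling; the numbers below are purely illustrative.

    # Toy version of the "errors multiply" argument.
    # Assumes independent per-token errors -- the assumption in dispute.
    e = 0.01                     # hypothetical per-token error rate
    for n in (10, 100, 1000):
        p_ok = (1 - e) ** n
        print(f"n={n:4d} tokens -> P(no error) = {p_ok:.6f}")
    # n=  10 tokens -> P(no error) = 0.904382
    # n= 100 tokens -> P(no error) = 0.366032
    # n=1000 tokens -> P(no error) = 0.000043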
I don't think it's "regardless": your opinion on whether LeCun is right should be highly correlated with your opinion on whether this is good for Europe.
If you think that LLMs are sufficient and RSI is imminent (<1 year), this is horrible for Europe. It is a distracting boondoggle exactly at the wrong time.
It's sufficient to think that there is a chance that they will not be, however, for there to be a non-zero value to fund other approaches.
And even if you think the chance is zero, unless you also think there is a zero chance they will be capable of pivoting quickly, it might still be beneficial.
I think his views are largely flawed, but chances are there will still be lots of useful science coming out of it as well. Even if current architectures can achieve AGI, it does not mean there can't also be better, cheaper, more effective ways of doing the same things, and so exploring the space more broadly can still be of significant value.
I think LeCun has been so consistently wrong and boneheaded for basically all of the AI boom that this is much, much more likely to be bad than good for Europe. He's probably one of the worst people in the field to give that much money to, among those who could even raise it.
LeCun was stubbornly 'wrong and boneheaded' in the 80s, but turned out to be right. His contention now is that LLMs don't truly understand the physical world - I don't think we know enough yet to say whether he is wrong.
Could you please elaborate on what he was wrong about?
Whenever I see claims about AGI being reachable through large language models, it reminds me of the miasma theory of disease. Many respectable medical professionals were convinced this was true, and they viewed the entire world through this lens. They interpreted data in ways that aligned with a miasmatic view.
Of course now we know this was delusional and it seems almost funny in retrospect. I feel the same way when I hear that 'just scale language models' suddenly created something that's true AGI, indistinguishable from human intelligence.
The miasma theory of disease, though wrong, made lots of predictions that proved useful and productive. Swamps smell bad, so drain them; malaria decreases. Excrement in the street smells bad, so build sewage systems; cholera decreases. Florence Nightingale implemented sanitary improvements in hospitals inspired by miasma theory that improved outcomes.
It was empirical and, though ultimately wrong, useful. Apply as you will to theories of learning.
> Whenever I see claims about AGI being reachable through large language models, it reminds me of the miasma theory of disease.
Whenever I see people think the model architecture matters much, I think they have a magical view of AI. Progress comes from high-quality data; the models are good as they are now. Of course you can still improve the models, but you get much more upside from data, or even better, from interactive environments. The path to AGI is not based on pure thinking; it's based on scaling interaction.
To stay within the miasma-theory analogy: if you think architecture is the key, then look at how humans dealt with pandemics. The Black Death in the 14th century killed half of Europe, and no one could think of the germ theory of disease. Think about it: it was as desperate a situation as it gets, and no one had the simple spark to keep up hygiene.
The fact is we are also not smart from the brain alone; we are smart from our experience. Interaction and environment are the scaffolds of intelligence, not the model. For example, 1B users do more for an AI company than a better model; they act as human-in-the-loop curators of LLM work.
It's unintuitive to me that architecture doesn't matter - deep learning models, for all their impressive capabilities, are still deficient compared to human learners as far as generalisation, online learning, representational simplicity and data efficiency are concerned.
Just because RNNs and Transformers both work with enormous datasets doesn't mean that architecture/algorithm is irrelevant, it just suggests that they share underlying primitives. But those primitives may not be the right ones for 'AGI'.
If I'm understanding you, it seems like you're struck by hindsight bias. No one knew the miasma theory was wrong... it could have been right! Only with hindsight can we say it was wrong. Seems like we're in the same situation with LLMs and AGI.
The miasma theory of disease was "not even wrong" in the sense that it was formulated before we even had the modern scientific method to define the criteria for a theory in the first place. And it was sort of accidentally correct in that some non-infectious diseases are caused by airborne toxins.
> Only with hindsight can we say it was wrong
It really depends what you mean by 'we'. Laymen? Maybe. But people said it was wrong at the time with perfectly good reasoning. It might not have been accessible to the average person, but that's hardly to say that only hindsight could reveal the correct answer.
If model arch doesn't matter much how come transformers changed everything?
Luck. RNNs, Mamba, S4, etc. can do it just as well for a given budget of compute and data. The larger the model, the less architecture makes a difference. It will learn in any of the 10,000 variations that have been tried and come within about 10-15% of the best. What you need is a data loop, or a data source of exceptional quality and size; data has more leverage. Architecture games reflect more on efficiency: some method can be 10x more efficient than another.
That's not how I read the transformer stuff around the time it was coming out: they had concrete hypotheses that made sense, not just random attempts at striking it lucky. In other words, they called their shots in advance.
I'm not aware that we have notably different data sources before or after transformers, so what confounding event are you suggesting transformers 'lucked' in to being contemporaneous with?
Also, why are we seeing diminishing returns if only the data matters. Are we running out of data?
The premise is wrong, we are not seeing diminishing returns. By basically any metric that has a ratio scale, AI progress is accelerating, not slowing down.
For example?
> Of course you can still improve the models, but you get much more upside from data, or even better - from interactive environments.
On the contrary, I believe the hunt for better data is an attempt to climb the local hill and get stuck there without reaching the global maximum. Interactive environments are good and can help, but they are just one possible way to learn about causality. Are they the best way? I don't think so; they are the easy way: just throw money at the problem and eventually you'll get something you'll claim is the goal you were chasing all along. And yes, it will have something in it you'll be able to call "causal inference" in your marketing.
But current models are notoriously difficult to teach. They eat enormous amounts of training data; a human needs much less. They eat enormous amounts of energy to train; a human needs much less. That means the very approach is deficient. It should be possible to do the same with a tiny fraction of the data and money.
> The fact is we are also not smart from the brain alone, we are smart from our experience. Interaction and environment are the scaffolds of intelligence, not the model.
Well, I learned English almost all the way to B2 by reading books. I was too lazy to use a dictionary most of the time, so it was not interactive: I didn't even interact with a dictionary, I was just reading books. How many books did I read to get to B2? ~10 or so. Well, I also read a lot of English on the Internet and watched some movies, so let's multiply those 10 books by 10. Strictly speaking it was not B2: I was almost completely unable to produce English, and my pronunciation was not just bad, it was worse. Even now I sometimes stumble on words I cannot pronounce: I know the word and have mentally constructed a sentence with it, but I cannot say it, because I don't know how. So to pass B2 I spent some time practicing speech, listening and writing, and learning some stupid topics like "travel" to have the vocabulary to talk about them at length.
How many books does an LLM need to consume to get to B2 in a language unknown to it? How many audio recordings does it need? A lifetime wouldn't be enough for me to read and/or listen to that much.
If there were a human who needed to consume as much information as an LLM to learn, they would be the stupidest person in all the history of humanity.
Are you asking how many books a large language model would need to read to learn a new language if it was only trained on a different language? Probably just 1 (the dictionary).
Just because you raise 1 billion dollars to do X doesn't mean you can't pivot and do Y if it is in the best interest of your mission.
I won't comment on Yann LeCun or his current technical strategy, but if you can avoid sunk cost fallacy and pivot nimbly I don't think it is bad for Europe at all. It is "1 billion dollars for an AI research lab", not "1 billion dollars to do X".
It's been 6 months away for 5 years now. In that time we've seen relatively mild incremental changes, not any qualitative ones. It's probably not 6 months away.
Yeah. I feel that, as with many projects, the last 20% takes 80% of the time, and IMHO we are not in the last 20% yet.
Sure, LLMs are getting better and better, and at least for me more and more useful and more and more correct. They're arguably better than humans at many tasks yet terribly lagging behind in some others.
Coding-wise, one of the things they do "best", there are still many issues. For me the biggest are lack of initiative and lack of reliable memory. When I use one to write code, the first manifests as often sticking to a suboptimal yet overly complex approach. The second shows in that I have to keep reminding it of edge cases (else it often breaks functionality), or to stop reinventing the wheel instead of using functions/classes already implemented in the project.
All that can be mitigated by careful prompting, but no matter the claims about information-recall accuracy, I still find that even with that information in the prompt it is quite unreliable.
And more generally, the simple fact that when you talk to one, the only way to "store" memories is externally (i.e. not by updating the weights) is kind of like dealing with someone who can't retain memories and has to keep writing things down to have even a small chance of coping. I get that updating the weights is possible in theory, just not yet practical.
It's 6 months away the same way coding is apparently "solved" now.
I think we are, in the last few months, very close to, if not already at, the point where "coding" is solved. That doesn't mean that software design or software engineering is solved, but it does mean that a SOTA model like GPT 5.4 or Opus 4.6 has a good chance of being able to code up a working version of whatever you specify, within reason.
What's still missing is the general reasoning ability to plan what to build or how to attack novel problems, i.e. how to assess the consequences of deciding to build something a given way. I doubt that auto-regressively trained LLMs are the way to get there, but there is a huge swathe of apps so boilerplate in nature that this isn't the limitation.
I think LeCun is on the right track to AGI with JEPA. Hardly a unique insight, but it's significant to now have a well-funded lab pursuing this approach. Whether they are successful, or timely, will depend on whether this startup executes as a blue-skies research lab or in more of an urgent engineering mode. At this point I think most of the things needed for AGI are engineering challenges rather than what I'd consider research problems.
Sure, Claude and other SOTA LLMs do generate about 90% of my code, but I feel we are no closer to solving the last 10% than we were a year ago in the days of Claude 3.7. It can pretty reliably get 90% of the way there, and then I can either keep prompting it to get the rest done or just do it manually, which is quite often faster.
Reminds me of how cold fusion reactors have been only 5 years away for decades now.
Cold fusion reactors haven't produced usable intermediate results. LLMs have.
LLMs produce slop far too often to say they are in any way better than cold fusion in terms of usable results. "AI" is kind of the cold fusion of tech. We've always been 5 or 10 years away from "AGI" and likely always will be.
But I swear this time is different! Just give me another 6 months!
And another 6 trillion dollars :^)
> RSI
Wait, we have another acronym to track. Is this the same/different than AGI and/or ASI?
Some people should definitely be getting Repetitive Strain Injury from all the hyping up of LLMs.
Recursive self improvement. It's when AI speeds up the development of the next AI.
Recursive Self Improvement
> fully ceding the research front is not a good way to keep the EU competitive
Tech is ultimately a red herring as far as what's needed to keep the EU competitive. The EU has a trillion-dollar hole[0] to fill if it wants to replace the US military presence, and it currently imports over 50% of its energy. Unfortunately the current situation in Iran is not helping either of these, as it constrains energy further and risks requiring military intervention.
0. https://www.wsj.com/world/europe/europes-1-trillion-race-to-...
Hard disagree. Military might isn't going to secure anybody into the future; modern society and our economies will only get more vulnerable as time goes on, and large wars or engagements will just push economies closer to collapse. And without a solid modern economy to back it up, a modern military will fall apart.
Right, they really need a military industrial complex to be "competitive" :eyeroll. Are you suggesting regressing to the stone age?
Europe doesn't want to be reliant (understandably) on the US military for defense, because if they are, as Trump has demonstrated, they will be pressured to make concessions not in their interests.
The need for a military is tightly coupled with the EU's need for energy. You can see this in the immediate impact the war in Iran has had on Germany's natural gas prices [0]. Already unable to defend themselves from Russia, EU countries are in a tough spot: they can't really afford to expend military resources defending their energy needs, yet they also don't have the energy independence to ignore these military engagements without risk. Meanwhile Russia has spent the last 4 years transitioning to a wartime economy and is getting hungry for expanded resource acquisition.
The world hasn't fundamentally changed since the stone age: humans need resources to survive, and if there aren't enough resources for the people, violence will decide who has access to them.
0. https://tradingeconomics.com/commodity/germany-natural-gas-t...
33% of the business in a seed round is nuts
Can you elaborate? Also, isn't this necessary for a lab that wants to compete with highly funded entities (like OpenAI and Anthropic)?
> Regardless of your opinion of Yann or his views on auto regressive models being "sufficient" for what most would describe as AGI or ASI
My main concern with LeCun is the number of times he has told people software is open source when its license directly violates the open source definition.
As an American here in Berlin, I, too, welcome this. I would love for there to be many large, well-capitalized companies here for me to work at.
Is it good? This will almost certainly fail. Not because of Yann or Europe, but because these sorts of hyper-hyped projects fail. SSI and Thinking Machines haven't lived up to the hype.
Erm... OpenAI was hyped when it started, and it took 6 years to take off. It's way too early to declare that SSI and Thinking Machines have failed.
They took money and haven't released anything. How are they doing?
To be fair to SSI, they were very explicit about their plan: "we are going to take money and not release anything until we one-shot superintelligence."
If you invested in that you knew what you were getting yourself into!
I didn't really know who he was, so I went and found his Wikipedia page, which reads like either he wrote it himself to stroke his ego, or someone who likes him wrote it to stroke his ego:
> He is the Jacob T. Schwartz Professor of Computer Science at the Courant Institute of Mathematical Sciences at New York University. He served as Chief AI Scientist at Meta Platforms before leaving to work on his own startup company.
That entire sentence before the remarks about his service at Meta could have been axed; it's weird to me when people compare themselves to someone else who is well known. It's the most Kanye West thing you can do. Mind you, the more I read about him, the more I discovered he is in fact egotistical. Good luck having a serious engineering team led by someone egotistical.
You underestimate academia. Any academic who reads these two sentences only focuses on the first one: he has a named chair at Courant. In Germany, being a Prof is added to your ID card/passport and becomes part of your official name, like a knighthood in other countries.
Not true regarding the IDs; only PhD titles can be added, not job descriptions. Source: academia person in Germany.
It seems Germans add their PhD titles even to their nicknames. :)
It's not comparing him to anyone. He has an endowed professorship. This is standard in academia, and you give the name because a) it's prestigious for the recipient and b) it strokes the ego of the donor.
Right: no-one cares about the Lucasian Chair of Mathematics https://en.wikipedia.org/wiki/Lucasian_Professor_of_Mathemat... because of Henry Lucas, it's the other way around.
https://cims.nyu.edu/dynamic/news/1441/
This is just the official name of a chair at NYU. I'm not even sure Jacob T. Schwartz is more well known than Yann LeCun.
Yann is definitely more well-known outside of academia. Inside academia, it's going to depend a lot on your specific background and how old you are.
That’s not a comparison to another person. That’s his job title. It is not uncommon for universities to have distinguished chairs within departments named after a notable person—in this case, the founder of NYU’s Department of Computer Science.
Eh, that paragraph reads perfectly normal to me.
Either you have not read enough Wikipedia pages, or you have too much to complain about. (Or both.)
https://archive.is/20260310070651/https://www.ft.com/content...
Link does not work; it goes into a loop at the "verify you're human" check with some weird redirect.
Looks like you appended the original URL to the end
Probably related to the reasoning behind: https://arstechnica.com/tech-policy/2026/02/wikipedia-bans-a...
Or you're using Cloudflare DNS.
I may be using CF DNS 1.1.1.1 (and have been for a while if so), but I'm only seeing the issue today. It definitely seems specific to me at this point.
Have they changed something on their end?
Huh, it's working for me (on Firefox).
I feel like I'm the only one not getting the world models hype. We've been talking about them for decades now, and all of it is still theoretical. Meanwhile LLMs and text foundation models showed up, proved to be insanely effective, took over the industry, and people are still going "nah LLMs aren't it, world models will be the gold standard, just wait."
I bet LLMs and world models will merge. World models essentially try to predict the future, with or without actions taken. LLMs with tokenized image input can also be made to predict the future image tokens. It's a very valuable supervised learning signal aside from pre-training and various forms of RL.
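As a sketch of what that extra signal looks like (the codebook size and shapes below are hypothetical; real systems tokenize frames with a learned image tokenizer): future frame tokens get the same next-token cross-entropy that text does.

    import torch
    import torch.nn.functional as F

    vocab_size = 8192                          # hypothetical image-token codebook
    logits = torch.randn(2, 16, vocab_size)    # model's predictions for the next
                                               # 16 image tokens of a future frame
    future_tokens = torch.randint(vocab_size, (2, 16))  # tokenized future frame

    # Same cross-entropy as next-token text prediction: the "predict the
    # future" supervision signal.
    loss = F.cross_entropy(logits.reshape(-1, vocab_size),
                           future_tokens.reshape(-1))
    print(loss.item())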
I think "world models" is the wrong thing to focus on when contrasting the "animal intelligence" approach (which is what LeCun is striving for) with LLMs, especially since "world model" means different things to different people. Some people would call the internal abstractions/representations that an LLM learns during training a "world model" (of sorts).
The fundamental problem with today's LLMs that will prevent them from achieving human level intelligence, and creativity, is that they are trained to predict training set continuations, which creates two very major limitations:
1) They are fundamentally a COPYING technology, not a learning or creative one. Of course, as we can see, copying in this fashion will get you an extremely long way, especially since it's deep patterns (not surface-level text) being copied and recombined in novel ways. But not all the way to AGI.
2) They are not grounded, therefore they are going to hallucinate.
The animal intelligence approach, the path to AGI, is also predictive, but what you predict is the external world, the future, not training set continuations. When your predictions are wrong (per perceptual feedback) you take this as a learning signal to update your predictions to do better next time a similar situation arises. This is fundamentally a LEARNING architecture, not a COPYING one. You are learning about the real world, not auto-regressively copying the actions that someone else took (training set continuations).
Since the animal is also acting in the external world that it is predicting and learning about, it is learning the external effects of its own actions, i.e. it is learning how to DO things: how to achieve given outcomes. Put together with reasoning/planning, this allows it to plan a sequence of actions that should achieve a given external result ("goal").
Since the animal is predicting the real world, based on perceptual inputs from the real world, its predictions are grounded in reality, which is necessary to prevent hallucinations.
So, to come back to "world models": yes, an animal intelligence/AGI built this way will learn a model of how the world works (how it evolves, and how it reacts, i.e. how to control it), but this behavioral model has little in common with the internal generative abstractions that an LLM will have learnt, and it is confusing to use the same name "world model" for both.
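The loop I'm describing, as a bare-bones sketch; the environment, the linear model, and all the names here are stand-ins of my own, not LeCun's architecture. The point is that the teacher is the environment's next observation, not a training-set continuation:

    import torch

    # Predict-the-world loop: act, predict the sensory consequence,
    # compare with what actually happened, learn from the surprise.
    model = torch.nn.Linear(4 + 1, 4)     # next state from (state, action)
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)

    def environment_step(state, action):
        # Stand-in dynamics; in reality, this is the external world.
        return 0.9 * state + 0.1 * action

    state = torch.randn(4)
    for t in range(1000):
        action = torch.randn(1)
        predicted_next = model(torch.cat([state, action]))
        actual_next = environment_step(state, action)    # perceptual feedback
        loss = torch.nn.functional.mse_loss(predicted_next, actual_next)
        opt.zero_grad(); loss.backward(); opt.step()     # update the predictions
        state = actual_next.detach()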
RL on LLMs has changed things. LLMs are not stuck in continuation predicting territory any more.
Models build up this big knowledge base by predicting continuations. But then their RL stage gives rewards for completing problems successfully. This requires learning and generalisation to do well, and indeed RL marked a turning point in LLM performance.
A year after RL was made to work, LLMs can now operate in agent harnesses over 100s of tool calls to complete non-trivial tasks. They can recover from their own mistakes. They can write 1000s of lines of code that works. I think it’s no longer fair to categorise LLMs as just continuation-predictors.
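The shape of that RL stage, stripped to a skeleton (illustrative pseudocode; sample_completion and the commented-out update_policy are stand-ins for a real sampler and a real policy-gradient update like PPO/GRPO):

    # RL with verifiable rewards, heavily simplified.
    def reward(code: str) -> float:
        # Verifiable check: does the generated function pass its test?
        scope = {}
        try:
            exec(code, scope)
            return 1.0 if scope["add"](2, 3) == 5 else 0.0
        except Exception:
            return 0.0

    def sample_completion(prompt):         # stand-in for sampling an LLM
        return "def add(a, b):\n    return a + b"

    prompt = "Write a function add(a, b) that returns their sum."
    for step in range(4):
        completion = sample_completion(prompt)
        r = reward(completion)             # 0/1 from actually running the code
        # update_policy(completion, r)     # stand-in: reinforce high-reward samples
        print(step, r)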
Thanks for saying this. It never ceases to amaze me how many people still talk about LLMs like it’s 2023, completely ignoring the RLVR revolution that gave us models like Opus that can one-shot huge chunks of works-first-time code for novel use cases. Modern LLMs aren’t just trained to guess the next token, they are trained to solve tasks.
I attended a talk by Yann LeCun, and he has always had strong opinions about auto-regressive models. It's nice to see someone not just chasing hype, and doing more research.
AI is developing backwards. The simplest organisms eat and find food. More complex ones can smell and sense tremors. After several steps in evolution comes vision and complex thought.
AIs that can't smell, can't feel hunger, can't desire: I do not think they can understand the world the way organic life does.
Seems like it's the second largest seed round anywhere after Thinking Machines Labs? https://news.crunchbase.com/venture/biggest-seed-round-ai-th...
That article is from June 2025 so may be out of date, and the definition of "seed round" is a bit fuzzy.
Thinking Machines looks half-dead already.
The giant seed round proves investors were willing to fund Mira Murati, not that the company had built anything durable.
Within months, it had already lost cofounder Andrew Tulloch to Meta, then cofounders Barret Zoph and Luke Metz plus researcher Sam Schoenholz to OpenAI; WIRED also reported that at least three other researchers left. At that point, citing it as evidence of real competitive momentum feels weak.
Was just a grift
Shock, gasp.
That being said, Yann LeCun's twitter reposts are below average IQ.
Do you have a recent example?
Archive: https://archive.md/5eZWq
The startup is Advanced Machine Intelligence Labs: https://amilabs.xyz/
As someone in the tech Twitter sphere: this is Yann and his ideas performing a suplex on LLM-based companies. It is completely unfathomable: start an AI research company, sell off only 20%, and have 1 billion for screwing around for a few years.
I liken this to watching a Godzilla-esque movie. Just grab some popcorn and enjoy the ride.
Why world model? To emulate how we became sentient?
A "world" is just senses. In a way the context is one sense. A digital only world is still a world.
I think more success is in a model having high level needs and aspirations that are borne from lower level needs. Model architecture also needs to shift to multiple autonomous systems that interact, in the same ways our brains work - there's a lot under the surface inside our heads, it's not just "us" in there.
We only interact with our environment because of our low level needs, which are primarily: food, water. Secondary: mating. Tertiary: social/tribal credit (which can enable food, water and mating).
Because if you have an explicit world model you can optimize against it.
It sounds like you are imagining tacking a world model onto an LLM. That's one approach but not what LeCun advocates for.
I have no faith in anyone doing AI to accomplish anything (especially relative to how much money they spend) except John Carmack. People should be trying to throw money at him
This feels like a more justified investment, as it's trying to move the needle. Hope he succeeds.
Wasn't there some recent argument that world models won't achieve AGI either, because they overlook the normative framework, can't learn the fundamental symmetries of the world purely from data, and collapse in multi-step reasoning? JEPA sacrifices fidelity for abstract representation, yet how does that help in the real world, where fidelity is the most important point? It's like relying on differential equations, only to find out they cover a minuscule number of real-world problems and almost all interesting problems are unsolvable by them.
A fair amount of negative comments here, but Yann might very well be the person who brings the Bell Labs culture back to life. It’s been badly missing, and not just in Europe.
I wish him luck.
Recently all papers are about LLMs; it brings on fatigue.
As GPT is almost reaching its limits, a new architecture could bring new discoveries.
At least some of that money should definitely go towards improving his powerpoint slides on JEPA related work :)
Europe is becoming really attractive right now!
That's between 1 and 10 training runs on a large foundation model, depending on pricing discounts and how much they manage to optimize it. I priced this out last night on AWS, which is admittedly expensive, but models have also gotten larger.
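The napkin math I mean looks roughly like this; every number is a hypothetical knob (list GPU-hour prices vary widely, and negotiated rates are far lower):

    # Hypothetical numbers -- adjust to taste.
    gpus = 16_000                 # H100-class accelerators for one big run
    hours = 24 * 60               # ~two months of training
    usd_per_gpu_hour = 4.0        # on-demand-ish cloud rate

    run_cost = gpus * hours * usd_per_gpu_hour
    print(f"one run: ${run_cost / 1e6:.0f}M")        # ~$92M
    print(f"runs per $1B: {1e9 / run_cost:.1f}")     # ~10.9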
He couldn't achieve even parity with LLMs during his days at Meta (most probably with billions in resources at his disposal), but he'll succeed now? What is the pitch?
The pitch isn't to try to squeeze money out of a product like Altman does. It's to lay the groundwork for the next evolution in AI. LLMs were built on decades of work, and they've hit their limits. We'll need to invest a lot of time building foundations, without any tangible yield, for the next step to work. Get too greedy and you'll be stuck.
What use is it to understand the physical world if all investments are misallocated to the virtual world? Perhaps the AI will detect that there is a housing shortage, and politicians will finally believe it because the AI said so?
Or is it to accelerate Skynet?
Does anyone have a sense of how funding like this is typically allocated? How much tends to go toward compute/training versus researchers, infrastructure, and general operations?
Meta's greatest loss of the decade
Impressive that the round was 100% oversubscribed, but to be expected when it's the prof who trained a good chunk of the current AI founders.
I raised $1 to understand your physical world.
https://archive.is/TEwfi
There have been a few very interesting JEPA publications from LeCun recently, particularly the LeJEPA paper, which claims to simplify a lot of training headaches for that class of models.
JEPAs also strike me as a bit more akin to human intelligence, where, for example, most children are very capable of locomotion and making basic drawings, but unable to make pixel-level reconstructions of mental images (!!).
One thing I want to point out is that LeCun-type techniques demonstrating label-free training, such as JEAs like DINO and JEPAs, have been converging on the performance of models that require large amounts of labeled data.
Alexandr Wang is a billionaire who made his wealth through a data-labeling company and basically kicked LeCun out.
Overall this will be good for AI and good for open source.
Alternative free to read article: https://sifted.eu/articles/yann-lecun-ami-labs-meta-funding-...
It’s 4.7B actually, he confirmed it here https://x.com/ylecun/status/2031331124450931058?s=46
That seems to be the valuation, not how much they raised afaik.
More research on more models = more betta
Yann LeCun has said a number of things that are very dubious, like that autoregressive LLMs are a dead end, that LLMs do not have an internal world model, and, this morning https://www.youtube.com/watch?v=AFi1TPiB058 (in French), that an AI cannot find a strategy to preserve itself against the will of its creator.
As a French person, I wish him good luck anyway; I'm all for exploring different avenues to AGI.
Looks like they'll be hiring in Montreal in addition to Paris (and NYC and Singapore): https://jobs.ashbyhq.com/ami
I hope they grow that office like crazy. This would be really good for Canada. We have (or have had) the AI talent here (though maybe less so overall in Montreal than in Toronto/Waterloo and Vancouver and Edmonton).
And I hope Carney is promoting the crap out of this and making it worth their while to build that office out.
I don't really do Python or large-scale learning etc., so I don't see a path for myself to apply there, but I hope this sparks some employment growth here in Canada. Smart choice to go with bilingual Montreal.
WE HAVE RAISED A BILLION DOLLARS
but you don’t even have a product
/cape
I just saw a post from Yann mentioning that AMI Labs is hiring too!
If he's right (that LLMs cannot achieve AGI, but what he's working on can, and does), this would be huge for AI and humanity at large.
Hope it puts to bed the "Europe can't innovate" crowd too.
I'm still just so surprised any time I encounter people who think AI will be overall good for humanity
I pretty strongly think it will only benefit the rich and powerful while further oppressing and devaluing everyone else. I tend to think this is an obvious outcome and it would be obviously very bad (for most of us)
So I wonder if you just think you will be one of the few who benefit at the expense of others, or do you truly believe AI will benefit all of humanity?
> So I wonder if you just think you will be one of the few who benefit at the expense of others
It's not a zero sum game, IMO. It will benefit some, be neutral for others, negative for others.
For instance, improved productivity could be good (and doesn't have to result in layoffs; Jevons paradox will come into play, IMO, with increased demand). Easier/better/faster scientific research could be good too. Not everyone would benefit from those, but not everyone has to for it to be generally good.
Autonomous AI-powered drone swarms could be bad, or could result in a Mutually Assured Destruction stalemate.
> improved productivity could be good (and doesn't have to result in layoffs
It already has resulted in layoffs and one of the weakest job markets we've seen in ages
Executives could not have used it as an excuse for layoffs faster, they practically tripped over themselves trying to use it as an excuse to lay people off
>It's not a zero sum game, IMO. It will benefit some, be neutral for others, negative for others.
This is literally a description of a zero sum game
No, a zero-sum game would require the "winners" to take from the "losers", with a limited amount to go around. If expansion produces a majority of "winners", some neutral, and some negative, that is not a zero-sum game.
> No, a zero sum game would require for the "winners" to take it from the "losers"
You’re so close to getting it and I’m rooting for you
If, for even one second, they get into a position that in any way threatens Big Tech AI (mostly if not entirely US-based), they will be raided by international finance: dismantled and poached hard by some massive US "investment funds" (which look more and more like weaponized international finance!). Only China is largely immune to international finance. Those funds have tens of thousands of billions of dollars; in a world of money, there is near-zero resistance.
I don't see a world where they become threatening and the employees don't become rich from investors flooding in.
Where have you been in the last 2 decades?
Don’t think that’s a fair interpretation of what I said.
Liquid money rich? No.
Can get pulled for big tech packages? Also no, for most of the employees.
AFAIK, big tech didn’t aggressively poach OpenAI-like talent; they did offer $10M+ pay packages, but only for a select few research scientists. Some folks left and some came, but it mostly boiled down to culture.
Once again, US companies and VCs are in this seed round, just like with Mistral's seed round.
Europe is again missing out, at least until AMI reaches a much higher valuation with an obvious use case in robotics.
Either AMI reaches a $100B+ valuation (likely), or it becomes another Thinking Machines Lab with investors questioning its valuation (very unlikely, since world models have use cases in vision and robotics).
> Europe again missing out
I can't read the article, but if American investors are investing in European companies, isn't the US the one missing out here? Or does "Europe" "win" when European investors invest in US companies? How does that work in your head?
>isn't US the one missing out here?
Why would the US miss out here? The US invests in something = the US owns part of something.
This isn't a zero sum game.
> Why would the US miss out here?
Personally I don't believe anyone is missing out on anything here.
But rvz earlier claimed that Europe is missing out because US investors are investing in a European company. That's kind of surprising to me, so I'm asking whether they also believe the US is "missing out" whenever European investors invest in US companies, or whether that sentiment only goes one way.
It is enough to attract worthy talent and produce interesting outcomes.
This could have been 1000 seed rounds. We are creating technological deserts by going all-in on AI and star personalities.
There seems to be no shortage of capital in the global market.
Because for these investors the opportunity cost of this is higher than that of other startups.
I agree with you; there should be more diversity in investments in EU startups, but ¯\_(ツ)_/¯ not my money.
Not based on true valuation unless h-index has become a valuation metric lol
Academics don’t always make great entrepreneurs.
Here you can see why it is so hard to compete with US startups as a European startup: abysmal access to money. An investment of 1B USD in Europe is glorified as the largest seed ever, but in the USA it is just another Tuesday.
A billion-dollar seed is not an everyday event anywhere.
Not at all. A quick Google turns up evidence of 4. There may be more, but I think probably not many.
For a foundation AI lab with a world-famous AI researcher at the helm, though, it's not so impressive. It won't even touch the sides of the hardware costs they'd need to be anywhere near competitive.
Europeans have free healthcare and retirement. They consider putting their money toward long-term benefits, not just becoming CEO on Tuesday and declaring bankruptcy on Wednesday.
It is not free; we just pay taxes.
Retirement is the worst. You are basically forced to pay into an unsustainable system (at least in Germany). It already has to be subsidized by taxes.
Exactly. State retirement in Europe is neither free nor great. We pay extra in taxes for it, and it's only great for present-day retirees, not for those paying into the system right now, who will retire in the future. It's the same as US Social Security; it's not some extra perk that Europeans have over Americans.
Top tier scientists aren't gonna be swayed by European state retirement systems.
Free healthcare and retirement ?
It is a universal system but definitely not free. In Germany you pay on average 17.5% of your salary for health insurance and 18.6% for retirement. However, contribution caps exist: roughly 70k for healthcare and 100k for retirement.
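Using those figures, and treating the caps as annual income ceilings (illustrative only; the real rules split contributions between employer and employee, and the exact ceilings change yearly):

    salary = 120_000  # EUR/year, hypothetical

    health = 0.175 * min(salary, 70_000)     # ~17.5% up to the ~70k cap
    pension = 0.186 * min(salary, 100_000)   # ~18.6% up to the ~100k cap
    print(f"health: {health:,.0f} EUR, pension: {pension:,.0f} EUR")
    # health: 12,250 EUR, pension: 18,600 EUR per year, combined
    # employer + employee share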
„free“
A startup reaching a $1B valuation is so rare that such companies are called unicorns.
As the other commenter pointed out, this is 1B seed.
Actually, they raised $1.03 billion at a $3.5 billion valuation.
Yes; the faster they get used to the thought that losing a billion is not a big deal, the better.
Adds up: we are seeing a clear exodus of both capital and talent from the US, given the current US administration's shift toward cronyism, and the EU stands as the most compelling alternative, with a uniform market of 500 million people and the last major federation truly committed to the rule of law.
"Exodus of capital" as if OpenAI didn't just raise 115b
That's a bonfire of capital into a gaping hole in the ground, with zero chance, outside of "military pork" and "overcharging the taxpayer", of ever making their money back. The brain-capital loss here is what's going to spook investors.
You lost me at “uniform”…