RE: YANN LECUN FROM META OFF THE RAILS AGAIN
(cc: @TheAiGrid)
Unfortunately, and I think I understand these matters as well as anyone living, Yann is wrong (and he's been wrong frequently) because he doesn't quite understand any of the following:
1) That LLMs are reverse engineering *the mind's data structure* by brute-force production of representation, that those capabilities can be incrementally engineered by brute force further, and that evolving those capabilities from embodiment upward was a failure for previous generations – at least until Tesla's driving, which IS transferable to human embodiment – eventually.
2) He doesn’t understand that the first principle of the universe we call evolution is produced by *continuous recursive disambiguation* of disorder into order by discovery of ‘cooperation’ or ‘agreement’ between states of energy.
And consequently, the rule of universal grammar is continuous recursive disambiguation to an identity which we can agree upon, and therefore acknowledge as transferring meaning.
And *the structure of all text ‘out there’ follows the same grammar*. Meaning that *there IS a data structure of representation out there in the text.*
But there is NOT an algorithm for disambiguating that information enough, and recursively enough, for wayfinding – meaning *logical navigation*.
3) So LLMs are solving the interface problem of converting universal grammar into *representation* – and that's enough of a first step.
4) The next step is *recursive wayfinding, which we call logic* – which again is continuous recursive disambiguation into identity (consistent, correspondent, coherent).
5) The next step is *adversarial competition* between such identities – while leaving room for *imagination*, which today we call *hallucination* because the software does not recognize the difference between identity and hallucination.
6) And the next step is *embodiment*, which means real-world concrete operations: *operational possibility*.
7) And the last step is *ethics and morality*, which again follows the same rule of continuous recursive disambiguation by adversarial competition.
8) And we are closer to production *neuromorphic computers* – with millions to billions of neural-micro-column-level units computing patterns and relations – than we were just five years ago. Neuromorphic computers using light will end the cost and heat problem.
The reason my generation failed to achieve what the LLMs have achieved is that, like everyone else, we wanted to start with embodiment and work up, not realizing that this was the most inefficient means of producing representation.
Now, I work on the ethics, and it's certainly possible – frankly, we're ready to train a model.
It will require AGENTS to perform these functions. But that's all. We may be far off – largely because of the cost of energy – but we are not far off in understanding the work that must be done.
Cheers
Curt Doolittle
The Natural Law Institute