Speaking as someone with long experience in both fields, who survived the AI winters: the LLM community is quite short on an operational understanding of the brain, because, believe it or not, it's quite simple, and the steps to create AGI are easily explicable.
LLMs are a brute-force method of backwardly adapting, top-down, what every previous generation of AI researchers tried to produce bottom-up.
The availability of such vast data on the internet, plus the invention of video cards and now custom AI boards and processors, has achieved that top-down means of evolving AGI without the predicted need for neuromorphic hardware: cheaper, faster, more adaptive computing that corresponds to the structure of the brain, with limited local memory spread across many tiny processors imitating neural microcolumns, each competing to produce coherence and a resulting world model, which a language model would then describe.
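The competing-microcolumn idea can be illustrated with a toy sketch. This is not any real neuromorphic system; every class and function name here is hypothetical, and it assumes a crude encoding where each unit holds only a small local weight vector and "coherence" is settled by summed votes across units:

```python
import random

class MicroColumn:
    """Toy unit with limited local memory, standing in for a neural microcolumn."""
    def __init__(self, n_states):
        # Local memory: one weight per candidate world state (hypothetical encoding).
        self.weights = [random.random() for _ in range(n_states)]

    def vote(self, observation):
        # Each column scores how coherent each candidate state is with its memory.
        return [w * observation[i] for i, w in enumerate(self.weights)]

def compete(columns, observation):
    """Columns compete: the candidate state with the highest summed vote wins."""
    totals = [0.0] * len(observation)
    for col in columns:
        for i, v in enumerate(col.vote(observation)):
            totals[i] += v
    return max(range(len(totals)), key=lambda i: totals[i])

random.seed(0)
cols = [MicroColumn(4) for _ in range(100)]
obs = [0.1, 0.9, 0.2, 0.1]   # sensory evidence strongly favouring state 1
print(compete(cols, obs))    # → 1
```

With many columns, the winner tracks the evidence rather than any single unit's memory; a language model's role, in the scheme described above, would be to narrate the winning state, not to compute it.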
But getting that set of steps laid out in front of people, and providing sufficient explanation of cognition to do so, is impossible given the flood of information and the instant reward of constantly adapting these models through trial and error. 😉
Reply addressees: @Josh_Ebner
Source date (UTC): 2023-09-17 04:07:06 UTC
Original post: https://twitter.com/i/web/status/1703259257301344257
Replying to: https://twitter.com/i/web/status/1262433328533299200