Imprecise presumption. At present there is an inversion between the scope of information available and the number of recursive adversarial processes that self-eliminate error.
This can be overcome by a division of knowledge: more layers of attention, more parallel attempts, and more adversarial competition between them for identity consistency, correspondence, operational possibility, and even rational choice and reciprocity.
In other words, we're climbing a Pareto power curve, where each incremental decrease in error bars is qualitatively more difficult.
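A minimal sketch of what that power curve implies, using illustrative numbers of my own (the post gives no constants): if error falls as a power law in compute, then each halving of the error bar multiplies the compute required by a fixed factor, so progress gets steadily more expensive.

```python
# Hedged illustration, not the author's model: assume error = k * C**(-alpha)
# for compute budget C, constant k, and exponent alpha. Invert it to see how
# much compute a given error target demands.
def compute_for_error(target_error, k=1.0, alpha=0.5):
    """Compute budget needed to reach target_error under error = k * C**(-alpha)."""
    return (k / target_error) ** (1.0 / alpha)

# With alpha = 0.5, halving the error from 0.10 to 0.05 quadruples the compute:
c1 = compute_for_error(0.10)  # 100.0
c2 = compute_for_error(0.05)  # 400.0
print(c2 / c1)                # 4.0
```

Each further halving costs the same multiplicative factor (here 4x) again, which is one way to read "each incremental step is qualitatively more difficult."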
The 'simple version' is that LLMs solve the input and output problem, but they have quite far to go in imitating the brain's massively parallel competition at all levels: from facets to objects, to spaces, to borders, to places, to locations, to episodes, to predictions from sets of episodes, to wayfinding between the present episode and a desired episode, to adversarial competition between those routes.
What's been amazing is just how good the tools are at generalization. That's the easy part. The hard part is analysis by adversarial competition, which probably means we have to convert to neuromorphic hardware (many tiny cores) updating continuously from large collections of traditional cores by the costly updates we call 'training'.
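A toy sketch of the "many parallel attempts filtered by adversarial competition" idea, with all names and numbers hypothetical: generate many candidate answers in parallel, let independent adversarial critics veto candidates that fail a consistency check, and keep the best survivor.

```python
import random

def propose(rng, n):
    """n parallel attempts at estimating a quantity (here, a noisy guess of 10)."""
    return [10 + rng.gauss(0, 3) for _ in range(n)]

def critics(candidate):
    """Each critic tests one property; a candidate survives only if all pass.
    The property names loosely echo the post's list, purely for illustration."""
    checks = [
        candidate > 0,             # 'operational possibility': must be positive
        abs(candidate - 10) < 5,   # 'correspondence': roughly matches evidence
    ]
    return all(checks)

rng = random.Random(0)
survivors = [c for c in propose(rng, 32) if critics(c)]
best = min(survivors, key=lambda c: abs(c - 10))
```

This is serial software imitating a parallel process; the point of the neuromorphic argument is that the brain runs the proposal and the criticism concurrently, at every level, all the time.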
Cheers
CD
Reply addressees: @Danil_KV
Source date (UTC): 2024-08-17 22:54:39 UTC
Original post: https://twitter.com/i/web/status/1824942950452719616
Replying to: https://twitter.com/i/web/status/1824905597210550623