For the audience: this is exactly what your brain does, making adversarial predictions by auto-association from the episode that’s the subject of your attention.
Now, the difference is that we can also deconstruct by wayfinding, at least within the limits of our working memory. And we can use language (including symbols and numbers), and then writing, to extend our working memory.
So the problem that LLMs face is that they are using adversarial prediction but not wayfinding as falsification. But that will emerge as cause and effect is incrementally added to the model.
Eventually it will be able to reason rather than just predict.
Reply addressees: @Yampeleg
Source date (UTC): 2023-09-11 18:06:59 UTC
Original post: https://twitter.com/i/web/status/1701296291924324352
Replying to: https://twitter.com/i/web/status/1701290397148856349