Man is the measure of all things perceivable by man. The question is: can that which is perceivable be reduced to language, when even humans struggle with it? Yet what is available to auto-association in humans is far beyond that demonstrated by the LLMs.
Source date (UTC): 2025-02-11 06:43:03 UTC
Original post: https://twitter.com/i/web/status/1889203479404794061
Reply addressees: @SCTempo @dwarkesh_sp
Replying to: https://twitter.com/i/web/status/1889199816489783305
IN REPLY TO:
Unknown author
THE TRANSFORMER LIMITATION IN NEUROSCIENTIFIC PROSE:
Correct. Or, stated in neuroscience terms (apologies if this is too dense to easily interpret): the prompt (language) invokes a set of relations (the text equivalent of episodic memories), but the model's network (auto-associative memory) of referents is of lower resolution than that of humans (facets, objects, spaces, places, locations, actors, generalizations, sequences, abstractions, causal relations, valences). It is limited to the referents present in the language of the prompt (the word-world model), not the human intuitionistic model (the sense-perception-embodiment world model), in which abstractions (first principles, logical associations) drawn from the entire corpus of extant, yet unstated, or unknown abstractions (causal relations, valences) and first principles (logical relations in the sense-perception world model) are associated at every level, from neural microcolumns to regions to networks to a continuous stream of network adaptations.
As such, the world model of language (the word-world model) is one of low precision: it lacks the embodied, spatio-temporal, and operational models (precision) necessary for pattern identification (logical association) of that which is as yet UNSTATED in language in sufficient density to cause association with the model (word-world model) produced in the LLM by the prompt.
I work on this issue, and this is why the prompt must include the logical relation you're asking the LLM to consider: it cannot make that connection alone.
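A minimal sketch of what that means in practice, assuming a hypothetical llm() completion call (a stand-in, not any specific API): the second prompt states the causal relation explicitly rather than trusting the model to retrieve it from its word-world model.

```python
# Hedged illustration: stating the logical relation in the prompt
# versus leaving it implicit. `llm` is a hypothetical stand-in for
# any text-completion call; it is not a real library function.

def llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for any completion API")

# Implicit: the model must locate the relation on its own, which
# (per the argument above) its word-world model often cannot do.
implicit_prompt = "Why do coastal cities tend to have milder winters?"

# Explicit: the causal relation the model should reason over is
# supplied directly, so the prompt itself carries the association.
explicit_prompt = (
    "Given that water has a higher heat capacity than land, and that "
    "air near a large body of water is thermally moderated by it, "
    "explain why coastal cities tend to have milder winters."
)

# answer = llm(explicit_prompt)
```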
Now, on one hand I see this as a scaling problem: one of the necessity of embodied, spatio-temporal, and operational abstractions in the model, and of attention that is recursive (wayfinding) in order to cover the field of associations that the hierarchical temporal memory of the brain covers so easily.
On the other hand, whether this problem is solvable within the LLM paradigm by more of the emergence we've seen of late is hard for me to predict. In the meantime we are left with prompts and traditional pseudocode or software (chain of thought) to control what the model cannot control on its own, as it's otherwise still limited to the equivalent of a synthesizing search engine.
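One hedged reading of "software to control the chain of thought": an external loop that decomposes the question into steps and carries each intermediate result forward, so the scaffolding, not the model, performs the wayfinding. Again, llm() is a hypothetical completion call, and the fixed step list is an assumption of this sketch.

```python
# Sketch of external chain-of-thought control: the surrounding
# software decomposes the task and accumulates intermediate answers,
# rather than expecting the model to find the path on its own.
# `llm` is a hypothetical stand-in for any completion API.

def llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for any completion API")

def controlled_chain(question: str, steps: list[str]) -> str:
    """Walk a fixed sequence of reasoning steps, accumulating context."""
    context = f"Question: {question}"
    for step in steps:
        # Each step names the logical relation to consider, because
        # (per the post) the model cannot make that connection alone.
        prompt = f"{context}\nStep: {step}\nAnswer:"
        context = prompt + " " + llm(prompt)
    return llm(context + "\nFinal answer:")

# Usage: the reasoning path is supplied by the caller, not discovered
# by the model.
# result = controlled_chain(
#     "Will the bridge hold a 40-ton truck?",
#     ["State the bridge's rated load capacity.",
#      "Compare the truck's weight to that capacity.",
#      "State the safety margin and the conclusion."],
# )
```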
Cheers
Curt Doolittle
NLI