However, granting that LLMs are not AIs but search mechanisms, it doesn't follow that we can't produce AIs that convert speech into world models, predictions, permutations, wayfinding, and novel sequences of operations that ARE possible and ARE creative, and which can then use language models to DESCRIBE those new possibilities.
The information isn't in the language. Language produces episodes (contexts), which we can then test against other contexts.
There is a difference between linguistic consistency (what LLMs search for) and the identity, logical consistency, operational possibility, external correspondence, and cumulative coherence of all referents that are necessary for 'reason'.