IMO:
1) They are exceptional synthetic search engines. In other words: pattern matchers.
2) They imitate the human language facility. In that sense they are amazing. But they don’t imitate the neocortical spatial faculty (this is LeCun’s argument), or the hippocampal episodic memory formation, or the frontal-cortex recursive wayfinding (testing an idea). They are bad reasoners because reasoning requires reduction to episodic steps, and wayfinding by recursion. And recursion is expensive.
3) Our company’s governance layer (a wrapper around the LLM) can, however, determine whether a claim or assertion is true/false, ethical/not, possible/not, warrantable/not, liability-producing/not. But it does so by breaking down the problem and recursively testing each step.
4) IMO LeCun is only half right: we do need a world model, but he is wrong in that linguistic reduction isn’t a necessary property. Rather, you need an LLM (semantic store) for hypothesis generation (auto-association), the equivalent of a router (prefrontal cortex) to manage the ‘reasoning’ process (wayfinding) and to maintain states (episodes), a spatial model to test operational possibility, and the LLM’s linguistic model as the input/output protocol.
This means we’re just early, and asking a great deal of one revolutionary insight: the attention model that mimics the human language facility, and the N-dimensional semantic manifold as memory.
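The division of labor in point 4 can be sketched as a loop: a generator stands in for the LLM-as-semantic-store, a possibility check stands in for the spatial/world model, and a router maintains the episode and wayfinds by recursion with backtracking. All names and stubs below are illustrative assumptions, not any shipped system.

```python
def generate_hypotheses(goal):
    # Stands in for the LLM as semantic store (auto-association):
    # propose candidate next steps toward a goal. Hypothetical stub.
    return [f"{goal} via step {i}" for i in range(3)]

def operationally_possible(hypothesis, world):
    # Stands in for the spatial/world model: is the step
    # operationally possible? Hypothetical stub (lookup table).
    return world.get(hypothesis, False)

def router(goal, world, max_depth=3):
    """Prefrontal-cortex-style controller: maintains the episode
    (the trace of steps tried) and wayfinds by recursion, i.e.
    generate hypotheses, test each against the world model, and
    backtrack when a branch fails."""
    episode = []

    def search(g, depth):
        if depth == 0:
            return False
        for h in generate_hypotheses(g):
            episode.append(h)  # episodic state: remember the attempt
            if operationally_possible(h, world) or search(h, depth - 1):
                return True
            episode.pop()      # backtrack: this branch is a dead end
        return False

    return search(goal, max_depth), episode
```

Note how the cost argument falls out of the structure: every failed branch is generated, tested, and unwound, which is why recursion is expensive relative to a single forward pass.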
CD
Runcible
NLI
Source date (UTC): 2026-02-10 19:35:56 UTC
Original post: https://twitter.com/i/web/status/2021307165441732671