Chris.
Excellent work. I think you’ve created proper categories and measures. Well done.
A thought that might take you further.
I think you touch on the fundamental problem but not its solution, which is the overinvestment in the presumption that mathematics and programming tell us much about intelligence – they don’t. They tell us about the permutability of small grammars: a paradigm, dimensions, a vocabulary (references to state – nouns), operations (references to change – verbs), logic (tests of consistency across dimensions that are humanly testable), and syntax.
They hold this focus because of the ease of closure by internal means in these domains – the fallacy of the importance of ease of testing consistency and closure in ‘simple’ fields, somewhat analogous to the Ludic Fallacy in statistics. In economics we are terribly aware of these limits and fallacies, and in law we ignore them entirely because of a presumed near impossibility of closure.
This is why the LLM producers, like their progenitors, are stuck in the “correlation trap”.
So the only way out of that is to understand how to achieve closure by external rather than internal means. And that is a far harder problem.
(Hence why I and my organization worked on closure in high dimensional spaces rather than in math and programming.)
If we solved closure (we have), then your time frame would be rapidly accelerated, because LLMs would gradually converge on truth, ethics, and possibility, rather than on correlation without convergence to anything other than normativity.
Source date (UTC): 2025-10-18 18:12:01 UTC
Original post: https://twitter.com/i/web/status/1979611442002407739