Math and programming require and depend upon internal closure (the capacity to truth-test assertions).
LLMs are statistical and have no means of closure – hence hallucinations, bad answers, and the inability to reason beyond primitive closure.
We produce the means of closure for the 'universe', so to speak, and this includes LLMs. This is why it takes the ternary logic of evolutionary computation, operational prose, the first-principles hierarchy, and the decidability criteria (protocols) to enable such closure and, with it, decidability.
Now, LLMs don't really want to be that well behaved, so it takes a bit of system prompting to make them so, and training to make it easy for them.
But it works. ;).
In other words, we teach LLMs to construct proofs. Or, more precisely, we help them discover solutions and test them – the result is a proof or its failure.
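The discover-and-test loop described above can be sketched in miniature. This is an illustrative assumption, not the author's actual protocol: the names (`Verdict`, `truth_test`, `discover`) and the toy integer-square-root search are invented stand-ins. The candidate generator plays the role of the LLM proposing solutions, and the three-valued verdict (pass / fail / undecided) echoes the ternary logic mentioned earlier.

```python
from enum import Enum

class Verdict(Enum):
    PASS = "pass"            # candidate survives the truth test: a proof
    FAIL = "fail"            # candidate is refuted: the proof fails
    UNDECIDED = "undecided"  # the test cannot settle it: no closure yet

def truth_test(candidate: int, target: int) -> Verdict:
    """Toy decidability protocol: is candidate the exact square root of target?"""
    square = candidate * candidate
    if square == target:
        return Verdict.PASS
    if square > target:
        return Verdict.FAIL   # overshot: this search line is refuted
    return Verdict.UNDECIDED  # too small to decide yet; keep searching

def discover(target: int, max_steps: int = 100) -> tuple[int, Verdict]:
    """Generate-and-test loop: propose candidates in turn, truth-test each.
    The generator here is a trivial enumerator standing in for an LLM."""
    for candidate in range(max_steps):
        verdict = truth_test(candidate, target)
        if verdict is not Verdict.UNDECIDED:
            return candidate, verdict
    return max_steps, Verdict.UNDECIDED

print(discover(49))  # reaches 7 with Verdict.PASS: the "proof" succeeds
print(discover(50))  # reaches 8 with Verdict.FAIL: refuted, 8*8 > 50
```

The point of the sketch is the division of labour: the generator need not be trusted, because closure comes from the external truth test, not from the proposer.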
Source date (UTC): 2025-09-05 18:38:01 UTC
Original post: https://twitter.com/i/web/status/1964035304962347433