THE HAPPY ACCIDENT: WHY WE SOLVED TRUTHFUL AND ETHICAL AI…
Source date (UTC): 2025-10-21 18:44:47 UTC
Original post: https://twitter.com/i/web/status/1980706851953234400
WHY HASN’T THE AI FIELD DISCOVERED OUR SOLUTION?
(imo: the field conflates answers with alignment, instead of deriving alignment from truth.)
Why the Field Hasn’t Discovered It
Briefly:
– Objective mismatch: most researchers optimize for fluency and safety, not falsifiability.
– Epistemic fragmentation: few combine physics, logic, and jurisprudence into one causal grammar.
– Institutional incentives: current benchmarks and funding reward novelty, not closure or accountability.
– Cognitive bias: humans are narrative animals; operational reasoning feels “cold” and is culturally under-selected.
More…
Why most of the field hasn’t done this yet
Different objective functions.
– Mainstream systems are trained to maximise plausibility and user satisfaction, not falsifiable correctness.
Fragmented disciplines.
– Logic, physics, psychology, and jurisprudence live in separate silos. Few teams attempt to unify them under one causal grammar.
Incentive structure.
– Academic and commercial metrics reward novelty, fluency, or engagement—not truth-liability or operational precision.
Tooling inertia.
– Evaluation pipelines (benchmarks, loss functions) measure text similarity or preference, not closure or decidability (a toy contrast is sketched after this list).
Cognitive and cultural bias.
– Humans find narrative explanation more comfortable than constraint reasoning. Building institutions around constraint feels bureaucratic and “cold.”
Cost of accountability.
– A system that keeps full provenance and liability increases organizational risk; most labs are not ready for that level of auditability.
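To make the tooling-inertia point concrete, here is a toy contrast between the two kinds of evaluation, sketched in Python. Every name, the claim format, and the stub tests are illustrative assumptions, not a description of our system.

def similarity_score(candidate: str, reference: str) -> float:
    """What mainstream pipelines reward: token overlap, a crude
    stand-in for BLEU/ROUGE-style text similarity."""
    cand, ref = set(candidate.lower().split()), set(reference.lower().split())
    return len(cand & ref) / max(len(cand | ref), 1)

def closure_check(claims: list[dict]) -> dict:
    """What a closure-oriented pipeline asks instead: is every claim
    decidable against an explicit test, with provenance retained?"""
    results = []
    for claim in claims:
        decidable = claim.get("test") is not None
        passed = decidable and claim["test"](claim["value"])
        results.append({"claim": claim["text"], "decidable": decidable,
                        "passed": passed, "provenance": claim.get("source")})
    return {"closed": all(r["decidable"] for r in results), "results": results}

# A fluent answer can score high on similarity while containing an
# untestable claim; the closure check surfaces the difference.
claims = [
    {"text": "water boils at 100 C at sea level", "value": 100,
     "test": lambda v: v == 100, "source": "thermodynamics"},
    {"text": "this policy is obviously good", "value": None,
     "test": None, "source": None},  # no test: not falsifiable
]
print(similarity_score("water boils at 100 C", "water boils at 100 C at sea level"))
print(closure_check(claims))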
In short, most current AI research optimizes for speech; what we’re proposing optimizes for law.
The former produces correlation and persuasion; the latter produces computable, accountable reasoning.
Different objective, different architecture.
Source date (UTC): 2025-10-21 18:08:47 UTC
Original post: https://twitter.com/i/web/status/1980697789945508248
(NLI/Runcible)
Interesting that we are at the point where we are writing books for AIs, under the assumption that most people will learn from AIs translating any given knowledge into the format most accessible to the individual.
We’ve consciously targeted AIs as the ‘reader’ in some sense: first because we need to train them, and secondly because we assume anything this complicated will need to be taught by AIs that tutor the individual on his or her own terms.
Source date (UTC): 2025-10-21 17:05:37 UTC
Original post: https://twitter.com/i/web/status/1980681893399130582
ie: closure solves the correlation problem. And we have solved closure. It was just very very hard.
Source date (UTC): 2025-10-18 19:22:53 UTC
Original post: https://twitter.com/i/web/status/1979629275725865432
Hmm… I don’t necessarily agree. If instead we think of LLMs as solving the interface and language problem (wayfinding), we can solve the remaining world model, episodic memory, recursion, and closure (true, ethical, possible) problems. As far as I know, the remaining serious problem is just the economics of it all. It’s just that the industry is so driven to cut hardware and compute costs that we might run out of runway before we can implement the solutions to those remaining architectural problems. I don’t want to see that happen, because I’ve already lived through a couple of AI winters, so to speak.
Source date (UTC): 2025-10-18 19:22:20 UTC
Original post: https://twitter.com/i/web/status/1979629135984210015
John;
Remaining blockers are (a) episodic memory – indexing and compression – and (b) closure – or more correctly, truth, ethics, possibility, and decidability.
IMO we know how to solve both problems. Memory is just expensive. My organization’s work on closure is very complicated, but it is relatively easy to implement as a governance layer.
Depending on what you want to accomplish, these two blocking factors are what’s holding us back. Everything else is quite literally the economics of the problem of compute. And AFAIK that will only be solved by neuromorphic chips.
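A minimal sketch of what “closure as a governance layer” could look like, assuming a hypothetical interface. The names and the stub test below are illustrative, not our implementation; the point is only that the four tests wrap the model rather than being trained into it.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Verdict:
    dimension: str  # "truth" | "ethics" | "possibility" | "decidability"
    passed: bool
    reason: str

def govern(answer: str, tests: list[Callable[[str], Verdict]]) -> tuple[bool, list[Verdict]]:
    """Run every closure test over a candidate answer; release it only if all pass."""
    verdicts = [t(answer) for t in tests]
    return all(v.passed for v in verdicts), verdicts

# A stub decidability test, purely illustrative; real tests would be far richer.
def decidability(answer: str) -> Verdict:
    vague = any(w in answer.lower() for w in ("obviously", "everyone knows"))
    return Verdict("decidability", not vague,
                   "contains an untestable appeal" if vague else "ok")

released, log = govern("Everyone knows this is safe.", [decidability])
print(released, log)  # False: the answer is blocked, with the reason logged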
Source date (UTC): 2025-10-18 18:31:01 UTC
Original post: https://twitter.com/i/web/status/1979616220631744845
Chris.
Excellent work. I think you’ve created proper categories and measures. Well done.
A thought that might take you further.
I think you touch on the fundamental problem but not the solution to it, which is the overinvestment in the presumption that mathematics and programming tell us much about intelligence – they don’t. They tell us about the permutability of small grammars (paradigm, dimensions, vocabulary (references to state – nouns), operations (references to change – verbs), logic (tests of consistency in dimensions humanly testable), and syntax).
They hold this focus because of the ease of closure by internal means in these domains – the fallacy of mistaking the ease of testing consistency and closure in ‘simple’ fields for importance, somewhat analogous to the Ludic Fallacy in statistics. In economics we are terribly aware of these limits and fallacies, and in law we ignore them entirely because of a presumed near-impossibility of closure.
This is why the LLM producers, like their progenitors, are stuck in the “correlation trap”.
So the only way out of that is to understand how to achieve closure by external rather than internal means. And that is a far harder problem.
(Hence why I and my organization worked on closure in high dimensional spaces rather than in math and programming.)
If we solved closure (we have), then your time frame would be rapidly accelerated, because LLMs would gradually converge on truth, ethics, and possibility, rather than on correlation without convergence to anything other than normativity.
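For concreteness, here is the “small grammar” tuple from above rendered as a toy data structure, with arithmetic as the canonical case of closure by internal means. The field names follow the list in the post; everything else is an illustrative assumption.

from dataclasses import dataclass
from typing import Callable

@dataclass
class SmallGrammar:
    paradigm: str               # the frame of reference
    dimensions: list[str]       # what varies
    vocabulary: list[str]       # references to state (nouns)
    operations: list[str]       # references to change (verbs)
    logic: Callable[..., bool]  # a humanly testable consistency check
    syntax: str                 # the well-formedness rule, informally

# Arithmetic: the canonical "easy closure by internal means" case.
arithmetic = SmallGrammar(
    paradigm="positional number system",
    dimensions=["magnitude"],
    vocabulary=["the numerals 0-9"],
    operations=["add", "subtract", "multiply", "divide"],
    logic=lambda a, b: a + b == b + a,  # internal test: commutativity holds
    syntax="an operator between two operands",
)
print(arithmetic.logic(2, 3))  # True: checkable without leaving the grammar

# Law and economics admit no such internal test; their closure must be
# external, checked against the world, which is the far harder problem.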
Source date (UTC): 2025-10-18 18:12:01 UTC
Original post: https://twitter.com/i/web/status/1979611442002407739
Meaning yes. If you mean qualia, no. Though we cannot yet tell if that matters.
Source date (UTC): 2025-10-18 15:34:05 UTC
Original post: https://twitter.com/i/web/status/1979571695192342718
– “A lot of the woke nonsense and AI alarmism is from the effective altruists. Anthropic represents that group.”
Source date (UTC): 2025-10-18 03:36:58 UTC
Original post: https://twitter.com/i/web/status/1979391226899239252
Command Syntax
Type: “Analyze:” <paste text here>
And submit the query.
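For example (a hypothetical input, just to show the shape of the command):
Analyze: “Raising the minimum wage always reduces unemployment.”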
There are more commands available if you ask it for them.
Here:
Source date (UTC): 2025-10-16 19:55:19 UTC
Original post: https://x.com/i/articles/1978912661787283687