Category: AI, Computation, and Technology

  • (NLI/Runcible) Explaining our AI a bit

    (NLI/Runcible)
    Explaining our AI a bit:
    Courts don’t use numbers (cardinal measures) for decidability – only for restitution and punishment. They use pass-fail criteria or a hierarchy of pass-fail criteria. While we, in Runcible, may output something numerical, everything is indexed with words: by natural indexing. Any numbers we use are nothing but representations of the approximate delta between them, for ease of understanding.


    Source date (UTC): 2025-10-30 21:23:45 UTC

    Original post: https://twitter.com/i/web/status/1984008345704133018

  • ALIGNMENT WITHOUT A PRIOR DEFINITION OF TRUTH ONLY PRODUCES AGREEMENT, NOT CORRECTNESS.

    ALIGNMENT WITHOUT A PRIOR DEFINITION OF TRUTH ONLY PRODUCES AGREEMENT, NOT CORRECTNESS.

    1. Why current practice conflates truth and alignment

    Training signal: Most models learn from human preference data. The model is rewarded when humans like the answer, not when the answer corresponds to reality.

    Objective function: Reinforcement-learning fine-tuning minimizes disagreement with raters. That measures social alignment (politeness, tone, consensus) rather than epistemic alignment (accurate mapping to the world).

    Evaluation: Benchmarks such as multiple-choice accuracy or human-evaluation surveys treat “close enough” as success. There is no ground-truth audit trail or falsification step.

    Cultural bias: Most institutions currently regard “safe and pleasant output” as a higher-value product than “provably true output that may be uncomfortable.”

    So alignment, in practice, has come to mean “avoid conflict and offense while sounding credible.”

    2. What it means to optimise for truth first

    If you separate the goals:

    – Truth is a world-to-model mapping.
    – Alignment is a model-to-human mapping.
    – You can only align safely after you know the model’s map is true.
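
    The ordering above can be shown in a minimal sketch. This is our illustration, not Runcible's implementation; every name here (`truth_check`, `align_for_audience`, `respond`, the dictionary-as-world-reference) is hypothetical:

    ```python
    # Illustrative sketch: the world-to-model map is checked before the
    # model-to-human (presentation) step is allowed to run.

    def truth_check(claim: str, world_reference: dict) -> bool:
        """World-to-model mapping: does the claim match external reference data?"""
        return world_reference.get(claim, False)

    def align_for_audience(claim: str, audience: str) -> str:
        """Model-to-human mapping: adapt delivery, never content."""
        return f"[{audience}] {claim}"

    def respond(claim: str, world_reference: dict, audience: str) -> str:
        # Align only after the model's map is known to be true.
        if not truth_check(claim, world_reference):
            raise ValueError("claim not verified; alignment must not run")
        return align_for_audience(claim, audience)
    ```

    The point of the sketch is only the call order: alignment is unreachable for a claim the truth check has not passed.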

    3. How to do it operationally
    Truth layer first
    – Define testable protocols for each domain (physics, biology, economics, law).
    – Evaluate outputs against these external references automatically.
    Alignment layer second
    – Take only verified-true outputs as training material for alignment.
    – Optimise style, tone, or prioritisation without touching the truth constraint.
    Audit trail
    – Every claim carries metadata: sources, falsification status, revision history.
    – Alignment never overrides a falsified item; it only moderates its presentation.

    Governance
    – Separate “truth review boards” (scientific verification) from “alignment boards” (ethical and cultural oversight).
    The latter cannot alter the former’s records, only decide how they’re displayed or used.
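
    The audit trail and the governance separation can be sketched as a record type whose truth fields are written only by the truth layer, while alignment edits are confined to presentation. This is a hypothetical illustration; the field and function names are ours, not the system's:

    ```python
    # Hypothetical sketch: every claim carries provenance metadata; the
    # alignment layer may moderate presentation but not the truth record.
    from dataclasses import dataclass, field

    @dataclass
    class ClaimRecord:
        text: str
        sources: list                  # provenance: where the claim came from
        falsification_status: str      # written by the truth layer only
        revision_history: list = field(default_factory=list)
        presentation: str = ""         # the only field alignment may edit

    def alignment_edit(record: ClaimRecord, new_presentation: str) -> ClaimRecord:
        """Moderate how a claim is displayed; log the change, leave truth fields alone."""
        record.revision_history.append(("presentation", record.presentation))
        record.presentation = new_presentation
        return record
    ```

    In this reading, a "truth review board" is whatever process sets `falsification_status`, and an "alignment board" only ever calls `alignment_edit`.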

    4. Practical effect
    Doing this converts alignment from ideological tuning into policy wrapping around a verified epistemic core.

    The system becomes “truth-first, alignment-second”:
    – If the truth layer says a statement is false → it cannot be used for alignment.
    – If it’s undecidable → flag it, don’t optimise on it.
    – If it’s true → alignment may adapt its delivery for audience safety.
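
    The three rules above amount to a small routing function. A minimal sketch, with illustrative labels of our own choosing:

    ```python
    # Truth-first gate: route a claim by its truth-layer verdict before any
    # alignment optimisation sees it.

    def gate_for_alignment(status: str) -> str:
        if status == "false":
            return "exclude"          # cannot be used for alignment
        if status == "undecidable":
            return "flag"             # flag it; don't optimise on it
        if status == "true":
            return "adapt-delivery"   # alignment may adapt delivery for the audience
        raise ValueError(f"unknown truth status: {status!r}")
    ```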

    5. In summary
    Current AI development often treats truth as a subset of alignment (“true enough for people to accept”).

    Our approach reverses that: alignment must be a subset of truth (“acceptable ways to deliver what is true”).

    That inversion is what allows reasoning to stay trustworthy.


    Source date (UTC): 2025-10-21 19:03:19 UTC

    Original post: https://twitter.com/i/web/status/1980711515369177177

  • THE DEATH THREAT TO MICROSOFT, GOOGLE AND APPLE

    THE DEATH THREAT TO MICROSOFT, GOOGLE AND APPLE
    Look, back in ’04 I understood the future of the computer interface, and by 2012 set out to produce it.

    REASONING
    1) The browser is superior as an operating system to the traditional operating systems. (Google’s failure to take advantage of it.)
    2) The file-system-centric operating system is inferior to a task- or process-based system that contains files where necessary – this lets context lead data rather than the other way around. (Microsoft’s failure.)
    3) The interface that functions as a map and store of programs, projects, processes, tasks, and contexts – so that every work product exists in a context – is superior in organization and utility for an AI as well as for a human. (Microsoft’s, Google’s, and Apple’s failure.)
    4) The AI-first ‘shell’ or ‘user interface’, especially when trained on (given rules for) following process and policy, and innovating where necessary, is superior to human memory, retention, discipline, and innovation, given the normal distribution of users.
    5) The AI-first capacity to assimilate hundreds of causal dimensions, against the human capacity for one to five, is superior to human ability.
    6) The AI-first capacity to evaluate, deduce, infer, predict, and advise across large scales of data organized as such is superior to human ability.

    RESULT
    We designed Oversing and Runcible for this purpose. But we are seeing OpenAI follow the same incremental reasoning. They were far behind us in that understanding, but because of their success with LLMs they have generated the capital necessary to make it happen.
    This is a death sentence for every other operating system, user interface, and application.

    OUR CURRENT THINKING
    We can solve the two blocking problems for LLMs to develop into AGI/SI.
    1) Episodic memory as index and associative network.
    2) Constraint and closure (truth, reciprocity, possibility, historical evidence) as means of decidability and continuous recursive improvement.
    And the economic:
    3) Incremental (recursive) auto-association and prediction. (which is a cost problem)
    The remaining problem will haunt us:
    4) Neuromorphic computing is necessary to collapse costs. The current state of research is promising but underfunded.
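
    One way to picture item 1, "episodic memory as index and associative network", is a store of episodes retrieved by associative overlap with a cue. This is our own toy construction for illustration, not the Oversing/Runcible design:

    ```python
    # Toy episodic memory: episodes are stored once, indexed by their terms
    # (the index), and recall follows shared-term overlap (the associative
    # network).
    from collections import defaultdict

    class EpisodicIndex:
        def __init__(self):
            self.episodes = []               # episodic store
            self.index = defaultdict(set)    # term -> episode ids

        def record(self, text: str) -> int:
            eid = len(self.episodes)
            self.episodes.append(text)
            for term in set(text.lower().split()):
                self.index[term].add(eid)    # associative links via shared terms
            return eid

        def recall(self, cue: str) -> list:
            """Return episodes ranked by associative overlap with the cue."""
            scores = defaultdict(int)
            for term in set(cue.lower().split()):
                for eid in self.index.get(term, ()):
                    scores[eid] += 1
            ranked = sorted(scores, key=lambda e: -scores[e])
            return [self.episodes[e] for e in ranked]
    ```

    The cost problems named above (compression, incremental auto-association) are exactly what this toy ignores: a real system cannot afford to store and index every episode verbatim.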


    Source date (UTC): 2025-10-21 18:54:04 UTC

    Original post: https://twitter.com/i/web/status/1980709185978548548

  • THE HAPPY ACCIDENT: WHY WE SOLVED TRUTHFUL AND ETHICAL AI

    THE HAPPY ACCIDENT: WHY WE SOLVED TRUTHFUL AND ETHICAL AI…


    Source date (UTC): 2025-10-21 18:44:47 UTC

    Original post: https://twitter.com/i/web/status/1980706851953234400

  • WHY HASN’T THE AI FIELD DISCOVERED OUR SOLUTION?

    WHY HASN’T THE AI FIELD DISCOVERED OUR SOLUTION?
    (imo: conflating the answer with alignment, instead of deriving alignment from the truth.)

    Why the Field Hasn’t Discovered It
    Briefly:
    – Objective mismatch: most researchers optimize for fluency and safety, not falsifiability.
    – Epistemic fragmentation: few combine physics, logic, and jurisprudence into one causal grammar.
    – Institutional incentives: current benchmarks and funding reward novelty, not closure or accountability.
    – Cognitive bias: humans are narrative animals; operational reasoning feels “cold” and is culturally under-selected.

    More…
    Why most of the field hasn’t done this yet

    Different objective functions.
    – Mainstream systems are trained to maximise plausibility and user satisfaction, not falsifiable correctness.

    Fragmented disciplines.
    – Logic, physics, psychology, and jurisprudence live in separate silos. Few teams attempt to unify them under one causal grammar.

    Incentive structure.
    – Academic and commercial metrics reward novelty, fluency, or engagement—not truth-liability or operational precision.

    Tooling inertia.
    – Evaluation pipelines (benchmarks, loss functions) measure text similarity or preference, not closure or decidability.

    Cognitive and cultural bias.
    – Humans find narrative explanation more comfortable than constraint reasoning. Building institutions around constraint feels bureaucratic and “cold.”

    Cost of accountability.
    – A system that keeps full provenance and liability increases organizational risk; most labs are not ready for that level of auditability.

    In short, most current AI research optimizes for speech; what we’re proposing optimizes for law.
    The former produces correlation and persuasion; the latter produces computable, accountable reasoning.
    Different objective, different architecture.


    Source date (UTC): 2025-10-21 18:08:47 UTC

    Original post: https://twitter.com/i/web/status/1980697789945508248

  • (NLI/Runcible) Interesting that we are at the point where we are writing books for AIs

    (NLI/Runcible)
    Interesting that we are at the point where we are writing books for AIs, under the assumption that most people will learn from AIs translating any given knowledge into the format most accessible to the individual.

    We’ve consciously targeted AIs as the ‘reader’ in some sense: first, because we need to train them, and second, because we assume anything this complicated will need to be taught by AIs that tutor the individual on his or her terms.


    Source date (UTC): 2025-10-21 17:05:37 UTC

    Original post: https://twitter.com/i/web/status/1980681893399130582

  • ie: closure solves the correlation problem. And we have solved closure.

    ie: closure solves the correlation problem. And we have solved closure. It was just very, very hard.


    Source date (UTC): 2025-10-18 19:22:53 UTC

    Original post: https://twitter.com/i/web/status/1979629275725865432

  • Hmm… I don’t necessarily agree.

    Hmm… I don’t necessarily agree. If instead we think of LLMs as solving the interface and language problem (wayfinding), we can solve the remaining world-model, episodic-memory, recursion, and closure (true, ethical, possible) problems. As far as I know, the remaining serious problem is just the economics of it all. The industry is so driven to cut hardware and compute costs that we might run out of runway before we can implement the solutions to those remaining architectural problems. I don’t want to see that happen, because I’ve already lived through a couple of AI winters, so to speak.


    Source date (UTC): 2025-10-18 19:22:20 UTC

    Original post: https://twitter.com/i/web/status/1979629135984210015

  • John; Remaining blockers are (a) episodic memory and (b) closure

    John;
    Remaining blockers are (a) episodic memory – indexing and compression – and (b) closure – or, more correctly, truth, ethics, possibility, and decidability.

    IMO we know how to solve both problems. Memory is just expensive. My organization’s work on closure is very complicated, but relatively easy to implement as a governance layer.

    Depending on what you want to accomplish these two blocking factors are what’s holding us back. Everything else is quite literally the economics of the problem of compute. And AFAIK that will only be solved by neuromorphic chips.


    Source date (UTC): 2025-10-18 18:31:01 UTC

    Original post: https://twitter.com/i/web/status/1979616220631744845

  • Chris. Excellent work. I think you’ve created proper categories and measures.

    Chris.

    Excellent work. I think you’ve created proper categories and measures. Well done.

    A thought that might take you further.

    I think you touch on the fundamental problem but not the solution to it, which is the overinvestment in the presumption that mathematics and programming tell us much about intelligence – they don’t. They tell us about the permutability of small grammars: paradigm, dimensions, vocabulary (references to state – nouns), operations (references to change – verbs), logic (tests of consistency in dimensions humanly testable), and syntax.

    They hold this focus because of the ease of closure by internal means in these domains – the fallacy of the importance of ease of testing consistency and closure in ‘simple’ fields, somewhat analogous to the Ludic Fallacy in statistics. In economics we are terribly aware of these limits and fallacies, and in law we ignore them entirely because of a presumed near-impossibility of closure.

    This is why the LLM producers, like their progenitors, are stuck in the “correlation trap”.

    So the only way out of that is to understand how to achieve closure by external rather than internal means. And that is a far harder problem.

    (Hence why I and my organization worked on closure in high dimensional spaces rather than in math and programming.)

    If we solved closure (we have), then your time frame would be rapidly accelerated, because LLMs would gradually converge on truth, ethics, and possibility, rather than on correlation without convergence to anything other than normativity.


    Source date (UTC): 2025-10-18 18:12:01 UTC

    Original post: https://twitter.com/i/web/status/1979611442002407739