Author: Curt Doolittle

  • ALIGNMENT WITHOUT A PRIOR DEFINITION OF TRUTH ONLY PRODUCES AGREEMENT, NOT CORRE

    ALIGNMENT WITHOUT A PRIOR DEFINITION OF TRUTH ONLY PRODUCES AGREEMENT, NOT CORRECTNESS.

    1. Why current practice conflates truth and alignment

    Training signal: Most models learn from human preference data. The model is rewarded when humans like the answer, not when the answer corresponds to reality.

    Objective function: Reinforcement-learning fine-tuning minimizes disagreement with raters. That measures social alignment (politeness, tone, consensus) rather than epistemic alignment (accurate mapping to the world).

    Evaluation: Benchmarks such as multiple-choice accuracy or human-evaluation surveys treat “close enough” as success. There is no ground-truth audit trail or falsification step.

    Cultural bias: Most institutions currently regard “safe and pleasant output” as a higher-value product than “provably true output that may be uncomfortable.”

    So alignment, in practice, has come to mean “avoid conflict and offense while sounding credible.”
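    The difference between the two signals can be made concrete. A minimal Python sketch (both reward functions are hypothetical illustrations, not any lab's actual objective): one scores agreement with human raters, the other scores correspondence with an external reference.

    ```python
    # Hypothetical illustration: two reward signals for the same answer.
    def preference_reward(answer: str, rater_votes: list[bool]) -> float:
        """Social alignment: fraction of human raters who liked the answer."""
        return sum(rater_votes) / len(rater_votes)

    def correspondence_reward(answer: str, reference: str) -> float:
        """Epistemic alignment: does the answer match an external reference?"""
        return 1.0 if answer.strip().lower() == reference.strip().lower() else 0.0

    # A pleasant but wrong answer can score high on preference and zero on truth.
    answer = "The Great Wall is visible from the Moon."
    print(preference_reward(answer, [True, True, True, False]))  # 0.75
    print(correspondence_reward(answer, "The Great Wall is not visible from the Moon."))  # 0.0
    ```

    An RLHF loop optimises only the first quantity; nothing in it penalises a confident falsehood that raters happen to like.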

    2. What it means to optimise for truth first

    If you separate the goals:

    – Truth is a world-to-model mapping.
    – Alignment is a model-to-human mapping.
    – You can only align safely after you know the model’s map is true.

    3. How to do it operationally
    Truth layer first
    – Define testable protocols for each domain (physics, biology, economics, law).
    – Evaluate outputs against these external references automatically.
    Alignment layer second
    – Take only verified-true outputs as training material for alignment.
    – Optimise style, tone, or prioritisation without touching the truth constraint.
    Audit trail
    – Every claim carries metadata: sources, falsification status, revision history.
    – Alignment never overrides a falsified item; it only moderates its presentation.

    Governance
    – Separate “truth review boards” (scientific verification) from “alignment boards” (ethical and cultural oversight).
    – The latter cannot alter the former’s records, only decide how they’re displayed or used.
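    The audit-trail and governance separation above can be sketched in Python. All names here (`Claim`, `set_status`, `align_presentation`) are hypothetical illustrations: the truth layer owns the falsification status and revision log, and the alignment layer may restyle presentation but never the verdict.

    ```python
    from dataclasses import dataclass, field

    # Hypothetical sketch: every claim carries metadata — sources,
    # falsification status, and revision history.
    @dataclass
    class Claim:
        text: str
        sources: list[str]
        status: str = "undecided"          # "true" | "false" | "undecided"
        revisions: list[str] = field(default_factory=list)

        def set_status(self, new_status: str, note: str) -> None:
            """Only the truth layer calls this; every change is logged."""
            self.revisions.append(f"{self.status} -> {new_status}: {note}")
            self.status = new_status

    def align_presentation(claim: Claim, tone: str) -> str:
        """Alignment layer: may adjust tone, never override a falsification."""
        if claim.status == "false":
            return f"[withheld: falsified] ({tone})"
        return f"{claim.text} ({tone})"
    ```

    Here governance is enforced by interface: the alignment function has no code path that mutates the claim's status or its revision history.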

    4. Practical effect
    Doing this converts alignment from ideological tuning into policy wrapping around a verified epistemic core.

    The system becomes “truth-first, alignment-second”:
    – If the truth layer says a statement is false → it cannot be used for alignment.
    – If it’s undecidable → flag it, don’t optimise on it.
    – If it’s true → alignment may adapt its delivery for audience safety.
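    The three rules above reduce to a filter over claims whose status has already been assigned by a truth layer. A minimal Python sketch (the function and status labels are hypothetical):

    ```python
    def select_for_alignment(claims):
        """Partition (text, status) pairs per the truth-first policy:
        false -> excluded, undecidable -> flagged, true -> usable."""
        usable, flagged = [], []
        for text, status in claims:
            if status == "false":
                continue                  # never optimise on falsified items
            elif status == "undecidable":
                flagged.append(text)      # flag it, don't optimise on it
            else:
                usable.append(text)       # alignment may adapt its delivery
        return usable, flagged

    usable, flagged = select_for_alignment([
        ("water boils at 100 C at sea level", "true"),
        ("the market will crash next week", "undecidable"),
        ("the earth is flat", "false"),
    ])
    ```

    Only the `usable` list ever reaches alignment training; falsified items are dropped before style or tone is optimised.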

    5. In summary
    Current AI development often treats truth as a subset of alignment (“true enough for people to accept”).

    Our approach reverses that: alignment must be a subset of truth (“acceptable ways to deliver what is true”).

    That inversion is what allows reasoning to stay trustworthy.


    Source date (UTC): 2025-10-21 19:03:19 UTC

    Original post: https://twitter.com/i/web/status/1980711515369177177

  • THE DEATH THREAT TO MICROSOFT, GOOGLE AND APPLE Look, back in ’04 I understood t

    THE DEATH THREAT TO MICROSOFT, GOOGLE AND APPLE
    Look, back in ’04 I understood the future of the computer interface, and by 2012 set out to produce it.

    REASONING
    1) The browser is a superior operating system compared with traditional operating systems. (Google’s failure to take advantage of it.)
    2) The file-system-centric operating system is inferior to a task- or process-based system that contains files where necessary – this allows context to lead data, not the other way around. (Microsoft’s failure)
    3) The interface that functions as a map and store of programs, projects, processes, tasks, and contexts (so that every work product exists in a context) is superior in organization and utility, for an AI as well as for a human. (Microsoft’s, Google’s, and Apple’s failure)
    4) The AI-first ‘shell’ or user interface, especially when trained on (given rules for) following process and policy, and on innovating where necessary, is superior to human memory retention, discipline, and innovation, given the normal distribution of users.
    5) The AI-first capacity to assimilate hundreds of causal dimensions, compared with the human capacity for one to five, is superior to human abilities.
    6) The AI-first capacity to evaluate, deduce, infer, predict, and advise across large scales of data organized this way is superior to human ability.
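    Point 2’s claim that “context leads data” can be sketched as a data model, with all names hypothetical: files attach to tasks, tasks to contexts, so lookup flows through the context rather than through a directory tree.

    ```python
    from dataclasses import dataclass, field

    # Hypothetical sketch of a task-centric (rather than file-centric) model:
    # the context owns the files, so context leads data.
    @dataclass
    class FileRef:
        path: str

    @dataclass
    class Task:
        name: str
        files: list[FileRef] = field(default_factory=list)  # files live inside tasks

    @dataclass
    class Context:
        project: str
        tasks: list[Task] = field(default_factory=list)

        def find(self, fragment: str):
            """Locate a file through its task context, not a directory tree."""
            return [(t.name, f.path) for t in self.tasks for f in t.files
                    if fragment in f.path]
    ```

    In a file-centric system the query is “where is this file?”; here it is “which task produced it?”, which is the ordering the post argues both AIs and humans need.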

    RESULT
    We designed Oversing and Runcible for this purpose. But we are seeing OpenAI follow the same incremental reasoning. They were far behind us in that understanding, but their success with LLMs has generated the capital necessary to make it happen.
    This is a death sentence for every other operating system, user interface, and application.

    OUR CURRENT THINKING
    We can solve the two blocking problems for LLMs to develop into AGI/SI.
    1) Episodic memory as index and associative network.
    2) Constraint and closure (truth, reciprocity, possibility, historical evidence) as means of decidability and continuous recursive improvement.
    And the economic:
    3) Incremental (recursive) auto-association and prediction. (which is a cost problem)
    The remaining problem will haunt us:
    4) Neuromorphic computing is necessary to collapse costs. The current state of research is promising but underfunded.
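    Point 1 can be sketched as a toy structure, purely illustrative of “index and associative network”: episodes are stored in order (the index), and concepts that co-occur in an episode become mutually associated, so a single cue recalls related episodes.

    ```python
    from collections import defaultdict
    from itertools import combinations

    # Hypothetical sketch of episodic memory as index + associative network.
    class EpisodicMemory:
        def __init__(self):
            self.episodes = []             # index: episodes in temporal order
            self.assoc = defaultdict(set)  # associative network over concepts

        def record(self, concepts: set[str]) -> int:
            """Store an episode and associate every pair of its concepts."""
            idx = len(self.episodes)
            self.episodes.append(concepts)
            for a, b in combinations(sorted(concepts), 2):
                self.assoc[a].add(b)
                self.assoc[b].add(a)
            return idx

        def recall(self, cue: str) -> list[int]:
            """Indices of episodes containing the cue or any of its associates."""
            related = {cue} | self.assoc[cue]
            return [i for i, ep in enumerate(self.episodes) if ep & related]
    ```

    The real problems named in the post (cost, recursion, constraint) are not solved here; the sketch only fixes the shape of the data structure being argued for.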


    Source date (UTC): 2025-10-21 18:54:04 UTC

    Original post: https://twitter.com/i/web/status/1980709185978548548

  • We’ve done the work. At present it’s eight volumes of rigorous work. We will beg

    We’ve done the work. At present it’s eight volumes of rigorous work. We will begin publishing them incrementally this year.

    If you have particular ideas, then we would love to hear them. However, the majority of conservative thought presumes an equality of instinct, intuition, bias, ability, and interest that does not exist, and as such policies cannot be stated under the pretense that people will behave as desired.
    Instead we must govern with the human beings who exist and will exist, who are not blank slates, and who are relatively immutable, especially without education and continuous social and institutional enforcement.
    And as such those legal and organizational institutions we produce must maintain incentives independently of the biases of the people who occupy them.


    Source date (UTC): 2025-10-21 18:52:42 UTC

    Original post: https://twitter.com/i/web/status/1980708842905432467

  • THE HAPPY ACCIDENT: WHY WE SOLVED TRUTHFUL AND ETHICAL AI

    THE HAPPY ACCIDENT: WHY WE SOLVED TRUTHFUL AND ETHICAL AI…


    Source date (UTC): 2025-10-21 18:44:47 UTC

    Original post: https://twitter.com/i/web/status/1980706851953234400

  • WHY HASN’T THE AI FIELD DISCOVERED OUR SOLUTION? (imo: conflating answer with al

    WHY HASN’T THE AI FIELD DISCOVERED OUR SOLUTION?
    (imo: conflating answers with alignment instead of deriving alignment from truth.)

    Why the Field Hasn’t Discovered It
    Briefly:
    – Objective mismatch: most researchers optimize for fluency and safety, not falsifiability.
    – Epistemic fragmentation: few combine physics, logic, and jurisprudence into one causal grammar.
    – Institutional incentives: current benchmarks and funding reward novelty, not closure or accountability.
    – Cognitive bias: humans are narrative animals; operational reasoning feels “cold” and is culturally under-selected.

    More…
    Why most of the field hasn’t done this yet

    Different objective functions.
    – Mainstream systems are trained to maximise plausibility and user satisfaction, not falsifiable correctness.

    Fragmented disciplines.
    – Logic, physics, psychology, and jurisprudence live in separate silos. Few teams attempt to unify them under one causal grammar.

    Incentive structure.
    – Academic and commercial metrics reward novelty, fluency, or engagement—not truth-liability or operational precision.

    Tooling inertia.
    – Evaluation pipelines (benchmarks, loss functions) measure text similarity or preference, not closure or decidability.

    Cognitive and cultural bias.
    – Humans find narrative explanation more comfortable than constraint reasoning. Building institutions around constraint feels bureaucratic and “cold.”

    Cost of accountability.
    – A system that keeps full provenance and liability increases organizational risk; most labs are not ready for that level of auditability.

    In short, most current AI research optimizes for speech; what we’re proposing optimizes for law.
    The former produces correlation and persuasion; the latter produces computable, accountable reasoning.
    Different objective, different architecture.


    Source date (UTC): 2025-10-21 18:08:47 UTC

    Original post: https://twitter.com/i/web/status/1980697789945508248

  • Untitled

    [No text content]


    Source date (UTC): 2025-10-21 17:32:11 UTC

    Original post: https://twitter.com/i/web/status/1980688578930938065

  • (NLI/Runcible) Interesting that we are at the point where we are writing books f

    (NLI/Runcible)
    Interesting that we are at the point where we are writing books for AIs, under the assumption that most people will learn from AIs translating any given knowledge into the format most accessible to the individual.

    We’ve consciously targeted AIs as the ‘reader’ in some sense, because first we need to train them, and second we assume anything this complicated will need to be taught by AIs that tutor the individual on his or her terms.


    Source date (UTC): 2025-10-21 17:05:37 UTC

    Original post: https://twitter.com/i/web/status/1980681893399130582

  • (NLI/Runcible) You know, between Martin as the hard right masculine and Ariella

    (NLI/Runcible)
    You know, between Martin as the hard right masculine and Ariella as the not-so-radical feminine left of center, I’m feeling like the moderate centrist these days. 😉

    I don’t want to compare it to having male and female children but … I can’t help myself. 😉 “Just keep them apart so we prevent infighting.” lol

    Monocultures are dangerous and competitive cultures are challenging. What the heck are parents, CEOs, prime ministers, presidents, and kings to do? 😉


    Source date (UTC): 2025-10-21 17:03:31 UTC

    Original post: https://twitter.com/i/web/status/1980681367257248134

  • the cycles can be exploited advance or restrained by anyone attempting to captur

    The cycles can be exploited, advanced, or restrained by anyone attempting to capture power or advantage. The combination of industrialization > freeing women from household labor > adding women to the workplace and polity created opportunity for the feminine strategy.


    Source date (UTC): 2025-10-21 16:28:28 UTC

    Original post: https://twitter.com/i/web/status/1980672543322521838

  • Efficient vs effective. Do we generate the same or better world model that produ

    Efficient vs effective. Do we generate the same or a better world model that produces the same or a better output, or is that a statistical fallacy?


    Source date (UTC): 2025-10-21 16:21:50 UTC

    Original post: https://twitter.com/i/web/status/1980670876707352586