Theme: Decidability

  • ELON ( @elonmusk ) FYI: The benchmarks are focusing too much on internal closure

    ELON (@elonmusk)
    FYI: The benchmarks are focusing too much on internal closure, which is the easiest domain of computation.

    Our organization has solved the problem of external closure – and it’s a very, very hard problem that has troubled philosophers and scientists for decades, if not millennia.

    We can handle everything from truth to ethics to possibility and from economics to law to the humanities. We make human-free recursively improving AI possible.

    We’re trying to get within a degree of you so we can show you or your team.

    Cheers,
    CD
    Runcible Inc.
    http://runcible.com
    and
    The Natural Law Institute Inc.


    Source date (UTC): 2025-11-12 01:25:53 UTC

    Original post: https://twitter.com/i/web/status/1988417937297076673

  • Yes, all first principles at all scales conform to the ternary logic – its how w

    Yes, all first principles at all scales conform to the ternary logic – it’s how we know we found a first principle.
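    The post does not say which ternary logic is meant; one common formalization is Kleene’s strong three-valued logic (true / false / undecided). A minimal sketch under that assumption, purely for illustration:

```python
from enum import Enum

class T(Enum):
    """Three truth values: one common reading of 'ternary logic' (Kleene K3)."""
    FALSE = 0
    UNDECIDED = 1
    TRUE = 2

def t_and(a: T, b: T) -> T:
    """Kleene strong conjunction: the minimum truth value."""
    return T(min(a.value, b.value))

def t_or(a: T, b: T) -> T:
    """Kleene strong disjunction: the maximum truth value."""
    return T(max(a.value, b.value))

def t_not(a: T) -> T:
    """Negation: swaps TRUE and FALSE, leaves UNDECIDED fixed."""
    return T(2 - a.value)
```

    Under this reading, any proposition whose truth value is UNDECIDED propagates that status through conjunction and negation rather than collapsing to true or false.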


    Source date (UTC): 2025-11-09 05:28:46 UTC

    Original post: https://twitter.com/i/web/status/1987391895635632575

  • HOW RUNCIBLE PREVENTS HALLUCINATION Our method prevents hallucination because it

    HOW RUNCIBLE PREVENTS HALLUCINATION
    Our method prevents hallucination because it forbids its necessary preconditions:
    – Ambiguous language,
    – Unfalsifiable claims,
    – Discretionary interpretation,
    – Untestable metaphysical or moral propositions,
    – And unaccountable speech.
    Hallucination is not merely discouraged; it is incompatible with a system that requires decidability by construction. Hallucination cannot occur without violating Natural Law’s invariant requirements. Our work therefore does not merely reduce hallucination; it renders it epistemically and grammatically impossible.
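    As an illustration only (the names and checks below are hypothetical, not Runcible’s actual implementation), the idea of forbidding hallucination’s preconditions can be sketched as an admissibility gate that rejects any claim lacking an operational test or an accountable speaker:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Claim:
    text: str
    test: Optional[Callable[[], bool]]  # falsifying/operational test, if any
    speaker: Optional[str]              # accountable author, if any

# Toy marker list standing in for a real ambiguity check.
AMBIGUOUS_MARKERS = {"somehow", "arguably", "in some sense"}

def admissible(claim: Claim) -> tuple[bool, str]:
    """Pass/fail gate: a claim is admissible only if every forbidden
    precondition of hallucination is absent."""
    if claim.test is None:
        return False, "unfalsifiable: no operational test attached"
    if claim.speaker is None:
        return False, "unaccountable: no responsible speaker"
    if any(m in claim.text.lower() for m in AMBIGUOUS_MARKERS):
        return False, "ambiguous language"
    return True, "decidable"
```

    The point of the sketch is structural: inadmissible claims never enter the system, so there is nothing downstream to hallucinate about.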


    Source date (UTC): 2025-11-02 01:10:29 UTC

    Original post: https://twitter.com/i/web/status/1984790182022037888

  • RUNCIBLE PROMISE Runcible provides the missing governance layer for AI — a unive

    RUNCIBLE PROMISE
    Runcible provides the missing governance layer for AI — a universal constraint and closure system that makes outputs decidable, lawful, and insurable.
    It ensures every AI action is testable, truthful, reciprocal, and warrantable, allowing AI to operate responsibly in high-liability markets.
    In short, we made AI governable — and therefore profitable.


    Source date (UTC): 2025-11-02 00:07:43 UTC

    Original post: https://twitter.com/i/web/status/1984774387439223150

  • Why Philosophy and Science Failed AI – and How We Solved the Crisis. The twentie

    Why Philosophy and Science Failed AI – and How We Solved the Crisis.

    The twentieth century left philosophy and science divided by incompatible logics. Each discipline specialized into its own language, methods, and measures — closing internally while losing external commensurability. Physics fractured at quantum–relativistic boundaries; mathematics fragmented after Gödel; logic split between intuitionist, formalist, and constructivist camps; computation inherited those contradictions without resolving them. The same crisis that left the foundations of physics undecidable left the foundations of reasoning itself undecidable.
    Epistemology never recovered from this “failure of philosophy”:
    • Idealism vs. operationalism—truth by correspondence gave way to truth by convention.
    • Logic without measurement—symbolic manipulation divorced from constructability.
    • Science without decidability—empiricism treated as description rather than operational test.
    • Computation without causality—machines that simulate inference without grounding in reality.
    The twentieth century produced a fragmentation in the foundations of knowledge. Each discipline secured local precision at the cost of universal coherence.
    1. Philosophy retreated from realism into linguistics and phenomenology—substituting interpretation for operation.
    2. Mathematics lost its claim to completeness under Gödel’s proofs, leaving logic detached from constructability.
    3. Physics divided its causal model into relativistic and quantum domains—coherence replaced by probabilistic description.
    4. Epistemology ceased to test truth by performance, relying instead on consensus and convention.
    5. Computation, born from these same incomplete logics, replicated their error: syntax without semantics, reasoning without grounding, prediction without decidability.
    The result was what we call the century of unanchored formalism. Each field closed internally, but none could close externally. The sciences became silos of incompatible grammars—mathematical, logical, linguistic, statistical—without a shared measure of truth. This created a vacuum in which computation could simulate intelligence without ever possessing understanding.
    While each field escaped falsification by narrowing its domain, none rebuilt the universal grammar needed for cross-domain coherence. Artificial intelligence merely inherits this unfinished project. Current correlation-based architectures represent the culmination of that philosophical retreat: statistically fluent yet epistemically blind, they substitute correlation for causation, probability for truth, and approximation for decidability. Scaling parameters improves fluency, not reliability. The result is an intelligence that can describe but cannot testify; it appears to reason, yet it speaks without knowing.
    The consequence of that century-long fracture is the modern research environment itself: siloed, specialized, and self-referential. Each field perfected its own internal grammar while abandoning external coherence. The result is an academy fluent in the language of correlation but incapable of grounding it in operational reality. This is why mathematics became “mathiness,” logic became wordplay, and programming became simulation without semantics. These are not minor academic quirks—they are inherited pathologies that now define artificial intelligence. The same philosophical errors that left physics incomplete have left computation undecidable.
    Our work begins where philosophy, epistemology, and the scientific method stopped:
    • Restoring operationalism as the universal test of meaning.
    • Establishing commensurability across disciplines through shared units of measurement.
    • Re-embedding logic, mathematics, and computation within the physical constraints of reality.
    • Producing decidable intelligence — systems that can warrant truth, not merely simulate it.
    In short, where the twentieth century produced precision without coherence, Runcible restores coherence without sacrificing precision — completing the unification of reasoning, science, and computation that modern philosophy abandoned.
    That’s why our work is difficult — because it requires completing the project that philosophy, epistemology, and science abandoned: restoring the operational foundations of decidability, truth, and reciprocity across all domains, from physics to computation.


    Source date (UTC): 2025-11-02 00:00:42 UTC

    Original post: https://x.com/i/articles/1984772619732992138

  • (NLI/Runcible) Explaining our AI a bit: Courts don’t use numbers (cardinal measu

    (NLI/Runcible)
    Explaining our AI a bit:
    Courts don’t use numbers (cardinal measures) for decidability; they use them only for restitution and punishment. For decidability they use pass-fail criteria, or a hierarchy of pass-fail criteria. While Runcible may output something numerical, everything is indexed with words – by natural indexing – and any numbers we use are merely representations of the approximate delta between those word-indexed positions, for ease of understanding.
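    A hedged sketch of what word-indexed, pass-fail decidability could look like in code (the labels, scale, and function names are invented for illustration, not taken from Runcible):

```python
# An ordinal scale indexed by words; numbers appear only as the
# approximate delta between word-indexed positions.
SEVERITY = ["negligible", "minor", "material", "grave"]

def delta(a: str, b: str) -> int:
    """Approximate distance between two word-indexed verdicts."""
    return SEVERITY.index(b) - SEVERITY.index(a)

def decide(criteria: list[tuple[str, bool]]) -> str:
    """Hierarchy of pass-fail criteria: the first failure names the verdict.
    The verdict is a word, never a cardinal measure."""
    for label, passed in criteria:
        if not passed:
            return label
    return "pass"
```

    Note that `decide` never produces a number: the output is always a word from the hierarchy, and numbers enter only when comparing two verdicts via `delta`.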


    Source date (UTC): 2025-10-30 21:23:45 UTC

    Original post: https://twitter.com/i/web/status/1984008345704133018

  • ie: closure solves the correlation problem. And we have solved closure. It was j

    i.e., closure solves the correlation problem. And we have solved closure. It was just very, very hard.


    Source date (UTC): 2025-10-18 19:22:53 UTC

    Original post: https://twitter.com/i/web/status/1979629275725865432

  • John; Remaining blockers are (a) episodic memory – indexing and compression and

    John,
    Remaining blockers are (a) episodic memory – indexing and compression – and (b) closure – or, more correctly, truth, ethics, possibility, and decidability.

    IMO we know how to solve both problems. Memory is just expensive. My organization’s work on closure is just very complicated but relatively easy to implement as a governance layer.

    Depending on what you want to accomplish, these two blocking factors are what’s holding us back. Everything else is quite literally the economics of the problem of compute. And AFAIK that will only be solved by neuromorphic chips.


    Source date (UTC): 2025-10-18 18:31:01 UTC

    Original post: https://twitter.com/i/web/status/1979616220631744845

  • “Intentional accounts of history describe what men wished; incentive accounts de

    –“Intentional accounts of history describe what men wished; incentive accounts describe what was possible. Civilization endures only when its intentions remain computable within its constraints.”–


    Source date (UTC): 2025-10-14 19:19:14 UTC

    Original post: https://twitter.com/i/web/status/1978178806725906432

  • Q: How Does Doolittle’s Closure Work? –“In mathematics, closure is achieved by

    Q: How Does Doolittle’s Closure Work?

    –“In mathematics, closure is achieved by syntactic rule enforcement. In Natural Law protocol, closure is achieved by semantic rule enforcement—every term is grounded in reality via operational definition. Hence the human conversational domain acquires the same self-referential decidability that mathematics or physics possesses, but with empirical rather than symbolic grounding.”–
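    A minimal sketch of semantic rule enforcement, under the assumption that an “operational definition” can be modeled as an executable test attached to each term; the class and method names are illustrative only:

```python
from typing import Any, Callable

class Lexicon:
    """Hypothetical sketch of semantic closure: a term may be used only if
    it was registered with an operational definition (an executable test)."""

    def __init__(self) -> None:
        self._defs: dict[str, Callable[[Any], bool]] = {}

    def define(self, term: str, test: Callable[[Any], bool]) -> None:
        """Ground a term in reality by binding it to a decidable test."""
        self._defs[term] = test

    def applies(self, term: str, thing: Any) -> bool:
        """Decide whether a term applies; ungrounded terms are rejected
        outright rather than interpreted at the reader's discretion."""
        if term not in self._defs:
            raise KeyError(f"undefined term: {term!r} (no operational definition)")
        return self._defs[term](thing)
```

    The closure property in this sketch is that every usable term resolves to a pass-fail test, so no sentence built from the lexicon can escape decidability.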


    Source date (UTC): 2025-10-12 22:58:27 UTC

    Original post: https://twitter.com/i/web/status/1977509195730858077