Theme: Grammar

  • Why “Native Semantic Form” Matters – We Use The LLM’s Grammar, We Don’t ‘math it

    Why “Native Semantic Form” Matters – We Use The LLM’s Grammar, We Don’t ‘math it’.

    LLM producers often think: “If it’s serious, it belongs in a database with schemas.”

    But natural language has a schema. We just narrow it into operational prose.

    So our strategy is different: we exploit that most institutional knowledge already exists as semantically structured text:
    • policies, contracts, statutes, guidelines, SOPs
    • case narratives, incident reports, clinical notes
    • argumentation, exceptions, defeaters, precedence
    • definitions and scope conditions
    Relational databases excel at extensional facts (rows/columns). They are poor at intensional structure (exceptions, precedence, defeaters, conditional obligations, scope clauses), unless you re-encode everything into a bespoke logic layer.
    Runcible’s strategy is:
    • Keep normative/semantic artifacts in their native linguistic structure.
    • Compile them into tests and constraints rather than flattening them into relational calculus.
    • Use the LLM as a semantic compiler that can map text into claim graphs + proof obligations.
    • Use the governance layer to force typed closure and prevent rhetorical completion.
    This is the key “why it works” that labs miss: we are not asking the model to “be moral”; we are using it to compile institutional semantics into computable checks.
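    To make “compile text into computable checks” concrete, here is a minimal, hypothetical sketch. The `Clause` type, the refund policy, and the `applies` method are all illustrative assumptions, not Runcible’s actual API; the point is only that a clause-with-exception keeps its native linguistic shape while becoming a runnable test.

    ```python
    # Hypothetical sketch: a normative clause with an exception compiled into
    # a computable check, instead of being flattened into relational rows.
    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    Facts = Dict[str, object]

    @dataclass
    class Clause:
        """A rule kept in near-linguistic form: a default obligation plus
        explicit exceptions, evaluated defeasibly."""
        obligation: str
        condition: Callable[[Facts], bool]
        exceptions: List["Clause"] = field(default_factory=list)

        def applies(self, facts: Facts) -> bool:
            # An exception that fires defeats the default obligation; this
            # defeater structure is what rows/columns flatten away.
            if not self.condition(facts):
                return False
            return not any(e.applies(facts) for e in self.exceptions)

    # "Refunds are owed within 30 days, unless the item was used."
    refund_due = Clause(
        obligation="issue refund",
        condition=lambda f: f["days_since_purchase"] <= 30,
        exceptions=[Clause(obligation="deny refund",
                           condition=lambda f: f["item_used"])],
    )

    print(refund_due.applies({"days_since_purchase": 10, "item_used": False}))  # True
    print(refund_due.applies({"days_since_purchase": 10, "item_used": True}))   # False
    ```

    The design choice to carry exceptions as nested clauses, rather than as extra columns, is what preserves precedence and defeaters as first-class structure.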
    Apparently our use of morality and truth is confusing. Except, all language that is of value to humans and usable by machines is either truthful, ethical, moral, possible, and liable, or it isn’t.

    So the foundation of everything … is ethics. Yes. Really.

    So we start with ethics and build a governance layer.
    That way we ‘cleanse’ the world model of everything that isn’t true, ethical, moral, possible, and liable.

    For some reason that set of ideas seems counter-intuitive to people – even people in the field.


    Source date (UTC): 2025-12-31 19:17:28 UTC

    Original post: https://x.com/i/articles/2006444612521713737

  • (Runcible Update) Today we created the first pass at Runcible Reality Descriptio

    (Runcible Update)
    Today we created the first pass at Runcible Reality Description Language (RDL). But what does that mean?

    Think of the evolutionary sequences of:

    … 0) Grammars of Paradigms: embodiment > narration > mythology > philosophy > empiricism > science > operationalism

    … 1) Reduction (expressibility): Mathematical Reducibility > Programmatic Reducibility > Operational Reducibility > Verbal Reducibility > Artificial Neural Network Reducibility

    … 2) Closure:
    • Intuitive/Embodied Closure: basic survival-based instincts or heuristics (e.g., immediate sensory feedback or trial-and-error in pre-literate societies), providing rudimentary decidability without formal structure.
    • Logical Closure: affirmative justification through internal consistency (aligning with early philosophy or positiva tests), but vulnerable to unfalsifiable assumptions.
    • Empirical Closure: falsification via external correspondence (e.g., the scientific method), adding verifiability but still limited to observable phenomena.
    • Operational Closure: constructive procedures that demand actionable, repeatable steps (e.g., operationalism), ensuring computability but potentially ignoring ethics or reciprocity.
    • Reciprocal/Adversarial Closure: full integration of necessity, sufficiency, reciprocity/symmetry, and coherence under adversarial testing, yielding auditable, ethical, and liability-enforcing decisions (as in RDL and Runcible’s “closure layer”).

    … 3) Tests: Justification (Positiva) > Falsification (Negativa) > Adversarialism (constructive logic and empirical correspondence)

    … 4) Programming: Sequential Programming > Functional Programming > Object Oriented Programming > Reality Description Language (RDL)

    –“RDL is an operational grammar, reliant on adversarial survival, by both operational construction and empirical correspondence, for testing the computability, ethics, and testifiability of any statement in human language.”–
    RDL formalized our work in a way that closed the gaps in LLM computability due to its tendency to drift.
    This is revolutionary in no small part because until we solved the method and the grammars it was thought impossible.

    Cheers


    Source date (UTC): 2025-12-21 23:29:27 UTC

    Original post: https://twitter.com/i/web/status/2002884149854752989

  • (Runcible Update) Today we created the first pass at Runcible Reality Descriptio

    (Runcible Update)
    Today we created the first pass at Runcible Reality Description Language (RDL). But what does that mean?

    Think of the evolutionary sequences of:

    … 0) Grammars: embodiment > narration > mythology > philosophy > empiricism > science > operationalism

    … 1) Reduction (expressibility): Mathematical Reducibility > Programmatic Reducibility > Operational Reducibility > Verbal Reducibility > Artificial Neural Network Reducibility

    … 2) Tests: Justification (Positiva) > Falsification (Negativa) > Adversarialism (constructive logic and empirical correspondence)

    … 3) Programming: Sequential Programming > Functional Programming > Object Oriented Programming > Reality Description Language (RDL)

    –“RDL is an operational grammar, reliant on adversarial survival, by both operational construction and empirical correspondence, for testing the computability, ethics, and testifiability of any statement in human language.”–

    RDL formalized our work in a way that closed the gaps in LLM computability due to its tendency to drift.

    This is revolutionary in no small part because until we solved the method and the grammars it was thought impossible.

    Cheers


    Source date (UTC): 2025-12-21 22:15:09 UTC

    Original post: https://twitter.com/i/web/status/2002865449474859158

  • “Runcible is a change in the grammar of power: from “who persuades” to “who pass

    –“Runcible is a change in the grammar of power: from ‘who persuades’ to ‘who passes a test and bears the cost.’”–

    Today was a profound day in AI.


    Source date (UTC): 2025-12-21 17:25:32 UTC

    Original post: https://twitter.com/i/web/status/2002792567013069232

  • That’s not true. The operations available in set a, and consequential set b are

    That’s not true. The operations available in set A, and the consequent set B, are different, but they are directionally the same. There is no difference between the logic of spin (charge), the logic of mass accumulation, and the logic of cooperative accumulation. We may be discussing different operations, but as capital it’s the same.

    One of the ways Brad and I test a first principle is whether it satisfies the ternary logic’s demand for capital accumulation, loss, and equilibrium. Just as we test it for composability at its emergent scale (new operations available), its constructability from the first principles of the prior scale, and the constructability of the first principles of the subsequent scale.

    This is the test of consistency and coherence of the first principles at all scales. When we discover those rules we know we have correctly specified the first principles.


    Source date (UTC): 2025-12-15 23:32:23 UTC

    Original post: https://twitter.com/i/web/status/2000710558828650555

  • Well, I know. I’m agreeing. The point I’m making is that the way I use symbols r

    Well, I know. I’m agreeing. The point I’m making is that the way I use symbols requires the domain (Scale) of what I’m discussing. This appears inconsistent but it isn’t. I would need to explain the use of the symbols in each case. If I did that then the pattern would be obvious.
    The scale symbols <, >, the dependency symbols <-, ->, and the capital symbols +, -, =, != are meta-symbols that require the user to ‘do work’. And I am too inconsistent, I agree. Where we disagree is the capital symbols, which are the same as the ternary logic triangle (or a diamond if we include !=).
    For some reason that doesn’t make sense to you because you interpret it as inconsistent. I haven’t figured out your interpretation; usually it’s more literal than I mean it.


    Source date (UTC): 2025-12-15 23:24:48 UTC

    Original post: https://twitter.com/i/web/status/2000708649824714992

  • Yann LeCun’s wrong. LLM’s solve the language faculty. That doesn’t mean they sol

    Yann LeCun is wrong. LLMs solve the language faculty. That doesn’t mean they solve all faculties. It means the compression of the language model is insufficient for dense physical world models. It doesn’t mean they can’t compress descriptions of physical world models.
    I think he’s rather silly, honestly.
    Criticism is not equal to insight or solution. It’s a means of getting attention without insight or solution.


    Source date (UTC): 2025-12-15 03:22:11 UTC

    Original post: https://twitter.com/i/web/status/2000406002764747191

  • @MattPirkowski : Yes. Well done. Mathematics is a constrained grammar within the

    @MattPirkowski: Yes. Well done. Mathematics is a constrained grammar within the domain of the universal grammar of all language that is subjectively testable via internal consistency and externally demonstrable by empirical evidence.


    Source date (UTC): 2025-12-07 00:28:50 UTC

    Original post: https://twitter.com/i/web/status/1997463273231565240

  • What the Runcible certificate-producing layer actually does Our certificate laye

    What the Runcible certificate-producing layer actually does

    Our certificate layer does the following:
    1. Apply normative grammars (in YAML form)
    2. Run explicit tests
    3. Invoke retrieval (Truth Corpus)
    4. Invoke the LLM as a descriptive world model
    5. Produce a justified, warrantable decision
    6. Emit a certificate
    7. Store that certificate as a solved problem
    8. Feed solved problems back to training modules (descriptive updates only)
    This is a closed-loop institutional system, not a normative substrate.
    We are doing with AI what a legal system does with judicial opinions:
    • produce judgments,
    • record them,
    • incorporate them into a body of precedent,
    • and improve future interpretation.
    None of that embeds normativity into the substrate.
    All of that embeds
    vocabulary, world knowledge, and example structure into the substrate.
    This distinction is necessary.
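    The eight steps above can be sketched in miniature. Everything here is an assumption for illustration: the `certify` function, the required-keys “grammar,” the stubbed retrieval and LLM calls, and the hash-based certificate id are stand-ins, not Runcible’s actual interfaces.

    ```python
    # Minimal sketch of the certificate-producing loop: grammar check, explicit
    # tests, retrieval, a (stubbed) descriptive LLM call, certificate emission,
    # and storage as precedent for descriptive updates only.
    import hashlib
    import json

    def certify(claim, grammar, tests, corpus, llm, precedent):
        # 1. Apply the normative grammar (here: a required-keys check).
        if not all(key in claim for key in grammar["required"]):
            return None
        # 2. Run explicit tests against the claim.
        if not all(test(claim) for test in tests):
            return None
        # 3. Invoke retrieval against the truth corpus.
        evidence = [doc for doc in corpus if doc["topic"] == claim["topic"]]
        # 4. Invoke the LLM as a descriptive world model (stubbed callable).
        description = llm(claim, evidence)
        # 5-6. Produce a warranted decision and emit a certificate.
        cert = {
            "claim": claim,
            "evidence": evidence,
            "description": description,
            "decision": "warranted",
        }
        digest = hashlib.sha256(json.dumps(cert, sort_keys=True).encode())
        cert["id"] = digest.hexdigest()[:12]
        # 7-8. Store the solved problem; training reads this store for
        # descriptive (vocabulary/example) updates only.
        precedent.append(cert)
        return cert

    precedent = []
    cert = certify(
        claim={"topic": "refunds", "text": "Refunds due within 30 days."},
        grammar={"required": ["topic", "text"]},
        tests=[lambda c: bool(c["text"])],
        corpus=[{"topic": "refunds", "text": "Policy 4.2 sets a 30-day window."}],
        llm=lambda c, e: f"{len(e)} supporting document(s) found",
        precedent=precedent,
    )
    print(cert["decision"], cert["id"])
    ```

    Note that, as in the legal analogy, the judgment lands in `precedent` rather than back inside the model: the substrate only ever receives descriptions of solved problems.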


    Source date (UTC): 2025-12-03 20:16:34 UTC

    Original post: https://x.com/i/articles/1996312628063613362

  • Its Chomsky. It means every addition increases disambiguation. This is true for

    It’s Chomsky. It means every addition increases disambiguation. This is true for every state and operation, from the quantum background at one end through to language at the other. It’s the law of negative entropy.


    Source date (UTC): 2025-11-29 03:37:06 UTC

    Original post: https://twitter.com/i/web/status/1994611552209817964