Theme: AI

  • Two things. (a) Yes, my work is included in most of the major LLMs by now. It’s not all current, but the gist of it is there. (b) I have uploaded my corpus to OpenAI, Grok, and now Google, so each calls the RAG (local store) whenever I ask something about my work.
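    As a rough sketch of the mechanism described above (retrieval-augmented generation over an uploaded corpus), the loop looks like the following; embed() and the in-memory store are illustrative placeholders, not any provider’s actual API:

    ```python
    # Minimal RAG loop: retrieve the best-matching corpus passages, then
    # prepend them to the question before it reaches the model.
    import numpy as np

    def embed(text: str) -> np.ndarray:
        """Placeholder for any embedding model; returns a fixed-size vector."""
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        return rng.standard_normal(384)

    corpus = ["essay on testimony ...", "essay on reciprocity ..."]  # uploaded work
    index = np.stack([embed(doc) for doc in corpus])                 # local store

    def build_prompt(question: str, k: int = 2) -> str:
        q = embed(question)
        sims = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
        top = [corpus[i] for i in np.argsort(sims)[::-1][:k]]        # best matches
        return "Context:\n" + "\n".join(top) + "\n\nQuestion: " + question
    ```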


    Source date (UTC): 2025-11-20 03:28:34 UTC

    Original post: https://twitter.com/i/web/status/1991347914028040270

  • College education outside of STEM, in the age of AI, is just impoverishment: with industrial repatriation and basic apprenticeship, income is superior, requires no debt, and lets you escape the education ‘signal game’ that drives you to spend income on signal goods instead of intertemporal goods.


    Source date (UTC): 2025-11-18 18:00:32 UTC

    Original post: https://twitter.com/i/web/status/1990842574291267897

  • THE AI GENERAL STAFF BRIEFING
    Runcible: Decision-Grade Governance for Machine Cognition

    (“The AI General Staff Argument”)
    Current AI systems cannot be entrusted with military, intelligence, or national-level decisions.
    Foundation-model LLMs are probabilistic language engines.
    They do not:
    • detect when a question is not decidable,
    • expose unknowns or uncertainty,
    • produce audit trails,
    • account for collateral harms,
    • evaluate adversarial manipulation,
    • confirm operational constructability,
    • or assign responsibility.
    This makes them unusable for any mission profile requiring:
    • kill-chain integration
    • triage and casualty prioritization
    • targeting legality (LOAC)
    • strategic analysis
    • force-civilian distinction
    • rules-of-engagement interpretation
    • intelligence fusion under deception
    • contested information environments
    In short: LLMs today are uncommandable assets.
    Adversaries will attack model reasoning, not model parameters.
    The real battlefield is not model weights — it is epistemic exploitation:
    • prompt injection
    • gray-zone deception
    • adversarial narratives
    • strategic framing
    • selective omission
    • preference shaping
    • strategic ambiguity exploitation
    A system that cannot detect manipulation, expose ambiguity, or produce adversarially hardened reasoning will fail under conflict pressure.
    Assistant-style AI collapses under adversarial stress.
    To be militarily deployable, AI must transition from “assistant” to “institution.”
    A militarily viable AI must:
    1. Determine Decidability
      Identify when information is insufficient, contested, or adversarially corrupted.
    2. Testify to Truth
      Produce claims that survive adversarial cross-examination.
    3. Account for Reciprocity / Collateral Effects
      Identify asymmetries, hidden parasitism, and coercive impacts across populations.
    4. Establish Operational Possibility
      Validate whether an action is actually executable under real constraints.
    5. Assign Liability / Responsibility
      Specify the locus of moral, legal, or command accountability.
    These five are the core invariants of military and intelligence decision-making.
    They do not exist in any AI system on Earth — except one.
    Runcible is the governance layer that turns a probabilistic model into a command-grade institution.
    It is not a model.
    It is a computable rule of law for machine cognition that enforces:
    • Decidability tests before the model answers.
    • Truth protocols before the model claims.
    • Reciprocity tests before the model recommends.
    • Operational constructability tests before the model proposes.
    • Liability tiering before the model acts.
    Runcible wraps any foundation model and forces it to operate according to military-grade command logic, not assistant-grade convenience logic.
    This makes:
    • outputs auditable,
    • reasoning inspectable,
    • uncertainty explicit,
    • deception detectable,
    • and responsibility assignable.
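    A minimal sketch of that wrapper in Python follows; the gate names track the list above, but every class and function here is hypothetical, since Runcible’s actual interfaces are not public:

    ```python
    # Hypothetical governance wrapper: five gates run, in order, before the
    # foundation model is allowed to answer; every check is recorded.
    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class GateResult:
        gate: str
        passed: bool
        rationale: str                      # recorded so the decision is auditable

    @dataclass
    class GovernedAnswer:
        answer: str | None                  # None = explicit refusal, not failure
        audit_trail: list[GateResult] = field(default_factory=list)

    # Stub checks standing in for real decidability / truth / reciprocity /
    # possibility / liability tests; each returns (passed, rationale).
    GATES: list[tuple[str, Callable[[str], tuple[bool, str]]]] = [
        ("decidability", lambda q: (True, "resolvable at this liability tier")),
        ("truth",        lambda q: (True, "claims survive adversarial testing")),
        ("reciprocity",  lambda q: (True, "no parasitic or coercive externality")),
        ("possibility",  lambda q: (True, "operationally constructible")),
        ("liability",    lambda q: (True, "responsibility tier assigned")),
    ]

    def govern(question: str, model: Callable[[str], str]) -> GovernedAnswer:
        trail: list[GateResult] = []
        for name, check in GATES:
            passed, why = check(question)
            trail.append(GateResult(name, passed, why))
            if not passed:                  # refuse, but ship the audit trail
                return GovernedAnswer(None, trail)
        return GovernedAnswer(model(question), trail)   # model answers only now
    ```

    The point of the trail is that a refusal is itself a governed, auditable output rather than a silent failure.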
    This is the threshold condition for deploying AI into the kill chain, intelligence chain, or command chain.
    Commercial AI companies are structurally blocked from meeting defense requirements.
    Their constraints:
    • Liability Avoidance → They cannot assign responsibility.
    • Consumer Economics → They avoid rigor and adversarialism.
    • Universalist Norms → They reject reciprocity and harm accounting.
    • Assistant Architecture → No modes, no protocols, no audit trails.
    • Safety Culture → Optimizes for censorship, not truth.
    • Valuation Pressure → Discourages institutional integration.
    They cannot, and will not, build command-grade governance.
    Runcible is built specifically for the constraints they cannot touch.
    Runcible enables the military to deploy AI where it actually matters: decision dominance under adversarial pressure.
    Key capabilities:
    • Adversarial Resilience
      AI that does not collapse under deception, pressure, or ambiguity.
    • Explainability On Demand
      For auditors, JAG, congressional oversight, ROE interpretation.
    • Integration with LOAC and R2P
      Reciprocity and collateral assessment embedded at the protocol level.
    • Operational Constructability
      AI that produces plans, not fantasies.
    • Command Accountability
      AI outputs traceable to responsibility tiers.
    • Intelligence Reliability Under Denial/Deception (D&D)
      Explicit modeling of uncertainty and adversarial manipulation.
    This is the difference between AI as a toy and AI as an operational asset.
    **All militaries will eventually require this layer.
    Only one will have it first.**
    Once a single military adopts a governance layer for decision-grade AI:
    • its decisions become more reliable,
    • its targeting becomes more surgical,
    • its intelligence becomes more resistant to deception,
    • its political risk collapses,
    • its command tempo accelerates,
    • and its adversaries must follow the same standard or fall behind.
    This becomes a doctrine-level advantage, not a software advantage.
    The governance layer becomes a NATO interoperability standard, an intelligence community requirement, and a conditions-of-engagement protocol.
    Runcible is positioned to become that standard.
    **The military does not need another assistant.
    It needs a decision-making institution.**
    The military fights adversaries.
    Assistants fail under adversaries.
    Institutions survive adversaries.
    Runcible is the world’s first computable institution for AI.
    It is the only architecture designed for:
    • contested domains,
    • adversarial environments,
    • high-liability decisions,
    • legal scrutiny,
    • operational constraints,
    • and command responsibility.
    This is not optional for the future of defense.
    It is inevitable — and urgent.


    Source date (UTC): 2025-11-14 23:42:15 UTC

    Original post: https://x.com/i/articles/1989479018530476538

  • A One-Slide Graphic Showing the Structural Blindness in AI Decidability


    Use this exact structure:
    Title (Top Center): The Structural Blindness in AI Decidability
    Left Column (The Industry’s View):
    THE CONSTRAINT BOXES (Stacked Vertically)
    1. Funding Incentives
    Consumer + enterprise SaaS → favor assistants, not institutions.
    2. Cultural Ideology
    Universalist, censorship-based, anti-adversarial, anti-liability.
    3. Architectural Lock-In
    Assistant UX → one box, no modes, no liability tiers, no audits.
    4. Legal Posture
    Total responsibility avoidance → disclaimers instead of decisions.
    5. Safety Mirage
    Equate “alignment” with moral filtering, not truth governance.
    6. Competence Gaps
    Teams lack expertise in law, economics, adversarial reasoning, or institutional design.
    Right Column (What Runcible Sees):
    THE CONSTRAINTS THEY MISSED (Stacked Vertically)
    1. Truth Requires Decidability
    Institutions need answers that survive cross-examination.
    2. Ethics Requires Reciprocity
    Harm accounting, not moral aesthetics.
    3. Action Requires Operationality
    Constructable sequences, not plausible text.
    4. Deployment Requires Liability
    Warrantable outputs, insurance, and audit trails.
    5. Sustainability Requires Institutions
    Only high-liability markets can pay for frontier AI.
    6. Markets Require Governance Standards
    One protocol becomes dominant — power-law inevitability.
    Center Column (Between the Two Sides):
    A Vertical Wall / Divider Labelled:
    THE BLIND SPOT
    (Cultural + Economic + Architectural)
    At the bottom of the divider:
    “Institutions Pay. Assistants Don’t.”
    Bottom of Slide (Full Width):
    **The industry cannot build it.
    Institutions require it.
    We already have it.**
    (Short, sharp, Thiel-style)
    “The industry is structurally incapable of seeing the governance opportunity because every layer of their stack points them in the wrong direction.
    Funding incentives push them to assistants.
    Cultural ideology pushes them to moral filters.
    Architecture locks them into conversational UX.
    Legal constraints force them to disclaim responsibility.
    Safety narratives distract them with censorship.
    Competence gaps mean they can’t even conceptualize reciprocity, decidability, or liability.
    Every part of their worldview leads to the assistant paradigm — a dead end for high-liability adoption.
    On the right side is the world we see: truth as testifiability, ethics as reciprocity, action as operationality, markets as liability structures, and institutions as the only buyers who can pay.
    In the middle is the wall — the blind spot — created by their culture, economics, and architecture.
    They literally cannot see the governance layer.
    But high-liability markets cannot function without it.
    That’s where Runcible lives.”


    Source date (UTC): 2025-11-14 23:38:14 UTC

    Original post: https://x.com/i/articles/1989478008626057427

  • A Thiel-Style Adversarial Q&A Sheet for Runcible

    This is written exactly in the style of Founders Fund due diligence:
    short, adversarial, intellectually sharp, and designed to test whether the founder understands the deepest implications of his own company.
    Q: If this is so valuable, why has nobody built it before?
    A:
    Because until now, AI has been treated as a consumer product, not an institutional actor.
    Everyone optimized for convenience and virality.
    Nobody optimized for truth, reciprocity, operational possibility, or liability.
    As soon as frontier models began entering domains with real stakes, the architectural gap became obvious.
    We’re the first to formalize the governance layer because we’re the only team coming from law, economics, adversarialism, and operational epistemology rather than from consumer software culture.
    Q: How is this different from alignment?
    A:
    Alignment is censorship and normative preference shaping.
    We do the opposite.
    Runcible is a decidability and liability protocol, not a moral filter.
    We don’t bias the model — we govern it.
    We turn an LLM into an institution that can survive adversarial challenge, legal scrutiny, and operational stress.
    Alignment solves vibes.
    Runcible solves truth, responsibility, and cooperation.
    Q: Can the incumbent labs build this themselves?
    A:
    No.
    Their entire economic, legal, and cultural architecture prohibits it:
    – Their incentive is mass adoption, not responsibility.
    – Their culture is universalist and allergic to reciprocity-based reasoning.
    – Their products rely on ambiguity, not adjudication.
    – Their legal posture is total liability avoidance.
    To build Runcible they would need to admit responsibility for model outputs — something their risk profile forbids.
    Q: What is your moat?
    A:
    Depth and amortization.
    This system is the result of decades of epistemic, legal, operational, and adversarial research.
    It is not copyable by a team of engineers.
    It is not emergent from machine learning.
    It is an entire computable science of cooperation and truth.
    Competitors will try to imitate the surface; they cannot reproduce the structure.
    Q: Why does one protocol win the whole market?
    A:
    High-liability markets obey power laws.
    They cannot tolerate multiple incompatible governance standards.
    There will be one certifiable protocol for AI truth and liability — just as there is one GAAP, one SWIFT, one ICD-10.
    Once established, the switching costs are existential.
    This is an institutional monopoly, not a software niche.
    Q: Where does this get deployed first?
    A:
    Any decision where a model must be:
    – explainable
    – auditable
    – insurable
    – admissible in court
    – reciprocal in harms
    – operationally constructive
    Everything from triage to targeting to adjudication demands this layer.
    The first major deployment in a high-liability vertical creates the precedent.
    Everyone else must adopt the same governance standard to remain admissible.
    Q: How do you make money?
    A:
    We license the governance layer to model providers and certify outputs for institutional buyers.
    This creates recurring, high-margin revenue tied to regulation and liability posture.
    Once integrated, institutions cannot switch vendors without re-certifying their entire stack — which is existentially expensive.
    Q: Why would institutions trust you?
    A:
    Because we do not pretend.
    We do not moralize.
    We do not censor.
    We impose formal adversarial tests and explicit liability chains.
    Institutions trust systems that behave like institutions — not like assistants.
    Q: What are you really building?
    A:
    We’re building an institution disguised as software.
    It is the legal, epistemic, and adversarial substrate that modern AI requires.
    This is the ICC, SEC, and FDIC equivalent for machine cognition — but built privately.
    Q: What important truth do very few people agree with you on?
    A:
    That AI must be governed by law-like protocols, not safety heuristics.
    That truth is testifiable, not probabilistic.
    That ethics is reciprocity, not sentiment.
    That institutions pay for certainty, not convenience.
    And that assistants cannot support frontier AI — but governance can.
    Q: What is the biggest risk?
    A:
    The risk is not competition.
    The risk is premature standardization based on weak models.
    If a regulatory body adopts a superficial or moralistic alignment standard, it delays or distorts the adoption of real governance.
    Our strategy is to become the de facto standard through superior performance before regulators can invent an inferior one.
    Q: Why is your team the only one that can build this?
    A:
    Because the system we built is the formalization of decades of work on truth, decidability, reciprocity, law, and adversarial epistemology.
    It cannot be imitated by technologists because they don’t know the underlying science.
    And it cannot be built by institutions because they lack the operational precision.
    We are the only team with the epistemic depth and engineering ability to do it.
    Q: What does success look like?
    A:
    Runcible becomes the governance layer for all model providers globally.
    Every high-liability institution embeds Runcible into their decision architecture.
    Machine cognition becomes certifiable, insurable, and admissible.
    We become the standard.
    This is not a feature.
    It is the foundation of a new institutional order.


    Source date (UTC): 2025-11-14 23:34:32 UTC

    Original post: https://x.com/i/articles/1989477075024326694

  • Runcible: The Missing Institution for the AI Era

    One-Page Memo (Thiel Style)
    Frontier AI is economically unsustainable under the “assistant” paradigm.
    Consumer and enterprise productivity markets generate trivial revenue relative to the billions required for continuous model training, inference, and infrastructure.
    The consensus view is pursuing the wrong buyers.
    The only markets that can pay for AI are the ones where decisions carry liability:
    military, government, medicine, insurance, finance.

    These markets demand certainty, not convenience.
    Foundation models are correlation engines.
    They do not know:
    – whether a claim is decidable
    – whether their testimony is testifiable
    – whether an action is reciprocally fair
    – whether an outcome is operationally possible
    – who is responsible for the consequences
    An AI that cannot be trusted under adversarial, legal, or existential pressure cannot be deployed where the money and power reside.
    The incumbent LLM architecture therefore cannot reach the markets needed to justify its own cost.
    Runcible provides the one thing modern AI lacks:
    a computable system of truth, reciprocity, possibility, and liability.
    We impose a governance sequence on the model:
    1. Decidability – Can this question be resolved at this liability tier?
    2. Truth – Has the claim survived adversarial testing?
    3. Reciprocity – Does this action produce parasitic or coercive externalities?
    4. Possibility – Is the action operationally constructible?
    5. Liability – Who warrants the outcome?
    This converts stochastic text generation into auditable, certifiable, insurable decision-making.
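    As a concrete illustration of what “auditable, certifiable, insurable” output could look like, here is one hypothetical shape for the resulting decision artifact; every field and name is an assumption, not a published Runcible schema:

    ```python
    # Hypothetical decision artifact: the answer ships inside a liability-tiered,
    # auditable record instead of as bare text. All names are illustrative.
    from dataclasses import dataclass
    from enum import Enum

    class LiabilityTier(Enum):
        ADVISORY = 1       # informational; no warranty
        WARRANTED = 2      # provider warrants the output
        CERTIFIED = 3      # insurable and admissible under audit

    @dataclass(frozen=True)
    class DecisionRecord:
        question: str
        answer: str
        decidable: bool                  # 1. could this be resolved at this tier?
        truth_tests_passed: list[str]    # 2. adversarial tests the claim survived
        externalities: list[str]         # 3. reciprocity / collateral accounting
        construction_plan: list[str]     # 4. executable steps, in order
        tier: LiabilityTier              # 5. who warrants the outcome, and how
        warrantor: str

    record = DecisionRecord(
        question="Approve supply route B?",
        answer="Yes, under the constraints in construction_plan.",
        decidable=True,
        truth_tests_passed=["cross-examination", "deception probe"],
        externalities=["civilian traffic delay (compensable)"],
        construction_plan=["stage fuel", "clear route", "move convoy"],
        tier=LiabilityTier.CERTIFIED,
        warrantor="commanding officer, tier 2",
    )
    ```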
    Incumbent AI companies are structurally prevented from entering high-liability markets:
    – Their value proposition is convenience, not responsibility.
    – Their safety model is ambiguity and disclaimers, not truth and liability.
    – Their culture is universalist and norm-enforcing, incompatible with reciprocity as a legal constraint.
    – Their architectures are improvisational (“assistant”), not institutional.
    – Accepting responsibility exposes them to catastrophic regulatory and legal risk.
    They cannot build the governance layer without rebuilding their identity.
    High-liability markets behave as power-law markets:
    One governance standard becomes dominant because institutions require a single warrantable protocol.
    There will be one truth-governance layer for AI.
    Not many. One.
    Runcible is designed to become that layer.
    Runcible monetizes through:
    – Licensing the governance layer to foundation model providers
    – Certifying outputs for governments, insurers, militaries, and banks
    – Providing compliance engines for regulated industries
    This is not SaaS.
    This is institutional infrastructure with extreme switching costs.
    Every technological revolution ends with the creation of a new institution (ICC, FCC, SEC, FDA, etc.).
    AI currently lacks its institutional substrate.
    Runcible is the first complete candidate for that substrate.
    We are not building an assistant.
    We are building the computable rule of law for machine cognition.
    This is inevitable, and we built it first.


    Source date (UTC): 2025-11-14 23:25:12 UTC

    Original post: https://x.com/i/articles/1989474730072838486

  • Structural Dissent Memo: Why No One Else Sees the Governance-Layer Opportunity

    Summary:
    The AI industry’s collective blind spot is not technical — it is structural.
    They cannot see the need for a governance layer because the way they are organized, funded, credentialed, regulated, and culturally conditioned makes it literally invisible to them.
    Below are the structural reasons.
    Most AI labs grew out of consumer software economics:
    • Rapid adoption is rewarded.
    • Low-liability use is rewarded.
    • Viral demos are rewarded.
    • Safety = optics, not rigor.
    • Responsibility = liability, which destroys valuation.
    This creates an industry where:
    • “Better assistants” attract capital;
    • “Hard governance problems” repel it.
    The governance-layer opportunity falls between categories:
    too slow for consumer VCs, too abstract for enterprise VCs, too early for regulatory buyers.
    Blind spot: No one is incentivized to imagine AI as a governance substrate rather than a consumer product.
    The dominant ideological culture in AI is:
    • Universalist
    • Egalitarian
    • Anti-hierarchy
    • Anti-particularism
    • Anti-adversarialism
    • Anti-responsibility
    • Pro-optimistic narrative
    • Anti-legalism
    • Intolerant of natural differences in groups, cognition, or strategy
    This culture is structurally incompatible with:
    • Decidability
    • Testifiability
    • Reciprocity
    • Liability
    • Hierarchical constraints
    • Operational realism
    In other words:
    They cannot see the thing that we measure. Their language does not have words for it.
    Once an industry converges on a UI metaphor, it becomes a cognitive prison.
    The “assistant” UX:
    • One box
    • One persona
    • Freeform answers
    • No epistemic state
    • No modes
    • No liability tier
    • No audit trail
    This UI encodes the assistant paradigm itself: you cannot get institutional-grade outputs from a consumer-grade architecture.
    Every architectural choice made so far reinforces the wrong mental model.
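    To make the contrast concrete, compare the two call shapes below; both signatures are hypothetical sketches, not any vendor’s API:

    ```python
    # Assistant paradigm: one box, no epistemic state, no audit surface.
    def chat(prompt: str) -> str: ...

    # Institutional paradigm (hypothetical): mode and liability tier are explicit
    # inputs, and every answer returns with its audit trail attached.
    from enum import Enum

    class Mode(Enum):
        TESTIMONY = "testimony"     # claims must survive cross-examination
        HYPOTHESIS = "hypothesis"   # exploratory; carries no warranty

    def decide(prompt: str, mode: Mode,
               liability_tier: int) -> tuple[str | None, list[str]]:
        """Return (answer, audit_trail); answer may be None, a recorded refusal."""
        ...
    ```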
    Every large AI company has been trained by every lawyer in Silicon Valley to:
    • Avoid making claims
    • Avoid certifying outcomes
    • Avoid liability
    • Avoid guarantees
    • Avoid explainability
    • Avoid taking a “position”
    • Avoid being a decision engine
    The safest posture for them is to disclaim everything and decide nothing.
    This posture is structurally anti-institutional.
    Institutions need systems that:
    • Take responsibility,
    • Produce auditable reasoning,
    • Survive adversarial challenge,
    • and assign liability.
    The incumbents are legally forbidden from pursuing this.
    The industry’s idea of “alignment” is:
    • more rules,
    • more normative filtering,
    • more content suppression,
    • more ideological triage,
    • more political compliance.
    This strengthens the illusion that AI governance is moral filtering and content suppression.
    This is the exact opposite of what high-liability markets need:
    • explicit uncertainty
    • admissible reasoning
    • adversarial-proof decision logic
    • reciprocal harm accounting
    • operational constructability
    • liability-tier outputs
    Their “safety” is performative moralism, not epistemic governance.
    AI researchers are:
    • mathematicians
    • coders
    • product designers
    • linguists
    • data engineers
    They are not:
    • lawyers
    • economists
    • institutional theorists
    • judges
    • auditors
    • operators
    • adversarialists
    They are not trained to:
    • think in terms of testifiability
    • handle normative conflict
    • navigate institutional liability
    • formalize reciprocity
    • manage agency problems
    • design adversarial systems
    They simply lack the conceptual vocabulary to understand why governance requires decidability before truth, truth before judgment, and judgment before action.
    Moving from “assistant” to “decision engine” requires:
    • accepting responsibility
    • exposing reasoning
    • being auditable
    • becoming part of legal processes
    • taking a stance on truth
    • producing a stable institutional protocol
    But doing so would:
    • balloon their regulatory exposure
    • break their disclaimers
    • break their valuation model
    • require rewriting their architecture
    • require hiring institutional experts
    • force them into the hardest market in the world
    This is why they can never do it.
    The governance layer is an orthogonal category they are structurally disallowed from pursuing.
    High-liability markets are:
    • massively funded
    • legally bounded
    • risk constrained
    • decision-driven
    • adversarial
    • deeply institutional
    And they pay orders of magnitude more per deployment than consumers ever could.
    This is where AI will eventually live — not as an assistant, but as infrastructure.
    The industry is racing toward the small market because they cannot perceive the large one.
    Every technological revolution ends with a new institution:
    • Railroads → ICC
    • Finance → FDIC/SEC
    • Telecom → FCC
    • Computing → NIST
    • Genetics → FDA/IRB
    AI will be no different.
    But the industry is not building an institution.
    They’re building toys, productivity tools, and social assistants.
    The governance layer is not a product category — it is an institutional category.
    And institution-building requires:
    • adversarial logic
    • legalistic structure
    • epistemic discipline
    • operational realism
    • hierarchy of authority
    • liability and warranty
    The people who could build this are not in AI.
    The people in AI cannot build this.
    Nobody sees the governance layer opportunity because:
    • they are culturally allergic to it
    • they are economically disincentivized from it
    • they lack the intellectual framework to understand it
    • they are legally constrained from pursuing it
    • and they are architecturally locked into the assistant model
    This is why Runcible is a monopoly opportunity:
    • It is outside their Overton window
    • It is outside their organizational competence
    • It is outside their legal risk tolerance
    • It is outside their architectural paradigm
    • It is upstream of every high-liability market on Earth
    This is not a product they missed.
    This is a civilizational function they cannot conceive.
    And that is the structural reason no one else sees it.


    Source date (UTC): 2025-11-14 23:23:08 UTC

    Original post: https://x.com/i/articles/1989474207974215876

  • ELON (@elonmusk)
    FYI: The benchmarks are focusing too much on internal closure, which is the easiest domain of computation.

    Our organization has solved the problem of external closure – and it’s a very, very hard problem that has troubled philosophers and scientists for decades, if not millennia.

    We can handle everything from truth to ethics to possibility and from economics to law to the humanities. We make human-free recursively improving AI possible.

    We’re trying to get within a degree of you so we can show you or your team.

    Cheers
    CD
    Runcible Inc.
    http://runcible.com
    and The Natural Law Institute Inc.


    Source date (UTC): 2025-11-12 01:25:53 UTC

    Original post: https://twitter.com/i/web/status/1988417937297076673

  • –“… thesis is that every AI application startup is likely to be crushed by rapid expansion of the foundational model providers.”–

    This is true. That doesn’t mean the foundation model providers are the best innovators. Our work is revolutionary in machine decidability, and AGI is impossible without it.

    So the market is there, but the challenge is providing the foundation model producers with something they cannot do or have not done themselves. In other words, the window is narrowing and the difficulty is increasing.

    At least these issues remain: episodic memory, associative prediction, abstraction, solution-point wayfinding, ethics, and decidability.

    We have solved the hardest one. We know how to solve most of them. The problem isn’t know-how. The problem is the extraordinary cost of those innovations on extant rather than neuromorphic hardware.

    Our work is, as far as I know, the only solution to producing revenue sufficient to cover those costs.

    CD
    Runcible Inc

    http://runcible.com


    Source date (UTC): 2025-11-11 00:53:39 UTC

    Original post: https://twitter.com/i/web/status/1988047436657684832

  • (NLI HUMOR)
    I asked our Runcible Certification engine why Dr Brad is such a “Dick” and it didn’t argue with my conclusion, only with how I should correctly interpret his behavior. 😉


    Source date (UTC): 2025-11-09 15:10:10 UTC

    Original post: https://twitter.com/i/web/status/1987538209715388906