Author: Curt Doolittle

  • Reduction: “We convert high dimensionality that is only probabilistically determinable…”

    Reduction:
    “We convert high dimensionality that is only probabilistically determinable, into low dimensionality that is operationally determinable.”
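One way to make the aphorism concrete in code: a minimal sketch that collapses a high-dimensional continuous vector into a few discrete, testable verdicts. The axes, their names, and the thresholding rule are my own illustration, not the author's method.

```python
def reduce_to_operational(embedding, axes, threshold=0.0):
    """Project a high-dimensional vector onto a few interpretable axes,
    then threshold each projection into a discrete verdict (-1, 0, +1).
    The continuous input is only probabilistically interpretable; the
    discrete output is operationally decidable and testable."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    verdicts = []
    for axis in axes:
        score = dot(embedding, axis)
        if score > threshold:
            verdicts.append(+1)        # operationally "yes"
        elif score < -threshold:
            verdicts.append(-1)        # operationally "no"
        else:
            verdicts.append(0)         # undecided at this threshold
    return verdicts

# a toy 4-dimensional "embedding" collapsed onto two hypothetical axes
embedding = [0.9, -0.2, 0.4, 0.1]
axes = [[1, 0, 0, 0],   # hypothetical "truth" axis
        [0, 1, 0, 0]]   # hypothetical "reciprocity" axis
print(reduce_to_operational(embedding, axes))  # [1, -1]
```

The continuous scores carry only probabilistic meaning; the signed verdicts can be acted on and audited directly.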


    Source date (UTC): 2025-08-25 19:39:50 UTC

    Original post: https://twitter.com/i/web/status/1960064597463060993

  • Alignment: Imagine if your physics or law books were written by the average voter…

    Alignment: Imagine if your physics or law books were written by the average voter. OMG… We do truth, reciprocity, and possibility. 😉


    Source date (UTC): 2025-08-25 19:33:19 UTC

    Original post: https://twitter.com/i/web/status/1960062956143772067

  • Ukraine historically spread east into today’s southern Russia – as evidenced by language and culture…

    Ukraine historically spread east into today’s southern Russia – as evidenced by language and culture. Russians pushed south on the one hand; then the Soviets migrated ethnic Russians into the Don river valley because of (a) coal mines, (b) oil and gas, and (c) river transport to the Black Sea on one hand and Moscow on the other. So the east of Ukraine was ‘russified’ in the same way agrarian blacks were moved to the northern cities by the Johnson administration (for the same reasons of ‘democratic-socialist colonization’). Until 2014 we (I was living there) were considering dividing the country to rid it of what the rest of the population considered ‘Russians, lower class, and gangsters’, as opposed to the usual agrarian Ukrainian population.
    At present everyone I know over there is willing to die to the last soul in order to prevent capture by Russia. Why? Because look at Poland, which integrated with Europe, versus Belarus, which maintained integration with Russia. You’d have to be nuts.


    Source date (UTC): 2025-08-25 19:05:25 UTC

    Original post: https://twitter.com/i/web/status/1960055933641601036

  • Patton wasn’t wrong. —“The difficulty in understanding the Russian is that we do not take cognizance…”

    Patton wasn’t wrong.

    —“The difficulty in understanding the Russian is that we do not take cognizance of the fact that he is not a European, but an Asiatic, and therefore thinks deviously. … In addition to his other Asiatic characteristics, the Russian has no regard for human life and is an all‑out *on of a *itch, barbarian, and chronic drunk.”— Patton

    –“Russia was backward. They were serfs to the boyars, then serfs to the communist party. They never developed a middle-class majority, so they never adopted middle-class ethics and morals – nor demanded them in politics. So while they may claim to be Orthodox, are they in fact Christian? Because ethically they are more akin to Islamists, and politically more akin to Asians.”– Anon

    (I relay this as someone who loves Russian people in personal and social life, and certainly in the art, science, and technology fields – but I’m exasperated by their destitute failure in political life. And their lower classes are … exactly what Patton claimed. :)


    Source date (UTC): 2025-08-25 18:23:22 UTC

    Original post: https://twitter.com/i/web/status/1960045351181992250

  • CurtGPT is modified to solve one problem, which is to illustrate the veracity of the truth checklist…

    CurtGPT is modified to solve one problem, which is to illustrate the veracity of the truth checklist from the prompt and the books alone – without any training. If I have time this week or next I will create a more general version but it depends on getting all these ‘articles’ done and up on a website for external review.


    Source date (UTC): 2025-08-25 18:22:16 UTC

    Original post: https://twitter.com/i/web/status/1960045076631191645

  • Why It Works by Simple Analogy: Mazes and Roads

    Why It Works by Simple Analogy: Mazes and Roads


    “Think of intelligence as navigation. The world of possibilities is a maze — or better, a network of roads.
    At the top, you have highways — these are the causal relations, the efficient routes that reliably connect starting point to destination. Beneath them are secondary and tertiary roads — slower but still usable. Then you’ve got gravel roads, hedge roads, and finally cowpaths and goat trails. That’s the space of correlations: infinite, but mostly noise.
    Now, without rules, an AI just wanders down every cowpath, burning energy. That’s the correlation trap. It confuses plausibility with truth — like chasing rumors of shortcuts instead of sticking to a verified map.
    But with our system, we impose constraints. Think of them as toll booths and road rules. The model is forced to prune away trails that can’t be computed or tested. That’s operationalization and computability — every turn has to be executable and warrantable.
    Once you enforce those rules, the field of view narrows. Instead of a giant maze of cowpaths, you have a clear map of usable roads. That’s reducibility and commensurability — everything measured in the same units, everything collapsed to a usable form.
    On these roads, drivers follow a traffic code. That’s reciprocity: no cutting across someone else’s land, no head-on collisions. If someone cheats, they’re liable — that’s accountability. These road rules make cooperation possible, and cooperation always produces outsized returns, like carpooling down the highway.
    Now, because we’ve pruned the noise, the system can travel farther, faster, and deeper. That’s the paradox people miss: constraints don’t reduce creativity, they concentrate it. Every constraint is free energy — instead of burning fuel on cowpaths, you’re driving deeper down highways, finding new routes at the edges of lawful space. That’s where true novelty appears.
    And the payoff? You get an audit trail — a GPS trip log of every decision. You get parsimony — the shortest route possible. You get decidability — every intersection has a clear answer. And you get judgment — not just maps, but arrival at destinations.
    This is the difference: We don’t make the car bigger, we make the roads computable. We don’t shrink intelligence — we shrink error. That’s what turns a maze of correlations into a map of causal highways.”
    “Imagine a maze — like the ones we test rats with. That’s the problem of wayfinding, whether physical or cognitive. There are countless possible routes, most of them dead ends. Current AI systems explore that maze by trial and error, powered by brute force. It’s expensive, slow, and most of the energy is wasted on paths that don’t lead anywhere.”
    “Now imagine a dot with a wide cone of vision sweeping across the maze. The wider the cone, the more options the system tries to explore. Without constraints, the field of view is huge, so the model burns compute chasing thousands of irrelevant possibilities. That’s why large language models hallucinate and drift: they are exploring too much correlation without causality.”
    “When we impose constraints — starting with operationalization — the cone narrows. Instead of seeing infinite options, the system only considers the routes that can actually be tested, computed, and warranted. We haven’t reduced its intelligence. We’ve reduced its error. That makes it faster, more efficient, and far more reliable.”
    “Think of the maze not just as random paths, but as a hierarchy of roads:
    • Highways are efficient causal pathways.
    • Secondary and tertiary roads are usable but slower.
    • Gravel roads and hedge roads are costly and unreliable.
    • Cowpaths and trails are endless noise — maybe scenic, but they don’t get you to a destination.
    Without constraints, the model wastes energy wandering down cowpaths and goat trails. With constraints, it stays on the paved routes — and if it discovers a new trail that really leads somewhere, the rule is that it must connect back into the causal road network.”
    “Constraints don’t limit creativity — they concentrate it. By pruning wasted exploration, they free energy to drive deeper down the causal highways. That’s where true novelty appears: not in random noise, but at the edge of lawful recombination. Every constraint is free energy, turned from error into discovery.”
    “So our system doesn’t just make the model smaller, it makes it decidable, computable, and warrantable. We don’t shrink intelligence — we shrink error. And that’s what transforms a maze of correlations into a map of causal highways.”
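The pruning argument above can be sketched as a toy graph search. The road network, the road-class labels, and the constraint predicate (the “toll booth”) are all invented for illustration; the point is only that constraining which edges may be explored shrinks the number of nodes expanded while still reaching the destination.

```python
from collections import deque

def count_explored(graph, start, goal, constraint=None):
    """Breadth-first search from start to goal, counting how many nodes
    are expanded. An optional constraint predicate prunes edges by road
    class before they are explored -- the 'toll booth' of the analogy."""
    frontier = deque([start])
    visited = {start}
    explored = 0
    while frontier:
        node = frontier.popleft()
        explored += 1
        if node == goal:
            return explored
        for nxt, kind in graph.get(node, []):
            if constraint and not constraint(kind):
                continue  # pruned: not a computable/warrantable route
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(nxt)
    return explored

# hypothetical road network: edges labeled by road class
graph = {
    "A": [("B", "highway"), ("C", "cowpath"), ("D", "cowpath")],
    "B": [("E", "highway")],
    "C": [("F", "cowpath"), ("G", "cowpath")],
    "D": [("H", "cowpath")],
    "E": [("Z", "highway")],
}
unconstrained = count_explored(graph, "A", "Z")
constrained = count_explored(graph, "A", "Z",
                             constraint=lambda kind: kind == "highway")
print(unconstrained, constrained)  # 9 4
```

Both searches reach Z, but the constrained search never wanders down the cowpaths, so it expands fewer than half the nodes: the energy that would have been burned on noise is freed for deeper travel on the causal highways.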


    Source date (UTC): 2025-08-25 18:02:44 UTC

    Original post: https://x.com/i/articles/1960040161104011732

  • Glossary of Helpful Terms

    Glossary of Helpful Terms

    • Part I – Single Slide for Presentation
    • Part II – Glossary Outline: Narrative
    • Part III – Glossary Text
    Content (clustered terms):
    Foundations:
    Causality • Computability • Operationalization • Commensurability • Reducibility • Constructive Logic • Dimensionality
    Learning:
    Evolutionary Computation • Acquisition • Demonstrated Interests • Constraint • Compression • Convergence • Equilibrium
    Cooperation:
    Truth/Testifiability • Reciprocity • Cooperation • Sovereignty • Incentives • Accountability
    Decision:
    Decidability • Parsimony • Judgment • Discretion vs. Automation
    Strategy:
    Audit Trail • Constraint Architecture • Alignment by Reciprocity • Correlation Trap • Scaling Law Inversion • Moat by Constraint
    Closing Line at Bottom:
    “We don’t make the model bigger — we make it decidable, computable, and warrantable. That’s the bridge over the correlation trap to AGI, and it’s the moat around the companies who adopt it.”
    This way the slide works as a visual index. You control the pace in speech, and the audience sees that you have a complete system. The handout then fills in the definitions.
    (Open with their pain, name the trap, introduce your frame)
    • Correlation Trap – Scaling correlation without causality; current LLMs plateau in accuracy, reliability, and interpretability.
    • Plausibility vs. Testifiability – Today’s outputs are plausible strings, not testifiable claims.
    • Scaling Law Inversion – Brute-force parameter growth produces diminishing returns; efficiency requires a new approach.
    • Liability – Enterprises can’t adopt hallucination-prone systems in regulated or mission-critical environments.
    (Show the foundation that makes escape possible)
    • Causality (First Principles) – Move from patterns to cause–effect relations.
    • Computability – Every claim must reduce to a finite, executable procedure.
    • Operationalization – Expressing claims as actionable sequences.
    • Commensurability – All measures must be comparable on a common scale.
    • Reducibility – Collapse complexity into testable dependencies.
    • Constructive Logic – Logic by adversarial test, not subjective preference.
    • Dimensionality – All measures exist as relations in space; LLM embeddings are dimensions too.
    (Connect to evolutionary computation — familiar and universal)
    • Evolutionary Computation – Variation + selection + retention = learning.
    • Acquisition – All behavior reduces to pursuit of acquisition.
    • Demonstrated Interests – Costly, observable signals of real value.
    • Constraint – Limit behavior to channel toward reciprocity and truth.
    • Compression – Minimal sufficient representations yield parsimony.
    • Convergence – Alignment toward stable causal relations.
    • Equilibrium – Stable cooperative equilibria, not unstable correlations.
    (Shift from technical foundation to social/enterprise value)
    • Truth / Testifiability – Verifiable testimony across all dimensions.
    • Reciprocity – Only actions/statements others could return are permissible.
    • Cooperation – Reciprocal alignment produces outsized returns.
    • Sovereignty – Agents retain self-determination in demonstrated interests.
    • Incentives – The structure that drives cooperation and compliance.
    • Accountability – Outputs are warrantable, not just useful.
    (Show how this produces usable outputs — not just words)
    • Decidability – Resolving claims without discretion; satisfying infallibility.
    • Parsimony – Minimal elements for reliable resolution.
    • Judgment – The transition from reasoning to action.
    • Discretion vs. Automation – Humans required today; computability removes that dependency.
    (Land on the payoff: efficiency, moat, risk reduction)
    • Audit Trail – Every output carries its proof path.
    • Constraint Architecture – Middleware enforcing reciprocity, truth, decidability.
    • Alignment by Reciprocity – Preference alignment is fragile; reciprocity is universal.
    • Scaling Law Inversion – Smaller, constrained models outperform giants.
    • Moat by Constraint – Competitors can’t copy outputs without replicating the entire framework.
    “We don’t make the model bigger — we make it decidable, computable, and warrantable. That’s the bridge over the correlation trap to AGI, and it’s the moat around the companies who adopt it.”
    Causality (First Principles)
    Definition: Modeling the cause–effect structure of phenomena rather than surface correlations.
    Why it matters: Escapes the “correlation trap” that limits current LLMs, enabling reliable reasoning and judgment.
    Computability
    Definition: The property that every claim, rule, or decision can be expressed as a finite, executable procedure with a determinate outcome.
    Why it matters: Ensures outputs are actionable, testable, and scalable into automated systems without human patching.
    Operationalization
    Definition: Expressing claims, rules, or hypotheses as executable sequences of actions.
    Why it matters: Makes outputs testable and reproducible, turning vague text into computable logic.
    Commensurability
    Definition: Ensuring all measures and claims can be compared on a common scale.
    Why it matters: Enables consistent evaluation of outputs, preventing hidden biases or incommensurable trade-offs.
    Reducibility
    Definition: Collapsing complexity into simpler, testable dependencies.
    Why it matters: Drives interpretability and efficiency, lowering compute costs while improving reliability.
    Constructive Logic
    Definition: Logic built from adversarial resolution (tests of truth and reciprocity), not subjective preference.
    Why it matters: Produces outputs that are decidable, auditable, and legally defensible.
    Dimensionality
    Definition: Every measure or representation exists in relational dimensions.
    Why it matters: Connects directly to embeddings and vector spaces familiar to ML engineers.
    Testifiability vs. Plausibility
    Definition: Testifiability requires outputs to be verifiable by evidence; plausibility only requires surface-level coherence.
    Why it matters: Sharp contrast with today’s LLMs, highlighting why your approach is enterprise-ready.
    Evolutionary Computation
    Definition: Learning as variation, selection, and retention—nature’s optimization process.
    Why it matters: Provides a universal, scalable method of discovering solutions without brute force scaling.
    Acquisition
    Definition: All behavior is reducible to the pursuit of acquisition (resources, time, energy, information).
    Why it matters: Provides a unified grammar for modeling human and machine decisions.
    Demonstrated Interests
    Definition: Costly, observable signals of value that reveal true preferences.
    Why it matters: Grounds AI outputs in measurable reality, reducing hallucinations and false claims.
    Compression
    Definition: Reducing data or representations to minimal sufficient dimensions.
    Why it matters: Produces parsimony, lowering model size and inference costs while retaining truth.
    Convergence
    Definition: Alignment of representations toward stable, causally true relations.
    Why it matters: Prevents drift and ensures outputs get more accurate with use.
    Constraint
    Definition: Limits placed on behavior to channel search toward reciprocity/truth.
    Why it matters: Engineers understand constraint satisfaction; investors see defensibility.
    Equilibrium
    Definition: Convergence to stable cooperative equilibria instead of unstable correlations.
    Why it matters: Connects to game theory, markets, and strategy — resonates with both execs and VCs.
    Truth / Testifiability
    Definition: Satisfaction of the demand for verifiable testimony across dimensions of evidence.
    Why it matters: Creates outputs that can be trusted, audited, and defended in enterprise/legal settings.
    Reciprocity
    Definition: Constraint that only actions/statements that others could do in return are permissible.
    Why it matters: Prevents parasitic, biased, or exploitative outputs—critical for alignment.
    Cooperation
    Definition: Outsized returns from reciprocal alignment of interests.
    Why it matters: Core to scalable human–AI collaboration and multi-agent systems.
    Liability
    Definition: Costs and consequences when errors, hallucinations, or deceit occur.
    Why it matters: Reduces enterprise risk and regulatory exposure.
    Sovereignty
    Definition: The right of agents to self-determination in their demonstrated interests.
    Why it matters: Explains alignment as preserving agency, not enforcing sameness.
    Incentives
    Definition: Structures that drive agents to comply with reciprocity and cooperation.
    Why it matters: Investors think in incentives; this shows the mechanism is grounded.
    Decidability
    Definition: Resolving statements without discretion; satisfaction of demand for infallibility.
    Why it matters: Moves models from “suggestions” to judgments, enabling automated decision pipelines.
    Parsimony
    Definition: Using the minimum necessary elements for reliable resolution.
    Why it matters: Increases speed, lowers compute, and boosts generalization.
    Judgment
    Definition: Transition from reasoning to actionable decision.
    Why it matters: Enables adoption in domains where outputs must directly inform action.
    Discretion vs. Automation
    Definition: Current models require human discretion; computable decidability reduces that burden.
    Why it matters: Clarifies “will this replace humans or just assist?”
    Accountability
    Definition: Outputs aren’t just useful, they are warrantable.
    Why it matters: Key for regulated industries — finance, law, healthcare.
    Audit Trail
    Definition: Every output carries a traceable chain of causal reasoning.
    Why it matters: Creates interpretability, accountability, and compliance advantages.
    Constraint Architecture
    Definition: Middleware layer that enforces natural law (reciprocity, truth, decidability) on outputs.
    Why it matters: Differentiates from competitors — turns LLMs from stochastic parrots into causal engines.
    Alignment by Reciprocity
    Definition: Aligning models by reciprocal constraints, not subjective preference tuning.
    Why it matters: Scales alignment universally across cultures, domains, and industries.
    Correlation Trap
    Definition: The industry blind spot of scaling correlation without causality.
    Why it matters: One phrase that crystallizes the problem you solve.
    Scaling Law Inversion
    Definition: Replacing brute-force scaling with constraint-guided convergence for efficiency.
    Why it matters: Challenges the orthodoxy — smaller models can outperform giants.
    Moat by Constraint
    Definition: Competitive defensibility created by embedding universal constraints.
    Why it matters: VCs see a technical moat that can’t be easily copied by rivals.


    Source date (UTC): 2025-08-25 17:44:33 UTC

    Original post: https://x.com/i/articles/1960035585239957928

  • It works though, right? You’ve been brain-draining for a day or two, and you drop in the eggs benedict…

    It works though, right? You’ve been brain-draining for a day or two, and you drop in the eggs benedict, and it’s like a recovery session. 😉

    hugs btw. 😉


    Source date (UTC): 2025-08-25 17:36:08 UTC

    Original post: https://twitter.com/i/web/status/1960033466768380364

  • Great observation. But, is it intentional? Or is it like a tort, you’ve transmitted a falsehood…

    Great observation. But, is it intentional? Or is it like a tort, you’ve transmitted a falsehood whether knowingly or not? How many people transmit lies without knowing they’re lying? How much of discourse by that measure consists of lying? We did not evolve to tell the truth – we evolved to negotiate. Truth and Lying are only valuable in the context of that negotiation. 🙁


    Source date (UTC): 2025-08-25 17:35:22 UTC

    Original post: https://twitter.com/i/web/status/1960033273075450330

  • (Diary) Taking a break for eggs benedict. 😉 Thinking about how challenging it’s been…

    (Diary)
    Taking a break for eggs benedict. 😉 Thinking about how challenging it’s been to write our work in multiple frames: from libertarian philosophy, to analytic philosophy, to law, to science, to operationalism, and now the current AI technology stack.

    But the tech-stack frame is more accessible than every other – at least for the target VC / Tech Exec / Techie populations.

    And really, it might sound crazy, but in technology terms we’ve produced a specification for a descriptive equivalent of an object-oriented programming language, to which all of existence can be reduced – and if it cannot, we cannot claim it is true, ethical, or possible: the equivalent of ‘it will not compile’.

    Now of course, the difference is that our base type is ‘charge’, which is the universe’s equivalent of unknown, 0, -1, and +1. And our possible operations are combinations thereof. So, imagining a class hierarchy, we can compose any possible transformation and state the universe is capable of – even though all we really care about is, at human scale, that category of transformation we call ‘operations’: actions.
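A minimal sketch of what such a base type might look like, assuming Python as the host language. The four Charge values come from the post itself, but the combination rule and the Operation class are hypothetical illustrations of the described class hierarchy, not the actual specification.

```python
from enum import Enum

class Charge(Enum):
    """Hypothetical base type: the universe's 'unknown, 0, -1, and +1'."""
    UNKNOWN = None
    NEG = -1
    ZERO = 0
    POS = +1

def combine(a: Charge, b: Charge) -> Charge:
    """Toy composition rule (my assumption): unknowns propagate;
    otherwise the signs add and saturate back into {-1, 0, +1}."""
    if a is Charge.UNKNOWN or b is Charge.UNKNOWN:
        return Charge.UNKNOWN
    return Charge(max(-1, min(1, a.value + b.value)))

class Operation:
    """Root of a hypothetical class hierarchy: a transformation is a
    named sequence of charge combinations folded into a state."""
    def __init__(self, name, steps):
        self.name = name
        self.steps = steps  # list of Charge values to fold in

    def apply(self, state: Charge) -> Charge:
        for step in self.steps:
            state = combine(state, step)
        return state

op = Operation("hypothetical action", [Charge.POS, Charge.NEG, Charge.POS])
print(op.apply(Charge.ZERO).name)  # POS
```

Anything expressible as such a composition ‘compiles’; anything that cannot be reduced to the base type and its operations fails the test, which is the point of the analogy.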

    And when fully understood, our ‘specification’ is deceptively simple – as is everything else in the universe once its first principles are understood. The problem is ‘revising’ our thinking to make use of it. Which, if done, is extraordinarily empowering. (Probably an increase of an SD in demonstrated intelligence.)


    Source date (UTC): 2025-08-25 17:11:27 UTC

    Original post: https://twitter.com/i/web/status/1960027254114992466