Theme: Governance

  • Joe, sorry I missed this yesterday. Crazy busy day. Here is a piece I wrote on G

    Joe, sorry I missed this yesterday. Crazy busy day.

    Here is a piece I wrote on GHW Bush vs D Trump. Basically the same strategy under different conditions. Bush during a time of peace, and Trump during a time of pending war.

    We Needed GHW Bush Gently in the 1990s, but Voting Him Out Demanded Trump Harshly in the 2020s

    https://x.com/curtdoolittle/status/1977475118747296072…


    Source date (UTC): 2025-10-12 20:44:19 UTC

    Original post: https://twitter.com/i/web/status/1977475442232979934

  • We Needed GHW Bush Gently in the 1990s, but Voting Him Out Demanded Trump Harshl

    We Needed GHW Bush Gently in the 1990s, but Voting Him Out Demanded Trump Harshly in the 2020s

    G.H.W. Bush (Bush 41) is often seen by foreign-policy scholars as one of the more deft U.S. presidents in navigating the transition from a bipolar Cold War world to a new “unipolar plus” era. But his vision was more nuanced than a simple handoff of U.S. primacy or a retreat. Let me reconstruct what we do know about his aims, constraints, and the mechanisms he attempted, and then point toward where the gaps and tensions lie (which is useful for diagnosing why later decades haven’t matched that vision).
    Here’s a sketch of Bush’s foreign-policy architecture in the twilight of the Cold War and the immediate post–Soviet collapse:
    1. “Europe Whole and Free” / Integration of the East
      Early in his presidency (May 1989, in Mainz), he articulated a vision of a united Europe liberated from the Iron Curtain.


      That meant support for democratic transitions in Eastern Europe, but within frameworks of security and institutional alignments (e.g., NATO, European cooperation).


      The idea was not to abandon strategic interest in Europe, but to shift from containment to integrating the former Soviet satellites into a liberal, rules-based order.

    2. A “New World Order” built on rule of law, collective security, and multilateral legitimacy
      In speeches (notably his post–Gulf War address to Congress), he spoke of how from the “troubled times” of the Gulf crisis, a new world order might emerge—one less dominated by raw power, more by cooperation, legal norms, and restraint of aggression.


      In practice, this meant working through coalitions (e.g. in the Gulf War) and embedding U.S. actions in UN or allied legitimacy.

    3. Managing the Soviet collapse (and its dangers) prudently
      Bush and his team were acutely aware that a disorderly dissolution of the USSR, especially with nukes in disparate republics, would pose huge risks.


      So he favored careful sequencing, assurances to Gorbachev, and support to successor states under multilateral oversight (e.g., guarantees, credit, economic assistance) rather than abrupt collapse.


      For example, he sought to have financial institutions (IMF, World Bank) partner in “special associations” with the USSR to provide technical and economic support.


      He also signed the Freedom Support Act in 1992 to channel U.S. aid, democracy promotion, economic assistance, and nonproliferation measures to the newly independent republics.

    4. Preventing regional hegemonies, promoting stability, and limiting conflict
      With the Soviet “enemy” fading, Bush looked to use U.S. power to prevent instability or emerging regional contenders from disrupting global order.


      He also anticipated using U.S. capabilities for humanitarian interventions, peacekeeping, enforcement of norms (as in Iraq/Kuwait), and internal order maintenance—rather than purely ideological confrontation.

    5. Preserving U.S. strength at home as the basis for global standing
      Bush also repeatedly emphasized that to lead abroad, America must remain strong at home—economically, militarily, socially.


      He pushed (with varying success) for fiscal responsibility, controlling deficits, and ensuring the military and energy posture were sustainable.

    Bush’s vision had to be operationalized through diplomacy, institutions, and incremental steps. He didn’t envision a U.S. abdication of leadership; rather, a transformation of its role: still central, but less domineering and more embedded in multilateral frameworks.
    Your sense that this vision couldn’t simply be “declared” and implemented is correct. There were deep structural challenges. Some key frictions:
    • Domestic constraints: The U.S. political system resisted generous foreign commitments when there was domestic pushback (budget deficits, public fatigue, tax resistance). Bush himself struggled with deficit pressures.

    • Allied skepticism / adjustment: European and other allies had their own agendas, and some resisted expanded U.S. leverage via institutions they perceived as U.S.-led.
    • Russian retrenchment / revisionism: The hope of bringing Russia into the new order as a junior partner foundered when Russia later rejected aspects of the post–Cold War structures.
    • Institutional inertia: Multilateral organizations (UN, IMF, World Bank) and alliances were built for a different era; reorienting them is slow and contested.
    • Ambiguity of power transition: While Bush hoped to manage the transition gracefully, others (inside and outside the U.S.) saw opportunity to contest U.S. primacy, or to push local agendas unaligned with his vision.
    • Unmet expectations and backlash: Where transitions or democratizations failed (e.g. in Russia, parts of Eastern Europe, the Middle East), disillusionment and backlash undermined the moral credibility of the U.S. model.
    • Overstretch risks: Trying to do too many roles (defender, enforcer, democratizer) risks dilution, inconsistency, or collapse under strain.
    Additionally, later administrations diverged in style, priorities, or doctrine (e.g. more unilateralism, preemption, or retrenchment), which broke the thread of Bush’s more cautious and consensus‐based posture.
    I want to nuance your phrasing: Bush did not appear to aim for a U.S. withdrawal or abdication of global leadership. Instead, his approach is better described as a recalibration of how that leadership is exercised. He wanted less coercive imposition, more legitimacy, more multilateral anchoring—not a reduction to a passive role. Many commentators characterize him as a conservative realist who sought to maintain U.S. primacy, but to do so in a sustainable, multilateral, legally grounded fashion.

    So the “redistribution” you describe would not be a dilution of U.S. role per se, but more a rebalancing among states and institutions to share responsibilities under U.S. strategic direction.
    Given that you see a kind of breakdown happening now (e.g. turbulence under Trump), it’s worth isolating where the Bush template failed or was undermined—and how one might revive it now (or reimagine it). Some reflections:
    • Fragility of continuity: The world order needs strong, consistent commitment across administrations. When populist, isolationist, or transactional shifts occur, the institutional and normative scaffolding erodes.
    • New challenges (multipolarity, nonstate actors, digital domains, climate, pandemics) complicate the old Westphalian order in ways Bush did not fully prefigure.
    • Power diffusion: The rise of China, regional powers, transnational networks, and nonstate actors makes commanding leadership harder to sustain.
    • Legitimacy crisis: Many parts of the world now view U.S. global leadership ambivalently or negatively (seen as hubris or neo-imperial). So the moral/institutional legitimacy component is more fragile.
    • Disconnect between vision and mechanism: Bush had good ideas about multilateralism, but lacked wholly coherent, robust institutional innovations to embed them permanently. Too much rested on personalities, ad hoc coalitions, and discretionary U.S. leverage.
    To revive a Bush-style approach (or its spirit) today, some possible strategies could include:
    • Strengthening global governance hybrids (alliances, regional bodies, legitimized coalitions) as burden-sharing mechanisms.
    • Embedding conditionality and accountability in aid, support, trade, and norms enforcement (so that redistribution of power is not a free ride).
    • Designing durable interoperability and institution-building that can outlast any one U.S. administration (so that shifts in U.S. politics don’t collapse the structure).
    • Adapting to nontraditional domains—cyber, climate, pandemics—so that leadership is less about military dominance and more about problem-solving cooperation.
    • Cultivating strategic humility: using U.S. power not to dominate but to catalyze, enforce rules, and enable others, while resisting unilateral overreach.
    • Reasserting a legitimacy narrative (norms, values, reciprocity) to counter the narrative of U.S. self-interest dominance.
    Here is a map of (A) what Bush 41 intended, (B) what actually got built (≈1990–2010), and (C) where/why the architecture fractured afterward (≈2010–2025). I’ll keep the causal chains explicit and minimize narrative.
    1. “Europe Whole and Free” / Integration of the East

    • Intent (1989–91). End containment; integrate the East into a liberal, rules-based order—anchored by NATO/EC (later EU) and CSCE/OSCE—rather than spheres of influence. Bush’s Mainz speech (“Europe whole and free”) framed the objective; the 2+4 process settled Germany; OSCE’s Paris Charter put norms in writing.

    • Mechanisms. 2+4 Treaty (full German sovereignty); Rome ’91 NATO Strategic Concept (from “forward defense” to “cooperative security”); CSCE→OSCE institutionalization.

    • Built (1990–2010). NATO enlargement (1999/2004); EU enlargement (2004/2007); OSCE norms became baseline language for elections, borders, minority rights—imperfect but real.

    • Fractures (2010–2025). Russian revanchism and open war (2014/2022) rejected the Paris-Charter logic; NATO/EU enlargement became flashpoints; OSCE instruments eroded in practice. Diagnosis: integration without durable Russia buy-in + security guarantees not matched to political economy outcomes. (Speculative but warranted inference.)

    2. A New World Order: Rule of Law and Collective Security

    • Intent. Use UN-anchored coalitions to enforce the prohibition on aggression; legitimacy first, force as last resort. Bush’s 6 Mar 1991 address cast the Gulf coalition as the template.

    • Mechanisms. UN SCR 678 authorized “all necessary means” after Iraq’s refusal to withdraw from Kuwait. Demonstration effect: law-backed coercion works when major powers align.

    • Built. 1990s saw repeated recourse to mandates/coalitions; NATO’s 1991/1999 concepts balanced UN primacy with alliance autonomy—already a seam.

    • Fractures. Kosovo without UNSC authorization (1999) and Iraq (2003) split legitimacy from action; later, UNSC paralysis over Syria/Ukraine normalized great-power veto politics. Diagnosis: the template required sustained great-power comity that proved non-durable.

    3. Managing the Soviet Collapse Prudently

    • Intent. Prevent a chaotic dissolution and loose-nukes; sequence assistance; embed reforms via IFIs; keep arms-control momentum.

    • Mechanisms. START I (deep strategic cuts); early economic/technical assistance; Freedom Support Act to stabilize successor states.

    • Built. Massive reductions in deployed strategic warheads; Nunn–Lugar (beyond Bush but continuous with his frame) removed/secured arsenals; partial economic stabilization.

    • Fractures. Arms-control architecture decayed (later treaty exits/suspensions); Russian resentment of 1990s outcomes metastasized; Western macro-assistance conditioned on reforms landed unevenly. Diagnosis: security de-risking succeeded; political-economic integration failed to self-stabilize.
    4. Primacy Exercised Through Coalitions and Restraint

    • Intent. Maintain U.S. primacy but exercise it through coalitions, institutions, and restraint; avoid regime-change maximalism; stop where legitimacy ends (Iraq ’91).

    • Mechanisms. Deliberate multilateralism; limited aims (Panama/Kuwait as policing, not occupation doctrine); alliance consultation; economic statecraft.

    • Built. 1990s coalition habit stuck; WTO launch (1995) and EU/NATO growth created a lattice for “burden-sharing.”
    • Fractures. Post-9/11 doctrine shift toward preemption and social-engineering campaigns; domestic political cycles made the “legitimacy first” heuristic non-stationary. Diagnosis: doctrine drift broke the Bush constraint discipline.
    5. Strength at Home

    • Intent. Fiscal prudence + competitive economy to underwrite credible global posture. Bush emphasized this explicitly post-Gulf.

    • Built. 1990s peace-dividend + tech boom masked strategic under-investment in resilience.
    • Fractures. 2008 financial crisis, deindustrialization, and domestic polarization degraded consensus for costly global commitments; retrenchment/whiplash followed. Diagnosis: domestic capacity is a rate-limiter on any grand strategy.
    Why the architecture fractured (cross-cutting causes):
    1. Continuity problem (political time). The architecture required multi-administration constancy; instead, we alternated between maximalism and retrenchment.
    2. Legitimacy elasticity. The system depended on UNSC/coalition legitimacy; once key precedents broke, rivals priced in non-compliance.

    3. Integration asymmetry. Security integration outpaced political-economic convergence, especially with Russia; expectations outran deliverables.

    4. Power diffusion. Rise of China, empowered mid-powers, and non-state actors created a “many vetoes” world the early-90s tools didn’t anticipate.
    5. Institutional inertia. UN/IFIs/OSCE/NATO adapted, but slowly; mandates and enforcement lagged novel domains (cyber, supply chains, gray-zone coercion).
    How the template might be revived:
    • Re-tie force to law: Recenter coalitions around explicit legal thresholds (territorial aggression, WMD transfer, maritime blockade) with pre-negotiated response menus.
    • Build “coalitions as institutions”: Make ad-hoc coalitions persistent (secretariat, funding, readiness), so legitimacy survives U.S. electoral cycles.
    • Balance enlargement with settlement offers: Pair security guarantees with bounded political settlements and economic corridors that make acceptance Pareto-superior—to reduce revanchist incentives.
    • Economic statecraft with reciprocity: Condition access (trade, capital, technology) on verifiable compliance with security/economic rules, enforced by automatic, multilateral penalties.
    • Domestic base first: Tie foreign-policy commitments to domestic productivity and fiscal rules so external promises remain credible.
    • OSCE-plus in the East; Indo-Pacific OSCE-analog: Put verifiable border/incident mechanisms where escalation risks are highest.
    President Trump is pursuing a coercive re-balancing—less Bush-style “legitimacy-first multilateralism,” more hard-edged leverage—aimed at:
    • (i) forcing allied burden-sharing,
    • (ii) price-disciplining adversaries with tariffs/finance/tech controls, and
    • (iii) tightening denial-and-deterrence against Iran/Russia/China while keeping U.S. optionality on escalation. Mechanisms and evidence follow.
    1. Allied burden-sharing (NATO)

    • Mechanism. Make U.S. security guarantees contingent on allies’ spend/effort; secure a headline spending rule; keep Article-5 language deliberately ambiguous ex-ante, then pocket a “win” ex-post.
    • What happened. NATO leaders adopted a 5% of GDP defense-spending target (with some carve-outs for resilience), publicly framed as a Trump victory in “equalizing” the burden.

    • Strategic effect. Moves Europe toward self-funding containment of Russia and partial backstopping of Mideast spillovers (Iran proxies), reducing U.S. hegemonic load while preserving U.S. veto/leadership. (This is the closest structural rhyme with Bush’s “burden-sharing,” but achieved via threat leverage rather than consensual design.)

    2. Tariffs as bargaining leverage

    • Mechanism. A 10% universal baseline tariff under IEEPA, plus “reciprocal” surcharges on deficit partners; threaten step-ups (incl. 100% on China) to coerce behavior on trade/tech/energy alignment with Russia policy.

    • Strategic effect. Convert U.S. market access into a continuous bargaining instrument against China (supply-chain, rare earths, oil-sanctions leakage) and a disciplining device for allies; shift costs from U.S. treasury to foreign producers/consumers until counter-retaliation binds.

    3. Conditional arming of Ukraine

    • Mechanism. Provide weapons with conditions (“how will they be used?”), favor pay-for-weapons deals, and keep escalation ladders under U.S. control (e.g., Tomahawks contingent on targeting assurances).

    • Strategic effect. Preserve Ukrainian deterrent/gains while capping escalation risk with Russia; increase European cost-share; retain negotiation leverage with Moscow. (Coercive management, not Bush-era liberal enlargement.)

    4. Iran: snapback and posture

    • Mechanism. Push UN/E3 “snapback” to restore multilateral sanctions; rebuild embargoes (arms/drones/missiles), rally EU measures, and raise regional force posture; hold open a “deal if deterrable” option.

    • Strategic effect. Re-multilateralize pressure on Tehran (a Bush-like move in method, but with higher coercive baseline) while signaling readiness to strike if thresholds crossed.

    5. Russia: containment without widening the war

    • Mechanism. Tighten Ukraine’s air defenses and long-range reach selectively; float arms-control openness to cap worst-case spirals; use tariff/energy/financial tools to drain Russian war capacity.

    • Strategic effect. Contain without automatic widening of the war; keep a negotiation vector alive while forcing Moscow to price in NATO/EU rearmament.

    6. China: raising the cost of alignment with Russia

    • Mechanism. Threaten 100% tariffs; push EU toward coordinated tariffs tied to Russian-oil purchasing; signal forthcoming software/tech restrictions; weaponize supply-chain chokepoints.

    • Strategic effect. Raise the marginal cost of China’s support to Russia/Iran; force firms and allies to choose resilient, non-PRC-centric configurations even at near-term economic pain.

    • Similarity in end-state: both aim to reduce U.S. net load while keeping the order compatible with U.S. interests.
    • Difference in means: Bush used legal-institutional lock-ins and broad consent; Trump uses tariffs, conditional security guarantees, and public bargaining to compel faster rebalancing.
    • Risk profile: Bush minimized escalation and retaliation; Trump tolerates higher economic and diplomatic volatility to accelerate convergence (NATO 5%, EU snapback on Iran, conditioning of Ukraine aid).

    What to watch next:

    • Tariff ratchets aimed at China and possibly select EU sectors if alignment lags on Russia oil/tech leakage.

    • Ukraine package with explicit use-conditions, plus EU/NATO cost-share targets tied to delivery schedules.

    • Iran enforcement surge: maritime interdictions, drone/missile part sanctions, and proxy cost-imposition, while leaving a narrow channel for a limited “no-weapon” deal.

    Principal risks:

    • Market/ally blowback from tariff escalation (supply-chain inflation, countersanctions).

    • Escalation ladders in Ukraine (Tomahawks/long-range) and Persian Gulf incidents under tighter snapback enforcement.

    • European politics: sustaining 5% defense outlays through domestic cycles and fiscal rules.



    Source date (UTC): 2025-10-12 20:43:02 UTC

    Original post: https://x.com/i/articles/1977475118747296072

  • We are maintaining the separation between NLI as the legal research and activism

    We are maintaining the separation between NLI as the legal research and activism organization, and Runcible as the applied technology organization. Thus we de-politicize Runcible while retaining the activism of NLI.

    As such, we are merging Reality by Chanting (RBC) into Runcible by repatriating RBC assets from Europe (UK/UA) into the USA and Runcible, and licensing the research from NLI to Runcible. NLI is prohibited from allowing what’s called ‘capture’ of the IP: the conversion of NLI work on truth, reciprocity, possibility, natural law, and what we call ‘perfect government’ into some corruption of it.

    This integration of NLI IP and RBC IP into Runcible creates two business lines: the Runcible Intelligence layer, which we license, and the Runcible application front end with the Runcible AI back end, a commercial application for running organizations at any scale, from personal to business to industrial to government to military.

    Even today we can see how Brandon Hayes (president of NLI) is conducting reform (lawfare) against the proceduralism and bureaucracy endemic to governments, while I and some others are driving Runcible to commercialization. That commercialization will allow revenue via licensing from NLI to RBC to pursue both socio-political and commercial goals.

    The end result, we hope, is saturation of the population with an information system on one hand, and a revolution in desktop and laptop interfaces that synthesize AI, Business, and Government on the other.

    This two-pronged approach, we assume, will do more to indirectly educate the public and transform public discourse than any of our attempts to modify the law.

    Only then will the law be modifiable.

    Only then can we prevent a civil war that is growing from warm to hot faster than we can act.

    Curt Doolittle
    NLI and Runcible.


    Source date (UTC): 2025-10-12 19:16:26 UTC

    Original post: https://twitter.com/i/web/status/1977453324946505785

  • EXPLAINING HEGSETH’S “LAYING DOWN THE LAW” Strategy Hegseth’s approach was delib

    EXPLAINING HEGSETH’S “LAYING DOWN THE LAW”

    Strategy
    Hegseth’s approach was deliberately confrontational and theatrical, leveraging the rare, logistically massive convocation of global military brass—despite the security and travel burdens—to assert raw civilian authority over professional military leadership and create a spectacle that could be televised for political impact.

    By publicly disparaging the attendees’ fitness, promotions, and past performance (e.g., blaming them for failures in Iraq and Afghanistan despite their extensive combat experience), he sought to intimidate and demoralize potential dissenters, drawing an ideological line between “woke” officers (to be fired) and “apolitical, hard-charging” war fighters (to be elevated).

    This litmus-test strategy enables a purge of non-aligned personnel, replacement with loyalists, and a shift away from modern priorities like alliances and cyber threats toward aggressive, punitive violence—ultimately politicizing the apolitical military to advance the MAGA project’s reactionary goals.

    Veterans and analysts have decried this as egotistical, dangerous, and a violation of the civilian-military divide, warning it could erode morale and operational effectiveness.

    CD: Effectively allowing competitors to force out the Clinton-Obama era conversion of the military into a social program to advance the left’s agenda by undermining the military as the last respected branch of government, and primary means of resistance against the enemy: the left.


    Source date (UTC): 2025-10-02 22:02:15 UTC

    Original post: https://twitter.com/i/web/status/1973871175496532124

  • (Told You So) As the wyrm turns, we can see I was right in 2022 on Russia, Ukrai

    (Told You So)
    As the wyrm turns, we can see I was right in 2022 on Russia, Ukraine, and Poland, and was right on Trump’s strategy with Europe. Prediction isn’t hard if you can separate your moral instinct from your predictions, and measure from the data rather than the storytelling. The left doesn’t work by anything other than instinct, and the right is too easy to sway by it.

    Humans are exasperating animals. 😉


    Source date (UTC): 2025-10-02 18:58:28 UTC

    Original post: https://twitter.com/i/web/status/1973824924113576221

  • “Nations do what they absolutely have to do.”— George Friedman @georgefriedman

    “Nations do what they absolutely have to do.”— George Friedman
    @georgefriedman

    I might argue for a touch more clarity; that they only do what costs them, what they absolutely have to do.


    Source date (UTC): 2025-10-02 18:44:36 UTC

    Original post: https://twitter.com/i/web/status/1973821433756463596

  • Computability: The Constraint of Constraints (Natural Law Core) Civilizations ri

    Computability: The Constraint of Constraints (Natural Law Core)

    Civilizations rise by mastering scale. But scale is entropy. More people, more knowledge, more complexity—each adds friction to coordination and incentive to deception. Without constraint, every system devolves under the weight of its own intricacy.
    Computability is the response to that entropy. It is the only known method of preserving cooperation under scale, time, and diversity—without requiring shared blood, shared gods, or shared illusions. Computability replaces the trust of face-to-face tribes with the trust of formal transformation: the logic of reciprocity embedded in law, policy, and speech.
    As empires fell to corruption, as ideologies collapsed under falsification, and as traditions failed to coordinate strangers across scale, computability remains the final refinement of the rule of law—one that does not merely punish parasitism, but renders it undecidable, unratifiable, and unsustainable.
    Computation, in this sense, is not mechanical. It is civilizational. It is the logic that allows strangers to cooperate, adversaries to negotiate, and civilizations to persist—not by myth or mandate, but by measurable, decidable, reciprocal transformation.
    The future belongs to the computable.
    Every cooperative order depends on constraint. Every constraint depends on decidability. Every decidability depends on measurement. But every measurement, to constrain, must be computable. Computability is the final convergence of truth, law, and enforcement.
    Throughout history, civilizations have sought means of resolving disputes, managing cooperation, and suppressing parasitism. They have done so by invoking gods, reason, tradition, contract, and consensus. But none of these systems scaled without failure. Each failed not due to lack of sophistication—but due to their undecidability. That is: the inability to reach judgments without discretion.
    Computability ends this ambiguity. It reduces all claims—moral, legal, political—to sequences of observable actions and consequences. It enforces a standard: that nothing may be judged unless it is operationally decidable using shared categories of cost, benefit, harm, and reciprocity. Computability makes law and morality what physics made mechanics: testable.
    Constraint must be:
    • Enforceable (it must be possible to act upon),
    • Decidable (it must be possible to determine application),
    • Computable (it must be possible to decide without discretion).
    Any failure in this chain enables parasitism—via vagueness, evasion, narrative capture, or rent-seeking.
    A process, claim, or system is computable if it can be determined true, false, or undecidable by a finite, operational, non-discretionary sequence of transformations—using only observable, testifiable, and warrantable human actions or consequences.
    This differs from:
    • Turing computability: machine-executability of formal problems.
    • Economic computability: optimization of preferences under constraints.
    • Mathematical computability: decidability of statements within an axiomatic system.
    Computability here is praxeological: it reduces statements to operations, operations to costs, and costs to reciprocal liability.
    Every prior system failed to scale without corruption because its judgments were interpretive, not transformational.
    A judgment is computable iff:
    • All terms are operational (reducible to human action and observable consequences),
    • All claims are testifiable (subject to falsification, adversarial challenge, and demand for warranty),
    • All actions are reciprocally insurable (impose no unaccounted cost on others),
    • All conclusions are non-discretionary (invariant under interpretation, reproducible by others).
    This system forbids interpretation without transformation. It eliminates rent-seeking by removing ambiguity.
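    Read as a decision procedure, the four conditions above are a simple conjunction, and a single failed condition re-admits discretion. The toy sketch below illustrates only that logic; every name in it is hypothetical and it describes no actual NLI or Runcible system:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Judgment:
        """Toy model of a claim submitted for judgment (illustrative only)."""
        operational: bool             # all terms reduce to observable actions/consequences
        testifiable: bool             # survives falsification, challenge, and warranty
        reciprocally_insurable: bool  # imposes no unaccounted cost on others
        non_discretionary: bool       # conclusion is invariant under interpretation

    def computable(j: Judgment) -> bool:
        # A judgment is computable iff ALL four conditions hold;
        # any single failure re-admits discretion, and with it parasitism.
        return (j.operational and j.testifiable
                and j.reciprocally_insurable and j.non_discretionary)

    print(computable(Judgment(True, True, True, True)))   # prints True
    print(computable(Judgment(True, True, True, False)))  # prints False
    ```

    The point of the conjunction is that the criteria are not weighted or traded off: there is no partial credit, because partial credit is exactly where interpretive privilege re-enters.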
    No domain is exempt. Computability makes the human universe decidable not in symbols—but in actions and consequences.
    A computable society prevents interpretive privilege. No elite arbitrates ambiguity. No institution escapes liability. Law becomes a machine for reciprocity.
    Without computability:
    • Trust decays with population size,
    • Law fragments with institutional capture,
    • Morality dilutes with inclusion,
    • Fraud grows with complexity.
    With computability:
    • Constraint scales with information,
    • Trust persists despite anonymity,
    • Morality becomes decidable,
    • Law resists interpretation.
    Computability is the only scalable method of constraint. It transforms the challenge of scale from one of enforcement to one of form.
    Justice becomes a transformation:
    • Input: Demonstrated interest, claim, or act,
    • Process: Operational reduction + adversarial testing,
    • Output: Reciprocal judgment.
    The court becomes a computation machine for the production of non-discretionary outcomes. Justice is no longer argued—it is executed.
    Where interpretation exists, parasitism follows:
    • Bureaucracy self-perpetuates,
    • Judiciary inflates discretion,
    • Legislatures create unfalsifiable law,
    • Media obscures cost.
    Computability strips institutions of ambiguity:
    • Legislation must be operational,
    • Judgment must be reproducible,
    • Testimony must be warrantable.
    To understand computability, we must first trace the full epistemic chain:
    ❖ Naturalism → Causality
    All human judgment presumes the physical world operates under invariant cause and effect: categorical determinism. No claim, no science, no law is possible without this presumption. Naturalism prohibits appeals to supernaturalism, relativism, or constructivism.
    ❖ Realism → Existence
    Existence consists in persistence. A referent is that which remains identifiable across time. Intelligent observers differ only in scale and rate of perception—but referents that persist are real across all intelligences. This persistence enables reference, measurement, and law.
    ❖ Operationalism → Measurability and Testifiability
    A term is meaningful only if it describes an observable operation. We cannot testify to what we cannot describe operationally. Operationalism eliminates ambiguity by reducing language to actions.
    ❖ Instrumentalism → Theory as Tool
    Instrumentalism treats theories as tools for producing reliable transformations—not metaphysical truths. Theories are machines for reducing distance, scale, and time into testable outcomes. Instrumentalism bridges operationalism and testifiability.
    ❖ Testifiability → Truth
    Truth is that which survives adversarial testing under conditions of reciprocity. Testifiability includes falsification, due diligence, and warranty. If a claim cannot survive challenge, it cannot be trusted.
    ❖ Decidability → Judgment
    Decidability is the satisfaction of the demand for infallibility in context—without requiring subjective interpretation. It replaces ambiguity with rule-based conclusion.
    ❖ Computability → Constraint
    Computability is the transformation of claims into operational sequences that require no discretion. It is the execution of constraint.
    This is the natural law of knowing, judging, and acting. It is the epistemic architecture of computable civilization.
    We are not asking civilizations to abandon tradition, myth, religion, or ideology. These systems provide:
    • Graceful failure under uncertainty or ignorance,
    • Graceful upgrade as knowledge increases.
    But where disputes must be resolved, harm prevented, or punishment imposed—we must judge. And when we judge, we must judge using universal, testifiable, and non-discretionary methods—regardless of the relative ignorance or sophistication of individuals, classes, or polities.
    We do not abolish the sacred. We abolish the unscrupulous.
    Civilizations must resist entropy—both physical and informational. They must constrain error, fraud, and parasitism. But as they scale in size, complexity, and diversity, interpretation becomes too costly and dangerous.
    Only computability allows a civilization to:
    • Scale constraint without centralization,
    • Preserve cooperation without trust,
    • Prevent collapse without tyranny.
    Just as double-entry bookkeeping made commerce trustless and scalable,
    Just as Boolean logic made reasoning mechanizable,
    Computability makes cooperation infallible and executional.
    This is the final condition of any civilization that hopes to resist decay, avoid capture, and transcend its limits.
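The double-entry analogy can be made concrete: the bookkeeping invariant is a computable constraint that any party can verify mechanically, without trusting the bookkeeper. The ledger format here is a hypothetical illustration.

```python
# Sketch: double-entry bookkeeping as a computable constraint. Every
# transaction posts equal debits and credits, so the whole ledger can be
# verified by rule rather than by trust in the party who recorded it.

def ledger_balances(entries: list) -> bool:
    """Each entry is (account, debit, credit); the invariant is sum equality."""
    total_debits = sum(debit for _, debit, _ in entries)
    total_credits = sum(credit for _, _, credit in entries)
    return total_debits == total_credits

sale = [("cash", 100, 0), ("revenue", 0, 100)]
fraud = [("cash", 100, 0), ("revenue", 0, 40)]  # 60 unaccounted for

assert ledger_balances(sale)       # constraint satisfied
assert not ledger_balances(fraud)  # violation detected by rule, not judgment
```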


    Source date (UTC): 2025-09-29 17:45:49 UTC

    Original post: https://x.com/i/articles/1972719477507997879

  • From Prediction to Governance: Cognitive Hierarchies, LLM Evolution, and the Nec

    From Prediction to Governance: Cognitive Hierarchies, LLM Evolution, and the Necessity of Constraint

    B. E. Curt Doolittle
    Natural Law Institute, Runcible Inc.
    Email: curt@runcible.com
    Author Note
    This research originates from the Natural Law Institute’s work on decidability and governance theory and is implemented by Runcible Inc. as part of its AI governance product development.
    Large Language Models (LLMs) have rapidly evolved from statistical pattern recognition toward increasingly complex reasoning tasks. This trajectory follows a clear cognitive hierarchy:
    1) Auto-Association (Prediction) → 2) Wayfinding (State Navigation) → 3) Transformation (Formal Operations) → 4) Permutation (Reasoning Under Uncertainty).
    Each stage amplifies both cognitive capability and liability risk, as errors shift from minor inconvenience to systemic or existential threat. Scaling model size alone cannot guarantee truthfulness, legality, or reciprocity once outputs act on real-world systems under conditions of incomplete knowledge and adversarial pressure. This paper argues that universal constraint layers—exemplified by Runcible—become non-optional infrastructure at the upper layers of this hierarchy, certifying correctness, enforcing legality, and ensuring reciprocal fairness before outputs propagate into high-stakes environments. By providing a single governance spine for advanced AI, such layers transform LLMs from experimental curiosities into operationally defensible systems, creating early acquisition pressure, regulatory alignment, and network effects that establish the constraint layer as the first commercially essential infrastructure of the AGI era.
    Keywords: Large Language Models, Cognitive Hierarchy, AI Governance, Constraint Layers, Decidability
    The popular refrain that “large language models just predict the next word” describes LLMs with the same reductionism as saying “the brain just fires neurons” or “mathematics just manipulates symbols.” It is literally true yet conceals the very phenomena that make the system interesting, powerful, and increasingly dangerous.
    Modern LLMs no longer merely complete patterns; they create latent cognitive spaces in which prompts become problems, goals become trajectories, and outputs emerge through incremental demand satisfaction rather than pre-scripted plans. With each architectural and algorithmic advance—from attention mechanisms to chain-of-thought reasoning, from tool-use integration to memory scaffolding—LLMs climb a cognitive hierarchy that mirrors the functional layers of human intelligence:
    1. Auto-Association (Prediction and Valence): fast, heuristic pattern completion assigning costs, risks, and opportunities to perceptual inputs.
    2. Wayfinding (State Navigation): goal-directed movement through environments or problem spaces.
    3. Transformation (Formal Operations): mapping inputs to outputs via deterministic or symbolic processes.
    4. Permutation (Reasoning Under Uncertainty): constructing and testing hypothetical states under partial information.
    At each stage, the cognitive cost and error consequences rise exponentially. Prediction errors produce mild inconvenience; navigational errors incur opportunity costs; operational errors carry legal and financial liabilities; and reasoning errors under uncertainty threaten systemic failure or existential risk.
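The four-layer progression and its rising stakes can be encoded directly as data. The labels follow the list above; the only assumption added here is the numeric severity scale used to express the escalation.

```python
# Sketch: the cognitive hierarchy as ordered data. The paper's core claim
# is that error consequences strictly increase with each layer.

HIERARCHY = [
    ("auto-association", "prediction and valence",      "inconvenience"),
    ("wayfinding",       "state navigation",            "opportunity cost"),
    ("transformation",   "formal operations",           "legal/financial liability"),
    ("permutation",      "reasoning under uncertainty", "systemic/existential risk"),
]

# Hypothetical severity scale expressing the ordering of consequence classes.
SEVERITY = {"inconvenience": 1, "opportunity cost": 2,
            "legal/financial liability": 3, "systemic/existential risk": 4}

severities = [SEVERITY[consequence] for _, _, consequence in HIERARCHY]
assert severities == sorted(severities)  # monotone escalation across layers
```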
    Crucially, scaling model size alone does not solve this problem. As LLMs approach the higher layers of this hierarchy, the demand for governance and constraint systems increases—not as a regulatory afterthought but as a functional necessity. Truth, legality, and reciprocity emerge as non-negotiable invariants for any system entrusted with decisions, plans, or strategies affecting real-world actors.
    This paper argues that constraint layers such as Runcible represent the gating function for safe AGI deployment. By providing universal measurement, certification, and liability containment, they transform LLMs from experimental curiosities into operationally defensible intelligences. We proceed by unpacking the cognitive hierarchy, mapping its rising error stakes, and demonstrating why constraint systems become unavoidable infrastructure as we cross from prediction into reasoning.
    The functional layers of cognition can be expressed as a progression from prediction to reasoning, each stage adding representational complexity, computational depth, and liability risk. This hierarchy not only describes human cognition but also maps directly onto the emerging capabilities—and limitations—of modern LLMs.
    We analyze each layer in terms of functional role, operational dependencies, cognitive cost, and LLM status to demonstrate the rising demand for constraint systems as complexity increases.
    2.1 Auto-Association: Prediction and Valence
    Function:
    At the base layer, cognition operates as pattern completion: sensory or symbolic inputs trigger auto-associative predictions, attaching valence (cost, risk, reward) to anticipated outcomes. The process is fast, heuristic, and largely unconscious—optimized for immediate response rather than deliberative planning.
    Operational Dependencies:
    • Episodic memory for pattern matching
    • Simple valuation heuristics for risk/opportunity weighting
    • Minimal working memory requirements: prediction runs largely on trained pattern completion and heuristics, not explicit reasoning.
    Cognitive Cost:
    • Low: processes run continuously and largely in parallel
    • Error consequences limited to surprise, inconvenience, or minor misprediction
    LLM Status:
    • Solved: Transformers perform statistical pattern prediction at scale with human-level fluency.
    • Errors manifest as hallucinations or miscompletions but carry limited systemic risk at this layer.
    2.2 Wayfinding: Goal-Directed Navigation
    Function:
    Wayfinding introduces goal states into cognition. The system evaluates current conditions, simulates possible actions, and navigates through a state space toward the desired outcome. This applies equally to spatial navigation, temporal planning, and abstract problem-solving.
    Operational Dependencies:
    • A world model linking actions to state transitions
    • Sequential decision-making under constraints
    • Updating mechanisms as conditions change
    Cognitive Cost:
    • Moderate: search through alternatives increases computational load
    • Errors produce opportunity costs, inefficiencies, or navigational dead-ends
    LLM Status:
    • Emerging: Chain-of-thought reasoning, external memory scaffolds, and tool use enable rudimentary planning but lack persistent world models.
    • Risk remains bounded because outputs rarely control high-stakes systems directly.
    2.3 Transformation: Input → Output Mapping
    Function:
    Transformation introduces formal operations: deterministic or algorithmic mappings from inputs to outputs under explicit rules. Examples include mathematical calculation, program execution, and symbolic manipulation.
    Operational Dependencies:
    • Rule systems or formal grammars
    • External representation layers (language, logic, mathematics)
    • Error-checking and validation mechanisms
    Cognitive Cost:
    • High: abstraction layers require working memory, syntax control, and precision
    • Errors produce financial loss, legal liability, or regulatory failure when outputs act on real systems
    LLM Status:
    • Early: LLMs generate code and perform symbolic reasoning but rely on external tools for accuracy.
    • Scaling alone cannot guarantee correctness; governance constraints emerge as necessary for safe deployment.
    2.4 Permutation: Reasoning Under Uncertainty
    Function:
    Permutation tasks require hypothesis generation and logical exploration under partial or uncertain information. The system constructs, tests, and revises hypothetical states, performing counterfactual reasoning and probabilistic inference.
    Operational Dependencies:
    • Metacognition: reasoning about reasoning processes
    • Memory compartmentalization to manage hypothetical states
    • Search and pruning mechanisms to control combinatorial explosion
    Cognitive Cost:
    • Very High: complexity scales nonlinearly with uncertainty and number of dependencies
    • Errors propagate exponentially, creating systemic or existential risks
    LLM Status:
    • Frontier: Current models exhibit brittle performance on complex reasoning tasks, especially under incomplete information or adversarial conditions.
    • Governance layers become non-optional at this stage: truth, legality, and liability constraints must bind output generation before deployment in high-stakes environments.
    Table: Cognitive Hierarchy, Cost, and LLM Status
    • Auto-Association (Prediction and Valence): cost Low; errors cause surprise or inconvenience; LLM status: Solved.
    • Wayfinding (State Navigation): cost Moderate; errors cause opportunity costs and dead-ends; LLM status: Emerging.
    • Transformation (Formal Operations): cost High; errors cause financial, legal, or regulatory liability; LLM status: Early.
    • Permutation (Reasoning Under Uncertainty): cost Very High; errors propagate systemically or existentially; LLM status: Frontier.
    As cognition progresses from auto-associative prediction to reasoning under uncertainty, two dynamics accelerate in tandem:
    1. Cognitive Complexity: Each layer requires deeper representation, broader memory, and more intensive search or inference.
    2. Error Stakes: Mistakes at higher layers carry exponentially greater consequences—legal, financial, political, and existential.
    The relationship between cognitive complexity and risk is not linear. Instead, it follows a compound escalation curve:
    • Prediction Errors → Localized inconveniences (e.g., a hallucinated fact).
    • Navigational Errors → Lost opportunities, inefficiencies, or suboptimal plans.
    • Operational Errors → Financial loss, regulatory noncompliance, or legal liability.
    • Reasoning Errors → Systemic collapse, catastrophic misalignment, or existential threats when acting under uncertainty at scale.
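The compound escalation curve can be sketched numerically. The base cost and the per-layer multiplier below are illustrative assumptions, chosen only to contrast compound growth with linear growth.

```python
# Sketch: compound escalation of error stakes across the four layers.
# base_cost and multiplier are hypothetical; only the shape matters.

def compound_stakes(layers: int, base_cost: float = 1.0, multiplier: float = 10.0):
    """Stakes multiply at each layer rather than adding a fixed increment."""
    return [base_cost * multiplier**layer for layer in range(layers)]

stakes = compound_stakes(4)  # prediction, wayfinding, transformation, permutation
assert stakes == [1.0, 10.0, 100.0, 1000.0]

# Each step grows by a constant factor, not a constant amount:
ratios = [b / a for a, b in zip(stakes, stakes[1:])]
assert ratios == [10.0, 10.0, 10.0]
```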
    3.1 Cognitive Load and Representation Depth
    At the Auto-Association layer, cognition relies on simple episodic memory and heuristic completion. Cognitive cost is minimal because processes run continuously, automatically, and largely below conscious awareness.
    With Wayfinding, the system introduces goals, state transitions, and simulation loops that require sequential reasoning and environmental updating. Cognitive cost rises linearly with search depth and environmental complexity.
    The Transformation layer demands formal representation systems—language, logic, mathematics—alongside symbolic manipulation and error-checking. Cognitive cost begins to accelerate as abstract operations replace embodied heuristics.
    Finally, Permutation under Uncertainty introduces hypothetical reasoning: multiple competing scenarios, probabilistic inference, and metacognitive oversight. Here cost explodes combinatorially because the system must manage counterfactuals, partial knowledge, and recursive dependencies simultaneously.
    3.2 Error Propagation and Liability Risk
    Errors scale not only in frequency but also in impact as cognition advances.
    At the highest layers, errors become non-local and cascading: a single flawed inference can propagate across systems, institutions, and populations. This is why governance, legality, and reciprocity become non-negotiable invariants once outputs begin to shape strategic or high-stakes decisions.
    3.3 Why Scaling Alone Cannot Solve This
    Increasing model size or training data reduces some prediction and navigation errors but fails to guarantee:
    • Truthfulness under adversarial or ambiguous inputs.
    • Legality across diverse regulatory regimes.
    • Reciprocity when outputs affect real-world interests asymmetrically.
    Without constraint layers, higher cognition amplifies both capability and risk. The same architectures that enable reasoning also enable deception, misalignment, or systemic failure when unbounded by external governance.
    The preceding analysis shows that as cognitive capability advances through prediction, navigation, formal operations, and reasoning under uncertainty, the consequences of error escalate from minor inconveniences to systemic and existential risks. This produces an inevitable demand for governance mechanisms capable of ensuring truth, legality, and reciprocity across outputs before they act on the real world.
    The next leap in LLM capability will not come from scaling parameters alone but from two architectural advances:
    1. Memory Compartmentalization – enabling persistent episodic memory for building, storing, and updating world models across interactions, rather than treating each query as a stateless inference problem.
    2. Abstraction Mechanisms – enabling modular reasoning layers that integrate partial, heterogeneous information across tasks, domains, and time horizons for complex decision-making under uncertainty.
    Together, these capabilities drive LLMs from wayfinding-level planning toward transformation and ultimately permutation-level reasoning, where they can:
    • Construct world models rather than rely on local correlations.
    • Perform counterfactual reasoning and strategic planning with incomplete information.
    • Generate outputs that directly affect financial, legal, and geopolitical systems.
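The first of these advances, persistent episodic memory, can be sketched as state that survives across queries instead of being rebuilt for each stateless inference. The class and its fields are hypothetical illustrations, not a production design.

```python
# Sketch: memory compartmentalization. A world model persists and updates
# across interactions instead of being reconstructed per stateless query.

class EpisodicMemory:
    def __init__(self):
        self.world_model = {}   # compartment: facts that persist across queries

    def observe(self, key: str, value: str) -> None:
        """Update the world model as new information arrives."""
        self.world_model[key] = value

    def recall(self, key: str):
        """Later queries read prior state rather than starting from scratch."""
        return self.world_model.get(key)

memory = EpisodicMemory()
memory.observe("contract_status", "signed")   # interaction 1
memory.observe("contract_status", "amended")  # interaction 2 updates, not resets
assert memory.recall("contract_status") == "amended"
assert memory.recall("unknown_topic") is None
```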
    But this same transition multiplies both the stakes of error and the liability of outputs:
    • At transformation levels, correctness becomes a regulatory requirement rather than an aspirational feature.
    • At permutation levels, truth and reciprocity constraints become existential for safe deployment because a single faulty inference can cascade across systems of law, commerce, and governance.
    Once memory compartmentalization and abstraction unlock permutation-level reasoning, constraint layers cease to be optional safeguards and become structural prerequisites for any legitimate or legal deployment of advanced AI systems.
    This section argues that constraint layers like Runcible are not optional safeguards but rather structural necessities—the gating function through which all advanced AI must pass before safe deployment at scale becomes possible.
    4.1 Why Constraint Layers Become Inevitable
    Three dynamics converge as we climb the cognitive hierarchy:
    1. Representation Depth Increases Risk:
      Auto-association errors remain local.
      Formal operations and reasoning errors propagate globally, affecting financial systems, legal frameworks, and geopolitical stability.
    2. Liability Shifts from Users to Systems:
      At low layers, users can correct or filter errors manually.
      At high layers, outputs become decisions of record in legal, commercial, or governmental contexts. Liability cannot remain external to the system.
    3. Regulatory Asymmetry Collapses:
      Early LLMs operated outside formal compliance frameworks.
      Future LLMs controlling financial trades, medical diagnoses, military planning, or legislative drafting will face regulatory regimes requiring auditable guarantees of correctness, legality, and neutrality.
    Together, these dynamics make constraint layers structurally unavoidable.
    4.2 Functional Role of Constraint Layers
    A constraint layer such as Runcible performs three indispensable functions: it certifies truth (factual, logical, and operational correctness), enforces legality (alignment with regulatory, contractual, and jurisdictional constraints), and ensures reciprocity (prevention of asymmetric imposition on human, corporate, or national interests).
    Constraint layers act as judicial overlays: they do not control what models know but rather what models may assert or recommend under binding standards of testifiability and accountability.
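The judicial-overlay role can be sketched as a gate through which every candidate output must pass before release. The three predicates below are stubs standing in for the truth, legality, and reciprocity tests named in the text; they are not Runcible's actual interface.

```python
# Sketch of a constraint layer as an output gate. The three predicates are
# hypothetical stubs; a real layer would bind them to testifiable standards.

def passes_truth(output: str) -> bool:
    return "unverified" not in output    # stand-in for factual testing

def passes_legality(output: str) -> bool:
    return "prohibited" not in output    # stand-in for regulatory testing

def passes_reciprocity(output: str) -> bool:
    return "asymmetric" not in output    # stand-in for reciprocity testing

def gate(output: str):
    """Release an output only if it survives all three checks; else withhold."""
    checks = (passes_truth, passes_legality, passes_reciprocity)
    return output if all(check(output) for check in checks) else None

assert gate("certified recommendation") == "certified recommendation"
assert gate("unverified claim") is None  # withheld before propagation
```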
    4.3 The Bottleneck to Safe AGI
    As LLMs approach transformation and permutation capabilities:
    • Scaling alone delivers capability without constraint, increasing liability faster than it increases utility.
    • Constraint layers deliver certifiable correctness before actions propagate into financial, legal, or political systems.
    This creates a technological bottleneck:
    • Without constraint layers, AGI faces regulatory prohibition or catastrophic failure.
    • With constraint layers, AGI gains operational legitimacy, enabling safe deployment across high-stakes domains.
    The entity controlling this bottleneck controls the gate to safe artificial intelligence itself.
    4.4 Runcible’s Strategic Position
    Runcible inserts itself precisely at this bottleneck:
    • Universal Measurement Layer: Provides a system of truth, legality, and reciprocity testing applicable across all domains.
    • Certifiable Outputs: Transforms LLM generations into auditable artifacts satisfying legal, financial, and regulatory constraints.
    • Deployment Enabler: Converts AGI from a research experiment into a defensible operational platform for enterprises and governments.
    As LLMs climb the cognitive hierarchy, constraint layers become existential infrastructure rather than value-added features. The first organization to solve this problem effectively will control the governance spine of machine intelligence itself.
    Once the cognitive hierarchy exposes the structural bottleneck at the transformation and permutation layers, the strategic implications for AGI development become clear. The first actor to implement a universal constraint and governance layer gains disproportionate control over the legal, regulatory, and commercial pathways through which AGI enters the real world.
    5.1 Early Acquisition Pressure
    Historically, technological platforms with universal gating functions (e.g., internet security protocols, financial clearing systems, operating system kernels) attract early acquisition pressure because they offer:
    • Control of standards: Whoever owns the gate controls compliance, certification, and liability norms.
    • Monopoly economics: A single governance layer reduces friction across markets and regulators, creating winner-take-all dynamics.
    • Regulatory leverage: Governments prefer one certified layer over fragmented compliance regimes for safety, auditability, and legal defensibility.
    For AGI, this pressure accelerates once LLMs cross from associative prediction into operational and strategic decision-making, where liability becomes intolerable without external constraint.
    5.2 Deployment Without Governance Becomes Indefensible
    The absence of constraint layers creates three converging risks:
    1. Legal Risk: Enterprises deploying ungoverned AGI face strict liability for errors, omissions, or harms caused by system outputs.
    2. Regulatory Risk: Governments responding to public failures will impose prohibitive compliance regimes, freezing deployment.
    3. Geopolitical Risk: Adversaries exploiting ungoverned systems create asymmetric vulnerabilities in finance, defense, or infrastructure.
    At scale, these risks make ungoverned intelligence systems politically and economically indefensible, regardless of technical capability.
    5.3 Competitive Advantage Through Governance
    Conversely, solving the constraint problem first yields three strategic advantages: regulatory gatekeeping (becoming the compliance framework governments prefer to certify), enterprise legitimacy (legal defensibility and risk insulation for deployers), and network effects (a universal standard adopted once and rarely replaced).
    Just as TLS became the universal security layer for the internet, a constraint layer for AGI will become the universal governance spine for machine intelligence—adopted once, standardized globally, and replaced rarely if ever.
    5.4 Strategic Timing: Why This Happens Before AGI Itself
    The constraint layer reaches economic inevitability before AGI reaches full autonomy because:
    • Liability emerges as soon as LLMs touch financial, medical, legal, or military decisions.
    • Regulators will not wait for AGI to reach human parity before mandating auditable governance.
    • Enterprises will not assume unlimited legal risk for experimental systems without external certification.
    Thus, the governance layer becomes the first commercially essential infrastructure of the AGI era, preceding fully autonomous artificial intelligence itself.
    This paper has traced a causal sequence from the functional layers of cognition through the escalation of risk to the structural necessity of constraint layers for safe AGI deployment.
    We began by showing that modern LLMs are not “just next-word predictors” but engines climbing a cognitive hierarchy:
    1. Auto-Association (Prediction): Heuristic pattern completion with minimal risk.
    2. Wayfinding (Navigation): Goal-directed planning with bounded opportunity costs.
    3. Transformation (Formal Operations): Deterministic input-output mapping under legal, financial, and regulatory liability.
    4. Permutation (Reasoning Under Uncertainty): Counterfactual inference under partial information, where errors propagate systemically.
    As LLMs ascend this hierarchy, cognitive cost and error stakes rise exponentially. Scaling model size alone cannot prevent hallucination, bias, or illegality once outputs act on real-world systems under conditions of incomplete knowledge and adversarial pressure.
    6.1 The Constraint Layer as Non-Optional Infrastructure
    Constraint layers like Runcible emerge not as value-added features but as non-optional infrastructure for advanced AI because they:
    • Certify Truth: Guarantee factual, logical, and operational correctness.
    • Enforce Legality: Align outputs with regulatory, contractual, and jurisdictional constraints.
    • Ensure Reciprocity: Prevent asymmetric imposition on human, corporate, or national interests.
    By binding AI outputs to universal invariants of truth, legality, and reciprocity, constraint layers convert LLMs from experimental systems into defensible operational platforms suitable for high-stakes deployment.
    6.2 Strategic and Economic Implications
    The first actor to control the constraint layer gains three converging advantages:
    1. Regulatory Gatekeeping: Becomes the standard compliance framework governments prefer to certify.
    2. Enterprise Legitimacy: Provides corporations legal defensibility and risk insulation for AGI deployment.
    3. Network Effects: Establishes a universal governance spine adopted once, standardized globally, and rarely replaced.
    This creates early acquisition pressure and positions the constraint layer as the technological bottleneck through which all advanced AI must pass before safe and legitimate use at scale becomes possible.
    6.3 Closing Synthesis
    The causal logic is inescapable:
    • Cognition without constraint produces escalating risk.
    • Constraint without universality fragments adoption and legitimacy.
    • Only a universal governance layer provides the legal, commercial, and geopolitical conditions for AGI deployment at scale.
    By solving this problem first, Runcible positions itself as the governance spine of the AGI era—the point of convergence between technical capability, regulatory legitimacy, and strategic inevitability.
    Because we’ve drawn on multiple domains—cognitive science, AI safety, legal theory, economics, and governance—our references need to anchor these core threads:
    1. Cognitive Hierarchy & Computational Models
      Hawkins, J., & Blakeslee, S. (2004). On Intelligence. Times Books.
      Friston, K. (2010). “The Free-Energy Principle: A Unified Brain Theory?” Nature Reviews Neuroscience, 11(2), 127–138.
      Tenenbaum, J. B., et al. (2011). “How to Grow a Mind: Statistics, Structure, and Abstraction.” Science, 331(6022), 1279–1285.
    2. AI Scaling, Alignment, and Risk
      OpenAI. (2023). GPT-4 Technical Report.
      Amodei, D., et al. (2016). “Concrete Problems in AI Safety.” arXiv preprint arXiv:1606.06565.
      Christiano, P., et al. (2018). “Deep Reinforcement Learning from Human Preferences.” NeurIPS.
    3. Governance, Liability, and Regulation
      Brundage, M., et al. (2020). “Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims.” arXiv preprint arXiv:2004.07213.
      EU AI Act (2024). Regulation on Artificial Intelligence. European Commission.
      US Executive Order on Safe, Secure, and Trustworthy AI (2023).
    4. Economic & Strategic Dynamics
      Shapiro, C., & Varian, H. R. (1998). Information Rules: A Strategic Guide to the Network Economy.
      Farrell, J., & Saloner, G. (1985). “Standardization, Compatibility, and Innovation.” The RAND Journal of Economics, 16(1), 70–83.
      Katz, M., & Shapiro, C. (1986). “Technology Adoption in the Presence of Network Externalities.” Journal of Political Economy, 94(4), 822–841.
    5. Comparative Infrastructure Analogs
      Diffie, W., & Hellman, M. (1976). “New Directions in Cryptography.” IEEE Transactions on Information Theory, 22(6), 644–654.
      Rescorla, E. (2001). SSL and TLS: Designing and Building Secure Systems. Addison-Wesley.
    APA Reference List
    Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., & Mane, D. (2016). Concrete problems in AI safety. arXiv preprint arXiv:1606.06565.
    Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., … & Amodei, D. (2020). Toward trustworthy AI development: Mechanisms for supporting verifiable claims. arXiv preprint arXiv:2004.07213.
    Christiano, P., Leike, J., Brown, T., Martic, M., Legg, S., & Amodei, D. (2018). Deep reinforcement learning from human preferences. In Advances in Neural Information Processing Systems.
    Diffie, W., & Hellman, M. (1976). New directions in cryptography. IEEE Transactions on Information Theory, 22(6), 644–654.
    European Commission. (2024). Regulation on artificial intelligence (AI Act).
    Farrell, J., & Saloner, G. (1985). Standardization, compatibility, and innovation. The RAND Journal of Economics, 16(1), 70–83.
    Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11(2), 127–138.
    Hawkins, J., & Blakeslee, S. (2004). On intelligence. Times Books.
    Katz, M., & Shapiro, C. (1986). Technology adoption in the presence of network externalities. Journal of Political Economy, 94(4), 822–841.
    OpenAI. (2023). GPT-4 technical report.
    Rescorla, E. (2001). SSL and TLS: Designing and building secure systems. Addison-Wesley.
    Shapiro, C., & Varian, H. R. (1998). Information rules: A strategic guide to the network economy. Harvard Business School Press.
    Tenenbaum, J. B., Kemp, C., Griffiths, T. L., & Goodman, N. D. (2011). How to grow a mind: Statistics, structure, and abstraction. Science, 331(6022), 1279–1285.
    The White House. (2023). Executive order on safe, secure, and trustworthy artificial intelligence.


    Source date (UTC): 2025-09-29 07:14:04 UTC

    Original post: https://x.com/i/articles/1972560494562279827

  • Trump relies on surveying the field of opportunities and seizing them causing in

    Trump relies on surveying the field of opportunities and seizing them, producing incremental success or failure – just like the left. These people tried to use persuasion by means that attract opposition Trump cannot suppress. Trump does not create weakness by overreaching. They did, and do.


    Source date (UTC): 2025-09-26 16:38:26 UTC

    Original post: https://twitter.com/i/web/status/1971615357560602686

  • I know I’m an outlier but I still can’t understand why everything Trump does and

    I know I’m an outlier, but I still can’t understand why everything Trump does and says isn’t obviously in service of his strategy.


    Source date (UTC): 2025-09-26 02:59:34 UTC

    Original post: https://twitter.com/i/web/status/1971409282202841548