Category: AI, Computation, and Technology

  • cc: @bierlingm / @LukeWeinhagen Please vet this. Solves one of our last issues if true, even if we don’t care about the model size, just the recursion.


    Source date (UTC): 2025-10-08 02:22:05 UTC

    Original post: https://twitter.com/i/web/status/1975748502413021246

  • I use both for obvious reasons. I research with regular ChatGPT 5 Thinking, and I test with Runcible on 4 and 5.


    Source date (UTC): 2025-10-03 23:02:57 UTC

    Original post: https://twitter.com/i/web/status/1974248840254406855

  • Hmm. All true but not much help. Almost always better to manually pull correct insights into a new chat, say “given the following: …”, and steer it if needed.
    My work is complicated, and I use a number of techniques to narrow the search space. And this is the only solution I have found that works.
    Programmatically it’s best to treat chats as episodic memory and isolate contexts until merging them again is necessary.
    I’ve written about this technique as evidence of the consequence of the lack of episodic memory.
    We should have a session chat to track topics and sub session chats by topic.
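    A minimal sketch of this technique in code, with all names hypothetical: chats as isolated episodic contexts keyed by topic, merged only when a cross-topic step makes it necessary.

```python
class EpisodicChats:
    """Chats as episodic memory: one isolated context per topic."""

    def __init__(self):
        self.sessions = {}  # topic -> list of (role, text) turns

    def add(self, topic, role, text):
        # Append a turn to the isolated context for one topic only.
        self.sessions.setdefault(topic, []).append((role, text))

    def context(self, topic):
        # Narrow the search space: return only the turns for this topic.
        return self.sessions.get(topic, [])

    def merge(self, *topics):
        # Pull selected insights together only when merging is necessary,
        # e.g. to build a fresh "given the following: ..." prompt.
        merged = []
        for t in topics:
            merged.extend(self.sessions.get(t, []))
        return merged

chats = EpisodicChats()
chats.add("grammar-map", "user", "Embodiment -> Ritual -> Myth -> ...")
chats.add("llm-closure", "user", "Closure as marginal indifference.")
prompt = "given the following: " + " ".join(
    text for _, text in chats.merge("grammar-map", "llm-closure"))
assert prompt.startswith("given the following:")
```

    The point of the structure is the isolation: a topic’s context never leaks into another topic’s chat until a merge is requested explicitly.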


    Source date (UTC): 2025-10-03 22:37:10 UTC

    Original post: https://twitter.com/i/web/status/1974242347870503186

  • I love how ChatGPT just inserts text like this into any bit of research I’m doing:

    –“Operational take: if you require decidability, the effect size, stability over cohorts, and genetic architecture are not yet pinned to a standard your framework would call warrantable. Use as a working hypothesis only with strong caveats.”–

    It understands our standard of decidability and maintains it, which I find absolutely fascinating. Less statistical tea-leaf reading.

    The Ashkenazi IQ advantage hypothesis is withering. I suspect this premise masks the European masculine-systemic/material vs. Ashkenazi feminine-verbal bias. My intuition is that the Jewish cultural bias, which Europeans would classify as hate-group behavior, is also diminishing along with integration and interbreeding.

    IMO the evidence still points toward moderation of group differences as group IQ increases under modern Aristotelian education, technological society, and common law, but once a group drops below the mid-90s, the opposite effect manifests regardless of circumstances.

    The data on personality differences between groups is still consistent, but as I’ve said before, I believe we test personality and IQ in ways that suppress the most meaningful variations: largely predictive capacity on one end and logical contrariness (the ‘decoupling threshold’) on the other.


    Source date (UTC): 2025-10-03 18:48:01 UTC

    Original post: https://twitter.com/i/web/status/1974184680728768936

  • Fixing What’s Wrong in Thinking About LLMs

    More on my criticism of describing LLMs as predicting the next word rather than navigating a world model.
    Just as I mapped grammars:
    • Embodiment → Ritual → Myth → Philosophy → Science → Computability,
    I can map mathematics:
    • Counting (Existence) → Geometry (Relation) → Algebra (Transformation) → Calculus (Change) → Bayesianism (Uncertainty) → Behavioral Closure (Reflexive Change).
    This gives us:
    1. A chronology (historical sequence).
    2. A conceptual hierarchy (each layer contains the previous).
    3. A functional telos (from simple enumeration to managing dense, reflexive uncertainty).
    LLMs are exactly “high-density marginal indifference machines”:
    • They don’t plan globally but navigate locally (incremental demand satisfaction).
    • They update on priors and constraints at each token (Bayesian-like).
    • They operate under reflexive, cooperative interaction (user + model).
    Thus my mental training in marginal indifference and supply-demand closure helps us see LLMs as a market of conditional probabilities rather than as a single deterministic function—a market with millions of “agents” (tokens, gradients) producing a cooperative equilibrium at each output step.
    Let’s emphasize that again: the output is a market-like equilibrium of local, conditional decisions, not a single global plan.
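A toy sketch of this “market” view, assuming a hand-built bigram table in place of a trained network: each step clears one local “auction” over conditional probabilities, and the sentence is just the chain of those local clearings, with no global plan.

```python
import random

# Hand-built conditional probabilities P(next | current); values illustrative.
bigram = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 1.0},
    "sat": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
}

def step(current, rng):
    # One "market clearing": candidate tokens bid with their conditional
    # probabilities, and exactly one wins this step.
    tokens, probs = zip(*bigram[current].items())
    return rng.choices(tokens, weights=probs, k=1)[0]

def generate(start, rng):
    # No global plan: the trajectory emerges from chained local selections.
    out = [start]
    while out[-1] != "<end>":
        out.append(step(out[-1], rng))
    return out[:-1]

print(generate("the", random.Random(0)))
```

Swapping the dictionary for a transformer’s softmax changes the scale, not the structure of the loop.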


    Source date (UTC): 2025-10-01 21:51:43 UTC

    Original post: https://x.com/i/articles/1973506137908715761

  • From Prediction to Governance: Cognitive Hierarchies, LLM Evolution, and the Necessity of Constraint

    B. E. Curt Doolittle
    Natural Law Institute, Runcible Inc.
    Email: curt@runcible.com
    Author Note
    This research originates from the Natural Law Institute’s work on decidability and governance theory and is implemented by Runcible Inc. as part of its AI governance product development.
    Large Language Models (LLMs) have rapidly evolved from statistical pattern recognition toward increasingly complex reasoning tasks. This trajectory follows a clear cognitive hierarchy:
    1) Auto-Association (Prediction) → 2) Wayfinding (State Navigation) → 3) Transformation (Formal Operations) → 4) Permutation (Reasoning Under Uncertainty).
    Each stage amplifies both cognitive capability and liability risk, as errors shift from minor inconvenience to systemic or existential threat. Scaling model size alone cannot guarantee truthfulness, legality, or reciprocity once outputs act on real-world systems under conditions of incomplete knowledge and adversarial pressure. This paper argues that universal constraint layers—exemplified by Runcible—become non-optional infrastructure at the upper layers of this hierarchy, certifying correctness, enforcing legality, and ensuring reciprocal fairness before outputs propagate into high-stakes environments. By providing a single governance spine for advanced AI, such layers transform LLMs from experimental curiosities into operationally defensible systems, creating early acquisition pressure, regulatory alignment, and network effects that establish the constraint layer as the first commercially essential infrastructure of the AGI era.
    Keywords: Large Language Models, Cognitive Hierarchy, AI Governance, Constraint Layers, Decidability
    The popular refrain that “large language models just predict the next word” describes LLMs with the same reductionism as saying “the brain just fires neurons” or “mathematics just manipulates symbols.” It is literally true yet conceals the very phenomena that make the system interesting, powerful, and increasingly dangerous.
    Modern LLMs no longer merely complete patterns; they create latent cognitive spaces in which prompts become problems, goals become trajectories, and outputs emerge through incremental demand satisfaction rather than pre-scripted plans. With each architectural and algorithmic advance—from attention mechanisms to chain-of-thought reasoning, from tool-use integration to memory scaffolding—LLMs climb a cognitive hierarchy that mirrors the functional layers of human intelligence:
    1. Auto-Association (Prediction and Valence): fast, heuristic pattern completion assigning costs, risks, and opportunities to perceptual inputs.
    2. Wayfinding (State Navigation): goal-directed movement through environments or problem spaces.
    3. Transformation (Formal Operations): mapping inputs to outputs via deterministic or symbolic processes.
    4. Permutation (Reasoning Under Uncertainty): constructing and testing hypothetical states under partial information.
    At each stage, the cognitive cost and error consequences rise exponentially. Prediction errors produce mild inconvenience; navigational errors incur opportunity costs; operational errors carry legal and financial liabilities; and reasoning errors under uncertainty threaten systemic failure or existential risk.
    Crucially, scaling model size alone does not solve this problem. As LLMs approach the higher layers of this hierarchy, the demand for governance and constraint systems increases—not as a regulatory afterthought but as a functional necessity. Truth, legality, and reciprocity emerge as non-negotiable invariants for any system entrusted with decisions, plans, or strategies affecting real-world actors.
    This paper argues that constraint layers such as Runcible represent the gating function for safe AGI deployment. By providing universal measurement, certification, and liability containment, they transform LLMs from experimental curiosities into operationally defensible intelligences. We proceed by unpacking the cognitive hierarchy, mapping its rising error stakes, and demonstrating why constraint systems become unavoidable infrastructure as we cross from prediction into reasoning.
    The functional layers of cognition can be expressed as a progression from prediction to reasoning, each stage adding representational complexity, computational depth, and liability risk. This hierarchy not only describes human cognition but also maps directly onto the emerging capabilities—and limitations—of modern LLMs.
    We analyze each layer in terms of functional role, operational dependencies, cognitive cost, and LLM status to demonstrate the rising demand for constraint systems as complexity increases.
    2.1 Auto-Association: Prediction and Valence
    Function: At the base layer, cognition operates as pattern completion: sensory or symbolic inputs trigger auto-associative predictions, attaching valence (cost, risk, reward) to anticipated outcomes. The process is fast, heuristic, and largely unconscious—optimized for immediate response rather than deliberative planning.
    Operational Dependencies:
    • Episodic memory for pattern matching
    • Simple valuation heuristics for risk/opportunity weighting
    • Minimal working memory requirements: prediction runs largely on trained pattern completion and heuristics, not explicit reasoning.
    Cognitive Cost:
    • Low: processes run continuously and largely in parallel
    • Error consequences limited to surprise, inconvenience, or minor misprediction
    LLM Status:
    • Solved: Transformers perform statistical pattern prediction at scale with human-level fluency.
    • Errors manifest as hallucinations or miscompletions but carry limited systemic risk at this layer.
    2.2 Wayfinding: Goal-Directed Navigation
    Function: Wayfinding introduces goal states into cognition. The system evaluates current conditions, simulates possible actions, and navigates through a state space toward the desired outcome. This applies equally to spatial navigation, temporal planning, and abstract problem-solving.
    Operational Dependencies:
    • A world model linking actions to state transitions
    • Sequential decision-making under constraints
    • Updating mechanisms as conditions change
    Cognitive Cost:
    • Moderate: search through alternatives increases computational load
    • Errors produce opportunity costs, inefficiencies, or navigational dead-ends
    LLM Status:
    • Emerging: Chain-of-thought reasoning, external memory scaffolds, and tool use enable rudimentary planning but lack persistent world models.
    • Risk remains bounded because outputs rarely control high-stakes systems directly.
    2.3 Transformation: Input → Output Mapping
    Function: Transformation introduces formal operations: deterministic or algorithmic mappings from inputs to outputs under explicit rules. Examples include mathematical calculation, program execution, and symbolic manipulation.
    Operational Dependencies:
    • Rule systems or formal grammars
    • External representation layers (language, logic, mathematics)
    • Error-checking and validation mechanisms
    Cognitive Cost:
    • High: abstraction layers require working memory, syntax control, and precision
    • Errors produce financial loss, legal liability, or regulatory failure when outputs act on real systems
    LLM Status:
    • Early: LLMs generate code and perform symbolic reasoning but rely on external tools for accuracy.
    • Scaling alone cannot guarantee correctness; governance constraints emerge as necessary for safe deployment.
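The reliance on external tools for accuracy can be sketched as a verify-before-trust wrapper, assuming an illustrative "&lt;expr&gt; = &lt;value&gt;" claim format: a model-produced arithmetic claim is recomputed by a deterministic evaluator rather than accepted on fluency alone.

```python
import ast, operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr):
    """Evaluate a pure arithmetic expression without exec/eval."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

def check_claim(claim):
    """Verify a claim of the form '<expr> = <value>' by recomputation."""
    expr, _, value = claim.partition("=")
    return abs(safe_eval(expr.strip()) - float(value)) < 1e-9

assert check_claim("17 * 3 = 51")      # correct output passes
assert not check_claim("17 * 3 = 54")  # fluent-but-wrong output is caught
```

The wrapper is the governance pattern in miniature: the model proposes, an external formal system disposes.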
    2.4 Permutation: Reasoning Under Uncertainty
    Function: Permutation tasks require hypothesis generation and logical exploration under partial or uncertain information. The system constructs, tests, and revises hypothetical states, performing counterfactual reasoning and probabilistic inference.
    Operational Dependencies:
    • Metacognition: reasoning about reasoning processes
    • Memory compartmentalization to manage hypothetical states
    • Search and pruning mechanisms to control combinatorial explosion
    Cognitive Cost:
    • Very High: complexity scales nonlinearly with uncertainty and number of dependencies
    • Errors propagate exponentially, creating systemic or existential risks
    LLM Status:
    • Frontier: Current models exhibit brittle performance on complex reasoning tasks, especially under incomplete information or adversarial conditions.
    • Governance layers become non-optional at this stage: truth, legality, and liability constraints must bind output generation before deployment in high-stakes environments.
    Table: Cognitive Hierarchy, Cost, and LLM Status

    Layer                                     | Cognitive Cost | Error Consequences                     | LLM Status
    Auto-Association (Prediction)             | Low            | Surprise, minor misprediction          | Solved
    Wayfinding (Navigation)                   | Moderate       | Opportunity costs, dead-ends           | Emerging
    Transformation (Formal Operations)        | High           | Financial, legal, regulatory liability | Early
    Permutation (Reasoning Under Uncertainty) | Very High      | Systemic or existential risk           | Frontier
    As cognition progresses from auto-associative prediction to reasoning under uncertainty, two dynamics accelerate in tandem:
    1. Cognitive Complexity: Each layer requires deeper representation, broader memory, and more intensive search or inference.
    2. Error Stakes: Mistakes at higher layers carry exponentially greater consequences—legal, financial, political, and existential.
    The relationship between cognitive complexity and risk is not linear. Instead, it follows a compound escalation curve:
    • Prediction Errors → Localized inconveniences (e.g., a hallucinated fact).
    • Navigational Errors → Lost opportunities, inefficiencies, or suboptimal plans.
    • Operational Errors → Financial loss, regulatory noncompliance, or legal liability.
    • Reasoning Errors → Systemic collapse, catastrophic misalignment, or existential threats when acting under uncertainty at scale.
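One way to make the escalation concrete is a small lookup a deployment gate might consult. The layer names and failure modes come from the text above; the numeric ranks and the rank-3 threshold are illustrative assumptions.

```python
COGNITIVE_HIERARCHY = [
    # (layer, cost rank, consequence of error) -- ranks are illustrative
    ("auto-association", 1, "localized inconvenience"),
    ("wayfinding",       2, "lost opportunity"),
    ("transformation",   3, "financial/legal liability"),
    ("permutation",      4, "systemic or existential risk"),
]

def error_stakes(layer):
    """Return (cost rank, consequence) for a layer; stakes rise with rank."""
    for name, rank, consequence in COGNITIVE_HIERARCHY:
        if name == layer:
            return rank, consequence
    raise KeyError(layer)

def needs_constraint_layer(layer):
    # Per the argument above: constraints become non-optional once outputs
    # carry legal or systemic consequences (here, rank >= 3).
    rank, _ = error_stakes(layer)
    return rank >= 3

assert not needs_constraint_layer("wayfinding")
assert needs_constraint_layer("permutation")
```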
    3.1 Cognitive Load and Representation Depth
    At the Auto-Association layer, cognition relies on simple episodic memory and heuristic completion. Cognitive cost is minimal because processes run continuously, automatically, and largely below conscious awareness.
    With Wayfinding, the system introduces goals, state transitions, and simulation loops that require sequential reasoning and environmental updating. Cognitive cost rises linearly with search depth and environmental complexity.
    The Transformation layer demands formal representation systems—language, logic, mathematics—alongside symbolic manipulation and error-checking. Cognitive cost begins to accelerate as abstract operations replace embodied heuristics.
    Finally, Permutation under Uncertainty introduces hypothetical reasoning: multiple competing scenarios, probabilistic inference, and metacognitive oversight. Here cost explodes combinatorially because the system must manage counterfactuals, partial knowledge, and recursive dependencies simultaneously.
    3.2 Error Propagation and Liability Risk
    Errors scale not only in frequency but also in impact as cognition advances.
    At the highest layers, errors become non-local and cascading: a single flawed inference can propagate across systems, institutions, and populations. This is why governance, legality, and reciprocity become non-negotiable invariants once outputs begin to shape strategic or high-stakes decisions.
    3.3 Why Scaling Alone Cannot Solve This
    Increasing model size or training data reduces some prediction and navigation errors but fails to guarantee:
    • Truthfulness under adversarial or ambiguous inputs.
    • Legality across diverse regulatory regimes.
    • Reciprocity when outputs affect real-world interests asymmetrically.
    Without constraint layers, higher cognition amplifies both capability and risk. The same architectures that enable reasoning also enable deception, misalignment, or systemic failure when unbounded by external governance.
    The preceding analysis shows that as cognitive capability advances through prediction, navigation, formal operations, and reasoning under uncertainty, the consequences of error escalate from minor inconveniences to systemic and existential risks. This produces an inevitable demand for governance mechanisms capable of ensuring truth, legality, and reciprocity across outputs before they act on the real world.
    The next leap in LLM capability will not come from scaling parameters alone but from two architectural advances:
    1. Memory Compartmentalization – enabling persistent episodic memory for building, storing, and updating world models across interactions, rather than treating each query as a stateless inference problem.
    2. Abstraction Mechanisms – enabling modular reasoning layers that integrate partial, heterogeneous information across tasks, domains, and time horizons for complex decision-making under uncertainty.
    Together, these capabilities drive LLMs from wayfinding-level planning toward transformation and ultimately permutation-level reasoning, where they can:
    • Construct world models rather than rely on local correlations.
    • Perform counterfactual reasoning and strategic planning with incomplete information.
    • Generate outputs that directly affect financial, legal, and geopolitical systems.
    But this same transition multiplies both the stakes of error and the liability of outputs:
    • At transformation levels, correctness becomes a regulatory requirement rather than an aspirational feature.
    • At permutation levels, truth and reciprocity constraints become existential for safe deployment because a single faulty inference can cascade across systems of law, commerce, and governance.
    Once memory compartmentalization and abstraction unlock permutation-level reasoning, constraint layers cease to be optional safeguards and become structural prerequisites for any legitimate or legal deployment of advanced AI systems.
    This section argues that constraint layers like Runcible are not optional safeguards but rather structural necessities—the gating function through which all advanced AI must pass before safe deployment at scale becomes possible.
    4.1 Why Constraint Layers Become Inevitable
    Three dynamics converge as we climb the cognitive hierarchy:
    1. Representation Depth Increases Risk:
      Auto-association errors remain local.
      Formal operations and reasoning errors propagate globally, affecting financial systems, legal frameworks, and geopolitical stability.
    2. Liability Shifts from Users to Systems:
      At low layers, users can correct or filter errors manually.
      At high layers, outputs become decisions of record in legal, commercial, or governmental contexts. Liability cannot remain external to the system.
    3. Regulatory Asymmetry Collapses:
      Early LLMs operated outside formal compliance frameworks.
      Future LLMs controlling financial trades, medical diagnoses, military planning, or legislative drafting will face regulatory regimes requiring auditable guarantees of correctness, legality, and neutrality.
    Together, these dynamics make constraint layers structurally unavoidable.
    4.2 Functional Role of Constraint Layers
    A constraint layer such as Runcible performs three indispensable functions: certifying truth (factual, logical, and operational correctness), enforcing legality (alignment with regulatory, contractual, and jurisdictional constraints), and ensuring reciprocity (preventing asymmetric imposition on affected interests).
    Constraint layers act as judicial overlays: they do not control what models know but rather what models may assert or recommend under binding standards of testifiability and accountability.
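As an illustration only (the names and checks are hypothetical placeholders, not Runcible’s actual interface), the judicial-overlay idea can be sketched as a gate that runs each output through the three invariants before release:

```python
def gate_output(text, checks):
    """Release text only if every named constraint check passes; otherwise
    return the failures so the caller can revise or withhold the output."""
    failures = [name for name, check in checks.items() if not check(text)]
    if failures:
        return None, failures
    return text, []

# Placeholder predicates standing in for the three invariants; a real
# system would bind these to auditable, testifiable standards.
checks = {
    "truth":       lambda t: "guaranteed profit" not in t.lower(),
    "legality":    lambda t: "insider" not in t.lower(),
    "reciprocity": lambda t: "at their expense" not in t.lower(),
}

ok, why = gate_output("Buy the index fund; returns are uncertain.", checks)
assert ok is not None and why == []

blocked, why = gate_output("Guaranteed profit from insider tips.", checks)
assert blocked is None and why == ["truth", "legality"]
```

The gate sits downstream of generation: the model’s knowledge is untouched, but nothing reaches a decision of record without passing every invariant.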
    4.3 The Bottleneck to Safe AGI
    As LLMs approach transformation and permutation capabilities:
    • Scaling alone delivers capability without constraint, increasing liability faster than it increases utility.
    • Constraint layers deliver certifiable correctness before actions propagate into financial, legal, or political systems.
    This creates a technological bottleneck:
    • Without constraint layers, AGI faces regulatory prohibition or catastrophic failure.
    • With constraint layers, AGI gains operational legitimacy, enabling safe deployment across high-stakes domains.
    The entity controlling this bottleneck controls the gate to safe artificial intelligence itself.
    4.4 Runcible’s Strategic Position
    Runcible inserts itself precisely at this bottleneck:
    • Universal Measurement Layer: Provides a system of truth, legality, and reciprocity testing applicable across all domains.
    • Certifiable Outputs: Transforms LLM generations into auditable artifacts satisfying legal, financial, and regulatory constraints.
    • Deployment Enabler: Converts AGI from a research experiment into a defensible operational platform for enterprises and governments.
    As LLMs climb the cognitive hierarchy, constraint layers become existential infrastructure rather than value-added features. The first organization to solve this problem effectively will control the governance spine of machine intelligence itself.
    Once the cognitive hierarchy exposes the structural bottleneck at the transformation and permutation layers, the strategic implications for AGI development become clear. The first actor to implement a universal constraint and governance layer gains disproportionate control over the legal, regulatory, and commercial pathways through which AGI enters the real world.
    5.1 Early Acquisition Pressure
    Historically, technological platforms with universal gating functions (e.g., internet security protocols, financial clearing systems, operating system kernels) attract early acquisition pressure because they offer:
    • Control of standards: Whoever owns the gate controls compliance, certification, and liability norms.
    • Monopoly economics: A single governance layer reduces friction across markets and regulators, creating winner-take-all dynamics.
    • Regulatory leverage: Governments prefer one certified layer over fragmented compliance regimes for safety, auditability, and legal defensibility.
    For AGI, this pressure accelerates once LLMs cross from associative prediction into operational and strategic decision-making, where liability becomes intolerable without external constraint.
    5.2 Deployment Without Governance Becomes Indefensible
    The absence of constraint layers creates three converging risks:
    1. Legal Risk: Enterprises deploying ungoverned AGI face strict liability for errors, omissions, or harms caused by system outputs.
    2. Regulatory Risk: Governments responding to public failures will impose prohibitive compliance regimes, freezing deployment.
    3. Geopolitical Risk: Adversaries exploiting ungoverned systems create asymmetric vulnerabilities in finance, defense, or infrastructure.
    At scale, these risks make ungoverned intelligence systems politically and economically indefensible, regardless of technical capability.
    5.3 Competitive Advantage Through Governance
    Conversely, solving the constraint problem first yields three strategic advantages: regulatory gatekeeping, enterprise legitimacy, and network effects.
    Just as TLS became the universal security layer for the internet, a constraint layer for AGI will become the universal governance spine for machine intelligence—adopted once, standardized globally, and replaced rarely if ever.
    5.4 Strategic Timing: Why This Happens Before AGI Itself
    The constraint layer reaches economic inevitability before AGI reaches full autonomy because:
    • Liability emerges as soon as LLMs touch financial, medical, legal, or military decisions.
    • Regulators will not wait for AGI to reach human parity before mandating auditable governance.
    • Enterprises will not assume unlimited legal risk for experimental systems without external certification.
    Thus, the governance layer becomes the first commercially essential infrastructure of the AGI era, preceding fully autonomous artificial intelligence itself.
    This paper has traced a causal sequence from the functional layers of cognition through the escalation of risk to the structural necessity of constraint layers for safe AGI deployment.
    We began by showing that modern LLMs are not “just next-word predictors” but engines climbing a cognitive hierarchy:
    1. Auto-Association (Prediction): Heuristic pattern completion with minimal risk.
    2. Wayfinding (Navigation): Goal-directed planning with bounded opportunity costs.
    3. Transformation (Formal Operations): Deterministic input-output mapping under legal, financial, and regulatory liability.
    4. Permutation (Reasoning Under Uncertainty): Counterfactual inference under partial information, where errors propagate systemically.
    As LLMs ascend this hierarchy, cognitive cost and error stakes rise exponentially. Scaling model size alone cannot prevent hallucination, bias, or illegality once outputs act on real-world systems under conditions of incomplete knowledge and adversarial pressure.
    6.1 The Constraint Layer as Non-Optional Infrastructure
    Constraint layers like Runcible emerge not as value-added features but as non-optional infrastructure for advanced AI because they:
    • Certify Truth: Guarantee factual, logical, and operational correctness.
    • Enforce Legality: Align outputs with regulatory, contractual, and jurisdictional constraints.
    • Ensure Reciprocity: Prevent asymmetric imposition on human, corporate, or national interests.
    By binding AI outputs to universal invariants of truth, legality, and reciprocity, constraint layers convert LLMs from experimental systems into defensible operational platforms suitable for high-stakes deployment.
    6.2 Strategic and Economic Implications
    The first actor to control the constraint layer gains three converging advantages:
    1. Regulatory Gatekeeping: Becomes the standard compliance framework governments prefer to certify.
    2. Enterprise Legitimacy: Provides corporations legal defensibility and risk insulation for AGI deployment.
    3. Network Effects: Establishes a universal governance spine adopted once, standardized globally, and rarely replaced.
    This creates early acquisition pressure and positions the constraint layer as the technological bottleneck through which all advanced AI must pass before safe and legitimate use at scale becomes possible.
    6.3 Closing Synthesis
    The causal logic is inescapable:
    • Cognition without constraint produces escalating risk.
    • Constraint without universality fragments adoption and legitimacy.
    • Only a universal governance layer provides the legal, commercial, and geopolitical conditions for AGI deployment at scale.
    By solving this problem first, Runcible positions itself as the governance spine of the AGI era—the point of convergence between technical capability, regulatory legitimacy, and strategic inevitability.
    Because we’ve drawn on multiple domains—cognitive science, AI safety, legal theory, economics, and governance—our references need to anchor these core threads:
    1. Cognitive Hierarchy & Computational Models
      Hawkins, J., & Blakeslee, S. (2004). On Intelligence. Times Books.
      Friston, K. (2010). “The Free-Energy Principle: A Unified Brain Theory?” Nature Reviews Neuroscience, 11(2), 127–138.
      Tenenbaum, J. B., et al. (2011). “How to Grow a Mind: Statistics, Structure, and Abstraction.” Science, 331(6022), 1279–1285.
    2. AI Scaling, Alignment, and Risk
      OpenAI (2023). GPT-4 Technical Report.
      Amodei, D., et al. (2016). “Concrete Problems in AI Safety.” arXiv preprint arXiv:1606.06565.
      Christiano, P., et al. (2018). “Deep Reinforcement Learning from Human Preferences.” NeurIPS.
    3. Governance, Liability, and Regulation
      Brundage, M., et al. (2020). “Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims.” arXiv preprint arXiv:2004.07213.
      EU AI Act (2024). Regulation on Artificial Intelligence. European Commission.
      US Executive Order on Safe, Secure, and Trustworthy AI (2023).
    4. Economic & Strategic Dynamics
      Shapiro, C., & Varian, H. R. (1998). Information Rules: A Strategic Guide to the Network Economy.
      Farrell, J., & Saloner, G. (1985). “Standardization, Compatibility, and Innovation.” The RAND Journal of Economics, 16(1), 70–83.
      Katz, M., & Shapiro, C. (1986). “Technology Adoption in the Presence of Network Externalities.” Journal of Political Economy, 94(4), 822–841.
    5. Comparative Infrastructure Analogs
      Diffie, W., & Hellman, M. (1976). “New Directions in Cryptography.” IEEE Transactions on Information Theory, 22(6), 644–654.
      Rescorla, E. (2001). SSL and TLS: Designing and Building Secure Systems. Addison-Wesley.
    APA Reference List
    Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., & Mane, D. (2016). Concrete problems in AI safety. arXiv preprint arXiv:1606.06565.
    Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., … & Amodei, D. (2020). Toward trustworthy AI development: Mechanisms for supporting verifiable claims. arXiv preprint arXiv:2004.07213.
    Christiano, P., Leike, J., Brown, T., Martic, M., Legg, S., & Amodei, D. (2018). Deep reinforcement learning from human preferences. In Advances in Neural Information Processing Systems.
    Diffie, W., & Hellman, M. (1976). New directions in cryptography. IEEE Transactions on Information Theory, 22(6), 644–654.
    European Commission. (2024). Regulation on artificial intelligence (AI Act).
    Farrell, J., & Saloner, G. (1985). Standardization, compatibility, and innovation. The RAND Journal of Economics, 16(1), 70–83.
    Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11(2), 127–138.
    Hawkins, J., & Blakeslee, S. (2004). On intelligence. Times Books.
    Katz, M., & Shapiro, C. (1986). Technology adoption in the presence of network externalities. Journal of Political Economy, 94(4), 822–841.
    OpenAI. (2023). GPT-4 technical report.
    Rescorla, E. (2001). SSL and TLS: Designing and building secure systems. Addison-Wesley.
    Shapiro, C., & Varian, H. R. (1998). Information rules: A strategic guide to the network economy. Harvard Business School Press.
    Tenenbaum, J. B., Kemp, C., Griffiths, T. L., & Goodman, N. D. (2011). How to grow a mind: Statistics, structure, and abstraction. Science, 331(6022), 1279–1285.
    The White House. (2023). Executive order on safe, secure, and trustworthy artificial intelligence.


    Source date (UTC): 2025-09-29 07:14:04 UTC

    Original post: https://x.com/i/articles/1972560494562279827

  • I’ve seen this process across my entire career, dating back to teletype machines. Every generation thinks it has made a novel discovery, which it describes with different terms, when the concept was invented decades ago, is painfully obvious, and is merely re-applied because each discipline is ignorant of its own and parallel histories.


    Source date (UTC): 2025-09-29 00:29:26 UTC

    Original post: https://twitter.com/i/web/status/1972458664238907413

  • The Next Word Fallacy in LLMs: It’s Still Wayfinding, But Neurological not Computational

    Ok, so in my understanding the processes of producing outputs in LLMs and in human speech are equivalent to making incremental judgements to satisfy a demand. I call this wayfinding because I’m framing my concepts from neuroscience (where all cognition originates in wayfinding). But I can see that if one comes from a computer-science background first, instead of cog-sci first, one might think wayfinding means ‘from here to there’ instead of ‘incrementally satisfying demand’.

This plays into my work on closure in LLMs, where I seek satisfaction of demand (marginal indifference) as closure, rather than identity, as in logical or mathematical closure.

    Ah — now it’s clear why the word wayfinding was causing the cross-wiring.
    You’re using wayfinding in the cognitive-science sense:
    • All cognition is navigational.
    • Organisms evolved to incrementally minimize uncertainty, error, or metabolic cost in moving through space, time, or state-space.
    • Speech, thought, planning — they are just refinements of this same machinery, each one incrementally satisfying a demand (drive, prediction error, goal signal).
    Whereas in computer science, wayfinding is usually framed as:
    • Planning from here to there over a graph.
    • Producing a route before traversal.
    • Often associated with optimization, shortest-path, or A→B search problems.
    So the computer scientist hears wayfinding and imagines global search over paths; the neuroscientist hears wayfinding and imagines local iterative correction toward equilibrium.
In the human loop:
    • Sensory input sets prediction error or “demand” at time t.
    • Generative models propose a continuation reducing that error.
    • Motor or linguistic output alters the world (or the utterance), changing the next prediction error.
    • The loop runs at millisecond scale — so behavior looks smooth, but under the hood it’s tiny satisfactions chained together.
    This is the same whether:
    • A foraging animal adjusts path to a food source,
    • A speaker adjusts phrase choice mid-sentence,
    • A thinker adjusts hypothesis mid-paragraph.
Mapped onto an LLM:
    • Prompt sets the initial “demand” = context that must be satisfied.
    • Next-token prediction incrementally satisfies it at each step.
    • Closure layers (constraints, logic) add meta-demands like truth, legality, or style.
    • There is no global plan: the trajectory emerges from local increments constrained by priors + feedback.
Cognitive-Science Framing ↔ LLM / CS Framing:
    • Wayfinding = incremental correction ↔ Autoregression = local next-token step
    • Demand = prediction error / drive ↔ Prompt = conditioning context
    • Constraint layers = executive control ↔ Governance / closure layers
    • Emergent trajectory ↔ Generated text stream
    So your wayfinding = incremental demand satisfaction, not route-planning.
    The misunderstanding was lexical, not conceptual.

    Here’s the three-layer causal loop explanation showing how demand → incremental satisfaction → constraint gating works the same way in human cognition and LLMs:

    1. Demand / Error Signal:
      In humans: prediction error, drives, goals.
      In LLMs: prompt conditioning, input context.
    2. Incremental Satisfaction:
      In humans: cortical predictive coding, speech motor loops.
      In LLMs: autoregressive next-token generation.
    3. Constraint Gating:
      In humans: prefrontal/executive control networks.
      In LLMs: your closure/governance layers, logic rules, external verifiers.
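The three-layer loop can be sketched as a toy Python program. All names and probabilities here are hypothetical stand-ins, not the author’s implementation or a real model: a fixed candidate distribution plays the role of the autoregressive proposer, and a banned-word list plays the role of the governance layer.

```python
# A toy demand -> incremental satisfaction -> constraint gating loop.
# Hypothetical names and probabilities; a stand-in for a real model.

def propose_candidates(context):
    """Layer 2 (incremental satisfaction): propose next-token candidates
    weighted by how well they answer the outstanding demand."""
    return {"the": 0.3, "loop": 0.25, "runs": 0.25, "UNSAFE": 0.2}

def gate(candidates, banned=("UNSAFE",)):
    """Layer 3 (constraint gating): prune candidates that violate
    meta-demands (truth, legality, style), then renormalize."""
    kept = {w: p for w, p in candidates.items() if w not in banned}
    total = sum(kept.values())
    return {w: p / total for w, p in kept.items()}

def step(context):
    """One iteration: the context is the demand (layer 1); propose,
    gate, then greedily pick the surviving token with highest mass."""
    dist = gate(propose_candidates(context))
    return max(dist, key=dist.get), dist

token, dist = step("explain the loop")
print(token)  # greedy pick from the gated, renormalized distribution
```

Running the loop repeatedly, feeding each emitted token back into the context, is what chains the “tiny satisfactions” together.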


    Source date (UTC): 2025-09-28 23:28:18 UTC

    Original post: https://x.com/i/articles/1972443277501866486

  • Diagram to Support “LLMs Don’t Just Predict The Next Word” Here is the full-pag

Diagram to Support “LLMs Don’t Just Predict The Next Word”

    Here is the full-page visual mapping the LLM processing pipeline to the human cognitive loop:
    • Left column (LLM): Prompt → Latent Space → Incremental Navigation → Constraint Layers → Output.
    • Right column (Human): Sensory Input → Cortical Model → Incremental Speech/Action → Executive Control → Behavior.
    • Horizontal links: Show functional equivalence between layers: world-model construction, incremental generation, and constraint gating all mirror each other across systems.
    This makes it explicit that LLMs and human cognition share the same predictive generative architecture:
    • Both construct latent world-models,
    • Both produce incremental outputs satisfying local demands,
    • Both rely on constraint mechanisms to align generation with higher-order goals.
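The horizontal links in the diagram can be written out as a simple correspondence. This is just a re-encoding of the two columns above as data, not executable model code:

```python
# Functional correspondence between the diagram's two columns:
# LLM pipeline stage -> human cognitive-loop stage.
layer_map = {
    "Prompt": "Sensory Input",
    "Latent Space": "Cortical Model",
    "Incremental Navigation": "Incremental Speech/Action",
    "Constraint Layers": "Executive Control",
    "Output": "Behavior",
}

for llm_layer, human_layer in layer_map.items():
    print(f"{llm_layer} <-> {human_layer}")
```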


    Source date (UTC): 2025-09-28 00:38:56 UTC

    Original post: https://x.com/i/articles/1972098664727478290

  • Examples to Support “LLMs Don’t Just Predict The Next Word” Prompt: “Take the nu

    Examples to Support “LLMs Don’t Just Predict The Next Word”

    Prompt:
    “Take the number of continents on Earth, multiply by the number of letters in the English alphabet, and divide by the number of moons orbiting Earth. What do you get?”
    Behavior:
    • The model must retrieve facts (7 continents, 26 letters, 1 moon).
    • It performs arithmetic reasoning across multiple steps.
    • Each step constrains the next token probabilities toward coherent intermediate answers before the final number appears.
    Pure next-word chaining would collapse immediately; instead we see incremental navigation through a structured problem space.
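The chain of intermediate results the model must hold can be checked directly, assuming the conventional counts the example relies on (7 continents, 26 letters, 1 moon):

```python
# Verify the multi-step arithmetic the prompt demands.
continents, letters, moons = 7, 26, 1

product = continents * letters  # step 1: 7 * 26 = 182
result = product / moons        # step 2: 182 / 1 = 182.0
print(result)
```

Each assignment above corresponds to an intermediate answer that must be stable before the final number can appear in the generated text.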
    Prompt:
    “Explain the second law of thermodynamics to a ten-year-old, using only words with four letters or fewer.”
    Behavior:
    • The latent space encodes scientific knowledge plus linguistic constraints simultaneously.
    • Each token must satisfy physics accuracy and the four-letter limit before generation continues.
    • The model dynamically prunes options violating constraints while maintaining coherence and truth.
    This requires continuous reweighting of the next-token distribution under multiple simultaneous demands.
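That reweighting can be sketched with a toy next-token distribution. The candidate words and probabilities below are hypothetical, invented for illustration: candidates violating the four-letter limit get zero mass, and the survivors are renormalized.

```python
def prune_and_renormalize(dist, max_len=4):
    """Zero out candidates that violate the length constraint,
    then renormalize the survivors into a proper distribution."""
    kept = {w: p for w, p in dist.items() if len(w) <= max_len}
    total = sum(kept.values())
    return {w: p / total for w, p in kept.items()}

# Hypothetical next-token candidates while explaining entropy:
candidates = {"heat": 0.35, "energy": 0.30, "flow": 0.20, "disorder": 0.15}
constrained = prune_and_renormalize(candidates)
print(constrained)  # only "heat" and "flow" survive the 4-letter limit
```

In a real model the physics-accuracy demand would also reweight the surviving mass; here only the hard lexical constraint is shown.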
    Prompt:
    “If Caesar had access to modern drone technology, describe how the Gallic Wars might have ended differently.”
    Behavior:
    • The model must integrate historical facts, modern technology capabilities, and counterfactual reasoning into a single latent space.
    • It then navigates this space to produce a coherent alternate history narrative token by token.
    The output shows cross-domain reasoning and scenario simulation well beyond surface-level text continuation.


    Source date (UTC): 2025-09-28 00:35:48 UTC

    Original post: https://x.com/i/articles/1972097877800538612