Author: Curt Doolittle

  • 1) They had no choice 2) The founders were deists only and typical of the enligh

    1) They had no choice. 2) The founders were only deists, and typical of the Enlightenment. 3) Even Jefferson composed the Jefferson Bible, which eliminated all supernatural content from the New Testament, converting it to a philosophical system.

    The aristocracy recognized the value of Christianity to the commoners (farmers, peasants, women). But they were largely deists themselves.


    Source date (UTC): 2025-08-21 14:53:47 UTC

    Original post: https://twitter.com/i/web/status/1958543056422277277

  • “The conservative christian family structure”– The problem is, that only a mino

    –“The conservative christian family structure”–

    The problem is that only a minority of the population, one that is both cognitively biased to the feminine and indoctrinated while young, is capable of the suspension of disbelief sufficient to tolerate mysticism. It is a submission response. So what do we do with the majority, who need a rational and logical equivalent that provides the same mindfulness? We could institutionalize Stoicism. It would work for boys and men. We could institutionalize the four way. It would work for girls and women. But we can’t institutionalize the supernatural. Yet demand for mindfulness is accelerating with modern forms of alienation.


    Source date (UTC): 2025-08-21 14:01:34 UTC

    Original post: https://twitter.com/i/web/status/1958529914627633575

  • True. 😉

    True. 😉


    Source date (UTC): 2025-08-21 13:56:57 UTC

    Original post: https://twitter.com/i/web/status/1958528753065214229

  • Yes. Agreed. Well done

    Yes. Agreed. Well done.


    Source date (UTC): 2025-08-21 13:56:25 UTC

    Original post: https://twitter.com/i/web/status/1958528619275595834

  • Baseline. I’m sure it varies +/- by individual

    Baseline. I’m sure it varies +/- by individual.


    Source date (UTC): 2025-08-21 13:56:00 UTC

    Original post: https://twitter.com/i/web/status/1958528514111783303

  • Funny. There is only one necessary code change and it’s to backpropagation, the

    Funny.
    There is only one necessary code change, and it’s to backpropagation; the rest is just training. I clearly failed to make the point that LLMs are capable of reasoning in their existing configurations – and that present attempts shove questions of marginal indifference into a frame of cardinal inequality.
    I’m not worried about code. There is so little of it in LLMs in the first place. I’m worried that those working on them do not understand the foundations necessary to produce reasoning outside of internal-closure grammars (math, programming).


    Source date (UTC): 2025-08-21 13:54:31 UTC

    Original post: https://twitter.com/i/web/status/1958528141812720091

  • On average the most unhappy person in a marriage is the woman. Why? Neuroticism:

    On average, the most unhappy person in a marriage is the woman. Why? Neuroticism: greater sensitivity to negative emotions. Men worry about a few big things; women worry about many small things. And between empathizing over systematizing, neuroticism over calm, magical thinking over realism, and demand for stimulation (consumption) over capitalization, it’s no wonder. So women with fewer than four children to consume those intuitions rationally (their purpose) let those emotions destroy a family rather than make it. Hence divorce rates.


    Source date (UTC): 2025-08-21 06:41:52 UTC

    Original post: https://twitter.com/i/web/status/1958419260939858026

  • The Tyranny of Method: How Disciplinary Grammars Capture the Mind Puzzles flatte

    The Tyranny of Method: How Disciplinary Grammars Capture the Mind

    Puzzles flatter elegance; problems demand responsibility. Physics closes the deterministic; behavior remains indeterminate. Every discipline is a grammar that blinds as much as it reveals. Unification is not reduction but translation: building a grammar of decidability that spans from intuition to action, and from conflict to cooperation.
    Puzzles are insulated grammars of elegance, but problems are contests of consequence; mathematics and physics give closure over determinism, yet they are too simple for the indeterminism of human behavior. Every discipline captures the mind with its grammar—formal, causal, economic, or legal—but no grammar is total. Unification is not reduction but translation: the conversion of subjective intuition into objective action across domains. The task of epistemology is therefore not to escape into puzzles, but to construct a universal grammar of decidability, capable of spanning the spectrum from intuition to action, and from responsibility to truth.
    I chose to study epistemology through science, economics, and law because I care about problems, not puzzles. Puzzles are insulated systems; problems involve conflict, cooperation, and power—the capacity to alter outcomes. Mathematics and physics give us closure over deterministic processes, but they are too simple for the lesser determinism of human behavior. The unification of fields is a linguistic problem: every discipline is a grammar that ranges from subjective intuition to objective action. My temperament drives me to integrate them, because only then can we account for conflict, cooperation, and the real stakes of human life.
    Human inquiry divides into two categories: puzzles and problems.
    • Puzzles are insulated systems of rules and representations. They reward elegance and internal consistency but remain indifferent to conflict or cooperation. Their attraction lies in escapism: they simulate rational mastery without confronting adversarial reality.
    • Problems, by contrast, are consequential. They involve conflict, cooperation, and power—the capacity to alter the probability of outcomes. Problems are never closed; they must be resolved under conditions of uncertainty, liability, and limited information.
    To focus on puzzles at the expense of problems is to privilege intellectual play over responsibility. It is to avoid the domain where choices incur consequences.
    Mathematics and physics provide closure over highly deterministic processes. Their appeal lies in their precision: once initial conditions are known, outcomes follow with necessity.
    Yet this determinism is rare outside the physical sciences. Human behavior is underdetermined: shaped by competing incentives, partial knowledge, and adversarial strategies. Where physics seeks exact solutions, the behavioral sciences must settle for satisficing, liability-weighted judgments, and reciprocal constraints.
    Thus, the mathematical and physical grammars are insufficient to capture behavioral systems. They are too simple—not because they lack rigor, but because they presuppose determinism where indeterminacy is irreducible.
    Every discipline is a grammar of representation, and each grammar captures its practitioners:
    • Mathematics teaches one to think in formal closure.
    • Physics trains one to search for deterministic causal chains.
    • Economics frames action in terms of equilibria and marginal trade-offs.
    • Law disciplines thought into adversarial argument and precedent.
    Each grammar is internally rational, but none is universally commensurable. Practitioners tend to overextend their paradigm, mistaking a partial grammar for a total one. This is the error of methodological capture: the conflation of one domain’s precision with universal adequacy.
    Unification is not a problem of mathematics alone, nor of metaphysics, nor of physics. It is a problem of linguistics and representation.
    Knowledge is organized through grammars ranging along a spectrum:
    • From subjective intuition (personal judgment, experiential immediacy).
    • To objective action (operational repeatability, physical testability).
    The challenge is not to reduce one grammar to another, but to produce translation rules between grammars. This is the function of an epistemology of measurement: a system that makes domains of inquiry commensurable without erasing their distinct causal constraints.
    The unification of the sciences, and the correction of their methodological blind spots, requires a general grammar of decidability. Such a grammar must preserve the precision of deterministic domains while extending operational testability to indeterminate, adversarial, and cooperative systems.
    Where puzzles provide elegance, problems demand responsibility. The future of inquiry depends not on escaping into puzzles but on confronting problems—through grammars capable of spanning the range from subjective intuition to objective action.
    I’ve always leaned toward problems rather than puzzles. Puzzles are self-contained—internally consistent, often elegant, but ultimately detached from the conflicts that define human life. I’ve treated puzzles as a form of escapism. They let one play at reasoning without consequence. But problems—conflict, cooperation, power, law, economy—these are the real fields where choices change outcomes.
    That orientation explains my trajectory. Mathematics and physics appealed to me because of their closure: they give precision in highly deterministic systems. But they felt insufficient for my temperament, because human behavior isn’t deterministic. It’s noisy, adversarial, and cooperative all at once. That indeterminacy requires tools that can manage uncertainty, conflict, and liability. So, I found myself studying epistemology through science, economics, and law rather than through purely abstract puzzles.
    There’s also a psychological layer: my attraction to power isn’t about domination. It’s about defense. My childhood pushed me to think about security and protection—about being able to alter the probability of outcomes when others could impose on me. That instinct shaped my work. Where others retreat to puzzles for safety, I lean into problems because that’s where safety is earned.
    And so I interpret disciplinary paradigms differently than most. Mathematicians, physicists, economists, lawyers—all are captured by the grammar of their domain. Each grammar provides precision in some dimension but blinds its practitioners to others. I’ve come to see the unification of fields as a linguistic problem. Grammars stretch along a spectrum from subjective intuition to objective action. If we can translate between them, we can unify not just knowledge but methods of cooperation.
    At bottom, my drive is simple: I want to reduce the noise of conflict and deception by building a common grammar of decidability. That drive makes sense of my choices, my intellectual pride, and even my suspicion of puzzle-solving as escapism. What drives me isn’t curiosity for its own sake but responsibility: the responsibility to solve problems that actually matter.
    [END]


    Source date (UTC): 2025-08-20 20:20:46 UTC

    Original post: https://x.com/i/articles/1958262956380283099

  • Solving The Problem: Computability and Decidability in the Open World (ed: This

    Solving The Problem: Computability and Decidability in the Open World

    (ed: This article is written for the reader less comfortable with mathematics. If you are comfortable with LaTeX (and can tolerate that we might have made a few formatting errors), the math version of this article follows this one.)
    TL;DR (for fellow supernerds): Doolittle’s innovation is reducible to: “Set logic with finite limits -> supply-demand logic with marginally indifferent limits. Proof-carrying answers are overfitted to closed worlds; alignment-only filters are underfit to liability. The middle path is liability-weighted Bayesian accounting to marginal indifference.”
    Why? Because mathematics constitutes a limit of reducibility conceivable by the human mind under self-reflection, while Bayesian accounting is evolved and necessary precisely because it is the only means of accounting for differences beyond the reducibility of the human mind, and therefore closed to introspection. Our neurons aren’t introspectible and neither is Bayesian accounting – though the truth is that current NNs used in LLMs are an intermediary point of reduction, since they encode the equivalent of bundles of human neural sense perception in words. Those words are the limit of reducibility of marginal indifference.
    “Mathiness” pursues epsilon–delta in logic space; useful, but the productive epsilon is the error bound in outcome space conditional on reciprocity and externalities. That is what institutions, courts, engineers, and markets already pay for.
    The community keeps trying to buy logical certainty with formalism when the productive path for general reasoning is to buy marginal indifference with measurement. Treat reasoning as an economic process: update beliefs, price error, stop when the expected value of more information falls below the liability-weighted tolerance for error in the context. That’s computability for language.
    Explanation by GPT5:
    Proof-carrying logic is overfit to closed worlds; alignment filters are underfit to liability. The productive middle path is liability-weighted Bayesian accounting to marginal indifference.
    Mathematics is reducibility: the epsilon–delta of self-reflection, the mind’s limit of introspection. Bayesian updating is evolved necessity: the only means of accounting for variance beyond reducibility, where neurons—and their aggregates in words—are opaque to introspection. Current neural nets occupy this intermediary, encoding bundles of percepts as linguistic weights: words are the limit of reducibility of marginal indifference.
    Mathiness chases epsilon–delta in logic space. But the real epsilon is the error bound in outcome space, conditional on reciprocity and externalities. That is what institutions, engineers, and markets already pay for.
    Reasoning must be treated as an economic process: beliefs updated, error priced, and inquiry terminated when the marginal value of precision falls below the liability-weighted tolerance for error in context. That stopping rule is computability for language.
    As Such:
    Restatement
    1. The Problem with Extremes
    • Proof-carrying answers (formal logic, set-theoretic limits) are overfit: they assume a closed world where all variables can be specified.
    • Alignment-only filters (pure preference or reinforcement filters) are underfit: they lack liability-accountability because they ignore externalities.
    2. The Middle Path
    • The correct solution is liability-weighted Bayesian accounting: update beliefs until further information has no marginal value (marginal indifference), with tolerance for error scaled by the liability (cost of being wrong in context).
    3. Why Bayesian, not Pure Math?
    • Mathematics = reducibility: it captures what the human mind can introspectively reduce to first principles.
    • Bayesian accounting = evolved necessity: it is the only way to handle variation beyond the mind’s reducibility (neural processes themselves are non-introspectible, and so are Bayesian updates).
    • Neural nets sit in between: they approximate bundles of human percepts in word-weights, making language itself a limit of reducibility of marginal indifference.
    4. Implication for AI Reasoning
    • Formalism (“mathiness”) chases epsilon–delta in logic space, but real productivity comes from bounding error in outcome space given reciprocity and externalities.
    • Markets, courts, and engineers already pay for error bounds, not perfect logical closure.
    • Therefore, reasoning should be treated like an economic process:
    • update beliefs (Bayesian step),
    • price error (liability step),
    • stop when further information is not worth the cost.
    • That is what makes reasoning in language computable.
    Outline:
    • Part 1: Why Measurement Beats Mathiness (thesis + critique)
    • Part 2: The Indifference Method (full formalization + EIC + ROMI)
    • Part 3: Liability Tiers and Thresholds (defaults + examples)
    The community keeps trying to buy logical certainty with formalism when the productive path for general reasoning is to buy marginal indifference with measurement. Treat reasoning as an economic process: update beliefs, price error, stop when the expected value of more information falls below the liability-weighted tolerance for error in the context. That’s computability for language.
    Below is a tight formalization you can lift.
    Testifiability (Truth).
    Satisfaction of the demand for testifiable warrant across the accessible dimensions: categorical consistency, logical consistency, empirical correspondence, operational repeatability, and rational/reciprocal choice. Practically: keep a set of per-axis coverage scores, each between 0 and 1. The context sets minimum thresholds for each axis.
    Decidability.
    “Satisfaction of the demand for infallibility in the context in question without the necessity of discretion.” Operationally: a decision is decidable when the decidability margin (defined below) is zero or positive given the liability of error.
    Marginal Indifference (decision standard).
    For each candidate action, compute its expected loss by summing the losses across possible states of the world, each weighted by its current probability. Let the best action be the one with the lowest expected loss; the runner-up is the next best. Define the decidability margin as:
    • the runner-up’s expected loss
    • minus the best action’s expected loss
    • minus the required certainty gap for this context (the liability-derived cushion you must clear).
    Decision status:
    • Decidable: the decidability margin is zero or positive and all testifiability thresholds are met.
    • Indifferent (stop rule): the expected value of the next measurement is less than or equal to the required certainty gap.
    • Undecidable: otherwise; seek more measurement.
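    The decision standard above can be sketched in code. This is a minimal, illustrative reading, not the author's implementation: `loss_table` maps each candidate action to its per-state losses, `probs` holds the current state probabilities, and `required_gap` is the liability-derived cushion.

```python
def expected_loss(losses, probs):
    # Expected loss of one action: per-state losses weighted by state probability.
    return sum(l * p for l, p in zip(losses, probs))

def decide(loss_table, probs, required_gap, ev_next_measurement, thresholds_met=True):
    # Rank actions by expected loss: best first, runner-up second.
    ranked = sorted(loss_table, key=lambda a: expected_loss(loss_table[a], probs))
    best, runner_up = ranked[0], ranked[1]
    # Decidability margin: runner-up's loss minus best's loss minus the gap.
    margin = (expected_loss(loss_table[runner_up], probs)
              - expected_loss(loss_table[best], probs)
              - required_gap)
    if margin >= 0 and thresholds_met:
        return "Decidable", best, margin
    if ev_next_measurement <= required_gap:
        return "Indifferent", best, margin
    return "Undecidable", best, margin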
    Bayesian Accounting (the missing piece).
    Maintain a ledger rather than a proof.
    • Assets: gains in evidential support from corroborating measurements.
    • Liabilities: expected externalities of error (population × severity) plus any warranty you promise.
    • Equity (warrant): the net decisional surplus over the required certainty gap.
      Decide when equity is non-negative and testifiability thresholds are met.
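    One way to read the ledger arithmetic, as a hedged sketch (the function names and the decision that equity nets out the certainty gap are my assumptions, not the source's):

```python
def liabilities(population, severity, warranty):
    # Expected externalities of error (population x severity) plus any promised warranty.
    return population * severity + warranty

def equity(assets, population, severity, warranty, required_gap):
    # Net decisional surplus over the required certainty gap.
    return assets - liabilities(population, severity, warranty) - required_gap

def decide_on_ledger(assets, population, severity, warranty, required_gap, thresholds_met):
    # Decide when equity is non-negative and testifiability thresholds are met.
    return thresholds_met and equity(assets, population, severity, warranty, required_gap) >= 0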
    Limit-as-reasoning (unifying “math limit” and “marginal indifference”).
    As measurements accumulate, posterior odds and expected-loss gaps stabilize. The limit approached is the smallest practical error bound such that no additional evidence with positive value could flip the decision across the required certainty gap. Reasoning is a limit-seeking process; the “proof” is the convergence certificate.
    • Completeness vs. liability. Formal derivation optimizes certainty inside axiomatic spaces. General reasoning optimizes expected outcomes under liability. Outside math, liability is usually the binding constraint.
    • Open-world evidence. Incompleteness, path-dependence, and dependence among sources make perfect formal closure intractable. Bayesian accounting prices these imperfections and still yields action.
    • Opportunity cost. The cost of further formalization often exceeds the expected value of information. Markets stop at marginal indifference. Reasoners should, too.
    1. Operationalization. Reduce every claim to an actionably measurable sequence (who does what, when, with what materials, yielding which observations). No operation → no update.
    2. Multi-axis tests. Score testifiability across: categorical, logical, empirical, operational, and reciprocal-choice. Fail any mandatory axis → no decision.
    3. Reliability-weighted evidence. Weight updates by instrument quality, source dependence, and adversarial exposure; discount dependent testimony (log-opinion pooling with dependency penalties).
    4. Liability calibration. Map the context to its required certainty gap (e.g., casual advice < finance < medicine < law/regulation). Higher liability demands a larger expected-loss gap and higher testifiability thresholds.
    5. Stop rule (marginal indifference). Estimate the expected value of the next-best measurement; stop when it is less than or equal to the required certainty gap.
    6. Reciprocity constraint. Filter actions and claims by Pareto-improvement and non-imposition (expected externalities priced into the liability term).
    7. Audit trail. Publish the ledger: priors, evidence deltas, dependency corrections, the expected-loss table, the decidability margin, the testifiability scores, and the resulting convergence certificate.
    Epsilon-Indifference Certificate (EIC) — include:
    • the convergence bound (the smallest practical error bound described above),
    • the decidability margin (surplus over the required certainty gap),
    • the testifiability scores and their thresholds,
    • the context and liability settings,
    • and the audit (ledger entries and the measurement plan considered and rejected once the stop rule was met).
    This is the computable replacement for “sounds plausible.” It is the artifact that makes the answer testifiable and the choice decidable.
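    The EIC fields listed above map naturally onto a record type. A minimal sketch, with field names of my choosing rather than any canonical schema:

```python
from dataclasses import dataclass, field

@dataclass
class EIC:
    convergence_bound: float       # smallest practical error bound
    decidability_margin: float     # surplus over the required certainty gap
    testifiability_scores: dict    # axis -> score in [0, 1]
    thresholds: dict               # axis -> minimum score for the context
    context: str                   # context and liability settings
    audit: list = field(default_factory=list)  # ledger entries + rejected measurement plan

    def thresholds_met(self):
        return all(self.testifiability_scores.get(axis, 0.0) >= minimum
                   for axis, minimum in self.thresholds.items())

    def decidable(self):
        return self.decidability_margin >= 0 and self.thresholds_met()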
    ROMI — Reasoning as Optimizing Marginal Indifference
    1. Parse → Operations. Translate the prompt into an explicit set of hypotheses and candidate actions.
    2. Priors. Set structural priors (base rates, domain constraints).
    3. Plan measurements. Enumerate tests with estimated information gain and cost.
    4. Acquire/verify. Retrieve or simulate measurements; apply reliability and dependency corrections.
    5. Update. Revise odds and compute expected losses for each action.
    6. Calibrate liability. Choose the context class → compute the required certainty gap; set the testifiability thresholds.
    7. Stop/continue. If the expected value of the next measurement is less than or equal to the required gap and thresholds are met, stop; otherwise measure more.
    8. Decide & certify. Output the chosen action with the EIC and the full ledger.
    This is Bayesian decision-making under reciprocity constraints—accounting, not theorem-proving. It exploits the LLM’s strengths (fast hypothesis generation and measurement planning) while binding it to liability-aware stopping.
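    The eight-step ROMI loop can be compressed into a toy sketch under strong simplifying assumptions: each planned measurement is a (information value, likelihood ratio) pair whose outcome is known in advance, so steps 1, 2, and 4 collapse into the inputs. All names are illustrative.

```python
def romi(prior_odds, planned_measurements, required_gap):
    # Returns final odds and an audit ledger of what was measured and why we stopped.
    odds, ledger = prior_odds, []
    # Step 3: consider the highest-value measurement first.
    for value, likelihood_ratio in sorted(planned_measurements, reverse=True):
        # Step 7: stop when the next measurement's value does not clear the gap.
        if value <= required_gap:
            ledger.append(("stopped", value))
            break
        odds *= likelihood_ratio          # Step 5: Bayesian update on the odds
        ledger.append(("measured", value, likelihood_ratio))
    return odds, ledger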
    • Computability from prose. Operationalization plus accounting turns language into a measured decision process.
    • Safety as economics. Liability is priced into the required certainty gap rather than handled by blunt alignment filters.
    • Graceful degradation. When undecidable under current evidence and liability, return the next-best measurement plan with value estimates.
    • Universally commensurable. All domains reduce to the same artifact (EIC + ledger), satisfying the demand for commensurability.
    • Context tiers → required certainty gaps: e.g., Chat (low), Technical advice (medium), Medical/Legal (high).
    • Axis thresholds: stricter for high-liability contexts.
    • Pooling rule: log-opinion pooling with a dependency penalty vs. hierarchical Bayes (choose one; both are defensible).
    • Penalty schema: externality classes and population weights.
    Claim: …
    Operations: …
    Evidence ledger: priors → updates (source, reliability, how much it moved the needle) → dependency adjustments.
    Testifiability vs. thresholds: [categorical, logical, empirical, operational, reciprocity] = […].
    Liability class → required certainty gap: …
    Expected-cost table for the candidate actions; decidability margin: …
    Expected value of the next test: … → Stop?
    Decision with EIC {convergence bound, decidability margin, testifiability scores, thresholds, context, audit}.
    Status: Decidable / Indifferent / Undecidable (with next-measurement plan).
    • Proof-carrying answers are overfitted to closed worlds; alignment-only filters are underfit to liability. The middle path is liability-weighted Bayesian accounting to marginal indifference.
    • “Mathiness” pursues epsilon–delta in logic space; useful, but the productive “epsilon” is the error bound in outcome space conditional on reciprocity and externalities. That is what institutions, courts, engineers, and markets already pay for.
    Yes—the argument stands. For general reasoning, you optimize to marginal indifference under a liability-aware evidence ledger, not to formal certainty. The goal isn’t a proof; it’s a decidable action with a warranted error bound that fits the context’s demand for infallibility.
    1) “Mathiness” vs. measurement
    Formal derivations are sufficient but rarely necessary. Outside closed worlds, the task is to minimize expected externalities of error, not to maximize syntactic closure.
    2) Bayesian accounting is the engine
    Treat each evidence update as a line item on an assets–liabilities ledger. Keep measuring until the expected value of the next measurement is lower than the required certainty gap set by the context’s liability tier. That stop rule is what delivers marginal indifference.
    3) Outputs: testifiability and decidability
    Require minimum scores on five axes of testifiability—categorical, logical, empirical, operational, reciprocity—and a decidability margin (best option’s advantage minus the required certainty gap) that clears the context’s threshold.
    4) Limit-as-reasoning
    Think of reasoning as convergence: keep measuring until additional evidence cannot reasonably flip the decision given the required certainty gap. Issue a short Indifference Certificate (EIC) documenting why further measurement isn’t worth it.
    5) LLMs’ comparative advantage
    LLMs excel at hypothesis generation and measurement planning; they struggle with global formal closure. Constrain them with the ledger + stop rule so their strengths are productive and their weaknesses are bounded.
    • Operationalization. Every claim reduces to concrete, measurable operations. No operation → no justified update.
    • Liability mapping. Map the context’s demand for infallibility into a required certainty gap and axis thresholds for testifiability.
    • Dependency control. Penalize correlated or duplicate evidence; price adversarial exposure.
    • Auditability. Every decision ships with the evidence ledger and the EIC.
    • Fat tails / ruin risks. Optimize risk-adjusted expected loss (e.g., average of the worst tail of outcomes) rather than plain expectation. Raise the required certainty gap or add hard guards for irreversible harms.
    • Multi-stakeholder externalities. Treat liability as a vector across affected groups. Clear the margin under a conservative aggregator (default: protect the worst-affected), so you don’t buy gains by imposing costs on a minority.
    • Severe ambiguity / imprecise priors. Use interval posteriors or imprecise probability sets; choose the set of admissible actions and apply the required certainty gap to break ties.
    • Model misspecification / distribution shift. Add a specification penalty when you suspect shift; raise the required certainty gap or fall back to minimax-regret in high-shift regions.
    • Information hazards / strategic manipulation. Price the externalities of measuring into the expected value of information; refuse measurements that reduce welfare under reciprocity constraints.
    • Liability schedule. Use discrete tiers (e.g., Chat → Engineering → Medical/Legal → Societal-risk). Each tier sets a required certainty gap and axis thresholds, with empirical and operational demands escalating faster than categorical and logical.
    • Risk-adjusted margin. Compute the decisional advantage using a tail-aware measure (e.g., average of worst-case slices), then subtract the tier’s required certainty gap.
    • Vector liability aggregator. Default to max-protect the worst-affected; optionally allow a documented weighted scheme when policy demands it.
    • Imprecise update mode. If uncertainty bands overlap the required gap, return admissible actions + next best measurement plan rather than a single action.
    • Certificate extension (EIC++). Include: chosen risk measure, stakeholder weights/guard, shift penalty, and dependency-adjusted evidence deltas.
    • Computability from prose. Language → operations → evidence ledger → certificate.
    • Graceful stopping. Every answer carries a why-stop-now justification: the next test isn’t worth enough to matter.
    • Context-commensurability. One artifact across domains; only the liability tier, axis thresholds, and required gap change.
    • Accountable disagreement. Disagreements reduce to public differences in priors, instrument reliabilities, or liability settings—all auditable.
    The argument is correct in principle and superior in practice provided you:
    (a) enforce operationalization,
    (b) calibrate liability into a risk-aware required certainty gap,
    (c) control evidence dependence, and
    (d) emit an auditable certificate.
    Do that, and “mathiness” gives way to measured, decidable action with bounded error—the product markets and institutions actually demand.
    We use five liability tiers. Higher tiers mean higher stakes and a bigger required cushion before we act. Think in three pieces:
    • Expected cost: what you expect each option will cost after considering chances and consequences.
    • Spread: how jumpy that comparison is—use a robust “typical swing” (median absolute deviation) rather than a fragile standard deviation.
    • Required certainty gap: how much better the best option must be (beyond noise) at this tier before we’re willing to act.
    We also look at tail risk—how the worst few percent of cases behave. Concretely, we judge using the average of the worst X% of outcomes (that’s CVaR in plain English).
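    The two measures just described can be sketched directly; the function names are mine:

```python
import statistics

def worst_tail_average(costs, worst_fraction):
    # "Average of the worst X% of outcomes": CVaR in plain English.
    ranked = sorted(costs, reverse=True)             # highest cost = worst, first
    k = max(1, round(len(ranked) * worst_fraction))  # size of the worst slice
    return sum(ranked[:k]) / k

def typical_swing(cost_differences):
    # Robust spread: median absolute deviation from the median,
    # rather than a fragile standard deviation.
    center = statistics.median(cost_differences)
    return statistics.median(abs(d - center) for d in cost_differences)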
    Tiers and defaults
    • Tier 1 (casual chat, exploratory analysis): average over the worst 20%; required certainty gap = 0.25 × spread; minimum evidence surplus ~0.5 “bits” (≈ 1.4:1 odds).
    • Tier 2 (consumer advice, coding tips): worst 10%; gap = 0.50 × spread; ~1.0 bit (≈ 2:1 odds).
    • Tier 3 (engineering, non-safety finance): worst 5%; gap = 1.00 × spread; ~2.0 bits (≈ 4:1 odds).
    • Tier 4 (medical, legal, compliance): worst 1%; gap = 2.00 × spread; ~3.0 bits (≈ 8:1 odds).
    • Tier 5 (societal or irreversible harms): worst 0.5%; gap = 4.00 × spread; ~4.0 bits (≈ 16:1 odds).
    Decision rule (“decidability margin”)
    1. Compute the expected cost of the best option and the runner-up, using the worst-tail averaging appropriate to the tier.
    2. Subtract the best from the runner-up to get the benefit gap.
    3. Subtract the required certainty gap (the multiplier × spread).
    4. If what remains is zero or positive, and the testifiability thresholds (below) are met, the choice is decidable. Otherwise, gather more measurement.
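    The four steps above, combined with the tier defaults, can be sketched as follows. The tier table values are copied from the defaults; everything else is illustrative.

```python
# Tier -> (worst-tail fraction to average over, gap multiplier on the spread).
TIER_DEFAULTS = {1: (0.20, 0.25), 2: (0.10, 0.50), 3: (0.05, 1.00),
                 4: (0.01, 2.00), 5: (0.005, 4.00)}

def worst_tail_average(costs, worst_fraction):
    ranked = sorted(costs, reverse=True)
    k = max(1, round(len(ranked) * worst_fraction))
    return sum(ranked[:k]) / k

def decidability_margin(best_costs, runner_up_costs, spread, tier):
    worst_fraction, multiplier = TIER_DEFAULTS[tier]
    # Steps 1-2: benefit gap between runner-up and best, tail-averaged.
    benefit_gap = (worst_tail_average(runner_up_costs, worst_fraction)
                   - worst_tail_average(best_costs, worst_fraction))
    # Step 3: subtract the required certainty gap (multiplier x spread).
    return benefit_gap - multiplier * spread

def is_decidable(best_costs, runner_up_costs, spread, tier, thresholds_met):
    # Step 4: decidable iff the remainder is non-negative and thresholds are met.
    return thresholds_met and decidability_margin(
        best_costs, runner_up_costs, spread, tier) >= 0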
    We score five axes from 0 to 1. Thresholds tighten with liability. Empirical and operational requirements ramp fastest.
    • Categorical: terms are defined and used consistently; no category mistakes.
    • Logical: reasoning is coherent; no unresolved contradictions or circularity.
    • Empirical: claims are supported by measurements from reliable instruments or sources.
    • Operational: the claim reduces to concrete, executable steps with preconditions and expected observations.
    • Reciprocity: expected externalities are priced and disclosed; the choice does not impose hidden costs on others.
    Minimum scores required to act
    • Tier 1: categorical 0.60, logical 0.60, empirical 0.30, operational 0.30, reciprocity 0.50.
    • Tier 2: categorical 0.70, logical 0.75, empirical 0.50, operational 0.60, reciprocity 0.70.
    • Tier 3: categorical 0.85, logical 0.85, empirical 0.70, operational 0.75, reciprocity 0.85.
    • Tier 4: categorical 0.90, logical 0.90, empirical 0.85, operational 0.90, reciprocity 0.90.
    • Tier 5: categorical 0.95, logical 0.95, empirical 0.95, operational 0.95, reciprocity 0.95.
    Interpretation: by Tier 4–5 you need near-complete measurement and a runnable procedure—not just clean logic.
    Default: log-opinion pooling with dependency penalties—plain English version:
    • Start with multiple sources (experiments, datasets, experts).
    • Give each a reliability weight from 0 to 1, based on instrument quality and track record.
    • Detect clusters of dependent or near-duplicate sources; reduce their combined influence so you don’t “double-count the same voice.”
    • Cap any single source’s influence so no one dominates.
    • Combine the adjusted contributions to update the odds for each hypothesis.
    Practical settings (defaults you can change):
    • Penalty strength for dependency: moderate.
    • Weight cap for a single source: 40%.
    • For a cluster of m near-duplicates, divide the cluster’s total weight by the square root of m (effective sample size rule of thumb).
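    The weighting rules above can be sketched as a small Python function. This is an illustrative sketch under the stated defaults (40% cap, sqrt(m) cluster shrinkage); names and the simple one-pass cap are assumptions, not a prescribed implementation:

```python
import math
from collections import Counter

def pooled_weights(reliabilities, clusters, cap=0.40):
    """Reliability weights with cluster de-duplication and a per-source cap.

    reliabilities: r_i in [0, 1] per source.
    clusters: a cluster label per source; each member of an m-source cluster
              of near-duplicates is shrunk by sqrt(m) (effective sample size).
    cap: maximum share for any single source (0.40 per the defaults above).
    """
    size = Counter(clusters)
    # shrink near-duplicates so the same voice is not double-counted
    w = [r / math.sqrt(size[c]) for r, c in zip(reliabilities, clusters)]
    total = sum(w)
    w = [x / total for x in w]
    # cap any single voice, then renormalize (a strict cap would iterate)
    w = [min(x, cap) for x in w]
    total = sum(w)
    return [x / total for x in w]

# three near-duplicate reviews (cluster "a") plus one independent lab test ("b")
weights = pooled_weights([0.8, 0.8, 0.8, 0.9], ["a", "a", "a", "b"])
```

    The independent source ends up with the largest single share, but never more than the cap.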
    Every answer comes with a short Epsilon-Indifference Certificate—an audit trail that justifies why we stopped now and why this action is warranted.
    What’s in it (human-readable fields):
    • Claim and context tier.
    • Priors used.
    • Evidence ledger: each item with type, reliability, “how much it moved the needle,” and which cluster it belongs to.
    • Pooling summary: the final weights after dependency penalties.
    • Posterior odds in plain numbers.
    • Options compared and their expected costs (already using the right worst-tail averaging for the tier).
    • Spread of that cost difference (the typical swing).
    • Required certainty gap for this tier.
    • Decidability margin: benefit gap minus required gap (must be ≥ 0).
    • Testifiability scores on the five axes vs. the tier’s thresholds.
    • Value of the next measurement: how much we expect the next best test to help; if it’s below the required gap, we stop.
    • Decision and a short rationale.
    • Audit hash (so the exact artifact can be reproduced).
    A note on “bits of evidence”: 1 bit ≈ moving from 1:1 to 2:1 odds; 2 bits ≈ 4:1; 3 bits ≈ 8:1; 4 bits ≈ 16:1. We require a minimum surplus by tier.
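    The bits-to-odds conversion is just a base-2 logarithm; a two-line sketch (function names illustrative):

```python
import math

def bits_from_odds(odds):
    """Evidence in bits: 2:1 odds -> 1 bit, 16:1 odds -> 4 bits."""
    return math.log2(odds)

def odds_from_bits(bits):
    """Inverse: 3 bits -> 8:1 odds."""
    return 2.0 ** bits
```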
    • Offer to settle: $2.20M.
    • If litigate: about $1.00M in legal costs; if you lose, $5.00M in damages.
    • After pooling evidence: about a 50% chance of losing in court (dependency-penalized sources).
    • Expected cost of litigating: 0.5 × $5.00M + $1.00M = $3.50M.
    • Expected cost of settling: $2.20M.
    • Benefit gap: $3.50M − $2.20M = $1.30M.
    Tier-4 settings:
    • Worst-tail averaging: we judge using the average of the worst 1% of outcomes.
    • Spread (typical swing) in the cost difference: about $0.50M.
    • Required certainty gap: 2.0 × $0.50M = $1.00M.
    • Decidability margin: $1.30M − $1.00M = $0.30M → passes.
    Testifiability scores clear Tier-4 thresholds (empirical and operational are high because we have concrete costs and procedures). The expected value of one more study on damages might improve things by about $0.25M—below the $1.00M required gap—so we stop.
    Decision: Settle. EIC issued with the ledger.
    • Warranty price: $200 for three years.
    • If it fails: average repair cost $500.
    • After pooling: failure probability around 12% (duplicates penalized).
    • Expected cost without warranty: 0.12 × $500 = $60.
    • Expected cost with warranty: $200.
    • Benefit gap (skip − buy): $200 − $60 = $140.
    Tier-2 settings:
    • Worst-tail averaging: average of the worst 10% of outcomes.
    • Spread (typical swing) in the cost difference: about $50.
    • Required certainty gap: 0.5 × $50 = $25.
    • Decidability margin: $140 − $25 = $115 → passes.
    Evidence surplus is above the Tier-2 minimum. The next measurement (brand-specific reliability) is worth about $10, below the required gap, so we stop.
    Decision: Don’t buy the warranty. EIC issued.
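    Both worked examples reduce to the same arithmetic, which a short sketch makes explicit (the helper name is illustrative; the numbers are the ones from the two examples above):

```python
def margin(best_cost, runner_up_cost, multiplier, spread):
    """Benefit gap minus the tier's required certainty gap (multiplier x spread)."""
    return (runner_up_cost - best_cost) - multiplier * spread

# Tier-4 litigation example, amounts in $M: settle (2.20) vs litigate (3.50)
legal = margin(2.20, 3.50, multiplier=2.0, spread=0.50)   # 1.30 - 1.00 = 0.30

# Tier-2 warranty example, amounts in $: skip (60) vs buy (200)
warranty = margin(60.0, 200.0, multiplier=0.5, spread=50.0)  # 140 - 25 = 115
```

    Both margins are positive, so both decisions pass without further measurement.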
    • Language → operations: every claim is turned into steps, measurements, and expected observations.
    • Accounting, not proof-hunting: we keep a ledger of how each piece of evidence changes the odds, while pricing externalities as liability.
    • Context-aware stopping: we stop when the next test isn’t worth as much as the required gap for this tier.
    • One artifact across domains: only the thresholds and required gap change with stakes; the method and the certificate don’t.
    • Tiers: 5, with the worst-tail slices, gap multipliers, and evidence minima listed above.
    • Thresholds: empirical and operational escalate faster than categorical and logical; table above.
    • Pooling: log-opinion pooling with dependency penalties; weight cap per source; cluster de-duplication by effective sample size.
    If you want a stricter Tier-5 (e.g., push the required gap multiplier from 4.0 to 5.0 for extra conservatism on irreversible harms), say the word and we’ll ratchet that one knob and keep everything else fixed.


    Source date (UTC): 2025-08-19 23:08:43 UTC

    Original post: https://x.com/i/articles/1957942837355639117

  • Solving The Problem: Computability and Decidability in the Open World (Math Vers

    Solving The Problem: Computability and Decidability in the Open World (Math Version)

    (ed: This article is written for the reader comfortable with mathematics. If you are not, there is another copy of this article, in ordinary language, preceding this one.)
    TL/DR; For fellow supernerds: Doolittle’s innovation is reducible to: “Set logic with finite limits -> supply-demand logic with marginally indifferent limits.” Proof-carrying answers are overfitted to closed worlds; alignment-only filters are underfit to liability. The middle path is liability-weighted Bayesian accounting to marginal indifference.
    Why? Because mathematics constitutes a limit of reducibility conceivable by the human mind under self-reflection, while Bayesian accounting is evolved and necessary precisely because it is the only means of accounting for differences beyond the reducibility of the human mind, and therefore closed to introspection. Our neurons aren’t introspectible, and neither is Bayesian accounting – though in truth, current NNs used in LLMs are an intermediary point of reduction, since they encode the equivalent of bundles of human neural sense perception in words. Those words are the limit of reducibility of marginal indifference.
    “Mathiness” pursues epsilon–delta in logic space; useful, but the productive epsilon is the error bound in outcome space conditional on reciprocity and externalities. That is what institutions, courts, engineers, and markets already pay for.
    The community keeps trying to buy logical certainty with formalism when the productive path for general reasoning is to buy marginal indifference with measurement. Treat reasoning as an economic process: update beliefs, price error, stop when the expected value of more information falls below the liability-weighted tolerance for error in the context. That’s computability for language.
    Explanation by GPT5:
    Proof-carrying logic is overfit to closed worlds; alignment filters are underfit to liability. The productive middle path is liability-weighted Bayesian accounting to marginal indifference.
    Mathematics is reducibility: the epsilon–delta of self-reflection, the mind’s limit of introspection. Bayesian updating is evolved necessity: the only means of accounting for variance beyond reducibility, where neurons—and their aggregates in words—are opaque to introspection. Current neural nets occupy this intermediary, encoding bundles of percepts as linguistic weights: words are the limit of reducibility of marginal indifference.
    Mathiness chases epsilon–delta in logic space. But the real epsilon is the error bound in outcome space, conditional on reciprocity and externalities. That is what institutions, engineers, and markets already pay for.
    Reasoning must be treated as an economic process: beliefs updated, error priced, and inquiry terminated when the marginal value of precision falls below the liability-weighted tolerance for error in context. That stopping rule is computability for language.
    As Such:
    Restatement
    1. The Problem with Extremes
    • Proof-carrying answers (formal logic, set-theoretic limits) are overfit: they assume a closed world where all variables can be specified.
    • Alignment-only filters (pure preference or reinforcement filters) are underfit: they lack liability-accountability because they ignore externalities.
    2. The Middle Path
    • The correct solution is liability-weighted Bayesian accounting: update beliefs until further information has no marginal value (marginal indifference), with tolerance for error scaled by the liability (cost of being wrong in context).
    3. Why Bayesian, not Pure Math?
    • Mathematics = reducibility: it captures what the human mind can introspectively reduce to first principles.
    • Bayesian accounting = evolved necessity: it is the only way to handle variation beyond the mind’s reducibility (neural processes themselves are non-introspectible, and so are Bayesian updates).
    • Neural nets sit in between: they approximate bundles of human percepts in word-weights, making language itself a limit of reducibility of marginal indifference.
    4. Implication for AI Reasoning
    • Formalism (“mathiness”) chases epsilon–delta in logic space, but real productivity comes from bounding error in outcome space given reciprocity and externalities.
    • Markets, courts, and engineers already pay for error bounds, not perfect logical closure.
    • Therefore, reasoning should be treated like an economic process:
    • update beliefs (Bayesian step),
    • price error (liability step),
    • stop when further information is not worth the cost.
    • That is what makes reasoning in language computable.
    Outline:
    • Part 1: Why Measurement Beats Mathiness (thesis + critique)
    • Part 2: The Indifference Method (full formalization + EIC + ROMI)
    • Part 3: Liability Tiers and Thresholds (defaults + examples)
    Below is a tight formalization.
    Note (ed.): We had to hand-edit the LaTeX. You may want an LLM to explain it to you in ordinary language.
    1. Testifiability (Truth): Satisfaction of the demand for testifiable warrant across the accessible dimensions (categorical consistency, logical consistency, empirical correspondence, operational repeatability, rational/reciprocal choice). Represent as a coverage vector
      T=(t1,…,tk),  ti∈[0,1]. Context sets minimum thresholds θi.
    2. Decidability: “Satisfaction of the demand for infallibility in the context in question without the necessity of discretion.” Operationally, a decision is decidable when the decidability margin (below) is ≥ 0 given the liability of error.
    3. Marginal Indifference (decision-theoretic): Given action set A, posterior P(H∣E), loss L(a,h), and context liability λ (population-weighted cost of error + warranty demanded), define

      EL(a∣E) = ∑h L(a,h) P(h∣E).

      With a∗ = arg min_a EL(a∣E) and runner-up a′, define the decidability margin

      DM=EL(a′∣E)−EL(a∗∣E)−τ(λ),

      where τ(λ) is the context’s required surplus of certainty (a liability-derived gap).

    • Decidable: DM ≥ 0 and ti ≥ θi  ∀i.
    • Indifferent (stop rule): the expected value of further information EVI≤τ(λ).
    • Undecidable: otherwise (seek more measurement, or declare undecidable).
    4. Bayesian Accounting (the missing piece): Maintain a ledger rather than a proof:
    • Assets: log-likelihood gains from corroborating evidence.
    • Liabilities: expected externalities of error (population × severity) + warranty promised.
    • Equity (Warrant): net posterior surplus over τ(λ).
      Decidability occurs when equity ≥ 0 while meeting testifiability thresholds.
    5. Limit-as-reasoning (unifying “math limit” and “marginal indifference”): As measurements accumulate, posterior odds and EL gaps converge; the limit approached is the smallest ε such that additional evidence cannot move the decision across τ(λ) at positive EV. Reasoning is a limit-seeking process; the “proof” is the convergence certificate.
    • Completeness vs. liability: Formal derivation optimizes certainty in axiomatic spaces. General reasoning optimizes expected outcomes under liability. The latter is almost always the binding constraint outside math.
    • Open-world evidence: Incompleteness, path-dependence, and dependence structures make perfect formal closure intractable. But Bayesian accounting prices those imperfections and still yields action.
    • Opportunity cost: The cost of further formalization often exceeds EVI. Markets stop at marginal indifference. Reasoners should, too.
    1. Operationalization: Reduce every claim to an actionably measurable sequence O (who does what, when, with what materials, yielding which observations). No operation → no update.
    2. Multi-axis tests: Score T across: categorical, logical, empirical, operational, reciprocal-choice. Fail any mandatory axis → no decision.
    3. Reliability-weighted evidence: Weight updates by instrument quality, source dependence, and adversarial exposure; discount dependent testimony (log-opinion pooling with dependency penalties).
    4. Liability calibration: Map context to τ(λ). E.g., casual advice < finance < medicine < law/regulation. Higher λ increases the required EL gap and testifiability thresholds.
    5. Stop rule (marginal indifference): Compute EVI of next-best measurement; stop when EVI ≤ τ(λ).
    6. Reciprocity constraint: Filter candidate actions/claims by Pareto-improvement and non-imposition (expected externalities priced into λ).
    7. Audit trail: Output the ledger: priors, evidence deltas, dependency corrections, EL table, DM, TT, and the resulting ε-certificate.
    Epsilon-Indifference Certificate (EIC):
    EIC={ε,  DM,  T,  θ,  λ,  Audit}
    • ε: posterior risk bound for the selected action/claim.
    • DM: surplus over the required liability gap τ(λ).
    • T ≥ θ: axis-wise testifiability coverage satisfied.
    • Audit: the Bayesian ledger entries and measurement plan considered-and-rejected once EVI≤τ(λ).
    This is the computable replacement for “sounds plausible.” It’s also the artifact that makes the answer testifiable and the choice decidable.
    ROMI — Reasoning as Optimizing Marginal Indifference
    1. Parse → Operations: Translate the prompt into an operational hypothesis set {hi} and candidate actions {ai}.
    2. Priors: Set structural priors (base rates, domain constraints).
    3. Plan measurements: Enumerate tests with estimated information gain and cost.
    4. Acquire/verify: Retrieve or simulate measurements; apply reliability and dependency corrections.
    5. Update: Compute P(H∣E), expected losses EL(a∣E).
    6. Calibrate liability: Pick λ (context class) → compute τ(λ); set θ for T.
    7. Stop/continue: If EVI ≤ τ(λ) and T ≥ θ, stop; else measure more.
    8. Decide & certify: Output a∗ with EIC and the ledger.
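    The heart of ROMI is the stop rule in step 7. A minimal skeleton of that loop, with the domain-specific machinery stubbed out (all names here are illustrative placeholders, not part of the specification):

```python
def romi_loop(update, evi_next, tau, max_steps=100):
    """Minimal ROMI stop-rule skeleton: keep measuring while the expected
    value of the next measurement exceeds the liability-derived tolerance tau."""
    steps = 0
    while evi_next() > tau and steps < max_steps:
        update()   # acquire one measurement and revise the posterior
        steps += 1
    return steps

# toy usage: each measurement halves the value of the next one
state = {"evi": 8.0}
def _update():
    state["evi"] /= 2.0
def _evi_next():
    return state["evi"]

n = romi_loop(_update, _evi_next, tau=1.0)
# stops once EVI falls to tau (8 -> 4 -> 2 -> 1): three measurements
```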
    This is Bayesian decision-making under reciprocity constraints—accounting, not theorem-proving. It exploits the LLM’s strength (fast hypothesis and measurement planning) while binding it to liability-aware stopping.
    • Computability from prose: Operationalization + accounting turns language into a measured decision process.
    • Safety as economics, not taboo: Liability is priced into τ(λ) rather than hard-censored by alignment.
    • Graceful degradation: When undecidable under current E and λ, the model returns the next best measurement plan with EVI estimates.
    • Universally commensurable: All domains reduce to the same artifact (EIC + ledger), satisfying your demand for commensurability.
    • Context tiers λ→τ(λ): e.g., Chat (low), Tech advice (medium), Medical/Legal (high).
    • Axis thresholds θ: stricter for high-liability contexts.
    • Pooling rule: log-opinion pool with dependency penalty vs. hierarchical Bayes (choose one; both are defensible).
    • Penalty schema: externality classes and population weights.
    Claim: …
    Operations: …
    Evidence ledger: priors → updates (source, reliability, ΔLL) → dependency adjustments.
    Testifiability T vs. θ: [cat, log, emp, op, rec] = […].
    Liability class λ → τ(λ)=…
    EL table for {ai}; DM = …
    EVI of next test = … → Stop?
    Decision a∗ with EIC {ε,DM,T,θ,λ,Audit}.
    Status: Decidable / Indifferent / Undecidable (with next measurement plan).
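    The EIC template above maps naturally onto a small data structure. A sketch, assuming Python and illustrative field names (the text fixes the certificate's contents, not this representation):

```python
from dataclasses import dataclass, field
import hashlib
import json

@dataclass
class EIC:
    """Epsilon-Indifference Certificate (field names illustrative)."""
    claim: str
    tier: int
    epsilon: float          # posterior risk bound for the chosen action
    dm: float               # decidability margin; must be >= 0 to act
    testifiability: dict    # axis -> score in [0, 1]
    thresholds: dict        # axis -> tier minimum theta
    liability: float        # lambda for the context
    ledger: list = field(default_factory=list)  # evidence entries

    def decidable(self):
        return self.dm >= 0 and all(
            self.testifiability[k] >= v for k, v in self.thresholds.items())

    def audit_hash(self):
        """Hash of the serialized certificate, so the exact artifact is reproducible."""
        blob = json.dumps(self.__dict__, sort_keys=True, default=str)
        return hashlib.sha256(blob.encode()).hexdigest()

cert = EIC(claim="settle", tier=4, epsilon=0.05, dm=0.30,
           testifiability={"empirical": 0.88}, thresholds={"empirical": 0.85},
           liability=2.0)
```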
    • Proof-carrying answers are overfitted to closed worlds; alignment-only filters are underfit to liability. The middle path is liability-weighted Bayesian accounting to marginal indifference.
    • “Mathiness” pursues epsilon–delta in logic space; useful, but the productive epsilon is the error bound in outcome space conditional on reciprocity and externalities. That is what institutions, courts, engineers, and markets already pay for.
    For general reasoning, optimizing to marginal indifference under a liability-aware Bayesian ledger outperforms chasing formal certainty (“mathiness”). The right objective isn’t proof; it’s decidable action with warranted error given the context’s demand for infallibility.
    1. Mathiness vs. measurement.
      Correct: formal derivation is sufficient but rarely necessary. General reasoning should minimize expected externalities of error, not maximize syntactic closure.
    2. Bayesian accounting as the engine.
      Correct: treat evidence updates as entries on an assets–liabilities ledger; stop when the expected value of further information (EVI) falls below the liability-derived tolerance. This implements “marginal indifference.”
    3. Testifiability + decidability as outputs.
      Correct: require axis-wise testifiability (categorical, logical, empirical, operational, reciprocal) and a decidability margin that clears the liability threshold.
    4. Limit-as-reasoning.
      Correct: the limit you want is the smallest ε such that more evidence cannot rationally flip the action under the current liability schedule—an ε-indifference certificate rather than an ε–δ proof.
    5. LLMs’ comparative advantage.
      Correct: LLMs are good at hypothesis generation and measurement planning; weak at global formal closure. Constraining them with the ledger + stop rule makes their strengths productive and their weaknesses bounded.
    • Operationalization: every claim reduces to measurable operations; otherwise no update is justified.
    • Liability mapping: the context’s demand for infallibility (λ) must translate into a decision gap τ(λ) and axis thresholds θ.
    • Dependency control: evidence correlation is penalized; adversarial exposure is priced.
    • Auditability: the model emits the ledger and its ε-indifference certificate (EIC).
    1. Fat tails / ruin risks (non-ergodic domains).
      Use robust Bayes or a risk measure (CVaR/entropic risk). Concretely, optimize risk-adjusted expected loss, not plain expectation; set τ(λ) high or require worst-case guards for irreversible harms.
    2. Multi-stakeholder externalities.
      Liability is a vector λ=(λ1,…,λm). Require the margin to clear a chosen aggregator (e.g., max, lexicographic, or weighted max) to prevent cheap tradeoffs on minorities.
    3. Severe ambiguity / imprecise priors.
      Adopt interval posteriors or imprecise probability sets; decide on E-admissible actions, then apply the liability margin to break ties.
    4. Model misspecification / distribution shift.
      Add a “specification penalty” term proportional to estimated shift; raise τ(λ) or fallback to minimax-regret in high-shift zones.
    5. Information hazards / strategic manipulation.
      Price measurement externalities into the EVI (information value can be negative); refuse measurements that reduce welfare under reciprocity constraints.
    • Liability schedule: make τ(λ) a monotone map with discrete tiers (e.g., Chat < Engineering < Medical/Legal < Societal-Risk), each with axis-specific thresholds θ(λ) that escalate empirical and operational demands faster than logical ones.
    • Risk-adjusted margin: define DM = EL_risk(a′) − EL_risk(a∗) − τ(λ); choose CVaRα by tier.
    • Vector liability aggregator: default to max (protects the worst-affected), with a documented option for weighted max when policy demands it.
    • Imprecise update mode: when posterior intervals overlap τ(λ), output an admissible set + next measurement plan instead of a single action. (usually meaning suggested compromises)
    • Certificate extension (EIC++): include: risk measure, stakeholder weights/guard, shift penalty, and dependency-adjusted log-likelihood deltas.
    • Computability from prose: language → operations → ledger → certificate.
    • Graceful stopping: answers come with a why-stop-now proof (EVI ≤ τ(λ)).
    • Context-commensurability: one artifact across domains; only λ,θ,τ vary.
    • Accountable disagreement: when two agents disagree, they disagree in public on priors, instrument reliabilities, or λ—all auditable.
    The argument is correct in principle and superior in practice, provided you (a) enforce operationalization, (b) calibrate liability into a risk-aware margin, (c) control evidence dependence, and (d) emit an auditable certificate. Do those, and “mathiness” gives way to measured, decidable action with bounded error—the thing institutions and markets actually pay for.
    We’ll use 5 tiers with a risk-adjusted gap requirement. Let
    • Risk measure: CVaRα on the loss difference ΔL=EL(a′)−EL(a∗).
    • Scale s: robust spread of ΔL (MAD or stdev; default MAD).
    • Required margin: τ(λ)=k(λ)⋅s.
    • Posterior evidence floor: minimum log-odds surplus for a∗ vs. a′.
    Decidability margin:
    DM=EL(a′)−EL(a∗)−τ(λ) (using CVaRα).
    Decidable iff DM ≥ 0 and axis thresholds T ≥ θ(λ) are met.
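    The risk-adjusted margin can be sketched directly from the definitions above. This is an illustrative implementation assuming Monte Carlo loss samples; a historical-simulation estimate of CVaR is used here, and function names are assumptions:

```python
import math

def cvar(losses, alpha):
    """CVaR_alpha: mean of the worst (1 - alpha) fraction of sampled losses."""
    ranked = sorted(losses, reverse=True)
    k = max(1, math.ceil((1.0 - alpha) * len(ranked)))
    return sum(ranked[:k]) / k

def dm_risk(runner_up_losses, best_losses, alpha, k_tier, spread):
    """DM = CVaR_a(EL(a')) - CVaR_a(EL(a*)) - tau(lambda), with tau = k * s."""
    return cvar(runner_up_losses, alpha) - cvar(best_losses, alpha) - k_tier * spread

# Tier-3 toy samples: alpha = 0.95, gap multiplier k = 1.0, spread s = 0.3
dm = dm_risk([2.0] * 100, [1.0] * 100, alpha=0.95, k_tier=1.0, spread=0.3)
```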
    Escalate empirical and operational faster than logical and categorical with liability. Reciprocity tracks stakeholder exposure.
    Scores Ti∈[0,1] on five axes: Categorical, Logical, Empirical, Operational, Reciprocity.
    Intuition: by Tier-4/5 you must have near-complete measurement and operationalization, not just clean logic.
    Adopt log-opinion pooling with dependency penalties.
    • Form: log p(h∣E) ∝ ∑i wi log pi(h)
    • Reliability weight: ri∈[0,1] from instrument/testimony grading.
    • Dependency penalty: estimate a correlation score ρi (average pairwise correlation of source i with others, or cluster-wise).
      wi ∝ ri / (1 + κρi), normalized so that ∑i wi = 1.
      Default κ = 1.0. Cap wi ≤ wmax = 0.40 to prevent dominance.
    • Cluster correction (optional, on): within any cluster of m near-duplicates, divide total cluster weight by sqrt(m) (effective sample size).
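    Given the adjusted weights, the pooling form itself is a weighted sum of log-probabilities, exponentiated and renormalized. A minimal sketch (names illustrative; the max-subtraction is a standard numerical-stability step, not part of the spec):

```python
import math

def log_opinion_pool(distributions, weights):
    """log p(h|E) proportional to sum_i w_i * log p_i(h); renormalize over h."""
    n = len(distributions[0])
    logp = [sum(w * math.log(d[h]) for d, w in zip(distributions, weights))
            for h in range(n)]
    m = max(logp)                        # subtract max for numerical stability
    p = [math.exp(lp - m) for lp in logp]
    z = sum(p)
    return [x / z for x in p]

# two sources over hypotheses {H0, H1}, dependency-adjusted weights 0.6 / 0.4
posterior = log_opinion_pool([[0.8, 0.2], [0.6, 0.4]], [0.6, 0.4])
```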
    • Categorical: T_cat = 1 − normalized contradiction rate across claims/frames.
    • Logical: rule-check pass rate with penalty for unresolved entailments/loops.
    • Empirical: reliability-weighted fraction of measurements supporting the claim, with out-of-sample bonus and publication bias penalty.
    • Operational: proportion of the hypothesis reduced to executable steps with instrument specs and expected observations; penalize missing preconditions.
    • Reciprocity: expected externalities priced and disclosed; stakeholder vector cleared under chosen aggregator (default max).
      Each Ti mapped to [0,1] by calibrated rubrics; defaults above.
    A) High-liability legal (Tier-4): Settle or litigate a breach claim
    • Setup: Settlement offer S=$2.20M. If litigate: legal cost L=$1.00M, damages if lose D=$5.00M.
    • Posterior p_lose: 0.50 after pooling (two independent fact patterns + one expert, dependency-penalized).
    • Expected losses:
    • Litigate: EL_L = p·D + L = 0.5 · 5.0 + 1.0 = $3.50M
    • Settle: EL_S = S = $2.20M
      Runner-up a′ = litigate; a∗ = settle.
    • Risk: Tier-4 → α = 0.99. Spread of ΔL = EL_L − EL_S has MAD s = $0.50M (from uncertainty in p and damages).
      τ(λ)=ks=2.0×0.50=$1.00M.
    • DM: 3.50−2.20−1.00= $0.30M ≥ 0 → passes.
    • Evidence floor: posterior log-odds(a* vs a′) ≈ +3.2 bits (> 3.0 required).
    • Axis thresholds (Tier-4): T = {cat .92, log .91, emp .88, op .91, rec .90} ≥ θ = {.90, .90, .85, .90, .90}.
    • EVI(next test): commissioning an additional damages study is expected to refine p by ±0.02 → EVI ≈ $0.25M < τ = $1.00M.
      Decision: Settle. EIC issued.
    B) Low-liability consumer (Tier-2): Buy laptop extended warranty?
    • Warranty price: $200 (3-year). Repair if fail: mean $500.
    • Posterior fail prob: p=0.12 after pooling (reviews + failure stats, penalizing duplicate sources).
    • Expected losses:
    • Buy warranty: EL_W = $200.
    • No warranty: EL_N = p·500 = $60.
      a∗ = No warranty; a′ = Buy.
    • Risk: Tier-2 → α = 0.90. Spread s (MAD of ΔL) ≈ $50 (uncertainty in p, repair costs).
      τ(λ) = ks = 0.5 × 50 = $25.
    • DM: 200−60−25=$115 ≥ 0 → passes.
    • Evidence floor: ~1.4 bits (> 1.0 required).
    • Axis thresholds (Tier-2): T = {cat .80, log .85, emp .55, op .70, rec .72} ≥ θ = {.70,.75,.50,.60,.70}.
    • EVI(next search): reading a brand-specific reliability report might change p by ±0.02 → EVI ≈ $10 < τ=$25.
      Decision: Skip the warranty. EIC issued.
    Summary of choices (locked)
    • Tiers: 5; CVaR + robust scale; k={0.25,0.5,1,2,4}; bits floor {0.5,1,2,3,4}.
    • Thresholds: escalate Emp/Op faster than Cat/Log; table above.
    • Pooling: Log-opinion pooling with dependency penalties (default κ = 1.0, wmax = 0.40, cluster ESS sqrt(m)).


    Source date (UTC): 2025-08-19 23:08:17 UTC

    Original post: https://x.com/i/articles/1957942728651857924