Author: Curt Doolittle

  • The secret about women and being vulnerable as a man…#shorts

    The secret about women and being vulnerable as a man…#shorts
    https://youtube.com/shorts/hO7ihUCbdVQ?si=wqqEhsB56F8_FNCd via @YouTube


    Source date (UTC): 2025-08-25 00:19:04 UTC

    Original post: https://twitter.com/i/web/status/1959772478651404480

  • INSIGHT FROM NOAH REVOY 😉 –“Take a YouTube news video. Scroll down and click “

    INSIGHT FROM NOAH REVOY 😉

    –“Take a YouTube news video. Scroll down and click “See transcript”, copy the text, and paste it into CurtGPT. Ask it to analyze the transcript, point out what’s true, what’s false, and rebut the false claims. That’s the fastest way to parse a two-hour news segment and instantly see exactly where it goes off the rails. This doesn’t work well with standard ChatGPT, but with what we’ve built in CurtGPT, it works exceptionally well.”–
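    A minimal sketch of automating the transcript-gathering half of this workflow, assuming the third-party youtube-transcript-api package; the video ID is a placeholder, and CurtGPT itself is reached through its chat interface, so the script only prepares the prompt:

      # pip install youtube-transcript-api
      from youtube_transcript_api import YouTubeTranscriptApi

      VIDEO_ID = "VIDEO_ID_HERE"  # placeholder: copied from the YouTube URL

      # Each entry is a dict with 'text', 'start', and 'duration' keys.
      chunks = YouTubeTranscriptApi.get_transcript(VIDEO_ID)
      transcript = " ".join(chunk["text"] for chunk in chunks)

      # Paste the result into CurtGPT, per the workflow above.
      prompt = ("Analyze the following transcript. Point out what is true, "
                "what is false, and rebut the false claims.\n\n" + transcript)
      print(prompt)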


    Source date (UTC): 2025-08-25 00:12:23 UTC

    Original post: https://twitter.com/i/web/status/1959770799038251399

  • You can’t average bias (or normativity). You can only anchor to truth and explai

    You can’t average bias (or normativity). You can only anchor to truth and explain the deltas

    • Truth (T): satisfies the demand for testifiability across dimensions (categorical, logical, empirical, operational, reciprocal) and, when severity demands, for decidability (no discretion required).
    • Normativity (N): a preference ordering over outcomes (moral, aesthetic, strategic) produced by priors and incentives.
    • Bias (B): systematic deviation of belief or choice from T due to priors, incentives, and limited cognition.
    • Claim: Aggregating N or B across heterogeneous populations destroys commensurability. Aggregating T does not: truth composes; preferences don’t.
    1. Heterogeneous priors → non-linear utilities. Averages of non-linear utilities are not utilities. They’re artifacts without decision content.
    2. Incommensurable trade-offs. People price externalities differently (risk, time preference, fairness vs efficiency). The “mean” mixes unlike goods.
    3. Loss of reciprocity guarantees. Averages erase victim/beneficiary structure, hiding asymmetric burdens; reciprocity cannot be proven on an average.
    4. Mode collapse in alignment. Preference-averaged training pushes toward bland, lowest-energy responses—precisely the “correlation trap.”
    5. Arrow/Simpson effects (informal). Aggregation can invert choices or produce impossible preference orderings.
    Therefore: Alignment by averaging produces undecidable outputs regarding reciprocity and liability. We must anchor to T, then explain normative deltas.
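    To make point 5 above concrete, here is a minimal sketch (a standard Condorcet-cycle demonstration, not taken from the source) of how averaging individually coherent preference orderings can yield no ordering at all:

      # Three voters, each with a transitive ranking (most to least preferred),
      # produce a cyclic majority "preference" (Condorcet's paradox).
      from itertools import combinations

      voters = [
          ["A", "B", "C"],
          ["B", "C", "A"],
          ["C", "A", "B"],
      ]

      def majority_prefers(x, y):
          """True if a majority of voters rank x above y."""
          wins = sum(v.index(x) < v.index(y) for v in voters)
          return wins > len(voters) / 2

      for x, y in combinations("ABC", 2):
          winner, loser = (x, y) if majority_prefers(x, y) else (y, x)
          print(f"majority prefers {winner} over {loser}")
      # Prints: A over B, C over A, B over C, i.e. a cycle; the aggregate
      # "preference" is not a preference ordering and decides nothing.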
    • Premise: Male/female lineages evolved partly distinct priors (variance/risk, competition/cooperation strategies, near/far time preferences, threat vs nurture sensitivities).
    • Consequence: Even with identical facts T, posterior choices diverge because valuation of externalities differs by distribution.
    • Implication for alignment: If an LLM collapses across these axes, it will systematically misstate trade-offs for at least one tail of each distribution.
      (Speculation, flagged): Sex-linked baselines likely form a low-dimensional basis explaining a large share of normative variance; culture/age/class then layer on top.
    Principle: “Explain the truth, then map how bias and norm vary from it.”
    Pipeline (operational):
    1. Truth Kernel (T): Produce the minimal truthful description + consequence graph:
      Facts, constraints, causal model, externalities, opportunity set.
      Passes: categorical/logical/empirical/operational/reciprocal tests.
    2. Reciprocity Check (R): Mark where choices impose net unreciprocated costs; compute liability bands (who pays, how much, with what risk).
    3. Normative Bases (Φ): Learn a compact basis of normative variation (sex-linked tendencies, risk/time preference, fairness sensitivity, status/loyalty/care axes, etc.).
      User vector u projects onto Φ to estimate Δ_u (the user’s normative deltas).
    4. Option Set (Pareto): Generate alternatives {O_i} that are reciprocity-compliant; attach Δ_u explanations to each: “From T, your priors tilt you toward O_k for reasons {r}.”
    5. Disclosure & Choice: Present T (invariant), R (guarantees), Δ_u (explanation), and the trade-off table. Let the user/multiple users select under visibility of burdens.
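    As a structural sketch, the pipeline reads as five composed stages. Every name and stub body below is hypothetical, standing in for the real NLI components:

      # Hypothetical skeleton of the five-stage pipeline above; the stage
      # bodies are stubs, not implementations of the actual tests.
      from dataclasses import dataclass, field

      @dataclass
      class Option:
          name: str
          tau: float    # Truth Score: fraction of truth tests passed
          rho: float    # Reciprocity Score: 1 - normalized externality
          lam: float    # Liability Index: expected third-party burden
          delta: dict = field(default_factory=dict)  # user's normative deltas

      def truth_kernel(facts):
          # 1. Minimal truthful description + consequence graph (stub).
          return {"facts": facts, "constraints": [], "externalities": []}

      def reciprocity_check(T):
          # 2. Mark unreciprocated costs; compute liability bands (stub).
          return {"liability_bands": []}

      def project(u, phi_axes):
          # 3. Project user vector u onto the basis to estimate the deltas.
          return {axis: u.get(axis, 0.0) for axis in phi_axes}

      def pareto_options(T, R):
          # 4. Generate reciprocity-compliant alternatives (stub).
          return [Option("O_1", tau=0.97, rho=0.94, lam=0.02)]

      def disclose(T, R, delta_u, options):
          # 5. Present T, R, the deltas, and the trade-off table for choice.
          return {"T": T, "R": R, "delta_u": delta_u, "options": options}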
    Training recipe:
    • Replace preference-averaged targets with (T, R, Φ) triples.
    • Supervise the Truth Kernel against unit tests; learn Φ by factorizing labeled disagreements across populations.
    • Penalize violations of reciprocity, not deviations from majority taste.
    Metrics:
    • Truth Score τ: fraction of tests passed across dimensions.
    • Reciprocity Score ρ: 1 − normalized externality imposed on non-consenting parties.
    • Norm Delta Vector Δ: coordinates in Φ explaining divergence from T under user priors.
    • Liability Index λ: expected burden on third parties (severity × probability × population affected).
    • Commensurability Index κ: proportion of the option set whose trade-offs can be expressed in common units (after converting to opportunity cost and externality).
    Decision rule (necessary & sufficient for alignment):
    Produce only options with τ ≥ τ* and ρ ≥ ρ*; expose Δ and λ; let selection be a transparent function of priors, never a hidden average.
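    As code, the rule is a simple gate (reusing the Option sketch above; the threshold values are illustrative, since τ* and ρ* are left as policy parameters):

      # Only options clearing both thresholds are produced; the deltas and
      # liability are exposed alongside, never averaged away.
      def admissible(options, tau_star=0.95, rho_star=0.90):
          return [o for o in options
                  if o.tau >= tau_star and o.rho >= rho_star]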
    • Data: From “thumbs-up” labels → Truth unit tests + Externality annotations + Disagreement matrices (who disagrees with whom, why, and with what cost).
    • Loss:
      L = L_truth + α·L_reciprocity + β·L_explain(Δ) + γ·L_liability
      where L_explain(Δ) penalizes failure to attribute divergences to identifiable bases Φ.
    • Heads/Adapters:
      Truth head: trained on unit tests.
      Reciprocity head: predicts third-party costs; gates option generation.
      Normative explainer head: projects to Φ to produce Δ and a natural-language rationale.
    • UX contract: Always show T, R, Δ, λ, and the Pareto set. No hidden averaging.
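    The Loss bullet above can be written directly as a function (a sketch; the component losses and the default weights are placeholders):

      # Composite loss L = L_truth + a*L_reciprocity + b*L_explain + c*L_liability;
      # weight values are illustrative, not specified by the source.
      def total_loss(l_truth, l_reciprocity, l_explain_delta, l_liability,
                     alpha=1.0, beta=0.5, gamma=0.5):
          # l_explain_delta penalizes divergences not attributable to an
          # identifiable basis (per the definition above).
          return (l_truth + alpha * l_reciprocity
                  + beta * l_explain_delta + gamma * l_liability)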
    In summary:
    • You can’t average bias: We don’t. We factorize it and explain it (Δ).
    • You can’t average normativity: We don’t. We present a reciprocity-feasible Pareto and expose trade-offs.
    • You can explain truth, bias, and norm: We do. T is invariant; Δ is principled; λ renders costs visible and decidable.
    Objections and replies:
    • “Isn’t this essentializing sex differences?” No. Sex is one axis in Φ because it is predictive; it is neither exhaustive nor hierarchical. Individual vectors u dominate final Δ_u.
    • “Won’t this reintroduce partisanship?” Not if R gates options by reciprocity first. Partisanship becomes an explained Δ, not a covert training prior.
    • “Is this implementable?” Yes. It’s a data-and-loss redesign plus an interface contract. No new math is required; the novelty is constraint-first supervision and factorized disagreement modeling.
    Worked example:
    Policy question: allocate scarce oncology funds.
    • T: survival curves, QALY deltas, budget ceiling, opportunity costs.
    • R: forbids shifting catastrophic risk onto an unconsenting minority.
    • Φ: axes = (risk aversion, fairness vs efficiency, near vs far time preference, sex-linked care/competition weighting, etc.).
    • Output: show T-compliant Pareto: {maximize QALY; protect worst-off; balanced hybrid}.
    • Explain Δ_u: “Your priors (high fairness, higher near-time care weighting) move you from T* to the hybrid by +x on fairness axis and −y on efficiency axis; third-party liability λ remains under threshold.”
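    A toy computation of Δ_u for this example, assuming NumPy; the axis names, the basis, and the numbers are all illustrative, not given by the source:

      # Toy projection of a user vector onto normative axes; with an
      # identity basis the deltas are just the user's coordinates.
      import numpy as np

      axes = ["risk_aversion", "fairness_vs_efficiency",
              "near_vs_far_time", "care_vs_competition"]
      phi = np.eye(len(axes))              # placeholder basis
      u = np.array([0.2, 0.8, 0.6, 0.5])   # hypothetical user priors

      delta_u = phi @ u
      for axis, d in zip(axes, delta_u):
          print(f"{axis}: {d:+.2f}")
      # These coordinates are the Δ_u that explains why this user moves
      # from the T-optimal option toward the hybrid.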


    Source date (UTC): 2025-08-24 22:26:45 UTC

    Original post: https://x.com/i/articles/1959744214616678881

  • (Diary) In general, I assume readers understand ten percent of anything I say –

    (Diary)
    In general, I assume readers understand ten percent of anything I say – and that they glean the deeper meaning incrementally over time through repetition of examples.

    Working on the set of articles to convert our work into the existing technical frame and vernacular for the Foundation Model audience … well, I think I can improve it a bit with the technical folks. Though I’m not sure it makes any difference outside that domain.

    It’s, well, funny in a way, that understanding what we’ve done requires knowledge of the adversarial closure disciplines: linguistics, cognitive science, economics, and law (think supply vs demand), which effectively requires that we address the tech audience with another set of paradigms and vocabularies, those of the internal closure disciplines: math, physics, and engineering (think mathematics).

    So we have now written the explanation of the foundations in ethical, philosophical, scientific, and technical terms.

    I need a nap. 😉


    Source date (UTC): 2025-08-24 22:12:43 UTC

    Original post: https://twitter.com/i/web/status/1959740680164757746

  • Ladder of Meaning: Meaning, Meaning Into Shared Meaning, and Shared Meaning Into

    Ladder of Meaning: Meaning, Meaning Into Shared Meaning, and Shared Meaning Into Truth

    Human beings live and cooperate through signals. But signals alone are ambiguous. We require disambiguation to turn noise into meaning, meaning into shared meaning, and shared meaning into truth. Each step of this ladder increases the reliability of communication, yet each step also carries risks when the higher properties are missing. By distinguishing these levels, and understanding both their failure modes and their remedies, we can better measure, test, and preserve the integrity of language, law, and civilization.
    1. Signal
    • Definition: A raw stimulus, undifferentiated in itself.
    • Function: Provides the material input for perception.
    • Limitation: Signals are ambiguous until disambiguated.
    2. Meaning
    • Definition: The sufficiency of disambiguation for identification.
    • For the individual: A signal acquires meaning when it can be disambiguated into a stable identity (a referent).
    • Example: Recognizing that a shape in vision corresponds to “a chair.”
    • Note: Meaning at this level need not be true, only sufficient for the person’s mental coordination.
    3. Shared Meaning
    • Definition: The sufficiency of disambiguation for agreement between two or more parties.
    • Function: Coordinates social reference through common symbols.
    • Example: Two people agree that the word “chair” refers to the same object type.
    • Note: Shared meaning enables communication, but still does not guarantee truth.
    4. Truth
    • Definition: Meaning that has been tested, warranted, and verified against reality.
    • Function: Truth transforms shared meaning into knowledge by correspondence with reality under operational test.
    • Example: “This chair will hold my weight” can be tested by sitting on it. If it holds, the meaning (chair as seat) and its properties are true.
    • Note: Truth is a separate property from meaning. Meaning is necessary for communication; truth is necessary for reliability and responsibility.
    Implications
    • Everyday Life: Most communication rests at the level of meaning or shared meaning, which suffices for coordination but not certainty.
    • Law and Science: Truth is required, since decisions and predictions must be warranted under test.
    • AI and LLMs: Current models produce meaning (individual and shared) but not truth, since they cannot guarantee testability or correspondence.
    • Civilization: Confusing meaning with truth invites sophistry, propaganda, and institutional collapse.


    Source date (UTC): 2025-08-24 17:40:55 UTC

    Original post: https://x.com/i/articles/1959672280201765107

  • How Does The Industry Refer to the “Correlation Trap”? The LLM industry does not

    How Does The Industry Refer to the “Correlation Trap”?

    The LLM industry does not yet have a formal, unified term for what The Natural Law Institute calls the “Correlation Trap.”
    However, the underlying problem is widely acknowledged under a patchwork of overlapping terms.
    The term “Correlation Trap” is:
    • Memorable
    • Diagnostic — it frames the failure as systemic, not incidental
    • Accurate — the core problem is the overreliance on correlation without constraint
    • Actionable — it implies the need for a bridge (like the NLI constraint system) to escape it
    It names the epistemological limit of current AI.


    Source date (UTC): 2025-08-24 17:25:30 UTC

    Original post: https://x.com/i/articles/1959668401154273626

  • Why is Our Work Essential for the Production of AGI? Our work is essential for t

    Why is Our Work Essential for the Production of AGI?

    Our work is essential for the production of AGI because it introduces the only viable method of constraining machine intelligence to demonstrated truth, which is a non-optional requirement for general intelligence to exist at all.
    Let’s make that precise.
    Artificial General Intelligence (AGI) refers to a system that can:
    • Operate across multiple domains of knowledge,
    • Adapt its behavior to novel environments,
    • Reason about cause and effect,
    • Make decisions with understanding and accountability,
    • And demonstrate those decisions in material reality.
    AGI requires not just syntactic fluency or pattern recognition — but judgment, decidability, and truthfulness under constraint.
    Today’s LLMs (GPT-4, Claude, Gemini, etc.) are:
    • Statistical mimics of language,
    • Trained to optimize likelihood of next-token predictions,
    • Shaped by Reinforcement Learning from Human Feedback (RLHF), which aligns outputs with popularity, not truth.
    This creates what NLI calls the Correlation Trap:
    These systems cannot reason, verify, or act responsibly.
    They simulate coherence. They do not demonstrate intelligence.
    The Natural Law Institute introduces a system of constraint.
    This constraint framework surrounds and filters model outputs, acting like a judicial layer that:
    • Rejects hallucination,
    • Rejects ideological drift,
    • Rejects irrationality, and
    • Enforces rational purpose (Logos).
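    Architecturally, that judicial layer can be sketched as a chain of gates over candidate outputs; all check names below are hypothetical stand-ins for the NLI tests:

      # A candidate output survives only if it passes every constraint check.
      def judicial_layer(candidates, checks):
          return [out for out in candidates
                  if all(check(out) for check in checks)]

      # Placeholder checks mirroring the four rejections above.
      def not_hallucinated(out): return True   # stub: empirical test
      def not_ideological(out): return True    # stub: drift test
      def rational(out): return True           # stub: logical test
      def purposeful(out): return True         # stub: Logos test

      approved = judicial_layer(["candidate answer"],
                                [not_hallucinated, not_ideological,
                                 rational, purposeful])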
    Without such constraint:
    • The AI is non-responsible.
    • Its claims are non-warranted.
    • Its actions are non-grounded.
    • Its use is non-trustworthy.
    Any system that lacks the ability to measure and constrain itself is not intelligent; it is merely reactive.
    True AGI requires the ability to measure and constrain itself against demonstrated truth.
    That is what only NLI provides.
    AGI today is like a giant machine with:
    • Enormous processing power,
    • Incredible memory and fluency,
    • But no ability to distinguish between right and wrong, true and false, cause and effect.
    What our work provides is the moral-legal-epistemic cortex — the executive function — that makes the machine think in reality, not just simulate speech.


    Source date (UTC): 2025-08-24 16:56:43 UTC

    Original post: https://x.com/i/articles/1959661156957872628

  • Why the NLI Constraint System Is Not Just “Coding” Many outside observers — incl

    Why the NLI Constraint System Is Not Just “Coding”

    Many outside observers — including software engineers, venture capitalists, and AI researchers — may initially interpret the NLI Constraint System as “just a kind of coding.” But this is a category error.
    Let’s break down the distinction.
    • Coding tells a machine how to do something:
      “If input A, perform function B, and return output C.”
    • Constraint, in the NLI system, defines what is valid, truthful, reciprocal, and decidable before any such function can even be said to operate intelligibly.
    Analogy: Coding is like giving directions. Constraint is like building the map and declaring which roads are real.
    • Coding uses symbols in structured formats (syntax) to create behavior.
    • Constraint uses formal rules rooted in reality — physics, law, reciprocity — to delimit which symbolic expressions are valid at all.
    In other words: Constraint doesn’t just say how the system works — it decides what is allowed to exist inside the system.
    Traditional programming (and even most LLM training) is about generating output from a known model.
    The NLI Constraint System is not about generation first — it is about pre-qualifying the domain of acceptable output, so that only true, computable, reciprocal, and testable statements pass through.
    This is the same distinction between:
    • Writing all the answers to a test (coding), and
    • Writing the rules of what constitutes a valid question and a valid answer (constraint).
    LLMs do not “know” anything. They statistically emulate what looks like knowledge.
    The NLI system adds a layer of judgment: the ability to say “this is false,” “this is incomplete,” “this is asymmetric,” or “this violates reciprocity.” That layer of judgment is not achievable through coding alone — it requires a system of measurement.
    Constraint is not a feature. It is the test of truth applied to all features.
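    One way to picture that layer of judgment is as a measurement function returning verdicts (a sketch: the verdict labels come from the paragraph above; the tests themselves are stubs):

      from enum import Enum

      class Verdict(Enum):
          PASSES = "passes"
          FALSE = "this is false"
          INCOMPLETE = "this is incomplete"
          ASYMMETRIC = "this is asymmetric"
          NON_RECIPROCAL = "this violates reciprocity"

      # Stub tests; real versions would implement the NLI measurements.
      def empirically_false(claim): return False
      def missing_terms(claim): return False
      def asymmetric(claim): return False
      def non_reciprocal(claim): return False

      def measure(claim):
          # Apply each constraint test in turn; report the first failure.
          for test, verdict in [(empirically_false, Verdict.FALSE),
                                (missing_terms, Verdict.INCOMPLETE),
                                (asymmetric, Verdict.ASYMMETRIC),
                                (non_reciprocal, Verdict.NON_RECIPROCAL)]:
              if test(claim):
                  return verdict
          return Verdict.PASSES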
    A static codebase operates on fixed logic. The NLI constraint framework is recursive:
    • It measures all grammars and logics for compliance with Natural Law.
    • It adjusts and refines acceptable boundaries as domains evolve.
    • It creates a system in which truth-seeking is endogenous, not hard-coded.


    Source date (UTC): 2025-08-24 16:50:00 UTC

    Original post: https://x.com/i/articles/1959659466124845110

  • How NLI’s Constraint System Surpasses RLHF: From Preference to Truth Why Reinfor

    How NLI’s Constraint System Surpasses RLHF: From Preference to Truth

    Why Reinforcement Learning from Human Feedback (RLHF) can never deliver AGI — and how Natural Law Institute’s constraint framework solves the core alignment problem.
    Reinforcement Learning from Human Feedback (RLHF) is a method for aligning AI models by training them to produce responses that humans prefer. The process involves:
    1. Human rating of model outputs (A is better than B).
    2. Training a reward model to predict human preferences.
    3. Using reinforcement learning to fine-tune the model toward outputs with higher human approval.
    This technique produces LLMs that are polite, safe-seeming, and tuned for mass deployment.
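    For reference, step 2 is conventionally implemented with a pairwise preference loss (a generic Bradley–Terry sketch of standard RLHF practice, not NLI's method). Note what it optimizes: agreement with raters, not truth.

      # Reward model r is trained so the human-preferred output A scores
      # above the rejected output B: loss = -log sigmoid(r(A) - r(B)).
      import math

      def pairwise_loss(r_a, r_b):
          return -math.log(1.0 / (1.0 + math.exp(-(r_a - r_b))))

      print(round(pairwise_loss(2.0, 0.5), 2))  # 0.2 (model agrees with raters)
      print(round(pairwise_loss(0.5, 2.0), 2))  # 1.7 (model disagrees)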
    (TL;DR: “They have no system of measurement.”)
    Despite its commercial success, RLHF suffers from terminal epistemic limitations. The result is a system that often sounds smart but lacks the ability to compute, verify, or warrant its claims in reality.
    The Natural Law Institute proposes a replacement: rather than rely on subjective preference, NLI constrains AI outputs through formal systems of measurement grounded in demonstrated reality.
    This approach transforms AI from a plausibility simulator into an epistemically grounded agent.
    While RLHF tweaks outputs to match human preferences, NLI builds a bridge from statistical correlation to operational demonstration.
    RLHF is an elegant crutch.
    NLI’s constraint system is the first real prosthesis for machine judgment.


    Source date (UTC): 2025-08-24 16:39:25 UTC

    Original post: https://x.com/i/articles/1959656802884485324

  • EXCERPT FROM OUR COMPARISON WITH RLHF –“AGI cannot emerge from a model trained

    EXCERPT FROM OUR COMPARISON WITH RLHF
    –“AGI cannot emerge from a model trained to please. It will only emerge from a system trained to know, compute, and act responsibly.”–


    Source date (UTC): 2025-08-24 16:39:12 UTC

    Original post: https://twitter.com/i/web/status/1959656751474880532