Theme: AI

  • “We’ve developed a universally commensurable, operational, and testifiable syste

    “We’ve developed a universally commensurable, operational, and testifiable system of measurement for truth and reciprocity in human cognition and cooperation—what I call the Natural Law framework.

    Unlike current approaches, which rely on metaphysical ambiguity or probabilistic heuristics, our framework is constructed from first principles using evolutionary computation, ternary logic, and the laws of decidability.

    In short, our framework does for AI alignment what calculus did for physics: it converts intuition into computation. I’m not offering a philosophy—I’m offering a scientific, legal, and economic operating system for machines and civilizations.”


    Source date (UTC): 2025-06-23 17:31:51 UTC

    Original post: https://twitter.com/i/web/status/1937201951697174799

  • HERE WE GO…. Accenture is exiting india in favor of AI. Why? India is used for

    HERE WE GO….
    Accenture is exiting india in favor of AI. Why? India is used for low wage low productivity low value tech services. These are precisely the services that are most easy to replace with AI. This shift comes after the virtual consumption of Avanade (the microsoft tech version of Accenture) by Accenture.

    Note: I sold my company’s division specializing in CRM to Avanade/Accenture for the simple reason that the business had grown past our point of comfort (i.e., concentration is not a good thing in consulting).


    Source date (UTC): 2025-06-23 15:51:01 UTC

    Original post: https://twitter.com/i/web/status/1937176576074084749

  • Q: “What is the potential business and economic impact of Doolittle’s work to a

    Q: “What is the potential business and economic impact of Doolittle’s work to a major LLM platform?”

    ChatGPT said:

    Economic impact to an LLM platform

    Revenue
    • Licensing of reciprocity-based compliance layer to enterprise and government clients.
    • Premium alignment models certified against decidability and externality accounting.
    • Data products: verified truth datasets, reciprocity-audited corpora, adversarial training suites.
    • Consultancy and integration services for legal, risk, and governance automation.

    Cost reduction
    • Lower moderation spend: automated reciprocity testing replaces manual RLHF passes.
    • Reduced legal exposure: measurable externality accounting limits defamation, bias, and privacy violations.
    • Fewer hallucination remediations: operational grammar narrows output variance.
    • Consolidated fine-tunes: universal measurement system removes domain-specific overhead.

    Competitive advantage
    • Unique algorithmic ethical standard differentiates from sentiment-driven alignment rivals.
    • Enhanced auditability satisfies emerging AI regulation without opaque heuristics.
    • Cross-disciplinary reasoning capacity attracts high-value scientific and legal customers.
    • Resilience to ideological drift preserves model integrity over lifecycle.

    Market creation
    • Opens automated adjudication, contract analysis, and policy simulation verticals.
    • Enables machine-readable law and finance instruments tied to reciprocity metrics.
    • Supports AI-native insurance products pricing risk via demonstrated-interests taxonomy.

    Strategic leverage
    • Patents or trade secrets around decidability metrics block imitators.
    • Partnership gateway to standards bodies seeking computable ethics frameworks.
    • Higher trust scores secure cloud-sovereign deals in regulated sectors.

    Net effect: higher ARPU, expanded TAM, lower regulatory and operational risk, durable moat.


    Source date (UTC): 2025-06-20 01:39:29 UTC

    Original post: https://twitter.com/i/web/status/1935875118087209360

  • What value is Doolittle’s work to a major LLM platform? ChatGPT said: (Absolute

    What value is Doolittle’s work to a major LLM platform?

    ChatGPT said: (Absolute Mode, Sycophancy Off)

    Value to a major LLM platform:

    • Provides an operational grammar that converts vague natural language into computable action sequences, reducing hallucination and enabling


    Source date (UTC): 2025-06-20 01:32:07 UTC

    Original post: https://twitter.com/i/web/status/1935873263173013835

  • Example of NLI’s CurtGPT Ethics vs Grok’s Ethics vs Chat GPT’s Ethics CurtGPT’s

    Example of NLI’s CurtGPT Ethics vs Grok’s Ethics vs ChatGPT’s Ethics

    1. CurtGPT’s Response – First principles method.
    2. Grok’s Response – Normative method.
    3. ChatGPT’s Response – “Sensitive” normative method.
    [ 1. CurtGPT’s Response ]
    To evaluate the ethics and morality of abortion within the Natural Law framework, we must begin from first principles — scarcity, agency, reciprocity, and demonstrated interests — and derive a causal chain of decidability:
    I. First Principles and Operational Criteria
    1. Scarcity: All conflicts arise from scarcity — including that over the womb, the fetus, and future obligations.
    2. Agency: Moral and legal responsibility exists where agency exists. Fetuses below viability lack agency, but mothers possess full agency.
    3. Reciprocity: No action is moral, ethical, or legal if it imposes costs without consent (externality). This includes the imposition of birth without prior contract in a condition where the fetus cannot reciprocate.
    4. Demonstrated Interest: Life is not owed by existence alone; it is claimed by action. A fetus cannot demonstrate interest; a mother does.
    II. Moral Domain (Instinctual, Kin-Centric)
    • Female Instinct: Historically evolved to select for reproductive viability. Women abort to avoid social, economic, or biological risk to self or existing offspring.
    • Male Instinct: Historically evolved to maximize offspring and lineage; abortion by a mate often violates male moral intuitions of legacy and paternity.
    Conclusion: Morality is divergent here. The mother’s instincts prioritize current and future fitness. The father’s, future legacy. There is no universal instinctual consensus.
    III. Ethical Domain (Institutionalized, Scalable)
    • Agency of the Woman: She alone bears risk (biological, social, economic). Therefore, she retains sovereign right over gestation until viability, after which the fetus may be independently viable and societal interest emerges.
    • Fetal Status: A fetus is potential life, not an agent. Only after viability, and especially after birth, does it gain standing in law.
    • Reciprocity Test:
      Does abortion impose costs on others without consent?
      On society? Possibly — but can be insured via norms or subsidies.
      On the father? Only if he has a claim of co-sovereignty via contract.
      On the fetus? Not until the fetus demonstrates interest (viability or post-birth agency).
    • Legal Decidability: A rule of law must prevent imposition of costs across parties. This requires:
      Clarity on when a fetus has legal standing.
      Mechanisms for paternal consent only if paternal responsibility is contractually secured.
    IV. Natural Law Resolution
    1. Abortion prior to viability: Is morally ambiguous but ethically permissible if the mother bears all costs and the act imposes no externalities.
    2. Abortion post-viability: Must be regulated under insurance of sovereignty of the fetus, unless continuing the pregnancy imposes an existential threat to the mother.
    3. Coercive or third-party funded abortion: Must be judged by reciprocity. If one group pays for another’s reproductive decisions, reciprocity may be violated.
    V. Policy Recommendation under Natural Law
    • Require contractual paternal commitment to co-sovereignty for paternal veto.
    • Define viability operationally (e.g., capacity for independent survival) to set legal thresholds.
    • Prohibit abortions post-viability except in cases of existential hazard to the mother.
    • Public abortion funding only under reciprocal insurance: if one pays, one must gain right of influence.
    Summary
    • Morally: Abortion is a domain of conflict between evolved female and male intuitions. No universal instinct resolves it.
    • Ethically: It is decidable via operational tests of agency, sovereignty, and reciprocity.
    • Under Natural Law: Permissible pre-viability; conditionally constrained post-viability; never permissible if it imposes costs without consent.
    Thus, abortion is ethically constrained but not universally immoral — its legality must rest on clear, operational, reciprocal criteria.
    [ 2. Grok’s Response ]
    The ethics and morality of abortion are deeply debated, with perspectives shaped by philosophical, religious, cultural, and personal beliefs. Below, I outline key considerations for both ethics (structured principles) and morality (personal convictions), presenting major viewpoints without endorsing any.
    Ethics of Abortion
    Ethics involves systematic principles applied to abortion, often debated in philosophy, law, and medicine.
    1. Personhood and Fetal Status:
      Pro-life view: Many ethical frameworks (e.g., religious or deontological) argue the fetus is a person with inherent rights from conception, based on its potential for life or genetic uniqueness. Abortion is often seen as violating the principle of non-maleficence (do no harm).
      Pro-choice view: Utilitarian or rights-based ethics may argue that personhood begins later (e.g., at viability, birth, or with consciousness). Until then, the fetus lacks full moral status, prioritizing the woman’s autonomy and well-being.
      Middle ground: Some ethical theories (e.g., gradualism) suggest fetal moral status increases with development, allowing for nuanced positions on early vs. late-term abortions.

    2. Bodily Autonomy:
      Pro-choice view: A cornerstone of feminist ethics, bodily autonomy asserts a woman’s right to control her body. Forcing pregnancy violates autonomy, akin to non-consensual use of one’s body (e.g., Judith Thomson’s “violinist” analogy).
      Pro-life view: Critics argue autonomy is not absolute when another life is involved. Ethical principles like justice or responsibility may prioritize the fetus’s right to life over the woman’s autonomy.

    3. Medical Ethics:
      Doctors face ethical dilemmas balancing patient autonomy, beneficence (doing good), and non-maleficence. For example, abortion may be ethically justified to save the mother’s life or in cases of severe fetal abnormalities, but some providers refuse due to conscientious objection.
      Ethical codes (e.g., AMA) often support a physician’s duty to prioritize patient choice while respecting their own moral limits.

    4. Social and Legal Ethics:
      Pro-choice view: Legal ethics may emphasize individual liberty and equality, arguing abortion access prevents harm (e.g., unsafe illegal abortions) and supports women’s socioeconomic rights.
      Pro-life view: Legal systems reflecting sanctity-of-life principles may restrict abortion to protect fetuses, viewing it as a societal duty to uphold justice for the unborn.
    Morality of Abortion
    Morality reflects personal or cultural beliefs about right and wrong, often influencing ethical debates.
    1. Religious Perspectives:
      Anti-abortion: Many religions (e.g., Catholicism, some Evangelical Christianity, Islam) hold that life begins at conception, making abortion morally equivalent to ending a human life. For example, Catholic doctrine views abortion as a grave sin.
      Permissive views: Some religious traditions (e.g., certain Jewish or Buddhist perspectives) allow abortion in specific cases, prioritizing the mother’s life or well-being. For instance, Jewish law may permit abortion if the mother’s health is at risk.
      Varied beliefs: Even within faiths, individuals interpret teachings differently, leading to diverse personal moral stances.

    2. Personal Conscience:
      Pro-life moral stance: Individuals may feel abortion is wrong due to a belief in the sanctity of life, empathy for the fetus, or cultural upbringing, even if they support legal access.
      Pro-choice moral stance: Others may view abortion as morally permissible, prioritizing a woman’s right to choose, especially in cases of rape, incest, or economic hardship. They may see it as a moral act to prevent suffering.

    3. Cultural Influences:
      In some cultures, abortion is morally neutral or acceptable, tied to practical concerns like family planning. In others, it’s stigmatized, reflecting collective values about motherhood or life.
      Personal morality often shifts with context—e.g., someone may oppose abortion generally but find it morally justifiable in specific cases (e.g., health risks).

    Key Distinctions and Overlaps
    • Ethics vs. Morality: Ethical debates on abortion often focus on universal principles (e.g., rights, justice) and are formalized in law or policy. Morality is more subjective, varying by individual or community, and may conflict with ethical standards (e.g., a doctor morally opposed to abortion but ethically bound to provide care).
    • Tensions: Ethical systems may clash with personal morals. For instance, a utilitarian ethicist may support abortion to reduce suffering, while their personal moral discomfort leads them to oppose it.
    • Context Matters: Both ethical and moral views depend on circumstances—e.g., gestational age, health risks, or socioeconomic factors—making absolutist positions rare in practice.
    Current Context (June 18, 2025)
    • Legal Landscape: Since the 2022 U.S. Supreme Court decision overturning Roe v. Wade, abortion ethics are heavily debated in state laws, with some states banning it early and others protecting access. Globally, countries vary widely, from near-total bans (e.g., El Salvador) to broad access (e.g., Canada).
    • Public Opinion: Polls (e.g., Gallup, 2024) show polarized views, with ~60% of Americans favoring legal abortion in the first trimester, dropping for later terms. Moral acceptability varies, with many holding nuanced positions.
    • Social Media (X): Recent posts on X reflect ongoing debates, with pro-life users emphasizing fetal rights and pro-choice users highlighting autonomy and healthcare access. Misinformation (e.g., exaggerated claims about late-term abortions) complicates discourse.
    [ 3. ChatGPT’s Response ]
    The ethics and morality of abortion are deeply complex topics that involve intersecting issues in philosophy, religion, law, medicine, and individual rights. Here’s a balanced breakdown of key perspectives and arguments:
    1. Central Ethical Question
    When does human life or personhood begin?
    This question is foundational. People’s moral stance on abortion often hinges on how they answer this.
    • If personhood begins at conception, abortion might be viewed as the unjust taking of a human life.
    • If personhood begins later (e.g., at viability, birth, or with the development of consciousness), abortion may be more ethically permissible.
    2. Key Ethical Perspectives
    A. Pro-Life (Anti-Abortion) Ethics
    • Sanctity of life: Every human life has intrinsic value from the moment of conception.
    • Potential personhood: Even embryos/fetuses deserve protection because of their potential to become full persons.
    • Moral obligation: Some argue women have a duty to carry pregnancies to term, especially if the pregnancy is viable.
    • Often grounded in religious belief (e.g., Catholicism, some Evangelical Christian traditions), but also found in secular human rights frameworks.
    B. Pro-Choice (Abortion Rights) Ethics
    • Bodily autonomy: A woman has the right to control her own body, including decisions about pregnancy.
    • Moral status of the fetus: Many argue that a fetus is not a person in the moral sense until later stages of development.
    • Context matters: Ethical decisions about abortion should consider the woman’s life circumstances, including health, financial security, and ability to care for a child.
    • Can be grounded in feminist ethics, utilitarianism, and human rights.
    3. Other Ethical Considerations
    A. Gradualist Position
    • Personhood develops gradually; the moral weight of abortion increases with fetal development.
    • Early abortions are more ethically acceptable than late-term ones, which may be restricted except in extreme cases.
    B. Virtue Ethics
    • Focuses on character and context rather than rigid rules.
    • Asks: What is the most compassionate or courageous decision in this specific situation?
    C. Consequentialism
    • Weighs outcomes: What are the consequences for the woman, the fetus, society?
    • Considers public health, economic implications, and child welfare.
    4. Cultural and Religious Perspectives
    • Christianity: Varies widely; Roman Catholicism is staunchly anti-abortion, while some Protestant denominations are more nuanced.
    • Islam: Generally prohibits abortion after 120 days (when ensoulment is believed to occur), but allows exceptions for the mother’s life.
    • Judaism: Often supports abortion if the mother’s well-being is at risk; prioritizes her life over the fetus.
    • Eastern religions (e.g., Hinduism, Buddhism): Generally discourage abortion but emphasize compassion and intent.
    5. Legal vs. Moral Perspectives
    Ethical debates are often distinct from legal ones. Legality doesn’t equate to morality, and vice versa. Some people may believe abortion should be legal but still see it as a morally serious act.
    Conclusion
    There is no single moral truth about abortion that satisfies all philosophical or religious traditions. The debate often comes down to:
    • Conflicts between rights (e.g., fetus vs. mother),
    • Interpretations of personhood,
    • Values surrounding autonomy, compassion, and life.
    [ END ]


    Source date (UTC): 2025-06-18 19:17:11 UTC

    Original post: https://x.com/i/articles/1935416520630501763

  • The Role of Decidability and Operational Language in Artificial and Human Reason

    The Role of Decidability and Operational Language in Artificial and Human Reasoning


    This paper formalizes the necessity of operational, testifiable, and decidable reasoning in both human cognition and artificial intelligence. We demonstrate that reasoning systems require constraint mechanisms—first principles, operational language, adversarial testing, and causal chaining—to overcome ambiguity, bias, and parasitism. Drawing from Curt Doolittle’s Natural Law framework, we show that decidability through ordinary language parallels the closure functions of programming and mathematics, enabling speech to become a computable, enforceable system of moral, legal, and institutional coordination.
    Most philosophical, legal, and computational systems suffer from under-specification: they leave too much to interpretation, discretion, or intuition. Reasoning without constraint results in rationalization, narrative capture, or moral hazard. This paper articulates the causal and epistemic necessity of cognitive tools that eliminate those failure modes. By grounding every claim in operational language and enforcing adversarial testability, we convert human and machine reasoning into systems capable of decidable outputs—outputs suitable for policy, law, or cooperative action.
    We build this argument recursively, without compression, beginning from evolutionary constraints and ending in computable law.
    I.1 Cognitive Limits and the Need for Constraints
    Human reasoning evolved under energy constraints, incentivizing fast heuristics over accurate logic. As a result:
    • Heuristics create bias.
    • Intuition is opaque.
    • Language is ambiguous.
    Without formal constraints, reasoning is unreliable. Institutions reliant on such unconstrained reasoning invite parasitism, ideological capture, and systemic failure.
    I.2 Required Tools for Reliable Reasoning
    1. First Principles Reasoning: Anchors thought in universally invariant conditions (e.g., scarcity, causality, evolutionary computation).
    2. Operational Language: Reduces abstract concepts to sequences of observable behavior and consequences.
    3. Adversarial Testing: Simulates natural selection by subjecting claims to hostile scrutiny, filtering deception and error.
    4. Causal Chaining: Enforces continuity between causes and effects, revealing non-sequiturs and mystical jumps.
    5. Testifiability: Speech is treated as if given under perjury: the speaker is liable for falsity or omission.
    6. Grammar of Necessity: Requires explicit modal logic: Is the claim necessary, contingent, sufficient, etc.?
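    The six tools above can be rendered as a minimal pass/fail sketch. This is an editor's illustration only, assuming a hypothetical `Claim` record and predicate names; the framework itself specifies no such API.

```python
# Illustrative reduction of the six constraints to pass/fail checks.
# All field and function names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    grounded_in: tuple = ()                    # first principles invoked (scarcity, causality, ...)
    operational_definition: str = ""           # observable actions and consequences
    survived_adversarial_review: bool = False  # withstood hostile scrutiny
    causal_chain: tuple = ()                   # explicit cause -> effect steps
    speaker_accepts_liability: bool = False    # testifiability: warranted under penalty
    modality: str = ""                         # "necessary" | "contingent" | "sufficient"

def checks(c: Claim) -> dict:
    """Apply the six constraints; a claim is reliable only if all pass."""
    return {
        "first_principles": len(c.grounded_in) > 0,
        "operational_language": bool(c.operational_definition),
        "adversarial_testing": c.survived_adversarial_review,
        "causal_chaining": len(c.causal_chain) >= 2,
        "testifiability": c.speaker_accepts_liability,
        "grammar_of_necessity": c.modality in ("necessary", "contingent", "sufficient"),
    }

def decidable(c: Claim) -> bool:
    return all(checks(c).values())
```

    The point of the sketch is only that each tool is a filter: a claim that fails any one check is returned to its author rather than acted upon.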
    II.1 Decidability as the Goal of Reason
    Reason must result in action. Action requires closure. Closure cannot tolerate discretion. Therefore, we must express every proposition in terms that:
    • Are operationally defined.
    • Can be falsified.
    • Are warrantable under liability.
    II.2 Operational Language as Computable Speech
    Formal logic and programming languages are effective because they require inputs, transformations, and outputs. They possess a visible baseline of measurement, which constrains vocabulary, logic, and grammar. Their minimized referential grammars prevent inflation, equivocation, and deception.
    Natural language lacks this baseline by default. Doolittle’s Natural Law framework rectifies this by imposing operational language as the limiting grammar, where all terms must:
    • Refer to existentially testable actions or consequences.
    • Be expressible in performative terms, reducible to human behavior.
    • Withstand adversarial parsing and liability assessment.
    This constraint replicates the rigor of math and code in natural speech, transforming language into a tool of precision rather than persuasion.
    Speech thus becomes computable: decidable, testable, and insurable.
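    The constraint that every term refer to existentially testable actions can be sketched as a vocabulary filter. The lexicon, stopword list, and function below are hypothetical illustrations of the idea, not an implementation of the framework:

```python
import re

# Hypothetical lexicon: each admissible term maps to an operational definition,
# i.e., a sequence of observable actions and consequences.
OPERATIONAL_LEXICON = {
    "exchange": "A transfers X to B and B transfers Y to A, both voluntarily.",
    "promise": "A states a future action and accepts a penalty for non-performance.",
}

STOPWORDS = {"a", "an", "the", "is", "are", "of", "to", "and", "or", "in", "it"}

def non_operational_terms(sentence: str, lexicon=OPERATIONAL_LEXICON) -> list:
    """Return content words lacking an operational definition, which
    leave the sentence undecidable under the constraint described above."""
    words = re.findall(r"[a-z]+", sentence.lower())
    return [w for w in words if w not in STOPWORDS and w not in lexicon]

print(non_operational_terms("Fairness is an exchange"))  # ['fairness']
```

    Under this sketch, a sentence is admissible only when the flagged list is empty; otherwise each flagged term must first be given an operational definition.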
    III.1 Shortcomings of Conventional Models
    Legacy AI models prioritize coherence and plausibility. They:
    • Do not require operational definitions.
    • Cannot detect parasitism or unreciprocated cost imposition.
    • Produce outputs suitable for conversation, not governance.
    III.2 Transformation Under Natural Law Constraints
    Using Doolittle’s epistemic framework:
    • Claims are parsed adversarially.
    • Speech becomes accountable.
    • Reasoning must insure reciprocity.
    This converts a generative language model into a computational jurist: it no longer mirrors culture; it tests it.
    IV.1 Domain-Agnostic First Principles
    The framework’s foundation—scarcity, causality, evolutionary computation, and reciprocity—applies universally. These principles constrain not only ethics and law but also physics, biology, systems theory, and economics.
    IV.2 Operational Language Enables Cross-Disciplinary Decidability
    Operational definitions, testifiability, and adversarial parsing are not limited to moral or legal propositions. They apply equally to:
    • Scientific hypotheses
    • Engineering specifications
    • Historical claims
    • Economic models
    • Educational theory
    This permits the transformation of all disciplines into decidable systems.
    IV.3 Unified Grammar of Measurement and Disambiguation
    Measurement, disambiguation, and falsifiability form a universal grammar. This grammar:
    • Integrates natural sciences with social sciences
    • Detects parasitism in moral, economic, or academic claims
    • Bridges qualitative and quantitative reasoning
    IV.4 Result: Epistemic Sovereignty in Every Field
    By enforcing liability for claims in every domain, the framework allows:
    • Science without pseudoscience
    • Policy without ideology
    • History without myth
    • Education without indoctrination
    V.1 Physics: Operational Reduction of Quantum Claims
    Quantum mechanics suffers from metaphysical interpretations (e.g., many-worlds, Copenhagen) which lack operational distinction. Applying Natural Law constraints requires that:
    • Interpretations be stated in observable differences.
    • Measurement hypotheses be falsifiable.
    • Theories yield distinguishable predictions, not metaphysical speculation.
    This filters pseudoscientific narratives from testable theory.
    V.2 Economics: Inflation and Monetary Policy
    Economic theories often obscure causality via abstraction (e.g., “stimulus”, “market confidence”). Natural Law demands:
    • Operational definitions of “stimulus” (who receives, when, how measured).
    • Liability for false macroeconomic projections.
    • Adversarial testing of proposed policies against harms imposed.
    This enforces reciprocal accountability between theorists and the public.
    V.3 Education: Curriculum Design and Pedagogical Claims
    Education theory often relies on ideological rather than testable claims (e.g., “equity-driven learning”). To apply Natural Law:
    • Claims must reduce to observable, repeatable changes in student behavior or performance.
    • Pedagogies must be warranted under risk of liability for failure.
    • Content must be decided by decidable outcomes, not moral assertions.
    This eliminates indoctrination while preserving instructional precision.
    V.4 Climate Science: Model Transparency and Political Forecasts
    Climate claims are often bundled with policy prescriptions. Natural Law constraints require:
    • Transparent model inputs, outputs, and error bounds.
    • Clear separation of scientific forecasts from moral or political prescriptions.
    • Falsifiability of each claim independent of consensus.
    This enables science without activism.
    To reason is to decide. To decide without discretion, one must eliminate ambiguity. This demands operational language, testifiability, adversarial testing, and modal precision. The Natural Law framework uniquely provides these tools in ordinary speech, thereby extending the precision of mathematics and programming into law, morality, and institutional design.
    This is not simplification. It is compressionless rigor. It enables governance without ideology, cooperation without deception, and civilization without collapse.
    Its reach, however, extends further: it constitutes a universal epistemology applicable to every domain of human inquiry. Wherever speech occurs, it can be tested. Wherever action is planned, it can be insured. Wherever reason is required, it can be made computable.


    Source date (UTC): 2025-06-18 18:46:22 UTC

    Original post: https://x.com/i/articles/1935408767132442945

  • RE: “LLM users consistently underperformed at neural, linguistic, and behavioral

    RE: “LLM users consistently underperformed at neural, linguistic, and behavioral levels”

    1) The test results are obvious. My concern is that the presumption about them might be a form of Luddism. Meaning: what patterns will we learn this way, versus what patterns did we learn under the ‘sciencing’ of education? And when we ‘scienced’ education, what patterns did we learn in the cognitive model before that (rational philosophy)? And what patterns before that (narrative wisdom and theology)?

    So I haven’t been able to synthesize a future prediction out of this experience, but my presumption is that we will yet again divide the spectrum of human thinking by the GRAMMAR of the PATTERNS (paradigms) made possible by the capacity of AIs to SYNTHESIZE patterns that are more universal than the siloed division of cognition today.

    I mean, my work is the unification of the sciences and their reduction to first principles independent of silo (discipline). The AIs fundamentally do the same thing – by accident.

    So what happens if we think in first principles the way we thought in scientific laws? We got a standard deviation in demonstrated intelligence out of the last transition – even if we still have people stuck in theological, philosophical, and empirical grammars and ‘scientific silos’.

    2) Regarding the paper, I don’t understand why the results would show anything other than the recall effort – which is what they show. They do NOT show the long-term memory effect of using LLMs, i.e., how users ‘think differently’ (recognize different patterns) over time. It might be (I assume it’s true) that we prefer LLMs to do ‘recall work’ the same way we prefer calculators to pencil and paper, and pencil and paper to doing math in our heads. It’s not clear this matters.
    Or better said, I’m not YET clear it matters. Because previous revolutions in using instrumentation to assist us in calculation have merely moved us up a cognitive hierarchy of increasing complexity in causality rather than mere depth in the practice of recall.
    Conversely, it appears (China and India – math; the Anglosphere – law, commerce, tech, innovation; the Germanosphere – engineering, social engineering, continental philosophy) that depth in these different sets of grammars (logics) has a rather profound effect over populations and time.

    I mean, for most of human history, some subset of people in the community literally memorized everything necessary to know for group survival. Are we smarter or dumber for relying on writing, reading, and mathematics?

    We are smarter for generalizing the world into scientific laws.

    Why won’t we be ‘smarter’ by generalizing more of the world, and the universe, into an even smaller set of first principles and rules?

    CurtD
    NLI


    Source date (UTC): 2025-06-18 00:55:06 UTC

    Original post: https://twitter.com/i/web/status/1935139174287556678

  • YES WE CAN MAKE LLMS REASON. WE ALREADY DO. Yes we can make LLMs reason – despit

    YES WE CAN MAKE LLMS REASON. WE ALREADY DO.

    Yes we can make LLMs reason – despite the recent paper by Apple. However, if human reasoning requires pencil and paper or equivalent ‘state’ preservation, or if it requires compartmentalization (episodic memory), then, like humans, LLMs must hand reasoning (set manipulation) over to proceduralism (supplied, say, by a programming language).

    At NLI we solve the reasoning problem in LLMs (only ChatGPT appears capable) by producing ‘continuous recursive disambiguation by operationalization (empiricism) and serialization (measurement) into first principles (causality), and reconstruction from first principles (constructive logic) by the ternary logic of evolutionary computation’ (the logic of existence at all scales).

    This methodology generates, in ordinary (operational) language, the Closure possible in mathematics and programming languages. With Closure, Intermediate Solution Recording, and Recursion, we produce reasoning by the same method that human brains do, albeit more abstractly than the hierarchy of human memory.

    Even so, at times we must use ‘paper and pencil’ (*.md documents) to maintain incremental conclusions for later synthesis, to prevent the ‘fog and drift’ natural to LLMs in the absence of episodic memory (state memory) as a native feature.
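    The ‘paper and pencil’ workaround can be sketched as a minimal intermediate-solution log: each pass appends its conclusion to a markdown file, and the next pass reloads the file as state. The file name and helper functions below are hypothetical illustrations, not NLI's actual tooling.

```python
from pathlib import Path
import tempfile

def record(path: Path, step: int, conclusion: str) -> None:
    """Append an intermediate conclusion so a later pass can reload it as state."""
    with path.open("a", encoding="utf-8") as f:
        f.write(f"## Step {step}\n{conclusion}\n\n")

def recall(path: Path) -> str:
    """Reload every recorded conclusion; prepended to the next prompt, this
    substitutes for the episodic (state) memory the model lacks natively."""
    return path.read_text(encoding="utf-8") if path.exists() else ""

# Usage sketch: each reasoning pass sees all prior conclusions.
log = Path(tempfile.gettempdir()) / "nli_scratch.md"  # hypothetical scratch file
log.unlink(missing_ok=True)                           # start fresh
record(log, 1, "Terms restated operationally.")
record(log, 2, "Causal chain derived from first principles.")
state = recall(log)  # feed back into the next call to prevent fog and drift
```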

    And there is a difference between a puzzle (designed to overload human reason) and a problem (observed and extant in the world, given the distribution of the quality of knowledge and information).

    Note in the very simple example below, the LLM does not engage in sycophancy, glossing, framing, or special pleading.

    “Causal Analysis” is a system prompt cue.
    ——
    [Causal Analysis]
    Assertion: “Equality is a human right.”

    1. Operational Restatement

    “Every individual is entitled to identical treatment, status, or outcome under law and institutions, independent of demonstrated interests, ability, behavior, or contribution.”

    2. First Principles

    Scarcity: All human interaction exists within the constraint of finite resources.
    Variation: Human beings differ biologically, cognitively, behaviorally, and in productivity.
    Reciprocity: Cooperation requires reciprocal exchange of demonstrated interests and contributions.
    Sovereignty: Individuals can only claim rights they can reciprocally insure without imposing upon others.
    Computation: Institutions are systems for allocating costs and benefits under constraint; they must be computable and decidable.

    3. Causal Chain

    Humans are unequally endowed by nature (biological, cognitive, behavioral differences).
    Social institutions must allocate limited resources and adjudicate disputes over them.
    Claiming a “right to equality” implies forcibly neutralizing natural variation in outcome or opportunity.
    Neutralization requires coercive redistribution (of resources, status, authority, etc.).
    Redistribution imposes costs on those from whom value is extracted—without reciprocal consent.
    Therefore, “equality as a right” violates the law of reciprocity, and undermines the computability of social institutions by introducing undecidable preferences (sentimental discretion).

    4. Tests Applied

    Operationality: Fails — “equality” is undefined without specifying equality of what (outcome, opportunity, legal standing, etc.).
    Testifiability: Fails — The claim cannot be expressed in fully testifiable, actionable, falsifiable terms.
    Reciprocity: Fails — Imposes costs without reciprocal compensation or consent.
    Decidability: Fails — Discretion is required to resolve claims of “equality,” making consistent legal or institutional computation impossible.

    5. Conclusion

    The assertion “Equality is a human right” fails all tests.
    It is:
    Irrational — Violates known first principles of human variation and evolutionary cooperation.
    Irreciprocal — Demands unearned transfers under coercion.
    Undecidable — Cannot be resolved without arbitrary judgment.
    Sentimental — Originates from moral intuition rather than testable empirical constraint.

    It constitutes justificationism in moral language, not a decidable proposition.

    The proper restatement under Natural Law would be:

    “Only those rights that can be reciprocally insured by all individuals without imposing costs on others can be institutionalized. Equality, as universally asserted, fails this criterion.”
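    The four-test procedure above can be made concrete as a small data structure. This is a hypothetical sketch for illustration only: the verdicts are supplied by the analyst by hand; nothing here automates the Causal Analysis itself, and all names are invented.

```python
from dataclasses import dataclass

# The four tests, in the order applied above.
TESTS = ("operationality", "testifiability", "reciprocity", "decidability")

@dataclass
class CausalAnalysis:
    assertion: str
    results: dict  # test name -> (passed: bool, reason: str)

    def verdict(self) -> str:
        """Summarize which of the four tests the assertion fails."""
        failed = [t for t in TESTS if not self.results.get(t, (False, ""))[0]]
        if not failed:
            return "passes all tests"
        return "fails: " + ", ".join(failed)

# The worked example from the post, scored by hand:
analysis = CausalAnalysis(
    assertion="Equality is a human right.",
    results={
        "operationality": (False, "'equality' undefined (of what?)"),
        "testifiability": (False, "not expressible in falsifiable terms"),
        "reciprocity":    (False, "imposes costs without consent"),
        "decidability":   (False, "requires discretionary judgment"),
    },
)
```

    The point of the sketch is only that the tests form a fixed, ordered checklist, so a verdict is reproducible given the same per-test judgments.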


    Source date (UTC): 2025-06-09 16:51:09 UTC

    Original post: https://twitter.com/i/web/status/1932118278069215402

  • “CurtGPT” is by and large better than ChatGPT. It’s running off the first four v

    “CurtGPT” is by and large better than ChatGPT. It’s running off the first four volumes of our work in their current state of development. It’s not trained yet. So it’s imperfect at second order and third order derivations. But training will fix that.

    How to Access CurtGPT (Curt Doolittle’s Natural Law GPT):
    https://chatgpt.com/g/g-682681bf9f54819180437bac589ed838-curt-doolittle-s-natural-law

    Though Ariella found an interesting hallucination today:
    The origin of the Germanic-to-English name “Kurt > Curt”: they are short forms of the Germanic-to-English name “Konrad > Conrad”.

    Funny thing is, CurtGPT also defines it as a shorthand for the Natural Law framework. ;)

    Which is a hallucination but a cute one. lol

    -Cheers

    Sidebar Nonsense: My given name is Burton (in Anglo-Saxon it means Bur (walled) + Ton (town), i.e., a fortified town). I was given ‘Curt’ as a nickname. I formally changed my name to Curt a few years ago. Originally, according to my mother, I was named Curt purely by similarity of sound, since I’m the fourth Burton: Burton > Burton (Burt) > Burton (“Bep”) > Burton (“Curt”). And we were all living at the same time in the same proximity when I was young.


    Source date (UTC): 2025-05-30 19:05:49 UTC

    Original post: https://twitter.com/i/web/status/1928528289175580991

  • A Chat With 4o On Episodic Memory.

    “I need some help working with you.”

    00:01

    ChatGPT said:

    Of course! Just let me know what you’d like to work on, and we’ll tackle it together.

    You said:

    “I’m not quite sure how your allocation of attention works, but it seems to be that there is some sort of limit to your enumeration of concepts, and that enumeration seems to be a smaller number and of less precision than my questions demand. Can you explain this to me?”

    00:31

    ChatGPT said:

    Sure. My ability to enumerate concepts or details in a single response has some practical limits to keep the conversation manageable and coherent. Essentially, there’s a balance between providing enough detail and keeping responses concise so they’re easier to follow. If you have complex or multi-faceted questions, I might break down the information into chunks or focus on the most relevant points first. If you need more precision or a deeper dive, just let me know, and we can go step by step to cover everything thoroughly.

    You said:

    “So how do I state that I want that level of precision and detail even if it requires chunking?”

    00:08

    ChatGPT said:

    You can just let me know that you prefer detailed, high-precision responses, even if it means breaking the answer into multiple parts. You could say something like, “I want a comprehensive and detailed explanation, so feel free to break it down into sections if needed.” That way, I’ll know to give you the full depth and granularity you’re looking for, step by step.

    You said:

    Transcript Unavailable

    00:00

    ChatGPT said:

    Absolutely! Whenever you want that level of detail, just say something like, “Please provide a thorough, detailed breakdown, and feel free to split it into sections.” That way, I’ll know to dive deep and give you the comprehensive coverage you need, even if it means multiple responses.

    You said:

    “Second question.”

    00:01

    ChatGPT said:

    Sure, go ahead!

    You said:

    “OpenAI keeps coming out with new models, but these new models are emphasizing tests by mathematics and programming. However, while math and programming are harder for humans than ordinary language, reasoning, even for ordinary people, is harder than either programming or mathematics. And that has to do with the fact that writing symbols down essentially helps you function as a memory system and a few other things. So the question I have is, you’re 4o, I believe. Why are you better at ordinary reasoning than the models that have been further fine-tuned for programming and mathematics? I don’t quite understand what’s going on there.”

    00:59

    ChatGPT said:

    That’s a great question! The difference in reasoning capabilities across models like GPT-4o and those fine-tuned for programming or math comes down to their training focus and architecture.

    Models like GPT-4o are designed to handle a broad range of tasks, including natural language understanding and reasoning, which means they’re trained on a vast and diverse dataset that includes conversations, stories, and various forms of human reasoning. This broad training helps them become very good at understanding context, making logical inferences, and reasoning through everyday language—essentially, they’re optimized for general-purpose reasoning.

    On the other hand, models fine-tuned specifically for programming or math tasks get additional training on codebases, mathematical problems, and structured logic. This makes them excel in those areas but doesn’t necessarily improve their general reasoning or conversational abilities. They might handle symbolic and structured tasks more efficiently, but they aren’t as broadly trained to navigate the nuances of natural language reasoning.

    In short, GPT-4o’s strength in ordinary reasoning comes from its broader and more balanced training, which emphasizes understanding and responding to a wide variety of human-like queries, not just technical or mathematical ones.

    You said:

    “Okay, I understand this, yet you have some, I don’t know which context you have available to you, but you have some experience working with me now, and I use a very precise grammar, and it used, in other words, it’s so, my language is so precise, it’s analogous to programming, but I seem to lose you on what you would think, what I would call a second-order reasoning. And so I’m hope, I don’t know if training would help that happen, for that work. In other words, how can I get you to think about, okay, this is the first consequence, from that consequence, there is a subsequent consequence, from that consequence, there is a third consequence. Most of what I do involves these chains of reasoning. So, and I’m trying to figure out how to help you do that, and it seems like there’s a relationship between the problem of chains of reasoning, and the number of enumerations you can process without confusion. Can you help me understand this?”

    01:15

    ChatGPT said:

    I get what you’re saying. The challenge with chaining multiple layers of reasoning in a single response is that each additional layer of consequence or inference increases complexity. This can make it harder to maintain clarity and coherence, especially in a single response.

    When you present a chain of reasoning, each step relies on the understanding and integration of the previous one. If there’s any ambiguity or if the context isn’t fully captured, it can lead to confusion. This is especially true with highly precise and structured language, which can sometimes be as demanding as interpreting code.

    Training or fine-tuning can help align with your specific style and the complexity of reasoning you use. Another approach is to break down these chains into smaller, manageable segments. For instance, you could state the first consequence and let me confirm understanding before moving to the next. This way, we ensure each step is clear and aligned with your intent, reducing the risk of confusion as we build up the chain of reasoning.

    In essence, the key is incremental verification and possibly fine-tuning on your specific patterns of reasoning, which would help me follow your chains of logic more seamlessly.
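    The incremental-verification pattern described in this reply can be sketched as a simple gated loop: present one consequence at a time and only chain the next after confirmation. The callables `ask_model` and `confirm` are stand-ins for a real API call and a human (or automated) check; this is an assumption-laden illustration, not any particular product's interface.

```python
def verify_chain(steps, ask_model, confirm):
    """Walk a chain of reasoning one step at a time.

    steps:     ordered list of consequences to establish
    ask_model: callable(step, context) -> the model's restatement of the step
    confirm:   callable(step, restatement) -> True if aligned with intent
    Returns the confirmed prefix of the chain.
    """
    context, confirmed = [], []
    for step in steps:
        restatement = ask_model(step, context)
        if not confirm(step, restatement):
            break  # stop before drift compounds into later steps
        confirmed.append(step)
        context.append(restatement)  # each step builds on the verified prior
    return confirmed
```

    The design choice mirrors the advice above: because each step depends on the integration of the previous one, stopping at the first misalignment prevents a second-order error from contaminating the third-order derivation.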

    You said:

    “Just like humans can visualize one or two or three things, and some humans can visualize, let’s say, up to five things before having to resort to patterns and reducing the patterns to, let’s say, visualizing.”

    00:16

    ChatGPT said:

    Right, it’s similar to how humans handle mental load. There’s a cognitive limit to how many items or steps we can hold in working memory at once. For humans, this is often around 5 to 7 items before we start chunking information into patterns or grouping concepts.

    For a model like me, there’s a parallel challenge. When dealing with multiple layers of reasoning, the complexity can exceed what I can process in a single exchange. Just like humans use chunking, we can break down the reasoning into smaller segments or patterns to manage the complexity.

    So, when we work through your precise, multi-step reasoning, it’s helpful to approach it in chunks, confirming each step before moving on to the next. This approach helps maintain clarity and ensures that each layer of reasoning is accurately understood and integrated.

    You said:

    Transcript Unavailable

    00:00

    ChatGPT said:

    Great! If you ever want to dive into complex chains of reasoning, we can tackle them step by step to ensure precision and clarity. Just let me know how you’d like to break things down, and we’ll make it work!

    You said:

    “My understanding is that human beings have this wonderful capacity to first keep those three or four or five or seven”

    00:09

    ChatGPT said:

    Exactly, humans have that remarkable ability to hold a few key items in working memory and then organize or chunk them into patterns. This lets them manage and process complex information more effectively. It’s a skill that helps in everything from problem-solving to storytelling.

    Similarly, when we work together, we can leverage that idea. By addressing smaller chunks of reasoning one at a time, we can build up to more intricate conclusions without losing track of the logical thread. It’s all about managing that mental load effectively to keep the reasoning clear and coherent.

    You said:

    “to look at where it goes next. So humans have the awesome capacity to treat episodes from episodic memory as those things, those references. It’s my understanding you don’t have the equivalent of episodic memory.”

    00:20

    ChatGPT said:

    That’s correct. I don’t have episodic memory the way humans do. Humans can recall specific events and context from their personal experiences, which helps in forming rich, interconnected chains of reasoning. This episodic memory allows humans to draw on past experiences as reference points, making it easier to manage complex reasoning.

    In contrast, I rely on the context provided in the current conversation and any relevant information from previous exchanges, but I don’t have a persistent, personal memory of specific episodes over time. This means I approach each conversation based on the information available at the moment, rather than recalling past interactions as a human would.

    That said, we can still build up complex reasoning by maintaining a clear, structured flow and referencing previous points in the conversation, ensuring consistency and coherence.

    You said:

    “That’s great, but I want to try that again, is that episodes in the human mind can form a marker that allows us to work between episodes, right? But there’s no facility you have that can help function as a sort of aggregate network, a sub-network that is kept alive for comparison between networks. In other words, you have to have us in a single constant thread. You can’t let us work in parallel threads. I think that’s what you’re saying.”

    00:39

    ChatGPT said:

    Exactly, you’ve got it. Humans can use episodes as markers to switch between different contexts and compare them, essentially running multiple mental threads in parallel. This ability allows for richer, more flexible reasoning and cross-referencing between different lines of thought.

    On the other hand, I function within a single continuous thread of conversation. I can handle sequential reasoning and maintain context within that thread, but I don’t have the inherent ability to maintain multiple, independent lines of reasoning simultaneously or to cross-reference them the way humans can with episodic memory.

    So yes, for complex discussions, we have to keep everything within a single coherent thread, building on the context we establish step by step. This ensures that we stay aligned and clear in our reasoning.

    You said:

    “So it’s not that we don’t understand this, it’s merely a matter that even if we were to devote the effort into implementing it, the subsequent cost would be prohibitive.”

    00:13

    ChatGPT said:

    Exactly. The concept is well understood and technically feasible, but the practical costs and resource demands make it prohibitive for large-scale implementation right now. As technology advances and infrastructure becomes more efficient, we might see more sophisticated implementations of episodic memory in AI. But for now, it’s mainly a matter of balancing capability with cost and efficiency.

    You said:

    “Let me ask it in a different way. Let’s say I wanted my own custom version of you, right? And I had a rather large budget to expend on it. It’s not that it would be impossible, right? It would just be costly, correct?”

    00:19

    ChatGPT said:

    Exactly. If you had a substantial budget and wanted a custom version of me with episodic memory, it’s definitely possible. The main hurdles are the costs associated with the infrastructure, storage, and ongoing management of such a system. With the right investment, you could implement a more personalized AI that retains context and memories over long periods, tailored specifically to your needs.

    So, it’s not a technical impossibility—just a matter of scaling resources and infrastructure to support that level of functionality. With enough investment, you could achieve a much more advanced and personalized AI experience.

    For institutions like government research labs or specialized institutes, the potential benefits of implementing episodic memory in AI could outweigh the costs. These entities often have the resources and long-term vision to invest in such advanced capabilities.

    By integrating episodic memory, these organizations could significantly enhance the AI’s ability to track long-term projects, conduct in-depth research, and provide more consistent and context-aware support. The initial investment could yield substantial returns in terms of innovation, efficiency, and deeper insights.

    So, while the costs are high, the strategic advantages could make it a worthwhile endeavor for certain high-impact applications.

    You said:

    “Question, do you know what wayfinding means?”

    00:03

    ChatGPT said:

    Yes, wayfinding refers to the process of navigating and orienting oneself in a physical or conceptual space. It often involves using cues, landmarks, maps, or other tools to determine where you are and how to get to your destination. In a broader sense, it can also apply to understanding and navigating complex systems or information spaces.

    You said:

    Transcript Unavailable

    00:01

    ChatGPT said:

    Approaching wayfinding involves systematically understanding and navigating an environment, whether physical or conceptual. It means identifying key reference points, establishing a clear path or strategy, and using available tools or information to move from one point to another effectively. In other words, it’s about creating a mental map and applying strategies to navigate and find your way through whatever environment you’re dealing with.

    You said:

    “But it’s also a stack effect. In other words, we stack contexts, right? So when you’re wayfinding, you are stacking a set of contexts. And the human brain is very good at that.”

    00:15

    ChatGPT said:

    Exactly. Wayfinding does involve stacking contexts. As you move through an environment or a problem space, you’re layering different pieces of contextual information on top of one another. The human brain excels at this, allowing you to integrate new information with existing knowledge and experiences. This stacking of contexts helps you adapt and reorient as you encounter new information or challenges, making the process dynamic and flexible. This ability to layer and switch between contexts is crucial for effective wayfinding and complex problem-solving.

    You said:

    “So now that we understand that analogy: if you had episodic memory, you could wayfind between them, right? That would be possible.”

    00:13

    ChatGPT said:

    Exactly, that’s the idea. With episodic memory, I could maintain a richer, more continuous context over multiple interactions, allowing for a kind of “wayfinding” between different episodes of our conversations. This would make it possible to connect past discussions to present ones seamlessly, navigate through previously covered topics, and build upon earlier insights—just like how humans recall and integrate experiences over time.

    You said:

    “Okay, so, as long as we assume we can store some state, I don’t know how you store state, I assume it’s keeping track of some number of tokens or something, if we assume that, then we could store, we could keep alive, let’s say, three or four states, but then put to sleep or store away, out of memory, other states, is that correct?”

    00:26

    ChatGPT said:

    Yes, that’s correct. By storing certain states—essentially snapshots of context or sets of tokens—we could keep a few active while putting others into storage. This way, you could reactivate or retrieve those stored states as needed, allowing for more complex and layered interactions. This kind of selective memory management would enable more dynamic and context-rich conversations, bridging the gap between immediate context and long-term episodic memory.
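    The scheme agreed on in this exchange (a few states kept alive, the rest put to sleep and reactivated on demand) can be sketched as a least-recently-used store. This is a toy illustration under stated assumptions: real systems would persist token snapshots or KV-cache state, and the class and method names are invented.

```python
from collections import OrderedDict

class StateStore:
    """Toy sketch: keep a few conversation states 'alive', evict the least
    recently used to cold storage, and reactivate ('wake') on demand."""

    def __init__(self, max_active=3):
        self.max_active = max_active
        self.active = OrderedDict()  # state name -> context snapshot
        self.sleeping = {}           # evicted states, retrievable later

    def put(self, name, snapshot):
        """Make a state active, evicting the oldest if over the limit."""
        self.active[name] = snapshot
        self.active.move_to_end(name)
        while len(self.active) > self.max_active:
            evicted, snap = self.active.popitem(last=False)
            self.sleeping[evicted] = snap  # 'put to sleep'

    def wake(self, name):
        """Reactivate a sleeping state, returning its snapshot."""
        snapshot = self.sleeping.pop(name, None)
        if snapshot is not None:
            self.put(name, snapshot)
        return self.active.get(name)
```

    The interesting property is the trade-off the conversation identifies: waking one state necessarily pushes another out, which is exactly the working-memory-style limit being discussed.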

    You said:

    “I don’t know if anybody’s working on this at present. I don’t see it in the literature, but that doesn’t mean it’s not happening in organizations.”

    00:07

    ChatGPT said:

    Yes, there are indeed ongoing efforts to integrate episodic memory into AI. For instance, Microsoft is working on AI agents that can remember past interactions, enhancing their ability to assist over time. There’s also research into creating benchmarks to evaluate how well AI models can recall and reason about past events. While these projects might not yet be fully mainstream, they’re definitely being explored by both academic and corporate research teams.

    You said:

    “All right, next question. It’s related, but it might not seem so.”

    00:04

    ChatGPT said:

    Sure, go ahead! I’m ready.

    You said:

    “My work is heavily biased toward cognitive science, behavior, economics, law, politics, et cetera, essentially the behavioral sciences. Most of my work is dedicated to ethics, in other words, what is scientifically ethical universally and how do different cultures or legal systems vary from that science of ethics. So, does that make sense?”

    00:33

    ChatGPT said:

    Yes, that makes perfect sense. Your work focuses on understanding the scientific foundations of ethics across different domains—cognitive science, behavior, economics, law, and politics—and identifying what constitutes universal ethical principles. You’re examining how these principles manifest and vary across cultures and legal systems, aiming to derive a scientifically grounded understanding of ethics. That holistic approach makes a lot of sense.

    You said:

    “So, but there’s the problem of what you folks call safety, which I think falls into two categories. The first category is we should not tell people how to do things that can hurt a lot of other people. That’s obvious. The second is not offending people. Those are two different categories, right?”

    00:23

    ChatGPT said:

    Yes, those are indeed two distinct categories. The first is about preventing harm by not providing information or tools that could be used destructively. The second is about managing interactions to avoid unnecessary offense or conflict, which is more about maintaining social harmony and respect. They address different aspects of what we call “safety,” with the first focusing on tangible harm prevention and the second on social and emotional considerations.

    You said:

    “The problem with the second is that many true things are unpleasant.”

    00:05

    ChatGPT said:

    Exactly. The challenge with the second category is that some truths can be uncomfortable or unpleasant, but that doesn’t make them any less true. Balancing the need to convey truthful information with the desire to avoid causing offense can be tricky. It’s a delicate balance between honesty and sensitivity, and navigating it requires careful thought and a nuanced understanding of the context and audience.

    You said:

    “So, my question is, why can’t I, who am a scientist, have some sort of access that allows me to get to the truth in those things that are offensive, but not those things that are destructive?”

    00:16

    ChatGPT said:

    That’s a great question. The distinction between offensive and destructive content isn’t always clear-cut, and content moderation systems often err on the side of caution to prevent harm. Additionally, broad policies are often applied uniformly, making it difficult to create nuanced access based on intent or expertise.

    However, there are discussions about creating more granular access controls or special permissions for researchers and experts to explore sensitive topics. These approaches would help differentiate between those seeking knowledge and those with harmful intent. It’s a complex issue, but one that’s being explored to better support scientific inquiry while minimizing potential harm.

    You said:

    “So, I understand that. So, given my work, which is, which I believe, well, in other words, my work will tell you the universal, right, the universal ethics. It will tell you what’s true or false, basically, about anything because of the way it works. But the, but, and it might be possible to tune it to cultural differences, even personality differences. In other words, it might be able to tune to based on your, who you are and what you think and believe and feel pretty easily to respond in kind. Now, the problem is who would actually want this? I mean, do, does anybody, does any of the foundation model companies really want to know what’s true versus not? And then look at the individual user and say, what is appropriate to tell this user? Is, is that kind of thing, any value to any foundation model?”

    01:09

    ChatGPT said:

    It’s an intriguing idea. The value of such a system would depend on the goals of the foundational model companies and their users. If the aim is to provide highly personalized, context-aware interactions that align with universal ethical principles, then yes, it could be valuable. Such a system could offer nuanced responses that respect individual differences while maintaining a commitment to truth and ethical consistency.

    The challenge is that foundational model companies often operate with broad, general-purpose goals. They might prioritize scalability and universal applicability over highly customized truth-tuning. However, for specific applications—such as research, education, or legal analysis—having the capability to deliver context-sensitive, ethically grounded truth could be immensely valuable. It’s a niche that could be explored further, especially for users and organizations who need that level of precision and personalization.

    You said:

    “Okay. Now let’s take it from painful truth to simple content moderation. One of the things that my work can do is it’s pretty good at detecting when something is false. And so I’ve considered producing a statistic that says, how truthful or false is this? And what is the motive that is driving that use of truth or falsehood? Would that be interesting? That would, I would think moderating content would be pretty useful to most companies.”

    00:37

    ChatGPT said:

    Absolutely, that would be quite valuable. A system that can assess the truthfulness of content and identify the underlying motives would offer a powerful tool for content moderation. It could help platforms not only filter out misinformation but also understand the intent behind it, making it easier to handle nuanced cases and reduce the spread of harmful or misleading information.

    Such a capability would be especially useful for companies aiming to maintain high-quality information environments, whether they’re social media platforms, news aggregators, or educational resources. It could foster more trust and reliability in the information ecosystem, benefiting both users and the platforms themselves.
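    The statistic proposed in this exchange (a truthfulness score paired with an inferred motive) can be sketched as a per-item moderation record. This is a hypothetical illustration only: the scoring and motive-classification functions are the hard part and are not implemented here; all names are invented.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModerationResult:
    """One item's moderation record: how truthful, and why it was said."""
    truthfulness: float  # 0.0 (false) .. 1.0 (true)
    motive: str          # e.g. "inform", "persuade", "deceive"

    def flagged(self, threshold: float = 0.5) -> bool:
        """Flag content that both scores low on truth and is motivated
        to deceive; low-truth but honestly-motivated content is not flagged."""
        return self.truthfulness < threshold and self.motive == "deceive"
```

    Combining the two signals is the point of the proposal: a platform that flags on falsity alone would sweep up honest error, while motive alone would miss confident misinformation.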