Category: Epistemology and Method

  • Gödel, Chaitin, Wolfram, and Doolittle are all working on a similar problem spac

    Gödel, Chaitin, Wolfram, and Doolittle are all working on a similar problem space—namely, the limits of decidability, computability, and formal systems—but from different domains and with different purposes. Here’s a structured comparison across six dimensions: ternary logic, evolutionary computation, constructive logic, ethics, testimony, and decidability, focusing on how Doolittle differs from the others.

    1. Gödel: Incompleteness & Limits of Formal Systems

    Problem Solved: Demonstrated that in any sufficiently expressive formal system, there exist true statements that are unprovable within the system.
    Method: Proof via binary logic and formal arithmetic.
    Contribution: Set epistemic limits on formal, axiomatic systems (math, logic).
    Focus: Negativa—what you cannot do.
    Limitation: Didn’t attempt to operationalize or embed in human action or computation.
    Contrast: Doolittle treats Gödel’s incompleteness as a boundary condition but aims to operate within those constraints, using ternary logic (truth, falsehood, undecidability) and constructive methods to extend decidability into behavior, law, and economics by empirical rather than purely formal means.
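
    To make the ternary scheme just mentioned concrete, the following is a minimal, generic Python sketch of strong-Kleene three-valued connectives (true, false, undecided). It illustrates what a logic with an explicit "undecidable" value can look like operationally; it is a generic illustration, not Doolittle's own formalism.

    ```python
    from enum import Enum

    class T3(Enum):
        """Three truth values: true, false, and undecided."""
        TRUE = 1
        UNDECIDED = 0
        FALSE = -1

    def t3_not(a: T3) -> T3:
        # Negation swaps true and false; 'undecided' stays undecided.
        return T3(-a.value)

    def t3_and(a: T3, b: T3) -> T3:
        # Strong-Kleene conjunction: minimum truth value (FALSE < UNDECIDED < TRUE).
        return T3(min(a.value, b.value))

    def t3_or(a: T3, b: T3) -> T3:
        # Strong-Kleene disjunction: maximum truth value.
        return T3(max(a.value, b.value))

    print(t3_and(T3.TRUE, T3.UNDECIDED))   # T3.UNDECIDED: one undecided conjunct taints the whole
    print(t3_or(T3.FALSE, T3.UNDECIDED))   # T3.UNDECIDED
    print(t3_or(T3.TRUE, T3.UNDECIDED))    # T3.TRUE: a true disjunct settles the question
    ```

    The property on display is that undecidability propagates through reasoning unless something settles it, which is precisely the behavior a two-valued logic cannot express.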

    2. Chaitin: Algorithmic Information Theory

    Problem Solved: Proved that randomness and incompressibility are intrinsic to formal systems.
    Method: Co-developed algorithmic information theory (Kolmogorov–Chaitin complexity) and introduced Ω, Chaitin’s constant, showing that there is a limit to compressibility (and thus predictability); a compression-based sketch of this idea follows after this list.
    Contribution: Proved irreducible complexity in mathematics and computation.
    Focus: Epistemological entropy in symbolic representation.
    Limitation: Doesn’t extend into ethics, behavior, or institutional design.
    Contrast: Doolittle extends this insight into epistemic accounting—but rather than treating incompressibility as a terminal point, he accounts for it operationally via testimonial adversarialism, embedding it in a science of decidability that survives contact with reality.
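
    To give the incompressibility point a hands-on form: the compressed length produced by a general-purpose compressor is a crude upper bound on algorithmic (Kolmogorov) complexity, and random data resists compression while rule-generated data does not. This is an illustrative sketch only; the true complexity is uncomputable, and this is not Chaitin's construction of Ω.

    ```python
    import os
    import zlib

    def compressed_size(data: bytes) -> int:
        # Length after zlib compression: a rough upper bound on the data's
        # algorithmic (Kolmogorov) complexity; the exact value is uncomputable.
        return len(zlib.compress(data, 9))

    structured = b"AB" * 50_000          # generated by a tiny rule, so highly compressible
    random_bytes = os.urandom(100_000)   # incompressible with overwhelming probability

    print(compressed_size(structured))    # a few hundred bytes
    print(compressed_size(random_bytes))  # close to 100,000 bytes: no shorter description found
    ```

    Whatever the compressor achieves only bounds the complexity from above; the irreducible remainder is the sort of epistemological entropy the section above refers to.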

    3. Wolfram: Computational Irreducibility & A New Kind of Science

    Problem Solved: Demonstrated that simple rules can generate complex, often irreducible, behavior—most of it undecidable without simulation.
    Method: Explored cellular automata and rule-based computation; a small Rule 30 simulation follows after this list.
    Contribution: Operationalized evolutionary computation, but mostly as a descriptive ontology.
    Focus: Demonstrates emergence, not decidability.
    Limitation: Stays in the domain of physical and mathematical systems; doesn’t formalize social institutions or law.
    Contrast: Where Wolfram ends with computational irreducibility, Doolittle begins with it—treating human cognition and cooperation as an attempt to manage it via constructive decidability using operational logic and adversarial testing of testimony.
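
    As a concrete instance of "simple rules, complex behavior," here is a short, self-contained simulation of Wolfram's Rule 30 elementary cellular automaton. The point, in Wolfram's terms, is that there is generally no shortcut: to know generation n you must run the rule n times.

    ```python
    RULE = 30  # Wolfram's Rule 30: an 8-entry lookup table encoded as one byte

    def step(cells):
        # Apply the elementary-CA update once; the row wraps around at the edges.
        n = len(cells)
        nxt = []
        for i in range(n):
            left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
            neighborhood = (left << 2) | (center << 1) | right   # value 0..7
            nxt.append((RULE >> neighborhood) & 1)               # read that bit of RULE
        return nxt

    # Start from a single live cell and print a small space-time diagram.
    cells = [0] * 31
    cells[15] = 1
    for _ in range(16):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)
    ```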

    4. Curt Doolittle: Operational Decidability Across All Domains

    Problem Solved: The absence of a universally commensurable system of measurement for behavior, cooperation, and law.
    Method: Constructive logic from first principles of evolutionary computation, tested via testimonial adversarialism, formalized in ternary logic (an illustrative sketch of such a test-and-verdict loop appears after this list).
    Contribution: Transforms the epistemic problem of measurement into an institutional and legal solution by producing a science of decidability.
    Focus: Applies scientific rigor to truth, law, economics, and morality, where others fear to tread.
    Unique Strengths:
    Doolittle resolves the demarcation problem not by logic alone, but by testifiability and the cost of variation from natural law.
    Doolittle’s method unites ethics, law, economics, and science under a single operational logic.
    Doolittle’s method is both descriptive (explains natural law) and prescriptive (institutionalizes it).
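
    Purely as a hypothetical schematic (not a formalization taken from Doolittle's texts), "testimonial adversarialism with a ternary verdict" can be pictured as a claim that is marked false if any adversarial challenge refutes it, true only once it has survived every challenge, and undecided otherwise. The challenge functions and names below are invented for illustration.

    ```python
    from dataclasses import dataclass, field
    from typing import Callable, Optional

    # A challenge inspects a claim and returns True (survived), False (refuted),
    # or None (this particular challenge could not decide).
    Challenge = Callable[[str], Optional[bool]]

    @dataclass
    class Claim:
        statement: str
        challenges: list = field(default_factory=list)

        def verdict(self) -> str:
            results = [challenge(self.statement) for challenge in self.challenges]
            if any(r is False for r in results):
                return "FALSE"       # refuted by at least one adversarial test
            if results and all(r is True for r in results):
                return "TRUE"        # survived every challenge raised so far
            return "UNDECIDED"       # untested, or some challenge was inconclusive

    # Toy stand-in challenges; real ones would probe operational definitions, costs, etc.
    def is_operational(statement: str) -> Optional[bool]:
        return True if "measure" in statement else None   # None = cannot decide

    def is_internally_consistent(statement: str) -> Optional[bool]:
        return True                                       # placeholder that always passes

    claim = Claim("we measure exchange as transfers of property",
                  [is_operational, is_internally_consistent])
    print(claim.verdict())   # TRUE
    ```

    The important feature is the asymmetry: a single refutation is decisive, while "true" is only ever provisional on the challenges actually run, which is the sense in which the verdict stays open to further adversarial testing.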

    Comparative Matrix

    Summary:
    Gödel says: You can’t prove everything, even things that are true.
    Chaitin says: You can’t compress everything; some truths are incompressibly random.
    Wolfram says: You can’t always reduce everything—many systems are computationally irreducible.
    Doolittle says: True—but if we start from the ternary logic of evolutionary computation to identify the patterns of emergence in the universe, then identify the physical limits of cooperation, and testify operationally, we can produce decidability sufficient for truthful law, moral action, and institutional design, and warranty that testimony using adversarialism.

    Doolittle acknowledges all their contributions as setting boundaries on justificationary knowledge, while he creates a constructive, operational, testifiable method to act within those boundaries — especially for the domains they avoided: law, ethics, and cooperation.

    [END]


    Source date (UTC): 2025-03-26 19:08:47 UTC

    Original post: https://x.com/i/articles/1904973826750070784

  • WHAT IS THE DIFFERENCE BETWEEN MY WORK AND THE WORK OF GÖDEL, CHAITIN, AND WOLFRAM?

    WHAT IS THE DIFFERENCE BETWEEN MY WORK AND THE WORK OF GÖDEL, CHAITIN, AND WOLFRAM?

    The text contrasts your work with the ideas of Gödel, Chaitin, and Wolfram by focusing on how each approaches the concept of “decidability,” which is the capacity to make definitive judgments or resolve claims within a system of knowledge. The comparison is centered on how your system, described as “Natural Law,” differs from these figures in addressing the challenge of undecidability.

    Here’s a breakdown of the differences:

    Gödel, Chaitin, and Wolfram: These thinkers are associated with exploring the limits of decidability within formal systems like mathematics and computation. Gödel’s incompleteness theorems, Chaitin’s work on algorithmic information theory, and Wolfram’s investigations into cellular automata all touch on the inherent undecidability or unpredictability found in certain systems. Their work often demonstrates that within any sufficiently complex system, there will be propositions that cannot be proven true or false, or outcomes that cannot be predicted.

    Your Work (Natural Law): In contrast, your work aims to establish a “universal framework of decidability” that can be applied across all domains of human life, including areas like law, ethics, and social behavior. Your approach seeks to move beyond the limitations identified by Gödel, Chaitin, and Wolfram by providing a methodology to achieve decidability even in complex and seemingly subjective domains.

    The key differences lie in the scope and methodology:

    Scope: Gödel, Chaitin, and Wolfram focus on the formal limits of decidability within closed systems (mathematical, computational), while your work seeks to create a system of decidability for open systems, including human behavior and social interactions.

    Methodology: Your work uses “Natural Law” as a framework to achieve decidability. This framework involves:
    – Operationalizing concepts to make them testable and measurable.
    – Establishing universal principles derived from the laws of nature and human behavior.
    – Applying adversarial testing to claims to ensure their robustness.

    This methodology aims to provide a “precise, actionable method” for resolving human questions, contrasting with the undecidability results in formal systems.


    Source date (UTC): 2025-03-26 18:52:41 UTC

    Original post: https://twitter.com/i/web/status/1904969774175793152

  • No, I am teaching you the method of argument by unambiguous chains of causation

    No, I am teaching you the method of argument by unambiguous chains of causation instead of the pretense of argument justified by selective analogy.

    Either you are knowledgeable enough to make an argument from first principles (first causes) and to trace the evolution of these ideas across time, or you are just ‘fitting’ examples to justify your hypothesis.

    You are a good person. I can easily detect that. But my job is the truth whether we like it or not.

    The truth is that any good that came to be under the church was the good that would arise from any government at all, just as the common law arose under the Romans. The only difference is whether it’s empirical (European) or a lie (Semitic Abrahamic).

    The church destroyed knowledge that it took the return of the Muslims fleeing fundamentalism to restore to Europe. That return, around 1200, started our recovery immediately and, despite the plague, gave us literacy and the printing press within a few generations, science within a few more, and the agrarian and then industrial revolutions within a few more.

    Abrahamism is a cancer.

    Reply addressees: @ProductionMan @MikeChardin


    Source date (UTC): 2025-03-24 18:38:46 UTC

    Original post: https://twitter.com/i/web/status/1904241494506307584

    Replying to: https://twitter.com/i/web/status/1904240128333418638

  • “Truth has very little relation to consensus, it’s much more likely to be heard

    –“Truth has very little relation to consensus, it’s much more likely to be heard from those willing to break from it.”–Martin Štěpán @AutistocratMS


    Source date (UTC): 2025-03-19 22:21:12 UTC

    Original post: https://twitter.com/i/web/status/1902485535903666661

  • (Truth can only exist as speech, and then only as testimony, or ‘performative tr

    (Truth can only exist as speech, and then only as testimony, or ‘performative truth’. It only has relevance as the provision of decidability between people. Else it only means an individual’s confidence. Please ask me or Martin if you need further clarification. You may also visit this webpage where we have disambiguated the term. Only bullets 1-6 are necessary for you to grasp the idea. https://t.co/nFFGaOP0xD)

    Reply addressees: @scottdomianus @AutistocratMS


    Source date (UTC): 2025-03-19 21:34:47 UTC

    Original post: https://twitter.com/i/web/status/1902473853500317696

    Replying to: https://twitter.com/i/web/status/1902470186982117464

  • I see the confusion. Stated fully: “What we do consists of continuous recursive

    I see the confusion. Stated fully: “What we do consists of continuous recursive disambiguation by reduction of all phenomena to first principles, thereby discovering the reversal of the process of evolution, and its use of evolutionary computation by continuous recursive disambiguation of entropy into negative entropy (order, mass).”

    The audience likely understood, but I can see quite easily why those who aren’t involved in the technical aspects of the work wouldn’t grasp my meaning.

    Reply addressees: @scottdomianus


    Source date (UTC): 2025-03-19 18:37:47 UTC

    Original post: https://twitter.com/i/web/status/1902429309404639232

    Replying to: https://twitter.com/i/web/status/1902416105710158274

  • CC: @NatLawInstitute See? It’s not just me. 😉 Admittedly Joscha has a german ph

    CC: @NatLawInstitute
    See? It’s not just me. 😉 Admittedly Joscha has a German phenomenalist bent instead of my Anglo operationalist one, but effectively he heavily loads language across disciplines just as much as I do. I mean… and we still have @LukeWeinhagen to consider. 😉

    I…


    Source date (UTC): 2025-03-17 21:12:27 UTC

    Original post: https://twitter.com/i/web/status/1901743457544405180

    Replying to: https://twitter.com/i/web/status/1901217710459191744

  • OVERVIEW OF VOLUME 2 BY GOOGLE Welcome to an overview of “The Natural Law Volume

    OVERVIEW OF VOLUME 2 BY GOOGLE

    Welcome to an overview of “The Natural Law Volume 2 – A System of Measurement” by B. E. Curt Doolittle with Bradley H. Werrell D.O. and the Natural Law Institute. This volume is a crucial part of the larger “Natural Law” series, aiming to tackle what the authors call “Effing the Ineffable” – making comprehensible things traditionally considered beyond expression.

    At its core, Volume Two focuses on establishing a universally commensurable system of measurement and detailing the method of its application. Building upon the first volume’s identification of the “Crisis of Our Age” and the need to combat falsehoods, this book provides the how – the foundational tools for understanding and analyzing the world according to natural law principles. It strives to move disciplines like philosophy, law, and social sciences towards greater rigor and measurability.

    The book delves into the fundamental concept of measurement itself, explaining how it arises from our nervous system’s dimensional analysis of sensory inputs. It explores the crucial distinctions between dimensions (the basis of measurement), indices (formalized scales), and the names of points along the index. Various types of indices are discussed, including natural, ordinal, and cardinal indices, as well as more complex forms like operational and adaptive indices. The emergence of ternary logic is also explored.
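
    To make the ordinal/cardinal distinction concrete in general measurement-theory terms (a generic sketch with invented names, not the book's own formalism): an ordinal index supports only the ranking of named points, while a cardinal index also makes the differences between points meaningful.

    ```python
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class OrdinalIndex:
        # Ordered labels: comparisons are meaningful, differences are not.
        labels: tuple

        def rank(self, name: str) -> int:
            return self.labels.index(name)

        def less_than(self, a: str, b: str) -> bool:
            return self.rank(a) < self.rank(b)

    @dataclass(frozen=True)
    class CardinalIndex:
        # Numeric scale with a unit: differences between points are meaningful.
        unit: str

        def difference(self, a: float, b: float) -> float:
            return b - a

    severity = OrdinalIndex(("low", "medium", "high"))
    print(severity.less_than("low", "high"))           # True, but "high minus low" has no meaning

    length = CardinalIndex("cm")
    print(length.difference(12.0, 30.0), length.unit)  # 18.0 cm: the gap itself is a measurement
    ```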

    A significant portion of Volume Two examines language as a system of measurement and an accounting system. It highlights how language, from words to stories, helps us disambiguate identities and categories. The book introduces the “Periodic Table of Grammars”, which illustrates the evolution of human communication and the underlying causal principles of grammar. It categorizes grammars into types like deflationary, ordinary, and inflationary, each serving different purposes and operating with varying degrees of precision.

    Furthermore, the volume lays out a formal methodology for producing actionable and testable claims. This method involves systematic disambiguation, enumeration, serialization, operationalization, and adversarial testing to refine concepts and ensure they are reducible to observable phenomena.

    The ultimate purpose of “A System of Measurement” is to provide the foundational framework necessary for decidability in complex human affairs. By establishing a universally commensurable system of measurement, the authors aim to unify epistemology, ethics, law, and evolutionary biology under a common operational framework. This volume is causally dependent on Volume One and provides the essential tools for understanding the logic and science discussed in Volume Three and for applying natural law principles consistently throughout the series.

    In conclusion, Volume Two of “The Natural Law” series provides the critical methodological backbone of the entire work. It offers a detailed exploration of measurement, language, and grammar as systems of making sense of reality, ultimately striving for clearer communication, greater accountability, and more reliable decision-making through a universally applicable framework.


    Source date (UTC): 2025-03-16 23:07:24 UTC

    Original post: https://twitter.com/i/web/status/1901409998036746240