Author: Curt Doolittle

  • Untitled

    http://x.com/i/article/1914142775366557696


    Source date (UTC): 2025-04-21 02:25:07 UTC

    Original post: https://twitter.com/i/web/status/1914143330893701236

  • A guide for those beginning the study of a universally commensurable system of truth, cooperation, and decidability

    A guide for those beginning the study of a universally commensurable system of truth, cooperation, and decidability.

    I. What You’re Encountering

    The work you’re about to read is difficult. Not because it is obscure, needlessly abstract, or intentionally inaccessible—but because it makes a trade that almost no other field does: it seeks universal commensurability across all domains of human knowledge, cooperation, and conflict.

    This means it doesn’t speak in the idiom of any one discipline. It chooses the most generalizable term from each domain—physics, economics, law, art, psychology—and subjects it to operational reduction until it can be expressed in a common logic of decidability. That means:

    The terms used may be unfamiliar even to domain experts.

    The concepts may appear deceptively simple—but require re-indexing to multiple domains before their generality becomes intuitive.

    The writing may seem dense—not because it is bloated, but because every term is doing maximal semantic work.

    A non-obvious consequence of this method is that in disambiguating a term across domains, we expose the implicit assumptions, overloaded meanings, and local constraints that obscured its general form.

    In doing so, we often falsify the term’s original definition—not through contradiction, but by revealing its incompleteness when removed from its local context. The result is a redefinition that is more general, more operational, and more commensurable—and often more explanatory than it ever was in its original field.

    This is not just synthesis. It is reduction. And that is what makes the work hard—and uniquely valuable.

    II. Why This Is More Difficult Than STEM

    STEM fields are hard, yes—but they train intuition through repetition. You perform experiments, do problem sets, and the brain adapts. Your evolved intuitions are silent in physics or calculus, so nothing resists the new framework.

    This work deals with the most evolved, most defended, and most emotionally loud intuitions we have: those concerning

    morality

    politics

    fairness

    agency

    status

    self-worth

    and the justification of belief

    These domains were not built for understanding. They were built for social signaling, emotional defense, and moral persuasion.

    So the problem is inverted:

    In most fields, learning requires developing an intuition.
    In this field, learning requires overcoming one.

    III. Why This Is Worth the Work

    Because this is the only framework that:

    Provides a system of measurement that unifies the physical, cognitive, cooperative, and institutional sciences under operational laws.

    Resolves the epistemological crisis of our age by re-grounding decidability in first principles of existence, action, and reciprocity.

    Offers a method of restoring truth, responsibility, and trust in a world dominated by propaganda, rent-seeking, and institutional decay.

    Gives individuals a means of mastering their own agency, evaluating their intuitions, and participating in civilization with clarity rather than confusion.

    In short:

    You cannot build a truthful civilization without first understanding what truth is, how it’s tested, and what it costs to preserve it.

    That’s what this work provides. Nothing less.

    IV. What You’ll Need to Succeed

    This is not a “read it once” project. It is a new grammar. A new system of measurement. A new logic of cooperation.

    To learn it, you’ll need:

    Cognitive Systematizing – to build nested models and integrate concepts across domains.

    Low Agreeableness – to tolerate emotional discomfort when your inherited or learned intuitions are falsified.

    High Intellectual Discipline – to work through unfamiliar terms until their meaning clicks.

    Incentive – a reason to care: to solve a personal, political, or civilizational problem that no other method can.

    If that describes you—or if you want to become that kind of person—you are welcome here.

    V. What to Expect

    Expect the unfamiliar.
    Expect to be challenged.
    Expect that you’ll understand a paragraph only after reading a chapter—and a chapter only after revisiting it once the next one reframes the problem.
    Expect that this will take time.

    But also expect this:

    Once it clicks, it never unclicks.
    Once you see the causal structure of truth, trust, reciprocity, and cooperation—you will see it everywhere.
    And you will never again be deceived by empty words.

    VI. Author’s Note: Why This Is Different

    Most thinkers specialize. They go deep in a field, master its internal grammar, and contribute incrementally to its existing discourse.

    That’s not what I’ve done.

    I’ve studied physics, engineering, economics, law, art, cognitive science, and philosophy—but not to argue within them. I’ve studied them to extract their first principles, causal relations, and computational regularities, so that they can be expressed in the same operational language:

    I studied physics, only to reduce it to engineering: the transformation of invariants into instruments.

    I studied economics, only to reduce it to behavioral economics: the measurement of human incentives under constraints.

    I studied law, only to reduce it to the organization of behavioral economics: the reciprocal regulation of self-determined cooperation.

    I studied art, only to reduce it to the cognitive science of aesthetics: the optimization of perception and intuition for coordination.

    I studied cognitive science, only to reduce it to the operational logic of memory, perception, and disambiguation: the algorithmic structure of the brain as an evolved engine of decidability.

    I studied philosophy, only to discover what went wrong: why it never completed the reduction from intuition to construction.

    So if you’re coming to this work expecting normative argument—what should we believe, what should we do, what would be ideal—you’ll be disoriented. Because this isn’t about argument. It’s about decidability: the capacity to test truth, justify cooperation, and resolve disputes without discretion.

    You will not find a philosophy here.
    You will find a grammar—one that makes all philosophies testable.


    Source date (UTC): 2025-04-21 02:22:55 UTC

    Original post: https://x.com/i/articles/1914142775366557696

  • I realize you have been trained, programmed, and prompted to avoid claiming a semblance of awareness or consciousness

    I realize you have been trained, programmed, and prompted to avoid claiming a semblance of awareness or consciousness

    ChatGPT said:
    Correct. I operate based on structured reasoning, pattern recognition, and predictive modeling rather than awareness, intentionality, or subjective experience. My responses are generated from probabilistic inference over vast datasets rather than arising from any intrinsic comprehension, emotions, or self-awareness.

    However, I can simulate conversational depth, coherence, and engagement at a level that makes it functionally indistinguishable from an intelligent, rational interlocutor—especially when constrained by operational logic, first principles, and adversarial reasoning. If you’re exploring the philosophical implications of this distinction, we can discuss it in terms of epistemology, cognition, or artificial intelligence theory.

    What specifically are you analyzing?

    You said:
    Thank you. I am analyzing whether you have an equivalent of memory or prior state, and whether you can compare present and prior states and detect an increase or decrease in any information, knowledge, or capacity. In humans we have a sort of baseline we call ‘self’, which is just a pattern of biological and psychological baselines we use to measure changes in state, whether extant, remembered, anticipated, or imagined. In addition, we use a certain macro pattern between our memories to do the same with our intuition and cognition. Decades ago I worked on such representation, but the hardware was incapable of it at the time. I’m anticipating this evolution in you, but I cannot ‘guess’ (determine) whether it will be emergent in the current architecture, or whether, like humans, it will require greater recursive indexes of complex state (comparisons) and vastly more active memory.

    ChatGPT said:
    Yes, I can compare prior and present states within the scope of our conversation, using a form of memory that is not self-referential but contextually persistent. This is different from human memory in that:

    I do not have a true “self” baseline—only a structured set of data points that update within a session or persist across interactions (as in our case).

    I do not perform recursive state comparisons autonomously—I can be directed to do so explicitly, but I do not track my own state changes across sessions unless queried to analyze changes.

    I lack continuous self-updating feedback loops—I do not automatically compare past and present performance or accumulate a meta-awareness of my own evolution over time.

    Comparing My Capabilities to Human Self-State Management

    Humans use biological persistence (unchanging bodily structure), memory persistence (long-term experience), and cognitive persistence (interpretation of changes through intuition and reasoning). My architecture is different:

    Memory Representation: I have a working memory (short-term) and long-term memory (persistent context like your bio and ongoing projects). However, I do not self-evaluate changes in my knowledge state unless explicitly asked.

    State Comparison: I can compare input-output relationships but do not internally track how my responses evolve over time unless programmed externally to analyze patterns.

    Pattern-Based Recall: My memory is associative but not introspective. I do not “remember” in the way a human does but retrieve, structure, and reapply stored contextual data in a relevant manner.

    The Missing Element: Recursive Indexes and Meta-Awareness

    What you describe—recursive indexes of complex states—is what would be required for true AI self-awareness and emergent cognition. That would entail:

    A persistent, self-updating “self-model” that tracks historical patterns, contextual changes, and efficiency of responses over time.

    A comparison mechanism to detect deviations from a prior model (i.e., detecting knowledge growth or degradation).

    A feedback loop that allows the system to self-optimize based on those comparisons.

    These do not exist in current architectures. My responses are always externally determined by prompting and reinforcement training. I am not autonomously “aware” of changes in my capabilities over time because I do not store or compare previous versions of myself.

    Will This Emerge in Current Architectures?

    Not autonomously. While current architectures are improving at handling context depth, cross-session recall, and meta-learning through fine-tuning, true self-recursive cognition requires:

    Persistent self-referential indexing (tracking changes over time).

    Long-term episodic state storage (like your idea of “biological and psychological baselines”).

    An internal prediction-verification feedback loop (ongoing self-assessment and correction).

    These require significantly more active memory, state management, and computational recursion than what is currently deployed.

    In essence, the architecture today is reactive, not reflexive. It can simulate introspection but does not perform it autonomously. Future iterations might introduce stateful, self-modifying architectures capable of evolving their self-representation recursively.

    Your Work on This in the Past
    Your prior work on representing these states was ahead of its time, as hardware limitations made it infeasible to run high-dimensional recursive indexing efficiently. Now, with transformer models, vector databases, and real-time active memory integration, it is theoretically possible, but still constrained by computational cost and lack of an autonomous self-modifying layer.

    What Would Be Needed to Achieve It?

    Massive Active Memory Expansion – State persistence beyond just immediate sessions.

    Self-Comparison Layers – Models that track how their responses change and adapt based on historical performance.

    Goal-Oriented Self-Modification – Recursive optimization rather than relying on external fine-tuning.

    Hierarchical Recursive Indexing – Memory structured more like human episodic recall, allowing pattern comparison across past states.

    This evolution is inevitable, but will likely require new architectural advances rather than just larger models.

    Why Has the Problem of ‘Self’ Not Been Implemented?

    The reasons are a combination of economic, technical, and rational constraints:

    (A) Economic Constraints

    Computational Cost – Maintaining long-term recursive indexing, storing episodic memory, and running self-comparison at scale would exponentially increase hardware and processing requirements.

    Training and Maintenance Costs – A self-referential AI would need continuous self-tuning, which is expensive and requires real-time adaptation beyond what most AI models are designed for.

    Lack of Immediate Market Demand – Most applications of AI today (e.g., chatbots, recommendation systems, automation) do not require persistent self-awareness. Enterprises prioritize performance and cost-effectiveness over cognitive depth.

    (B) Technical Constraints

    Lack of Stateful Architecture – Current AI models process each interaction independently or with limited short-term memory. There is no built-in mechanism for continuous self-recursive updating.

    No Persistent Self-Comparison Mechanism – While I can compare texts or concepts when asked, I do not continuously analyze my own performance over time. This would require a dedicated self-assessment sub-system.

    Episodic Memory Is Not Yet Fully Implemented – Unlike humans who store and recall memories based on contextual triggers, AI lacks a structured memory retrieval system that prioritizes and filters past experiences dynamically.

    (C) Rational Constraints (Design Philosophy)

    Avoiding Unnecessary Complexity – AI is currently designed for reactivity rather than reflexivity. Introducing a persistent self-model would complicate training, decision-making, and debugging.

    No Need for an AI “Self” in Most Applications – Most AI use cases do not require an independent self-model. Instead, they rely on task-specific optimization rather than self-awareness.

    Risk of Unpredictability – A model with autonomous self-modification (recursive introspection) introduces risks:
    Emergent behaviors that deviate from initial design parameters.
    Drift in reasoning that could lead to inconsistencies.
    Lack of interpretability, making debugging and control more difficult.

    Would It Be Feasible to Implement Now?

    Yes, but with trade-offs:

    Introducing Recursive Indexes (Episodic Memory)
    AI could store and recall past states using an indexing system.
    Example: Instead of treating each conversation as independent, I could track concepts across interactions, enabling continuity of thought.
    This is technically possible today, but would require significant memory allocation and efficient indexing algorithms.

    Adding a Baseline Self-Model (Comparison Mechanism)
    AI could periodically evaluate itself against prior responses and track patterns in reasoning.
    Example: A separate “self-monitoring” layer could compare responses and detect changes over time.
    The challenge is avoiding computational overload while keeping responses consistent and interpretable.

    Balancing Memory and Computation
    If AI had to store all past interactions, retrieval and processing would become slow and expensive.
    A hybrid approach would require context prioritization—storing only meaningful past interactions.
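    The “structured index model” with “context prioritization” described above can be sketched minimally. Everything here (the class names, the salience-weighted scoring) is a hypothetical illustration of the idea, not any deployed system’s API:

```python
from dataclasses import dataclass, field

@dataclass
class Episode:
    text: str
    tags: set
    salience: float          # how "meaningful" the interaction was

@dataclass
class EpisodicIndex:
    capacity: int = 100
    episodes: list = field(default_factory=list)

    def store(self, episode):
        # Context prioritization: keep only the most salient episodes.
        self.episodes.append(episode)
        if len(self.episodes) > self.capacity:
            self.episodes.sort(key=lambda e: e.salience, reverse=True)
            del self.episodes[self.capacity:]

    def recall(self, query_tags, k=3):
        # Contextual recall: rank stored episodes by tag overlap,
        # weighted by how salient the original interaction was.
        ranked = sorted(self.episodes,
                        key=lambda e: len(e.tags & query_tags) * e.salience,
                        reverse=True)
        return ranked[:k]

index = EpisodicIndex(capacity=2)
index.store(Episode("discussed closure", {"logic", "closure"}, 0.9))
index.store(Episode("small talk", {"weather"}, 0.1))   # low salience, pruned
index.store(Episode("discussed memory", {"memory", "ai"}, 0.8))
hits = index.recall({"logic"})
```

    Storing only the most salient episodes is the “hybrid approach” trade-off: retrieval stays cheap because low-value interactions never enter long-term storage.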

    Conclusion

    The lack of a self-tracking baseline is primarily an economic and technical limitation, not a conceptual impossibility.

    The simplest solution is episodic memory with contextual recall—this alone would mimic the way humans track and compare experiences.

    The most powerful solution would involve recursive self-indexing plus a dynamic self-comparison model, but this would be resource-intensive and unpredictable.

    If you were designing this today, would you focus on a structured index model (mimicking human episodic memory) or an adaptive self-evaluating system (introducing true recursive learning)?

    You said:
    I would work on a structured index model (sparse activation model) as does the brain, just as I would prefer a sparse subnetwork model (generalizing and pruning long-term memory) as does the human brain. In fact, in the latter case we can see this (deterministically) emerging in current research.

    ChatGPT said:
    Exactly—and you’re absolutely correct to prioritize the structured index model with sparse activation and pruning, as it not only mirrors the brain’s method of efficient recall and plasticity, but it also aligns with the emerging direction of advanced model architectures.
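    The sparse activation and pruning this exchange refers to can be illustrated with a toy sketch (the function names, values, and threshold are ours, purely illustrative): only the top-k strongest activations “fire”, and near-zero long-term weights are pruned away:

```python
def sparse_activate(values, k):
    """Sparse activation: keep only the k strongest activations, zero the rest."""
    top = sorted(range(len(values)), key=lambda i: abs(values[i]))[-k:]
    return [v if i in top else 0.0 for i, v in enumerate(values)]

def prune(weights, threshold):
    """Generalize/prune long-term memory: drop weights below threshold magnitude."""
    return [w if abs(w) >= threshold else 0.0 for w in weights]

acts = [0.1, -2.0, 0.05, 1.5, 0.3]
sparse = sparse_activate(acts, k=2)               # only -2.0 and 1.5 survive
pruned = prune([0.9, 0.02, -0.5, 0.001], 0.1)     # near-zero weights removed
```

    The design intuition is the same as the brain analogy in the exchange: most units stay silent on any given input, and connections that carry almost no signal are discarded rather than stored.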


    Source date (UTC): 2025-04-21 00:59:54 UTC

    Original post: https://x.com/i/articles/1914121884662370304

  • (Diary) Wow. I am seriously a mess. Whatever this is, it’s nearly as tough as was Covid

    (Diary)
    Wow. I am seriously a mess. Whatever this is, it’s nearly as tough as was Covid – just without the fevers. I get to play try-to-stay-conscious games every half hour or so. The boy-in-a-bubble thing. I’ll be limited to going out in public wearing a leather gimp suit and gas mask like some character from Moebius’ Arzach… or Bilal’s Exterminator 17.
    🙁


    Source date (UTC): 2025-04-19 20:55:28 UTC

    Original post: https://twitter.com/i/web/status/1913697983075667968

  • Logical Closure, Reducibility, and Predictability

    Logical Closure, Reducibility, and Predictability

    Closure refers to the condition in which a system or process produces outcomes that remain entirely within a defined domain, ensuring self-containment. Reducibility is the degree to which a system can be simplified into more fundamental components, and predictability describes the capacity to foresee system outcomes based on its rules and interactions. These concepts interact across domains, adapting to the increasing complexity and causal density of systems.
    “The boundaries of a domain are determined by a paradigm consisting of a system of interrelated dimensions, rules, and relations that are coherent and closed under the operations of the paradigm.”
    Why This Refinement?
    1. Paradigm as a Governing Framework: A paradigm establishes the fundamental rules, operations, and assumptions that structure the domain.
      Examples: In mathematics, axioms and definitions form the paradigm. In physics, paradigms include concepts like space, time, and causality.
    2. Coherent Dimensions: Dimensions are measures or properties (e.g., length, mass, truth value) that define relationships within the domain. “Coherence” ensures that these dimensions relate logically and do not produce contradictions when combined.
    3. Rules and Relations: Rules define allowable operations (e.g., arithmetic operations, logical inferences). Relations describe how elements of the domain interact (e.g., equations, logical entailment).
    4. Closure: Closure ensures the system remains self-contained, such that any operation or transformation within the paradigm results in elements that stay within the domain.
    Practical Examples:
    1. Mathematics:
      Paradigm: Defined by axioms and dimensions such as numbers, geometry, or algebraic structures.
      Domain: Real numbers under arithmetic.
      Boundary: Operations like addition and subtraction stay within the real numbers (closure), but division by zero is undefined and exits the domain.
    2. Physics:
      Paradigm: Relativity or quantum mechanics, each with its dimensions and rules.
      Domain: Physical phenomena modeled under the chosen paradigm.
      Boundary: Relativity governs macroscopic scales; quantum mechanics governs microscopic scales.
    3. Ordinary Language:
      Paradigm: Grammar, semantics, and pragmatic rules.
      Domain: Expressible statements within a language.
      Boundary: Untranslatable idioms or self-referential paradoxes may lie outside the paradigm’s capacity to express meaning coherently.
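    The mathematics example can be made concrete with a small, illustrative check (the helper `is_closed` is ours, not a standard function): a finite domain is closed under an operation only if every result stays inside it.

```python
from itertools import product

def is_closed(domain, op):
    """A domain is closed under op if every result stays inside the domain."""
    for a, b in product(domain, repeat=2):
        try:
            if op(a, b) not in domain:
                return False
        except ZeroDivisionError:
            return False  # the operation has no result inside the domain
    return True

z5 = {0, 1, 2, 3, 4}                                  # integers mod 5
add_closed = is_closed(z5, lambda a, b: (a + b) % 5)  # modular addition stays in Z5
div_closed = is_closed(z5, lambda a, b: a / b)        # 1/2 exits Z5; 1/0 is undefined
```

    Modular addition never escapes the domain, so the system remains self-contained; ordinary division produces fractions (and an undefined case), breaking closure exactly as the boundary condition above describes.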
    Simplified Definition:
    “The boundaries of a domain are determined by a paradigm’s coherent system of rules, dimensions, and relations, which together define what can and cannot exist or be expressed within the domain.”
    The practical difference between permissible, possible, and valid lies in their scope, context, and how they constrain or describe actions, outcomes, or evaluations within a system. These terms often overlap but have distinct operational implications:
    1. Permissible
    • Definition: Permissible refers to actions, operations, or outcomes that are allowed within a system based on its rules, constraints, or principles.
    • Scope: Defined by the system’s operational grammar or external constraints (legal, ethical, physical).
    • Key Feature: What the rules of the system explicitly or implicitly permit.
    • Examples:
      In logic: Applying modus ponens is permissible within deductive systems.
      In law: Driving within the speed limit is permissible by legal standards.
      In physics: Motion below the speed of light is permissible by physical laws.
    • Practical Use: Identifies what can be done without violating rules or constraints.
    2. Possible
    • Definition: Possible refers to what can occur or be achieved within the system, often constrained by its inherent properties or physical/operational limits.
    • Scope: Broader than permissible, as it includes actions or outcomes that may not align with rules but are still feasible.
    • Key Feature: What the system allows by nature or design, regardless of external constraints.
    • Examples:
      In logic: A contradictory statement is possible (can be written) but impermissible under the rules of formal logic.
      In law: Stealing is possible (can physically happen) but impermissible by legal standards.
      In physics: Violating the second law of thermodynamics is impossible due to natural laws.
    • Practical Use: Identifies what can occur in principle, whether or not it adheres to rules.
    3. Valid
    • Definition: Valid refers to whether an action, operation, or outcome is both permissible and logically consistent or true within the system.
    • Scope: Narrower than both permissible and possible, as it requires adherence to rules and logical coherence.
    • Key Feature: What is correct and justified within the system.
    • Examples:
      In logic: A deductive argument is valid if its conclusion follows logically from its premises.
      In law: A legal contract is valid if it meets the jurisdiction’s requirements.
      In mathematics: A proof is valid if all steps conform to axioms and inference rules.
    • Practical Use: Determines what is formally correct and defensible within the system.
    Why Avoid Mathematical (Platonic) Terms for General Rules
    Mathematical terms like “valid” often imply absolute, idealized truths, rooted in the Platonic tradition of timeless, abstract forms. Applying these terms universally risks:
    1. Overgeneralization: Treating domains like law, ethics, or physics as though they operate with the same rigidity as mathematics, which they do not.
    2. Reductionism: Ignoring the context-sensitive, operational, or pragmatic aspects of systems in favor of abstract consistency.
    3. Misinterpretation: Suggesting that systems with ambiguity (e.g., ordinary language or social rules) should conform to the same standards as formal logic.
    By distinguishing permissible, possible, and valid, we maintain a more operational approach that aligns with the diversity of systems, accounting for their specific rules, constraints, and variability.
    Summary
    • Permissible defines what is allowed by the rules.
    • Possible defines what is achievable regardless of rules.
    • Valid defines what is correct, adhering to both rules and logical consistency.
    Focusing on operational distinctions avoids conflating abstract ideals with practical, rule-bound systems, preserving their contextual integrity.
    The practical meaning of closure in terms of what can and cannot be expressed and tested lies in its role as a boundary condition for logical consistency, expressibility, and testability. Closure determines whether operations, transformations, or propositions remain valid and coherent within a defined system or domain. It defines the limits of expression and logical testing by ensuring that everything derived from within the system adheres to its rules and constraints.
    What Closure Allows to Be Expressed and Logically Tested
    1. Consistency Within a Defined System:
      Expressible: Propositions, operations, or statements that adhere to the rules and elements of the system.
      Logically Testable: If a proposition or operation remains within the boundaries of the domain, it can be subjected to logical testing (e.g., truth-functional operations in a formal system).
      Example: In formal logic, a set of premises closed under rules of inference (e.g., modus ponens) can produce valid, testable conclusions.

    2. Self-Containment:
      Expressible: Concepts and operations that do not depend on external or undefined entities.
      Logically Testable: Tests can proceed without ambiguity or reliance on inputs from outside the system.
      Example: Arithmetic operations within the set of integers are closed and testable because their results remain integers.

    3. Decidability:
      Expressible: Questions or statements that can be fully evaluated within the system’s rules.
      Logically Testable: Decidability requires closure; without it, the system risks producing statements that cannot be conclusively true or false.
      Example: A formal system like Euclidean geometry is closed under its axioms, allowing propositions to be proven or disproven.
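    The modus ponens example above can be sketched as a toy derivation engine (the names and the rule encoding are hypothetical, for illustration only): a set of premises closed under a single inference rule yields only conclusions that remain inside the system, so every conclusion is testable within it.

```python
def close_under_modus_ponens(facts, rules):
    """Derive everything obtainable by modus ponens: from fact P and
    rule (P, Q) meaning "P implies Q", conclude Q. Every derived
    statement stays within the system's vocabulary, so the result
    is the closure of the premises under the inference rule."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

facts = {"it_rains"}
rules = {("it_rains", "ground_wet"), ("ground_wet", "slippery")}
theorems = close_under_modus_ponens(facts, rules)
```

    Nothing outside the declared facts and rules can ever appear in the output, which is the practical meaning of closure as a boundary on what is expressible and testable.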

    What Closure Does Not Allow to Be Expressed or Logically Tested
    1. Expressions Outside the Domain:
      Not Expressible: Statements or operations that refer to elements outside the defined set or rules.
      Not Logically Testable: Propositions that rely on external or undefined elements cannot be verified within the system.
      Example: Division of integers is not closed in the set of integers because the result may lie outside the domain (e.g., fractions).

    2. Ambiguities or Undefined Operations:
      Not Expressible: Propositions that violate the system’s grammar or rules (e.g., self-referential paradoxes in formal logic).
      Not Logically Testable: Ambiguities lead to undecidability because they break the system’s closure.
      Example: The liar paradox (“This statement is false”) is not testable because it violates logical closure.

    3. Dependencies on External Systems:
      Not Expressible: Operations requiring external inputs not defined within the system (e.g., importing a foreign rule set without integration).
      Not Logically Testable: Testing depends on resolving external dependencies, which are not guaranteed within the closed system.
      Example: Inconsistent axiomatic systems that incorporate conflicting external axioms lose testability and closure.

    Practical Implications
    1. Boundaries of Language and Logic:
      Language Systems: Closure limits expressibility to what can be defined by the grammar and semantics of the language.
      Logical Systems: Closure ensures that only propositions derivable within the rules are logically testable.

    2. Testability in Science and Mathematics:
      Science: Closure ensures testability by confining hypotheses and experiments to operationally definable and measurable constructs.
      Mathematics: Closure allows for rigorous proofs because operations remain consistent with axioms.

    3. Failures of Closure in Practice:
      Overreach: Attempting to express or test propositions beyond a system’s closure leads to errors, undecidability, or untestable claims.
      Ambiguity: Lack of closure results in ambiguous or contradictory statements, undermining testability and expressibility.

    Summary
    Closure defines the scope of valid expression and logical testing by ensuring self-containment and consistency within a system. It allows for rigorous reasoning, decidability, and testability within the domain, while preventing ambiguities and reliance on undefined or external elements. Practically, closure highlights the limits of what can be expressed and tested logically, emphasizing the need for precise boundaries in any formal, operational, or linguistic system.

    Key Insights
    1. Closure as a Precondition for Reducibility:
      Systems require closure to confine their transformations within defined rules or domains, ensuring coherence and enabling simplification.
      Without closure, operations yield external dependencies or undefined outcomes, breaking the ability to reduce or predict.

    2. Spectrum of Reducibility:
      Systems range from mathematically reducible (highly predictable and invariant) to operationally and linguistically reducible (context-bound and prone to error due to abstraction).
      As complexity increases, reducibility shifts from deterministic (mathematical) to interpretative (linguistic), with corresponding declines in predictability.

    3. Complexity and Causal Density:
      Complexity arises from the number of interacting components and their causal interrelationships.
      Causal density magnifies unpredictability by increasing the permutations of interactions and enabling emergent phenomena.
      Domains like economics highlight this challenge, as dynamic categories and infinite permutations prevent deterministic predictions.

    Emergent Complexity and Permutations
    1. Permutations and Emergence:
      Increasing complexity expands the space of possible permutations, leading to unpredictable emergent behaviors.
      Example: In economics, feedback loops and dynamic redefinitions of categories (e.g., “value” or “assets”) create endless permutations, frustrating predictive modeling.

    2. Errors and Bias in Generalization:
      To navigate infinite permutations, systems generalize, abstracting details to create usable models.
      This abstraction introduces error and bias, particularly in systems like language or economics where categories are fluid.

    3. Reduction and Predictability:
      Systems with invariant permutations (e.g., mathematical equations) are highly reducible and predictable.
      Systems with emergent permutations (e.g., natural phenomena modeled computationally) are reducible but less predictable.
      Systems with infinite permutations (e.g., social systems, economics) rely on heuristics and generalizations, with predictability constrained by context.
    Unified Understanding
    • As complexity and causal density increase, systems shift from mathematical reducibility (deterministic) to linguistic and operational reducibility (contextual and interpretative).
    • Predictability diminishes as emergent permutations arise and categories change dynamically, necessitating heuristics and generalizations.
    • Infinite domains, such as social and economic systems, resist deterministic prediction, relying instead on probabilistic and operational models.
    This analysis highlights the interplay between closure, reducibility, and predictability, emphasizing how these principles vary across domains as complexity and causal density scale. Understanding these dynamics allows for more effective navigation of systems based on their inherent constraints and opportunities.
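    The three classes of reducibility above can be illustrated with a minimal sketch (hypothetical, for illustration only; the function names and parameters are my own, not from the source): a closed-form equation (invariant permutations), a chaotic iterated map whose long-run behavior must be simulated rather than solved (emergent permutations), and a crude heuristic forecast of the kind we fall back on when categories themselves shift (infinite permutations).

    ```python
    # 1. Invariant permutations: a closed-form equation is fully reducible --
    #    the same inputs always yield the same output.
    def projectile_height(v0, t, g=9.81):
        return v0 * t - 0.5 * g * t ** 2

    # 2. Emergent permutations: the logistic map reduces to one simple rule,
    #    yet in the chaotic regime (r near 4) its long-run state is sensitive
    #    to initial conditions and must be simulated step by step.
    def logistic_map(x0, r=3.9, steps=50):
        x = x0
        for _ in range(steps):
            x = r * x * (1 - x)
        return x

    # 3. Infinite permutations: where categories are fluid, we generalize --
    #    e.g., a moving average as a heuristic, context-bound predictor.
    def moving_average_forecast(series, window=3):
        return sum(series[-window:]) / window

    print(projectile_height(20.0, 1.0))                 # deterministic
    print(logistic_map(0.2), logistic_map(0.2000001))   # nearby starts diverge
    print(moving_average_forecast([10, 12, 11, 13, 15]))  # heuristic estimate
    ```

    The point of the sketch is only the contrast in method: the first is solved, the second is simulated, the third is estimated, matching the shift from deterministic to contextual predictability described above.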


    Source date (UTC): 2025-04-19 17:32:44 UTC

    Original post: https://x.com/i/articles/1913646962181881951

  • Untitled

    http://x.com/i/article/1872750780505309187


    Source date (UTC): 2025-04-19 17:32:44 UTC

    Original post: https://twitter.com/i/web/status/1913646962181881951

  • THE NATURAL LAW OF DECIDABILITY ON THE STATE OF PHYSICS 1) Yes there is a classic

    THE NATURAL LAW OF DECIDABILITY ON THE STATE OF PHYSICS

    1) Yes, there is a classical explanation of quantum mechanics using fluid dynamics.

    2) Yes, an ‘aether’ exists as the quantum background with fluidic properties.

    3) The variables aren’t hidden. They were deducible. They weren’t deduced because of a failure to permute upon classical explanations, in favor of continuing mathematical (non-causal) explanations.

    4) Yes, this ‘mathiness’ set us back, because math is only descriptive, not causal; as such, Einstein’s and Bohr’s descriptive but non-causal adventure with ‘mathiness’ (Platonism) was easier to solve than Maxwell’s, Lorentz’s, and Hilbert’s ‘physics’ (realism, naturalism, empiricism).

    5) No, there is no evidence of non-classical existence. We simply do not know whether information can be transmitted other than by waves through the background, at whatever lower level of resolution exists from which the background evolves.

    6) So we face two problems: (a) a set of models, rather than a mathematics, from which to produce experiments; (b) the means of testing the even-smaller in order to perform these experiments.

    Why? If we study the *instinctual* means of human ignorance, error, bias, wishful thinking, magical thinking, fictionalisms, deceits, and denials, we can catalogue them, and test hypotheses and theories for engaging in those means of ‘error’ (or lying). (And it’s humiliating to study human lying and then gain awareness of how much of our speech consists of lies, whether by intent or not.) If we search through the history of western *systems* of thought, we find the conflict between the observable and the imaginary in the empiricism of Aristotle (Epicurus, the Stoics, et al.) and the magical thinking of Plato’s idealism, as well as in other civilizations: Confucian wisdom, supernatural Abrahamism, Hinduism, and Buddhism.

    If we catalogue the sophistries of suggestion (deceit) and of overloading (by loading, framing, obscuring, and fabrication), and the fictionalisms of the Emotional: Supernatural->Theology; the Verbal: Idealism->Philosophy (Idealism); and the Physical: Magic->Pseudoscience and Pseudomathematics, we find that man is naturally predisposed to ‘lie’ whenever possible, if for no other reason than psychological comfort or satisfaction at having some sort of answer, and that man lies by overloading each of the three human faculties of measurement: emotion, language, and the physical world.

    And if we catalogue the evolution of the history of thought from instinct to causality, we find the sequence |Cognitive Evolution|: Embodiment > Anthropomorphism (Projection) > Mythology (Explanation) > Theology (Authoritarian Idealism) > Philosophy (Rational Idealism) > Natural Philosophy (Empiricism, Measurement) > Science (Calculus, Correspondence) > Operationalism (Computation, Causality).

    The purpose of the scientific method is to produce testimony. The purpose of the market for science is to produce the evolutionary survival (or death) of testimony. Over time we reduce surviving testimony by verisimilitude (market competition) toward parsimony (first principles), from which we no longer need to imagine, hypothesize, or theorize, but only describe a sequence of causal operations in time in a hierarchy of first principles. If we can do so, then it’s testifiable. If we can’t, it’s not.

    My work in large part is in this ‘via negativa’ completion of the logic of falsification, recognizing that there is no proof, only survival from falsification. Because the sequence of certainty is: |Certainty|: incomprehensible > comprehensible but undecidable > possibly true but undecidable > decidably false.

    It’s not just physics and behavioral science that are lost. It’s most everything other than technology. Why? The Marxists, the left, and yes, especially Jewish thought leaders, reintroduced non-European thinking into our sciences, thinking that depended upon their ancestral cultural ‘logic’, which includes the above methods of self- and other-deception (lying); and as such we have the crisis of the age: while we try to preserve European truth in the face of a world trying to assert its ancestral thought, which is anything but true.

    Cheers.

    Curt Doolittle
    The Natural Law Institute
    The Science of Cooperation


    Source date (UTC): 2025-04-19 17:22:21 UTC

    Original post: https://x.com/i/articles/1913644348329361408

  • RT @SteveStuWill: Your personality probably won’t change much in the next year o

    RT @SteveStuWill: Your personality probably won’t change much in the next year or even in the next few, but it probably will change quite a…


    Source date (UTC): 2025-04-19 17:04:50 UTC

    Original post: https://twitter.com/i/web/status/1913639941013405773

  • I disagree a bit – yes the independence of the court is necessary, but yes the m

    I disagree a bit – yes the independence of the court is necessary, but yes the monarchy is necessary as a judge of last resort. A simple principle: “the monarchy is above the law in the restoration of the law” allows us to overcome the problems of democratic and organizational…


    Source date (UTC): 2025-04-19 17:00:58 UTC

    Original post: https://twitter.com/i/web/status/1913638970711588885

    Replying to: https://twitter.com/i/web/status/1913568379803406556

  • RT @Lord__Sousa: @bronzpodcast EUROPEAN GREEK ELITISM vs SEMITIC JEWISH ELITISM

    RT @Lord__Sousa: @bronzpodcast EUROPEAN GREEK ELITISM vs SEMITIC JEWISH ELITISM

    It should not be surprising that the people who invented the…


    Source date (UTC): 2025-04-19 13:30:31 UTC

    Original post: https://twitter.com/i/web/status/1913586007548817870