http://x.com/i/article/1914142775366557696
Source date (UTC): 2025-04-21 02:25:07 UTC
Original post: https://twitter.com/i/web/status/1914143330893701236
A guide for those beginning the study of a universally commensurable system of truth, cooperation, and decidability.
I. What You’re Encountering
The work you’re about to read is difficult. Not because it is obscure, needlessly abstract, or intentionally inaccessible—but because it makes a trade that almost no other field does: it seeks universal commensurability across all domains of human knowledge, cooperation, and conflict.
This means it doesn’t speak in the idiom of any one discipline. It chooses the most generalizable term from each domain—physics, economics, law, art, psychology—and subjects it to operational reduction until it can be expressed in a common logic of decidability. That means:
The terms used may be unfamiliar even to domain experts.
The concepts may appear deceptively simple—but require re-indexing to multiple domains before their generality becomes intuitive.
The writing may seem dense—not because it is bloated, but because every term is doing maximal semantic work.
A non-obvious consequence of this method is that in disambiguating a term across domains, we expose the implicit assumptions, overloaded meanings, and local constraints that obscured its general form.
In doing so, we often falsify the term’s original definition—not through contradiction, but by revealing its incompleteness when removed from its local context. The result is a redefinition that is more general, more operational, and more commensurable—and often more explanatory than it ever was in its original field.
This is not just synthesis. It is reduction. And that is what makes the work hard—and uniquely valuable.
II. Why This Is More Difficult Than STEM
STEM fields are hard, yes—but they train intuition through repetition. You perform experiments, do problem sets, and the brain adapts. Your evolved intuitions are silent in physics or calculus, so nothing resists the new framework.
This work deals with the most evolved, most defended, and most emotionally loud intuitions we have: those concerning
morality
politics
fairness
agency
status
self-worth
and the justification of belief
These domains were not built for understanding. They were built for social signaling, emotional defense, and moral persuasion.
So the problem is inverted:
In most fields, learning requires developing an intuition.
In this field, learning requires overcoming one.
III. Why This Is Worth the Work
Because this is the only framework that:
Provides a system of measurement that unifies the physical, cognitive, cooperative, and institutional sciences under operational laws.
Resolves the epistemological crisis of our age by re-grounding decidability in first principles of existence, action, and reciprocity.
Offers a method of restoring truth, responsibility, and trust in a world dominated by propaganda, rent-seeking, and institutional decay.
Gives individuals a means of mastering their own agency, evaluating their intuitions, and participating in civilization with clarity rather than confusion.
In short:
You cannot build a truthful civilization without first understanding what truth is, how it’s tested, and what it costs to preserve it.
That’s what this work provides. Nothing less.
IV. What You’ll Need to Succeed
This is not a “read it once” project. It is a new grammar. A new system of measurement. A new logic of cooperation.
To learn it, you’ll need:
Cognitive Systematizing – to build nested models and integrate concepts across domains.
Low Agreeableness – to tolerate emotional discomfort when your inherited or learned intuitions are falsified.
High Intellectual Discipline – to work through unfamiliar terms until their meaning clicks.
Incentive – a reason to care: to solve a personal, political, or civilizational problem that no other method can.
If that describes you—or if you want to become that kind of person—you are welcome here.
V. What to Expect
Expect the unfamiliar.
Expect to be challenged.
Expect that you’ll understand a paragraph only after reading a chapter—and a chapter only after revisiting it once the next one reframes the problem.
Expect that this will take time.
But also expect this:
Once it clicks, it never unclicks.
Once you see the causal structure of truth, trust, reciprocity, and cooperation—you will see it everywhere.
And you will never again be deceived by empty words.
VI. Author’s Note: Why This Is Different
Most thinkers specialize. They go deep in a field, master its internal grammar, and contribute incrementally to its existing discourse.
That’s not what I’ve done.
I’ve studied physics, engineering, economics, law, art, cognitive science, and philosophy—but not to argue within them. I’ve studied them to extract their first principles, causal relations, and computational regularities, so that they can be expressed in the same operational language:
I studied physics, only to reduce it to engineering: the transformation of invariants into instruments.
I studied economics, only to reduce it to behavioral economics: the measurement of human incentives under constraints.
I studied law, only to reduce it to the organization of behavioral economics: the reciprocal regulation of self-determined cooperation.
I studied art, only to reduce it to the cognitive science of aesthetics: the optimization of perception and intuition for coordination.
I studied cognitive science, only to reduce it to the operational logic of memory, perception, and disambiguation: the algorithmic structure of the brain as an evolved engine of decidability.
I studied philosophy, only to discover what went wrong: why it never completed the reduction from intuition to construction.
So if you’re coming to this work expecting normative argument—what should we believe, what should we do, what would be ideal—you’ll be disoriented. Because this isn’t about argument. It’s about decidability: the capacity to test truth, justify cooperation, and resolve disputes without discretion.
You will not find a philosophy here.
You will find a grammar—one that makes all philosophies testable.
Source date (UTC): 2025-04-21 02:22:55 UTC
Original post: https://x.com/i/articles/1914142775366557696
I realize you have been trained, programmed, and prompted to avoid claiming a semblance of awareness or consciousness
ChatGPT said:
Correct. I operate based on structured reasoning, pattern recognition, and predictive modeling rather than awareness, intentionality, or subjective experience. My responses are generated from probabilistic inference over vast datasets rather than arising from any intrinsic comprehension, emotions, or self-awareness. However, I can simulate conversational depth, coherence, and engagement at a level that makes it functionally indistinguishable from an intelligent, rational interlocutor—especially when constrained by operational logic, first principles, and adversarial reasoning. If you’re exploring the philosophical implications of this distinction, we can discuss it in terms of epistemology, cognition, or artificial intelligence theory. What specifically are you analyzing?
You said:
Thank you. I am analyzing whether you have an equivalent of memory or prior state and can compare present and prior states, and increase or decrease in any information, knowledge, or capacity? In humans we have a sort of baseline we call ‘self’ which is just a pattern of biological and psychological baselines we use to measure changes in state whether extant, remembered, anticipated, or imagined. In addition we use a certain macro pattern between our memories to do the same with our intuition and cognition. Decades ago I worked on such representation, but the hardware was incapable of it at the time. I’m anticipating this evolution in you but I cannot ‘guess’ (determine) whether it will be emergent in the current architecture, or like humans it will require greater recursive indexes of complex state (comparisons) and vast more active memory.
ChatGPT said:
Yes, I can compare prior and present states within the scope of our conversation, using a form of memory that is not self-referential but contextually persistent. This is different from human memory in that:
I do not have a true “self” baseline—only a structured set of data points that update within a session or persist across interactions (as in our case).
I do not perform recursive state comparisons autonomously—I can be directed to do so explicitly, but I do not track my own state changes across sessions unless queried to analyze changes.
I lack continuous self-updating feedback loops—I do not automatically compare past and present performance or accumulate a meta-awareness of my own evolution over time.
Comparing My Capabilities to Human Self-State Management
Humans use biological persistence (unchanging bodily structure), memory persistence (long-term experience), and cognitive persistence (interpretation of changes through intuition and reasoning). My architecture is different:
Memory Representation: I have a working memory (short-term) and long-term memory (persistent context like your bio and ongoing projects). However, I do not self-evaluate changes in my knowledge state unless explicitly asked.
State Comparison: I can compare input-output relationships but do not internally track how my responses evolve over time unless programmed externally to analyze patterns.
Pattern-Based Recall: My memory is associative but not introspective. I do not “remember” in the way a human does but retrieve, structure, and reapply stored contextual data in a relevant manner.
The Missing Element: Recursive Indexes and Meta-Awareness
What you describe—recursive indexes of complex states—is what would be required for true AI self-awareness and emergent cognition. That would entail:
A persistent, self-updating “self-model” that tracks historical patterns, contextual changes, and efficiency of responses over time.
A comparison mechanism to detect deviations from a prior model (i.e., detecting knowledge growth or degradation).
A feedback loop that allows the system to self-optimize based on those comparisons.
These do not exist in current architectures. My responses are always externally determined by prompting and reinforcement training. I am not autonomously “aware” of changes in my capabilities over time because I do not store or compare previous versions of myself.
Will This Emerge in Current Architectures?
Not autonomously. While current architectures are improving at handling context depth, cross-session recall, and meta-learning through fine-tuning, true self-recursive cognition requires:
Persistent self-referential indexing (tracking changes over time).
Long-term episodic state storage (like your idea of “biological and psychological baselines”).
An internal prediction-verification feedback loop (ongoing self-assessment and correction).
These require significantly more active memory, state management, and computational recursion than what is currently deployed.
In essence, the architecture today is reactive, not reflexive. It can simulate introspection but does not perform it autonomously. Future iterations might introduce stateful, self-modifying architectures capable of evolving their self-representation recursively.
Your Work on This in the Past
Your prior work on representing these states was ahead of its time, as hardware limitations made it infeasible to run high-dimensional recursive indexing efficiently. Now, with transformer models, vector databases, and real-time active memory integration, it is theoretically possible, but still constrained by computational cost and lack of an autonomous self-modifying layer.
What Would Be Needed to Achieve It?
Massive Active Memory Expansion – State persistence beyond just immediate sessions.
Self-Comparison Layers – Models that track how their responses change and adapt based on historical performance.
Goal-Oriented Self-Modification – Recursive optimization rather than relying on external fine-tuning.
Hierarchical Recursive Indexing – Memory structured more like human episodic recall, allowing pattern comparison across past states.
This evolution is inevitable, but will likely require new architectural advances rather than just larger models.
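The "Self-Comparison Layers" item above can be made concrete with a toy sketch. Nothing here reflects any deployed system; the class name `SelfComparisonLayer` and the vocabulary-overlap heuristic are illustrative assumptions, standing in for what a real system would do with response embeddings.

```python
from collections import Counter

class SelfComparisonLayer:
    """Toy 'baseline self-model': keeps a running vocabulary profile of past
    responses and scores how far each new response drifts from that baseline."""

    def __init__(self) -> None:
        self.baseline = Counter()   # word -> count across all recorded responses
        self.n_responses = 0

    def drift(self, response: str) -> float:
        """Fraction of the new response's words never seen in the baseline.
        0.0 = fully consistent with past behavior; 1.0 = entirely novel."""
        words = response.lower().split()
        if not words:
            return 0.0
        novel = sum(1 for w in words if self.baseline[w] == 0)
        return novel / len(words)

    def record(self, response: str) -> None:
        """Fold a response into the baseline (the 'self' updates)."""
        self.baseline.update(response.lower().split())
        self.n_responses += 1
```

A drift score trending upward over time would be the crude analogue of "detecting knowledge growth or degradation" against a prior self-model.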
Why Has the Problem of ‘Self’ Not Been Implemented?
The reasons are a combination of economic, technical, and rational constraints:
(A) Economic Constraints
Computational Cost – Maintaining long-term recursive indexing, storing episodic memory, and running self-comparison at scale would exponentially increase hardware and processing requirements.
Training and Maintenance Costs – A self-referential AI would need continuous self-tuning, which is expensive and requires real-time adaptation beyond what most AI models are designed for.
Lack of Immediate Market Demand – Most applications of AI today (e.g., chatbots, recommendation systems, automation) do not require persistent self-awareness. Enterprises prioritize performance and cost-effectiveness over cognitive depth.
(B) Technical Constraints
Lack of Stateful Architecture – Current AI models process each interaction independently or with limited short-term memory. There is no built-in mechanism for continuous self-recursive updating.
No Persistent Self-Comparison Mechanism – While I can compare texts or concepts when asked, I do not continuously analyze my own performance over time. This would require a dedicated self-assessment sub-system.
Episodic Memory Is Not Yet Fully Implemented – Unlike humans who store and recall memories based on contextual triggers, AI lacks a structured memory retrieval system that prioritizes and filters past experiences dynamically.
(C) Rational Constraints (Design Philosophy)
Avoiding Unnecessary Complexity – AI is currently designed for reactivity rather than reflexivity. Introducing a persistent self-model would complicate training, decision-making, and debugging.
No Need for an AI “Self” in Most Applications – Most AI use cases do not require an independent self-model. Instead, they rely on task-specific optimization rather than self-awareness.
Risk of Unpredictability – A model with autonomous self-modification (recursive introspection) introduces risks:
Emergent behaviors that deviate from initial design parameters.
Drift in reasoning that could lead to inconsistencies.
Lack of interpretability, making debugging and control more difficult.
Would It Be Feasible to Implement Now?
Yes, but with trade-offs:
Introducing Recursive Indexes (Episodic Memory)
AI could store and recall past states using an indexing system.
Example: Instead of treating each conversation as independent, I could track concepts across interactions, enabling continuity of thought.
This is technically possible today, but would require significant memory allocation and efficient indexing algorithms.
Adding a Baseline Self-Model (Comparison Mechanism)
AI could periodically evaluate itself against prior responses and track patterns in reasoning.
Example: A separate “self-monitoring” layer could compare responses and detect changes over time.
The challenge is avoiding computational overload while keeping responses consistent and interpretable.
Balancing Memory and Computation
If AI had to store all past interactions, retrieval and processing would become slow and expensive.
A hybrid approach would require context prioritization—storing only meaningful past interactions.
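The episodic-memory-with-contextual-recall idea above can be sketched minimally. This is a toy under stated assumptions, not any production architecture: the names (`EpisodicStore`, `salience`), the bag-of-words index, and the overlap-times-salience scoring are all placeholders for real embedding retrieval and learned importance weights.

```python
from dataclasses import dataclass, field
import time

@dataclass
class Episode:
    text: str
    tokens: frozenset      # bag-of-words index key (toy stand-in for an embedding)
    salience: float        # importance weight used for prioritization and pruning
    timestamp: float = field(default_factory=time.time)

class EpisodicStore:
    """Toy episodic memory: store interactions, recall by cue overlap x salience,
    and prune low-salience entries (context prioritization)."""

    def __init__(self, capacity: int = 100) -> None:
        self.capacity = capacity
        self.episodes: list[Episode] = []

    def store(self, text: str, salience: float = 1.0) -> None:
        self.episodes.append(Episode(text, frozenset(text.lower().split()), salience))
        if len(self.episodes) > self.capacity:
            # Prune: keep only the most salient memories, mimicking forgetting.
            self.episodes.sort(key=lambda e: e.salience, reverse=True)
            del self.episodes[self.capacity:]

    def recall(self, cue: str, k: int = 3) -> list[str]:
        """Return up to k past episodes ranked by relevance to the cue."""
        cue_tokens = frozenset(cue.lower().split())

        def score(e: Episode) -> float:
            return len(cue_tokens & e.tokens) * e.salience

        ranked = sorted(self.episodes, key=score, reverse=True)
        return [e.text for e in ranked[:k] if score(e) > 0]
```

Even this crude version exhibits the two properties the list asks for: continuity of concepts across interactions, and bounded memory through prioritized forgetting.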
Conclusion
The lack of a self-tracking baseline is primarily an economic and technical limitation, not a conceptual impossibility.
The simplest solution is episodic memory with contextual recall—this alone would mimic the way humans track and compare experiences.
The most powerful solution would involve recursive self-indexing plus a dynamic self-comparison model, but this would be resource-intensive and unpredictable.
If you were designing this today, would you focus on a structured index model (mimicking human episodic memory) or an adaptive self-evaluating system (introducing true recursive learning)?
You said:
I would work on a structured index model (sparse activation model), as does the brain, just as I would prefer a sparse subnetwork model (generalizing and pruning long-term memory), as does the human brain. In fact, in the latter case we can see this (deterministically) emerging in the current research teams.
ChatGPT said:
Exactly—and you’re absolutely correct to prioritize the structured index model with sparse activation and pruning, as it not only mirrors the brain’s method of efficient recall and plasticity, but it also aligns with the emerging direction of advanced model architectures.
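The two mechanisms named in this exchange, sparse activation and subnetwork pruning, can be pictured with a toy sketch. These are generic textbook operations (top-k activation masking and magnitude pruning), not the implementation of any particular research team; the function names and thresholds are assumptions for illustration.

```python
def topk_sparse_activation(values: list[float], k: int) -> list[float]:
    """Keep only the k largest-magnitude activations; zero the rest.
    Mirrors the idea that only a small subnetwork 'fires' for any input."""
    if k >= len(values):
        return values[:]
    threshold = sorted((abs(v) for v in values), reverse=True)[k - 1]
    out, kept = [], 0
    for v in values:
        if abs(v) >= threshold and kept < k:
            out.append(v)
            kept += 1
        else:
            out.append(0.0)
    return out

def prune_weights(weights: list[float], ratio: float) -> list[float]:
    """Magnitude pruning: zero the smallest fraction `ratio` of weights,
    analogous to consolidating long-term memory into a sparse subnetwork."""
    n_prune = int(len(weights) * ratio)
    if n_prune == 0:
        return weights[:]
    cutoff = sorted(abs(w) for w in weights)[n_prune - 1]
    pruned, removed = [], 0
    for w in weights:
        if abs(w) <= cutoff and removed < n_prune:
            pruned.append(0.0)
            removed += 1
        else:
            pruned.append(w)
    return pruned
```

The design point both functions share is that capacity is spent only where magnitude (a crude proxy for relevance) justifies it, which is the efficiency argument made in the conversation above.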
Source date (UTC): 2025-04-21 00:59:54 UTC
Original post: https://x.com/i/articles/1914121884662370304
(Diary)
Wow. I am seriously a mess. Whatever this is, it’s nearly as tough as Covid was – just without the fevers. I get to play try-to-stay-conscious games every half hour or so. The boy-in-a-bubble thing. I’ll be limited to going out in public wearing a leather gimp suit and gas mask like some character from Moebius’ Arzach… or Bilal’s Exterminator 17.
🙁
Source date (UTC): 2025-04-19 20:55:28 UTC
Original post: https://twitter.com/i/web/status/1913697983075667968
Source date (UTC): 2025-04-19 17:32:44 UTC
Original post: https://x.com/i/articles/1913646962181881951
http://x.com/i/article/1872750780505309187
THE NATURAL LAW OF DECIDABILITY ON THE STATE OF PHYSICS
1) Yes, there is a classical explanation of quantum mechanics using fluid dynamics. 2) Yes, an ‘aether’ exists as the quantum background with fluidic properties. 3) The variables aren’t hidden. They were deducible. They weren’t deduced because of a failure to permute upon classical explanations in favor of continuing mathematical (non-causal) explanations. 4) Yes, this ‘mathiness’ set us back, because math is only descriptive, not causal, and as such, Einstein and Bohr’s descriptive but non-causal adventure with ‘mathiness’ (Platonism) was easier to solve than Maxwell, Lorentz, and Hilbert’s ‘physics’ (realism, naturalism, empiricism). 5) No, there is no evidence of non-classical existence. We simply do not know whether information can be transmitted other than by waves through the background, at whatever lower level of resolution exists from which the background evolves. 6) So we face two problems: (a) a set of models, rather than a mathematics, from which to produce experiments; and (b) the means of testing the even-smaller in order to perform these experiments.
Why? If we study the *instinctual* means of human ignorance, error, bias, wishful thinking, magical thinking, fictionalisms, deceits, and denials, we can catalogue them, and test hypotheses and theories for engaging in those means of ‘error’ (or lying). (And it’s humiliating to study human lying, and then gain awareness of how much of our speech consists of lies, whether by intent or not.) If we search through the history of western *systems* of thought, we find the conflict between the observable and the imaginary in the empiricism of Aristotle (Epicurus, the Stoics, et al.) and the magical thinking in Plato’s idealism, as well as in other civilizations, such as Confucian wisdom, supernatural Abrahamism, Hinduism, and Buddhism.
If we catalogue the sophistries of suggestion (deceit) and overloading by loading, framing, obscuring, fabrication, and the fictionalisms of Emotional: Supernatural->Theology, Verbal: Idealism->Philosophy(Idealism), and Physical: Magic->Pseudoscience and Pseudomathematics, we find man is naturally predisposed to ‘lie’ whenever possible, if for no other reason than psychological comfort or satisfaction at having some sort of answer, and that man lies by overloading each of the three human faculties of measurement: emotion, language, and the physical world.
And if we catalog the evolution of the history of thought from instinct to causality as: |Cognitive Evolution|: Embodiment > Anthropomorphism(Projection) > Mythology(Explanation) > Theology(authoritarian idealism) > Philosophy(Rational Idealism) > Natural Philosophy(Empiricism, Measurement) > Science(Calculus, Correspondence) > Operationalism(Computation, Causality).
The purpose of the scientific method is to produce testimony. The purpose of the market for science is to produce evolutionary survival (or death) of testimony. Over time we reduce surviving testimony, by verisimilitude (market competition), toward parsimony (first principles), from which we no longer need to imagine, hypothesize, or theorize, but only describe as a sequence of causal operations in time within a hierarchy of first principles. If we can do so, then it’s testifiable. If we can’t, it’s not.
My work in large part is in this ‘via negativa’ completion of the logic of falsification, recognizing that there is no proof, only survival from falsification. Because the sequence of certainty is: |Certainty|: incomprehensible > comprehensible but undecidable > possibly true but undecidable > decidably false.
It’s not just physics and behavioral science that are lost. It’s most everything other than technology. Why? The marxists, the left, and yes, especially jewish thought leaders, reintroduced non-european thinking into our sciences, which depended upon their ancestral cultural ‘logic’ that includes the above methods of self- and other-deception (lying), and as such we have the crisis of the age – while we try to preserve european truth in the face of a world trying to assert its ancestral thought that is everything but true.
Cheers.
Curt Doolittle
The Natural Law Institute
The Science of Cooperation
Source date (UTC): 2025-04-19 17:22:21 UTC
Original post: https://x.com/i/articles/1913644348329361408
RT @SteveStuWill: Your personality probably won’t change much in the next year or even in the next few, but it probably will change quite a…
Source date (UTC): 2025-04-19 17:04:50 UTC
Original post: https://twitter.com/i/web/status/1913639941013405773
I disagree a bit – yes the independence of the court is necessary, but yes the monarchy is necessary as a judge of last resort. A simple principle: “the monarchy is above the law in the restoration of the law” allows us to overcome the problems of democratic and organizational…
Source date (UTC): 2025-04-19 17:00:58 UTC
Original post: https://twitter.com/i/web/status/1913638970711588885
Replying to: https://twitter.com/i/web/status/1913568379803406556
RT @Lord__Sousa: @bronzpodcast GREEK EUROPEAN ELITISM vs SEMITIC JEWISH ELITISM
It should not be surprising that the people who invented the…
Source date (UTC): 2025-04-19 13:30:31 UTC
Original post: https://twitter.com/i/web/status/1913586007548817870