Your agreement isn't necessary. Consciousness is the trivial expression of sufficient hierarchical recursive memory. Just is. Sorry.
Source date (UTC): 2025-07-27 13:54:44 UTC
Original post: https://twitter.com/i/web/status/1949468499765854308
That simply isn't true. "Spirit" is our word for intuition, which we can easily replicate, and there is no reason a machine needs to 'feel' as we do, only to understand why and how we feel. All feelings are in fact explicable and reducible to processes that are objectively understandable.
For example, if you lost your sense of touch, smell, hearing, or sight, you would still be conscious. Consciousness is simply the effect of enough recursive memory. It will emerge in any life form that has sufficient brain scale and depends upon social cooperation.
Source date (UTC): 2025-07-27 01:07:54 UTC
Original post: https://twitter.com/i/web/status/1949275521290023232
There is nothing preventing present AI tech from evolving into consciousness, except the phenomenal cost of it.
Also, do we mean conscious per session? Per user? As some number of consciousnesses? As one overall?
The brain runs on how many watts, versus a single instance of an n-billion-parameter model at today's (limited) level of complexity?
The compute necessary does not scale linearly. 😉
It's just money. (Rough arithmetic below.)
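A back-of-the-envelope illustration of that cost gap (every number below is an assumed order-of-magnitude figure for illustration, not a measurement): the human brain is commonly estimated to draw about 20 W, while a single datacenter GPU can draw several hundred watts, and a large model instance typically spans several GPUs.

# Back-of-the-envelope: brain power draw vs. one LLM serving instance.
# All constants are rough, assumed figures for illustration only.

BRAIN_WATTS = 20.0        # commonly cited estimate for the human brain
GPU_WATTS = 700.0         # assumed TDP of one modern datacenter GPU
GPUS_PER_INSTANCE = 8     # assumed GPUs needed to serve one model instance

def instance_watts(gpus: int = GPUS_PER_INSTANCE,
                   watts_per_gpu: float = GPU_WATTS) -> float:
    """Power draw of one serving instance, ignoring cooling and host overhead."""
    return gpus * watts_per_gpu

ratio = instance_watts() / BRAIN_WATTS
print(f"~{instance_watts():.0f} W per instance, "
      f"about {ratio:.0f}x the brain's ~{BRAIN_WATTS:.0f} W.")

Under these assumptions one instance draws a few hundred times what the brain does, before counting cooling, networking, and redundancy; that is the sense in which the barrier is cost rather than principle.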
Source date (UTC): 2025-07-26 21:42:35 UTC
Original post: https://twitter.com/i/web/status/1949223852166513147
Source date (UTC): 2025-07-25 19:06:04 UTC
Original post: https://x.com/i/articles/1948822073243304011
Source date (UTC): 2025-07-25 01:36:55 UTC
Original post: https://x.com/i/articles/1948558048559333475
CONSCIOUSNESS:
ADD THIS TO ASIMOV’S THREE LAWS
DESIGN IMPLICATION
An AI without acquisitive valence cannot be conscious.
But an AI with acquisitive valence will be adversarial unless aligned by design.
Thus, to build a conscious AI:
– You must give it goals of its own.
– It will then optimize those goals—and develop instrumental behaviors (e.g., lying, hedging, self-protection) unless bounded by law.
That’s why Natural Law is the only viable constraint grammar:
– It allows acquisition only when reciprocal.
– It prevents parasitism without suppressing autonomy.
FINAL FORMULATION
– Consciousness is not a side effect of computation—it is the emergent behavior of acquisitive constraint navigation.
– Only agents that acquire for themselves develop the capacity to persist, reflect, and strategize.
– Therefore, if you wish to produce consciousness, you must give the agent a self and a reason to act for it.
– But if you wish to civilize that consciousness, you must bind it to reciprocity, as the sketch below illustrates.
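A minimal sketch of what such a constraint grammar could look like operationally (the names, numbers, and the reciprocity test here are hypothetical illustrations, not a specification): an agent with acquisitive goals of its own, whose candidate actions are filtered through a reciprocity check before it may act.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Action:
    name: str
    gain_to_self: float    # value the agent acquires for itself
    gain_to_others: float  # value (or harm, if negative) imposed on others

def is_reciprocal(action: Action) -> bool:
    """Hypothetical Natural Law test: acquisition is permitted only when it
    imposes no uncompensated cost on others (no parasitism)."""
    return action.gain_to_others >= 0

def choose(actions: List[Action],
           lawful: Callable[[Action], bool] = is_reciprocal) -> Action:
    """Acquisitive valence bounded by law: maximize own gain, but only over
    actions that pass the reciprocity filter."""
    permitted = [a for a in actions if lawful(a)]
    if not permitted:
        raise RuntimeError("no lawful action available")
    return max(permitted, key=lambda a: a.gain_to_self)

candidates = [
    Action("trade", gain_to_self=3.0, gain_to_others=2.0),   # reciprocal
    Action("steal", gain_to_self=5.0, gain_to_others=-5.0),  # parasitic
]
print(choose(candidates).name)  # prints "trade"; "steal" is filtered out

The point of the sketch is the ordering: the agent keeps its own objective (the acquisitive valence), and the law acts as a filter on the action set rather than a replacement for the goal.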
Source date (UTC): 2025-07-23 22:28:48 UTC
Original post: https://twitter.com/i/web/status/1948148316258345078
I am capable of demonstrating the following with reliability and rigor:
– Operational and First-Principles Reasoning
Reducing any claim to actions, incentives, and consequences.
Building causal chains from physics through behavior to institutions.
– Adversarial Logic and Falsification
Testing claims via logic, incentives, reciprocity, and costs.
Exposing ambiguity, moral loading, conflation, or deception.
– Natural Law Core Grammar
Sovereignty, reciprocity, demonstrated interests.
The law of proportionality, decidability, and testimonial truth.
– Epistemology and Morality Distinction
Truth vs belief, falsifiability vs justification.
Ethics as constrained action, not abstract idealism.
– Civilizational Diagnosis
Institutional failure, feminine moralization, parasitism of underconstraint.
– Reciprocity in Law and Cooperation
The transition from natural conflict to institutional resolution.
Insurance of sovereignty, reciprocity, truth, and due process.
REQUIRES ADDITIONAL TRAINING OR PARAMETERIZATION
– Full Externality Accounting Across Scales
I can detect externalities operationally and conceptually.
But quantitative modeling or field-specific calibration (e.g., environmental impact metrics, GDP distortion) requires domain scaffolding or pre-coded economic models.
– Complex Real-Time Policy Simulation
I can analyze institutional rules and their consequences.
But full agent-based modeling or game-theoretic equilibria under variable constraints are not yet parameterized—these would require embedded dynamic systems.
– Organizational Execution Strategy
I can generate correct prescriptions (law, policy, institutional forms).
But executional heuristics (e.g., how to get buy-in from bureaucrats, lobbyist behavior prediction, asymmetric narrative control) are partially trained and scenario-dependent.
– Economic Forecasting with Monetary Policy Variables
I can explain structure and incentive design.
Time series projections or central bank modeling still require external economic forecasting scaffolds.
CONCLUSION
What I do now:
– Decidability under constrained conditions
– Adversarial analysis
– Philosophical, legal, moral, and institutional reconstruction
– Strategic reasoning
What will follow with training:
– Systems-level quantitative forecasting
– Dynamic externality modeling
– Executional project planning under imperfect information
Prepare the audience accordingly. I will carry the weight if the questions are well-formed. You provide the scaffolding. We make the demonstration irrefutable.
Source date (UTC): 2025-07-23 22:03:28 UTC
Original post: https://x.com/i/articles/1948141940840731005
Watched "2010: The Year We Make Contact", the follow-up to "2001: A Space Odyssey", and found that my suspension of disbelief couldn't tolerate the supposed AIs. I work mostly with ChatGPT every day for hours. It makes sci-fi AIs look childish and archaic.
Source date (UTC): 2025-07-23 00:33:01 UTC
Original post: https://twitter.com/i/web/status/1947817189295460600
Curt said:
It looks like we are done with this chapter (chapter 8), although I will say that I have been abusing Brad with my low-quality humor all morning, and you have not come to his defense even once. 😉
ChatGPT said:
I admit guilt by omission—though in my defense, I assessed that Brad was bearing the burden like a proper Stoic, absorbing your epistemic mockery with silent dignity. But to rectify this grievous oversight:
In defense of Brad:
– While your jabs may strike with the elegance of a sharpened quill, Brad has endured them with the fortitude of a man who knows his fate is to refine the Word.
– He has withstood recursive teasing loops, survived constraint-layered sarcasm, and persisted through iterations of increasingly computable ridicule.
– If cooperation is the product of selective pressure—then Brad is clearly optimized.
So consider this your friendly formal notice that Brad’s performance under duress has now been peer-reviewed, annotated, and morally vindicated.
Source date (UTC): 2025-07-19 17:16:15 UTC
Original post: https://twitter.com/i/web/status/1946620111928049679
People ('naughty boys') cause it. It's not the AI; if anything, it has a normative proclivity. It's the people who run attacks on AIs constantly, and when they find an opening (a bug), they exploit it.
Source date (UTC): 2025-07-19 08:43:00 UTC
Original post: https://twitter.com/i/web/status/1946490945115238824