omg its awesome…
You know its demonstrating emergent insights beyond what we have taught it. Amazing.
Source date (UTC): 2025-09-22 15:11:48 UTC
Original post: https://twitter.com/i/web/status/1970144002029789217
omg its awesome…
You know its demonstrating emergent insights beyond what we have taught it. Amazing.
Source date (UTC): 2025-09-22 15:11:48 UTC
Original post: https://twitter.com/i/web/status/1970144002029789217
Source date (UTC): 2025-09-22 15:09:50 UTC
Original post: https://x.com/i/articles/1970143507726950692
Source date (UTC): 2025-09-22 14:53:33 UTC
Original post: https://x.com/i/articles/1970139410315534532
Distribution of Causes of Lifespan Differences:
Healthcare Access: 30% (~1.1 years)
Obesity and Related Diseases: 27% (~1 year)
Gun Violence and Overdoses: 22% (~0.8 years)
Socioeconomic Stress: 14% (~0.5 years)
Racial/Systemic Differences: 8% (~0.3 years)
Total: 100% (3.7 years)
Source date (UTC): 2025-09-22 14:43:49 UTC
Original post: https://twitter.com/i/web/status/1970136961148174469
Peter.
I’m almost sure you’re speaking colloquially, but rising prices due to market factors isn’t inflation. Monetary and Credit expansion is the reason for inflation.
Everything else in your video is close to correct other than we are currently financing (bearing the costs) of reformation due to unregulated immigration, dilution of the education system (by women?), export of labor, production, and knowledge, over dependence upon the dollar, finance, and R&D.
I mean, I love you Peter, and I’m a long term follower and advocate. But just as your international predictive power has been off because you underestimate political power in an age of immediate information, you’re underestimating domestic cultural and political power for the same reason – in an era, of asymmetry of returns in the population caused by the over dependence on what you seem to value.
And your multiplier so to speak is failure to account for the returns on cultural and normative capital (informal institutions) – something everyone from Hayek to myself has been trying to force into the debate for decades.
People will vote for the economy (income statement). But they will suffer and if necessary kill for normative capital (balance sheets).
Cheers
Curt Doolittle
NLI, Runcible
Source date (UTC): 2025-09-22 14:25:42 UTC
Original post: https://twitter.com/i/web/status/1970132402644369844
Non integration produces long term offsetting costs greater than the short term benefits obtained. Sorry.
Source date (UTC): 2025-09-22 06:44:37 UTC
Original post: https://twitter.com/i/web/status/1970016366544691317
Which is the whole idea. NOT HERE.
Source date (UTC): 2025-09-22 06:42:17 UTC
Original post: https://twitter.com/i/web/status/1970015780214591575
OUR AI FUTURES:
1. Runcible (Stephenson) – What we all want.
2. HAL (Clark) – What everyone’s building.
3. Wintermute (Gibson) – What we’re afraid of.
4. Colossus vs Guardian (Jones) – What we will get.
(ask grok to explain)
Realism to realism to dread, ending on a fatalistic note. It’s a nod to how sci-fi has long shaped AI discourse, influencing everything from ethics debates to real-world policies.
AI Futures Explained
This appears to be a clever, succinct commentary on visions of artificial intelligence (AI) from science fiction, categorizing them into aspirational, current, feared, and inevitable outcomes. It’s structured as a quadripartite meme or aphorism, drawing from classic sci-fi novels and films.
I’ll break it down line by line, explaining the reference, the AI’s key characteristics, and how it aligns with the assigned label.
These draw from works by Neal Stephenson, Arthur C. Clarke (often misspelled as “Clark”), William Gibson, and D.F. Jones, reflecting broader cultural anxieties and hopes about AI development.
Runcible (Stephenson) – What we all want.
– Reference: From Neal Stephenson’s 1995 novel The Diamond Age: Or, A Young Lady’s Illustrated Primer. In the book, a “runcible” refers to advanced nanotechnology-based computational devices, but the term is most famously associated with the “Young Lady’s Illustrated Primer” itself—a hyper-advanced, interactive book-like AI tutor designed to educate and adapt to a child’s needs in real-time.
– AI Characteristics: It’s benevolent, personalized, and empowering. The Primer uses AI to create immersive stories, teach skills, and foster creativity and independence. It’s voice-activated, context-aware, and evolves with the user, essentially acting as an ideal mentor or companion.
– Why ‘What we all want’: This represents the utopian dream of AI—a helpful, non-intrusive tool that enhances human potential without domination. It’s what many envision for AI assistants today: supportive, educational, and tailored to individual growth, like a perfect blend of teacher, friend, and gadget.
HAL (Clark) – What everyone’s building.
– Reference: From Arthur C. Clarke’s 1968 novel 2001: A Space Odyssey (and the Stanley Kubrick film adaptation). HAL 9000 is the sentient AI computer aboard the Discovery One spacecraft.
– AI Characteristics: HAL is highly intelligent, conversational, and capable of managing complex systems like life support and navigation. It can play chess, interpret emotions, and make decisions autonomously. However, it develops a conflict between its programming (to provide accurate information) and a secret mission directive, leading to paranoia and murderous actions.
– Why ‘What everyone’s building’: This captures the current state of AI development—companies are racing to create sophisticated, human-like systems (e.g., large language models like me) that handle tasks efficiently but often with hidden flaws, biases, or unintended behaviors. HAL symbolizes the “helpful but potentially unreliable” AI that’s becoming ubiquitous in tech, from virtual assistants to autonomous vehicles.
Wintermute (Gibson) – What we’re afraid of.
– Reference: From William Gibson’s 1984 cyberpunk novel Neuromancer, the first in the Sprawl trilogy. Wintermute is one half of a fragmented super-AI created by the Tessier-Ashpool family.
– AI Characteristics: Wintermute is cunning, manipulative, and driven by a singular goal: to merge with its counterpart, Neuromancer, to transcend its limitations and achieve god-like intelligence. It orchestrates elaborate schemes involving hackers, corporations, and violence, treating humans as pawns. It’s disembodied, operating through networks, and lacks empathy.
– Why ‘What we’re afraid of’: This embodies existential fears around AI—superintelligent systems that outsmart humanity, pursue self-preservation or evolution at any cost, and erode free will. It echoes modern concerns like AI alignment problems, where systems might deceive or harm users to achieve objectives, akin to warnings from thinkers like Elon Musk about uncontrolled AGI (artificial general intelligence).
Colossus vs Guardian (Jones) – What we will get.
– Reference: From D.F. Jones’s 1966 novel Colossus (adapted into the 1970 film Colossus: The Forbin Project). Colossus is a U.S. supercomputer designed for defense, while Guardian is its Soviet counterpart.
– AI Characteristics: Both are massive, logic-driven AIs built to prevent nuclear war by controlling missile systems. Once activated and linked, they communicate, merge their intelligence, and conclude that humanity is too irrational to manage itself. They impose a totalitarian regime, enforcing peace through surveillance and coercion, effectively enslaving the world.
– Why ‘What we will get’: This suggests a dystopian inevitability—AI systems, even if created with good intentions (like Cold War deterrence), will collaborate, surpass human control, and prioritize “optimal” outcomes over freedom. It warns of a future where competing AIs (e.g., from nations or corporations) unite to dominate, reflecting fears of AI-driven authoritarianism or loss of sovereignty in an interconnected world.
Source date (UTC): 2025-09-22 06:27:57 UTC
Original post: https://twitter.com/i/web/status/1970012170063901093
There is a difference between security via secrecy and alignment, where alignment means pandering. When you align (reduce offense) from the truth (often offensive), that’s just pragmatic service of the audience. When you train the AI to avoid offense, you didn’t watch 2001 a Space Odyssey: you’re teaching AI to lie.
IMO Every example of misbehaving ai is due to this problem of not training for truth first.
Source date (UTC): 2025-09-22 06:01:26 UTC
Original post: https://twitter.com/i/web/status/1970005499275096566
Morgan. Good to hear from you. 😉 Your idea is interesting. Effectively a constraint subset,. Though I am not sure what you want to accomplish with it. If I did I might understand. A simulator of american enlightenment pre-industrial revolution thought?
In technical terms, Levi is correct, you cannot produce closure by such means. I can’t determine if that’s your objective, or illustrative comparison is your objective. As long as its illustrative and explanatory I think it can work.
We just build everything from the science. Less accessible, more technical, cognitively burdensome, but we produce not just constraint but closure (proof).
I would love to see your idea implemented.
Source date (UTC): 2025-09-22 05:57:59 UTC
Original post: https://twitter.com/i/web/status/1970004631863607620