PS: This subject is my job. I don’t do anything blindly.
Source date (UTC): 2026-02-03 16:21:55 UTC
Original post: https://twitter.com/i/web/status/2018721620916133992
So far we see no risk at all because all that's occurring is task improvement. You can fall for the hype, but there is no evidence of substance.
Source date (UTC): 2026-02-03 16:21:04 UTC
Original post: https://twitter.com/i/web/status/2018721410575900672
Excellent! Sweden sets the new model.
Source date (UTC): 2026-02-03 16:07:42 UTC
Original post: https://twitter.com/i/web/status/2018718042801225768
DON’T TAKE AI FOUNDATION MODEL CEOs TOO SERIOUSLY
I’d counsel against taking AI company leads terribly seriously when it comes to the economy. Most of it is motivated reasoning and attention-seeking to cover their vast losses while keeping the financial markets betting on them. And while AI will have greater military consequences, robotics will have greater economic consequences. Even if we do see unemployment rise from AI, solving the problem is a relatively easy opportunity for good, and civilizations have done so repeatedly in history. Periods of great architecture, art, infrastructure, technology, and science are often the product of shifting a polity and economy from individual consumption to collective production of commons that increase the quality of life for all.
We have exhausted consumption as an economic vehicle and descended into consumption as a signaling method. It’s one of the sources of our political problems. Shifting to producing commons can restore unity in a population.
Source date (UTC): 2026-02-03 16:06:34 UTC
Original post: https://twitter.com/i/web/status/2018717760956842270
Simon:
While companies have indeed cited AI in tens of thousands of layoffs over the past year, much of this appears to be “AI-washing”—using the technology as a convenient justification for cost-cutting, restructuring, or correcting overhiring from the pandemic era, rather than direct, widespread replacement of workers by mature AI systems. Overall unemployment remains low (around 4.6% in the US as of late 2025), and AI’s impact so far has been more about automating specific tasks within jobs than causing net job losses across the economy. That said, adoption is accelerating, and projections suggest more disruption ahead, potentially displacing 92 million roles globally by 2030 while creating 170 million new ones for a net gain.
Source date (UTC): 2026-02-03 15:55:58 UTC
Original post: https://twitter.com/i/web/status/2018715092976795649
Is that true? What historical shift can you use as an example?
Source date (UTC): 2026-02-03 08:33:00 UTC
Original post: https://twitter.com/i/web/status/2018603613841666513
same here
Source date (UTC): 2026-02-03 08:07:49 UTC
Original post: https://twitter.com/i/web/status/2018597277338853378
I have faith in people having incentives to fix what is not working because not doing so is terrifying. It has precious little to do with venture capitalists and philanthropists.
Source date (UTC): 2026-02-03 08:03:42 UTC
Original post: https://twitter.com/i/web/status/2018596240632729874
The adjustment in response to vast increases in capability is temporary and usually beneficial despite the stress of the transition.
Some of my work is in post-consumption economic organization, and there is nothing unique about this time compared to the same shifts that happened in history.
The stressor is that we must discover what to do next. It’s almost impossible to plan it.
I’ve done the work on most of it. Other economists will do the same for other areas.
But yes the uncertainty isn’t fun and the transition is frightening because we feel out of control.
Source date (UTC): 2026-02-03 04:16:14 UTC
Original post: https://twitter.com/i/web/status/2018538997505777988
@sama
Sam, along the same lines, please put emphasis on the vast superiority of ChatGPT vs every other model for ‘hard questions’. You’re missing this positioning in the market and you completely dominate it.
OpenAI is better at coding in the hard-problem space, and better at the hardest problem space: domains where external closure is required, i.e. ‘reality’ (what my company works in).
The benchmarks are targeting low-closure domains (math, programming, logic, and some of the physical sciences) but they are ignoring the high-closure domains.
Except none of the ‘hard problems’ facing humanity are in the low-closure domains. Behavior, Economics, Medicine, Law, Government, and Warfare are high-closure domains.
Affections
Thanks for all you do.
CD
Runcible, Inc.
Source date (UTC): 2026-02-03 04:04:07 UTC
Original post: https://twitter.com/i/web/status/2018535947999334644