Category: AI, Computation, and Technology

  • So far we see no risk at all because all that's occurring is task improvement. So

    So far we see no risk at all, because all that's occurring is task improvement. So you can fall for the hype, but there is no evidence of substance.


    Source date (UTC): 2026-02-03 16:21:04 UTC

    Original post: https://twitter.com/i/web/status/2018721410575900672

  • DON’T TAKE AI FOUNDATION MODEL CEOs TOO SERIOUSLY I’d counsel against taking AI

    DON’T TAKE AI FOUNDATION MODEL CEOs TOO SERIOUSLY
    I’d counsel against taking AI company leads terribly seriously when it comes to the economy. Most of it is motivated reasoning and attention-seeking to cover their vast losses while keeping the financial markets betting on them. And while AI will have greater military consequences, robotics will have greater economic consequences.

    And even if we are to see unemployment rise from AI, solving the problem is a relatively easy opportunity for good, and civilizations have done so repeatedly in history. Periods of great architecture, art, infrastructure, technology, and science are often the product of shifting a polity and economy from individual consumption to collective production of commons that increase the quality of life for all.
    We have exhausted consumption as an economic vehicle and descended into consumption as a signaling method. It’s one of the sources of our political problems. Shifting to producing commons can restore unity in a population.


    Source date (UTC): 2026-02-03 16:06:34 UTC

    Original post: https://twitter.com/i/web/status/2018717760956842270

  • Simon; While companies have indeed cited AI in tens of thousands of layoffs over

    Simon:
    While companies have indeed cited AI in tens of thousands of layoffs over the past year, much of this appears to be “AI-washing”—using the technology as a convenient justification for cost-cutting, restructuring, or correcting overhiring from the pandemic era, rather than direct, widespread replacement of workers by mature AI systems. Overall unemployment remains low (around 4.6% in the US as of late 2025), and AI’s impact so far has been more about automating specific tasks within jobs than causing net job losses across the economy. That said, adoption is accelerating, and projections suggest more disruption ahead, potentially displacing 92 million roles globally by 2030 while creating 170 million new ones for a net gain.


    Source date (UTC): 2026-02-03 15:55:58 UTC

    Original post: https://twitter.com/i/web/status/2018715092976795649

  • @sama Sam, along the same lines, please put emphasis on the vast superiority of

    @sama

    Sam, along the same lines, please put emphasis on the vast superiority of ChatGPT vs every other model for ‘hard questions’. You’re missing this positioning in the market and you completely dominate it.

    OpenAI is better at coding in the hard-problem space, and better at the hardest problem space: domains where external closure is required, i.e. ‘reality’ (which is what my company works in).

    The benchmarks are targeting low-closure domains (math, programming, logic, and some of the physical sciences) but they are ignoring the high-closure domains.

    Yet none of the ‘hard problems’ facing humanity are in the low-closure domains. Behavior, economics, medicine, law, government, and warfare are high-closure domains.

    Affections
    Thanks for all you do.
    CD
    Runcible, Inc.


    Source date (UTC): 2026-02-03 04:04:07 UTC

    Original post: https://twitter.com/i/web/status/2018535947999334644

  • (Runcible) UPDATE: Ok. So despite all my efforts at trying to keep Runcible Inte

    (Runcible)
    UPDATE: OK. So despite all my efforts to keep Runcible Intelligence (Runcible Governance Layer) a minimal package applicable to any LLM, with the intention of licensing the tech to the major foundation model producers, I’ve discovered we essentially have to produce the same technology stack as every other lab, the only difference being that we govern an LLM rather than train one. That is a heck of a different objective, and a lot less work. Otherwise it’s exactly the same architecture and division of labor.
    The immediate benefits of the extra scale are:
    1. We can modify the code of the LLM we use as a router and mixture of experts.
    2. We can (eventually) modify the attention nodes for even greater precision.
    3. We can reach out and touch every foundation model and constrain it indirectly instead of locally (we send a package of protocols).
    4. We can produce the protocols for a vertical market in hours (yes really) because the core (epistemic layer) can process literally anything.
    5. So we can cover the entire set of markets in trivial time.
    6. We have better control over our IP which is the ‘secret’ to the velocity of production and the universalism of the core’s application.
    And that’s just the beginning.
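    Point 3 above, constraining any foundation model indirectly by sending it a ‘package of protocols’, can be sketched as a thin wrapper over an arbitrary completion call. This is a minimal illustration under assumed names; `govern`, `no_unwarranted_certainty`, and the fake model are hypothetical stand-ins, not Runcible’s actual stack.

```python
from typing import Callable

Protocol = Callable[[str], bool]  # one governance check applied to model output

def govern(generate: Callable[[str], str], protocols: list[Protocol]):
    """Wrap any model's completion call; withhold outputs that fail a protocol."""
    def governed(prompt: str) -> str:
        output = generate(prompt)
        failed = [p.__name__ for p in protocols if not p(output)]
        return f"WITHHELD: failed {failed}" if failed else output
    return governed

def no_unwarranted_certainty(text: str) -> bool:
    """Example protocol: reject outputs that overclaim."""
    return "guaranteed" not in text.lower()

# Stand-in for any foundation model's completion endpoint.
fake_model = lambda prompt: f"Answer to {prompt}: results guaranteed!"

governed_model = govern(fake_model, [no_unwarranted_certainty])
result = governed_model("market forecast")
```

    Because the wrapper never touches model weights, the same protocol package could be applied uniformly to every model behind it, which is the ‘indirect constraint’ the list describes.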
    I was struggling to ‘keep it simple’. But without having to wait for approval from someone’s partnership program, jump through hoops asking for special dispensation, and then sit through their partnership and adoption decision cycles, we can go to market immediately.
    We already have our first customers.
    And we can see the radical ramp-up for an LLM that doesn’t hallucinate and produces warrantable, liability-defensible outputs.
    Cheers
    CD


    Source date (UTC): 2026-01-31 19:15:47 UTC

    Original post: https://twitter.com/i/web/status/2017678214609748178

  • (Dark Humor) —“You know, writing the right prompt for anything of even limited

    (Dark Humor)
    —“You know, writing the right prompt for anything of even limited complexity is a little too close to trying to figure out what to say to get into a girl’s pants.”—

    I’m not sure why this is the analogy I come up with but it’s the most honest one.


    Source date (UTC): 2026-01-31 13:40:49 UTC

    Original post: https://twitter.com/i/web/status/2017593918393881087

  • VALIDATION OF AND LIMITS OF USING AI AGENTS I probably have told you some part o

    VALIDATION OF AND LIMITS OF USING AI AGENTS
    I probably have told you some part of this story, but around 2006 a couple of senior Microsoft guys, one in the tools division (programming) whom I had known as a software architect for a long time, and one in the test division (same general area) who had programmed the stealth bombers, came to me with an idea: competing agents on an N-dimensional manifold. They told me the tech didn’t exist yet, but that they could build it.

    I was already too overloaded, and it felt like a lot of time and money before returns, and hard to raise money for (basically it was too early). That said, the idea holds: adversarial processes (bots, agents) on an n-dimensional manifold (a neural network) competing (Darwinian selection) for superior outcomes (solutions).
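    The competing-agents idea reads as evolutionary search. A toy sketch, with all parameters (population size, mutation scale, the quadratic fitness) chosen purely for illustration:

```python
import random

def evolve(fitness, dim=3, pop_size=30, generations=100, seed=0):
    """Toy Darwinian selection: candidate 'agents' (points in R^n, standing in
    for positions on an n-dimensional manifold) compete; the fittest half
    survives and spawns mutated offspring."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)                  # compete
        survivors = pop[: pop_size // 2]                     # select
        offspring = [[x + rng.gauss(0, 0.3) for x in p] for p in survivors]
        pop = survivors + offspring                          # elitist: winners persist
    return max(pop, key=fitness)

# Example: agents compete to maximize -sum(x^2); the optimum is the origin.
best = evolve(lambda v: -sum(x * x for x in v))
```

    Here the ‘manifold’ is just R^3 and the fitness is a fixed function; in the 2006 proposal the landscape would itself have been a neural network.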

    Now the brain does the same thing with networks of brain regions, raising to your attention only the best of the ‘hallucinations’ (imagination).

    So that’s validation. On the other hand, I still see rapid commoditization of a tiny number of agent design patterns that, like a piano keyboard, can compose by combination and recombination a practically infinite permutation of outcomes.

    IMO we are at the ‘toy’ stage, and the extrapolation of the possibilities is the equivalent of an AI hallucination. In other words, I think the outcome is deterministic, and what we are seeing is the equivalent of another operating-system layer that will add functionality in limited patterns, just like we added applications in a limited number of patterns, and just like we added mobile applications in a limited number of patterns.

    So what am I saying? I’m saying that the money to be made is in that commoditization into a fixed number of design patterns that can be organized, or self-organized by an LLM, into a solution without much user interaction or even understanding.
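    A toy sketch of that commoditization: a small registry of agent ‘design patterns’ composed, piano-key style, into a pipeline. The pattern names and bodies are hypothetical stand-ins, not any real agent framework.

```python
# Hypothetical registry of agent "design patterns" (stand-in implementations).
PATTERNS = {
    "retrieve": lambda text: f"[docs for: {text}]",
    "summarize": lambda text: text[:40],
    "critique": lambda text: f"{text} (reviewed)",
}

def compose(pattern_names):
    """Chain selected patterns into one pipeline, like keys into a chord."""
    def pipeline(user_input):
        result = user_input
        for name in pattern_names:
            result = PATTERNS[name](result)
        return result
    return pipeline

# An orchestrating LLM (or a user) would pick the sequence; here it is hard-coded.
answer = compose(["retrieve", "summarize", "critique"])("agent economics")
```

    An LLM choosing `pattern_names` with little user involvement is the ‘self-organized’ case the paragraph describes.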

    And that process appears to be underway.

    Cheers
    CD


    Source date (UTC): 2026-01-30 21:36:12 UTC

    Original post: https://twitter.com/i/web/status/2017351162287296658

  • high confidence it's sorting inventory in time vs predicting outcomes over time

    High confidence it's sorting inventory in time vs. predicting outcomes over time.


    Source date (UTC): 2026-01-30 16:39:44 UTC

    Original post: https://twitter.com/i/web/status/2017276553206632472

  • I follow @BrianRoemmele of course because I love his takes and his passion. I wo

    I follow @BrianRoemmele of course because I love his takes and his passion. I work on different ambitions that seek to allow people to work together with AI at scale, whether in private or public domains. And so I’m interested in his work as an experiment in possibility, but not deeply enough to comment on it rather than on him as a thought leader.


    Source date (UTC): 2026-01-27 20:02:49 UTC

    Original post: https://twitter.com/i/web/status/2016240500660175033

  • RUNCIBLE GOVERNANCE – MODEL CURRENCY K2 apparently just came out with a killer m

    RUNCIBLE GOVERNANCE – MODEL CURRENCY
    K2 apparently just came out with a killer model that’s open source and has a hella-lot of parameters. I won’t have time to play with it any time soon, but I think we might be getting closer to exiting the proprietary-model issue.

    But one warning: models must stay current, and there is an absurd cost to keeping them current. It is hard to see open source staying current without some sort of funding (state?) if it effectively deprives the for-profit firms of moat and revenue.

    Now, one pitch we might make is that our work almost guarantees curation of information that can be added to models to keep them current.


    Source date (UTC): 2026-01-27 18:26:50 UTC

    Original post: https://twitter.com/i/web/status/2016216342555459965