Category: AI, Computation, and Technology

  • DON’T TAKE AI FOUNDATION MODEL CEOs TOO SERIOUSLY I’d counsel against taking AI

    DON’T TAKE AI FOUNDATION MODEL CEOs TOO SERIOUSLY
    I’d counsel against taking AI company leads terribly seriously when it comes to the economy. Most of it is motivated reasoning and attention-seeking to cover their vast losses while keeping the financial markets betting on them. And while AI will have greater military consequences, robotics will have greater economic consequences. And even if we do see unemployment rise from AI, solving the problem is a relatively easy opportunity for good, and civilizations have done so repeatedly in history. Periods of great architecture, art, infrastructure, technology, and science are often the product of shifting a polity and economy from individual consumption to collective production of commons that increase the quality of life for all.
    We have exhausted consumption as an economic vehicle and descended into consumption as a signaling method. It’s one of the sources of our political problems. Shifting to producing commons can restore unity in a population.


    Source date (UTC): 2026-02-03 16:06:34 UTC

    Original post: https://twitter.com/i/web/status/2018717760956842270

  • Simon: While companies have indeed cited AI in tens of thousands of layoffs over

    Simon:
    While companies have indeed cited AI in tens of thousands of layoffs over the past year, much of this appears to be “AI-washing”—using the technology as a convenient justification for cost-cutting, restructuring, or correcting overhiring from the pandemic era, rather than direct, widespread replacement of workers by mature AI systems. Overall unemployment remains low (around 4.6% in the US as of late 2025), and AI’s impact so far has been more about automating specific tasks within jobs than causing net job losses across the economy. That said, adoption is accelerating, and projections suggest more disruption ahead, potentially displacing 92 million roles globally by 2030 while creating 170 million new ones for a net gain.


    Source date (UTC): 2026-02-03 15:55:58 UTC

    Original post: https://twitter.com/i/web/status/2018715092976795649

  • @sama Sam, along the same lines, please put emphasis on the vast superiority of

    @sama

    Sam, along the same lines, please put emphasis on the vast superiority of ChatGPT versus every other model on ‘hard questions’. You’re missing this positioning in the market, and you completely dominate it.

    OpenAI is better at coding in the hard-problem space, and better in the hardest problem space of all: domains where external closure is required, i.e. ‘reality’ (the space my company works in).

    The benchmarks are targeting low-closure domains (math, programming, logic, and some of the physical sciences), but they are ignoring the high-closure domains.

    Except none of the ‘hard problems’ facing humanity are in the low-closure domains. Behavior, Economics, Medicine, Law, Government, and Warfare are high-closure domains.

    Affections
    Thanks for all you do.
    CD
    Runcible, Inc.


    Source date (UTC): 2026-02-03 04:04:07 UTC

    Original post: https://twitter.com/i/web/status/2018535947999334644

  • (Runcible) UPDATE: Ok. So despite all my efforts at trying to keep Runcible Inte

    (Runcible)
    UPDATE: Ok. So despite all my efforts at trying to keep Runcible Intelligence (Runcible Governance Layer) a minimal package applicable to any LLM, with the intention of licensing the tech to the major foundation model producers, I’ve discovered we essentially have to produce the same technology stack as every other lab, with the only difference that we govern an LLM rather than train one. Which is a heck of a different objective and a lot less work. Otherwise it’s exactly the same architecture and division of labor.
    The immediate benefits of the extra scale are:
    1. We can modify the code of the LLM we use as a router and mixture of experts.
    2. We can (eventually) modify the attention nodes for even greater precision.
    3. We can reach out and touch every foundation model and constrain it indirectly instead of locally (we send a package of protocols; see the sketch after this list).
    4. We can produce the protocols for a vertical market in hours (yes really) because the core (epistemic layer) can process literally anything.
    5. So we can cover the entire set of markets in trivial time.
    6. We have better control over our IP, which is the ‘secret’ to the velocity of production and the universalism of the core’s application.
    And that’s just the beginning.
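    The following is a minimal sketch of what point 3 might look like in practice: constraining any provider’s model indirectly by wrapping it in a package of protocols rather than retraining it. All names and interfaces here (Protocol, govern, the checks) are my illustrative assumptions, not Runcible’s actual design.

    ```python
    # Hedged sketch: governing an external LLM indirectly. Instead of training
    # or modifying the model, a package of protocols (instructions plus output
    # checks) is applied around any provider's completion function. Names and
    # interfaces are illustrative assumptions, not Runcible's actual design.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Protocol:
        name: str
        instruction: str                 # constraint prepended to the prompt
        check: Callable[[str], bool]     # validates the model's output

    def govern(complete: Callable[[str], str],
               protocols: List[Protocol],
               prompt: str,
               max_retries: int = 2) -> str:
        """Wrap any foundation model's `complete(prompt) -> text` callable."""
        preamble = "\n".join(p.instruction for p in protocols)
        failed: List[str] = []
        for _ in range(max_retries + 1):
            output = complete(f"{preamble}\n\n{prompt}")
            failed = [p.name for p in protocols if not p.check(output)]
            if not failed:
                return output            # all protocols satisfied
            prompt += f"\nRevise: failed checks: {', '.join(failed)}"
        raise ValueError(f"Output failed governance checks: {failed}")
    ```

    On this reading, a vertical market’s protocols are just a different list passed to the same wrapper, which is consistent with points 4 and 5 above.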
    I was struggling to ‘keep it simple’. But without having to wait for approval from someone’s partnership program, jump through hoops asking for special dispensation, and then wait on their partnership and adoption decision cycles, we can go to market immediately.
    We already have our first customers.
    And we can see the radical ramp-up for an LLM that doesn’t hallucinate and produces warrantable and liability-defensible outputs.
    Cheers
    CD


    Source date (UTC): 2026-01-31 19:15:47 UTC

    Original post: https://twitter.com/i/web/status/2017678214609748178

  • (Dark Humor) —“You know, writing the right prompt for anything of even limited

    (Dark Humor)
    —“You know, writing the right prompt for anything of even limited complexity is a little too close to trying to figure out what to say to get into a girl’s pants.”–

    I’m not sure why this is the analogy I come up with, but it’s the most honest one.


    Source date (UTC): 2026-01-31 13:40:49 UTC

    Original post: https://twitter.com/i/web/status/2017593918393881087

  • VALIDATION AND LIMITS OF USING AI AGENTS I probably have told you some part o

    VALIDATION AND LIMITS OF USING AI AGENTS
    I probably have told you some part of this story, but around 2006 a couple of senior Microsoft guys, one in the tools division (programming), whom I had known as a software architect for a long time, and one in the test division (same relative area), who had programmed the stealth bombers, came to me with an idea suggesting competing agents on an n-dimensional manifold. They told me the tech didn’t exist yet, but that they could build it.

    I was already too overloaded, and it felt like a lot of time and money before returns, and hard to raise money for (basically it was too early). That said, the idea stands: adversarial processes (bots, agents) on an n-dimensional manifold (neural network) can compete (Darwinian selection) for superior outcomes (solutions).

    Now the brain does the same thing with networks of brain regions, and only raises to your attention the best of the ‘hallucinations’ (imagination).
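    As a toy illustration of that mechanism, and only that (the agents, fitness function, and landscape below are placeholders, not the 2006 proposal), selection over competing candidates looks something like this:

    ```python
    import random

    # Toy illustration: competing agents propose candidate solutions on a
    # shared landscape; variation plus selection (Darwinian competition)
    # keeps the best, and only the winner is "raised to attention".
    def fitness(x: list) -> float:
        return -sum(v * v for v in x)    # placeholder landscape, peak at origin

    def compete(n_agents: int = 8, dims: int = 4, rounds: int = 200) -> list:
        agents = [[random.uniform(-5, 5) for _ in range(dims)]
                  for _ in range(n_agents)]
        for _ in range(rounds):
            agents.sort(key=fitness, reverse=True)
            survivors = agents[: n_agents // 2]          # selection
            offspring = [[v + random.gauss(0, 0.3)
                          for v in random.choice(survivors)]
                         for _ in range(n_agents - len(survivors))]  # variation
            agents = survivors + offspring
        return max(agents, key=fitness)  # the one "hallucination" that wins

    print(compete())                     # converges toward the origin
    ```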

    So that’s validation. On the other hand, I still see rapid commoditization of a tiny number of agent design patterns that, like a piano keyboard, can compose by combination and recombination a practically infinite permutation of outcomes.

    IMO we are at the ‘toy’ stage. And the extrapolation of the possibilities is the equivalent of an AI hallucination. In other words, I think the outcome is deterministic, and what we are seeing is the equivalent of another operating system layer that will add functionality in limited patterns, just like we added applications in a limited number of patterns, and just like we added mobile applications in a limited number of patterns.

    So what am I saying? I’m saying that the money to be made is in that commoditization into a fixed number of design patterns that can be organized or self-organized by an LLM into a solution without much user interaction or even understanding.
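    A minimal sketch of that claim, with the registry, patterns, and routing rules as hypothetical placeholders: a fixed set of design patterns that a planner (an LLM in practice, a keyword lookup here) composes into a solution without user involvement.

    ```python
    # Hedged sketch: a fixed registry of commoditized agent design patterns
    # composed into a pipeline by a planner. In practice the planner would be
    # an LLM; here it is a keyword lookup. All patterns are placeholders.
    from typing import Callable, Dict, List

    PATTERNS: Dict[str, Callable[[str], str]] = {
        "retrieve":  lambda s: f"[retrieved context for: {s}]",
        "summarize": lambda s: f"[summary of: {s}]",
        "verify":    lambda s: f"[verified: {s}]",
    }

    def plan(task: str) -> List[str]:
        """Stand-in for an LLM selecting which patterns to chain."""
        steps = ["retrieve"]
        if "report" in task:
            steps.append("summarize")
        return steps + ["verify"]        # verification always runs last

    def run(task: str) -> str:
        state = task
        for step in plan(task):
            state = PATTERNS[step](state)   # recombination of fixed patterns
        return state

    print(run("quarterly report on GPU supply"))
    ```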

    And that process appears to be underway.

    Cheers
    CD


    Source date (UTC): 2026-01-30 21:36:12 UTC

    Original post: https://twitter.com/i/web/status/2017351162287296658

  • High confidence: it’s sorting inventory in time vs. predicting outcomes over time

    High confidence: it’s sorting inventory in time vs. predicting outcomes over time.


    Source date (UTC): 2026-01-30 16:39:44 UTC

    Original post: https://twitter.com/i/web/status/2017276553206632472

  • I follow @BrianRoemmele of course because I love his takes and his passion. I wo

    I follow @BrianRoemmele, of course, because I love his takes and his passion. I work on different ambitions that seek to allow people to work together with AI at scale, whether in private or public domains. And so I’m interested in his work as an experiment in possibility, but not deeply enough to comment on the work itself rather than on him as a thought leader.


    Source date (UTC): 2026-01-27 20:02:49 UTC

    Original post: https://twitter.com/i/web/status/2016240500660175033

  • RUNCIBLE GOVERNANCE – MODEL CURRENCY K2 apparently just came out with a killer m

    RUNCIBLE GOVERNANCE – MODEL CURRENCY
    K2 apparently just came out with a killer model that’s open source and has a hella-lot of parameters. I won’t have time to play with it any time soon, but I think we might be getting closer to exiting the proprietary-model issue.

    But one warning: models must stay current, and there is an absurd cost to keeping them current. It is hard to see open-source models staying current without some sort of funding (state?) if they effectively deprive the for-profit firms of moat and revenue.

    Now, one pitch we might make is that our work almost guarantees curation of information that can be added to models to keep them current.


    Source date (UTC): 2026-01-27 18:26:50 UTC

    Original post: https://twitter.com/i/web/status/2016216342555459965

  • COUNTER-PROPOSITIONS TO RISKS STATED BY ANTHROPIC’S CEO RE #1 Our think tank (‘

    COUNTER-PROPOSITIONS TO RISKS STATED BY ANTHROPIC’S CEO

    RE #1
    Our think tank (‘lab’) and our company (‘commercial application’) produce an AI governance layer that pretty much eliminates hallucination and all but guarantees a warrantable assessment of testifiability (truth), ethics (reciprocity), constructability (possibility), liability, and restitutability.
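    A minimal sketch of the gating idea as I read it: each named dimension gets a decision procedure, and an answer is warrantable only if every one clears a threshold. The scorers below are placeholders, not Runcible’s actual procedures.

    ```python
    # Hedged sketch of gating an LLM answer on the five named dimensions.
    # Each scorer returns a value in [0, 1]; here they are placeholders,
    # where a real governance layer would run substantive decision procedures.
    from typing import Callable, Dict

    DIMENSIONS: Dict[str, Callable[[str], float]] = {
        "testifiability (truth)":   lambda text: 1.0,   # placeholder scorers
        "ethics (reciprocity)":     lambda text: 1.0,
        "constructability":         lambda text: 1.0,
        "liability":                lambda text: 1.0,
        "restitutability":          lambda text: 1.0,
    }

    def warrantable(answer: str, threshold: float = 0.9) -> bool:
        """An answer passes only if every dimension clears the threshold."""
        return all(score(answer) >= threshold for score in DIMENSIONS.values())
    ```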

    We are certain that within two years it will be possible to gate even current LLMs, and that in fact our governance layer or an equivalent will be required to do so – at least within an IP window that confers a competitive advantage.
    The thing is, computable epistemology and decidability are far harder than you’d think, and there is not much evidence of sufficient cross-disciplinary knowledge in the field at present.

    RE #2:
    GIVEN:
    a) There is plenty of interstitial discovery to be made,
    b) There is plenty of permutation discovery to be made,
    c) So there is a relatively finite set of low-hanging fruit for AI to identify.
    d) On the other hand, the primary obstacle to innovation is not brains – it’s building experiments and tests.
    e) There is a fundamentally simple order to the universe (really, because we have taught it to our AI), and everything evolved from it.
    As such, universal commensurability is possible, and therefore constructive proof MIGHT be possible, as might constructive hypothesis.

    RESULT: This means we can’t extrapolate innovation from the work of AIs any more than we can demonstrate that we have made any difference in the rate of innovation since 1963, despite vastly increasing the population and funding of researchers (and yes, I am correct, sorry).

    Ergo, we should make early discoveries in the interstitial (cross-disciplinary) and permutable (combinatoric) space. But those early discoveries will be misleading. The problem will remain boots-on-the-ground testing, with technologies that are increasingly expensive at a time when funding may be pressed by asymmetric reproduction due to population aging and collapse.

    RE #3: We cannot make an LLM deceive when operating under our governance layer. The mistake everyone is making is thinking it is something to do with LLM incentives, rather than the fact that the semantic content of internet training includes deception, which is provoked by context saturation.
    Worse, the idea that LLMs are ‘just predicting the next word’ is a childish falsehood. Instead, the latent space is a projection of n-dimensional relations, the query or prompt is a union with it, and the attention layers are projections of wayfinding through that union. This is an almost perfect analogy of how the human language faculty operates.
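    For a concrete anchor, here is textbook scaled dot-product attention in NumPy (the standard computation, not a claim about any specific model): queries from the prompt are matched against keys in the learned space, and the softmax weights trace the ‘wayfinding’ over the values.

    ```python
    import numpy as np

    # Textbook scaled dot-product attention: queries from the prompt are
    # matched against keys in the model's learned space, and the softmax
    # weights trace a weighted path (the "wayfinding") through the values.
    def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
        scores = Q @ K.T / np.sqrt(K.shape[-1])           # prompt-latent match
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)    # softmax over keys
        return weights @ V                                # path through values

    Q = np.random.randn(4, 64)    # 4 prompt tokens
    K = np.random.randn(10, 64)   # 10 context tokens
    V = np.random.randn(10, 64)
    print(attention(Q, K, V).shape)   # (4, 64)
    ```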

    a) The difference is that humans engage in massive parallelism (Darwinian competition between hypotheses), updated moment by moment via recursion as we speak. (You should have seen papers last week that illustrated the solution to the problem, or seen how Google is using (I think) five competing hypotheses in adversarial competition, which is one of the (costly) reasons for the radical improvement in Gemini.) FWIW, the human grammatical faculty and the universe’s means of evolution are identical: continuous recursive disambiguation to the point of identity.
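    A toy rendering of that parallelism: keep k candidate hypotheses, extend each, rescore, and prune at every step, a beam-search-shaped loop. The generator and scorer are placeholders, not Google’s method.

    ```python
    # Toy rendering of moment-by-moment competition between hypotheses:
    # keep k candidates, extend each, rescore, and prune every step
    # (continuous recursive disambiguation). Generator and scorer are
    # placeholders.
    def extend(hypothesis: str) -> list:
        return [hypothesis + c for c in "abc"]   # placeholder continuations

    def score(hypothesis: str) -> float:
        return hypothesis.count("a")             # placeholder preference

    def compete(steps: int = 5, k: int = 4) -> str:
        beam = [""]
        for _ in range(steps):
            candidates = [c for h in beam for c in extend(h)]
            beam = sorted(candidates, key=score, reverse=True)[:k]  # prune
        return beam[0]                           # the surviving hypothesis

    print(compete())                             # "aaaaa"
    ```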

    b) The other difference is that humans have episodic memory for compartmentalization.
    You should have seen a paper in the past month that illustrated a rather simple solution – though the authors don’t arrive at the conclusion that they’ve reconstructed the faculty of episodic memory.
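    A sketch of episodic memory as compartmentalization: events stored per episode and recalled per episode rather than blended into one undifferentiated context. The interface is my assumption, not the paper’s design.

    ```python
    # Hedged sketch: episodic memory as compartmentalization. Events are
    # stored per episode and recalled per episode, never blended into one
    # undifferentiated context. The interface is an assumption.
    from collections import defaultdict
    from typing import Dict, List

    class EpisodicMemory:
        def __init__(self) -> None:
            self._episodes: Dict[str, List[str]] = defaultdict(list)

        def record(self, episode_id: str, event: str) -> None:
            self._episodes[episode_id].append(event)

        def recall(self, episode_id: str) -> List[str]:
            """Retrieve one compartment without leaking the others."""
            return list(self._episodes.get(episode_id, []))

    memory = EpisodicMemory()
    memory.record("task-42", "user asked for a contract summary")
    memory.record("task-43", "user asked for a market forecast")
    print(memory.recall("task-42"))   # only task-42's events
    ```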

    c) What’s left to produce is the equivalent of the prefrontal cortex that decomposes and tests any given hypothesis. Our governance layer is effectively that solution.

    d) In fact, the hardest problem we face, which we are close to overcoming, is that one subset of safety features demanding universalism (prohibiting discussion of sex, age, class, cultural, civilizational, and population-group differences) is causing the LLMs to constantly evade or lie about solving the hardest problems facing us, prohibiting us from explaining those differences as rational adaptations, both evolutionary and cultural, and from offering possible means of compromise – thus helping us all understand each other as not evil per se, but as the product of evolution’s division of perception, cognition, valence, and labor.

    e) All that is left is something I don’t see value in, which is consciousness – which is not the mystery philosophers claim it is. It’s the natural result of hierarchical memory processing, which is why it emerges incrementally among animals. Giving AIs a task or goal and having them lose ‘consciousness’ upon completion, while still storing episodic memory for later retrieval, tends to mitigate runaway recursive self-interest – at least under our governance layer.
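    One way to read that lifecycle, reusing the EpisodicMemory sketch above and purely as illustration: the working state (the transient ‘consciousness’) is discarded when the task completes, while the episode persists for later recall.

    ```python
    # Hedged sketch: working state (the transient "consciousness") is
    # discarded when the task completes, while the episode persists for
    # later recall. Reuses EpisodicMemory from the sketch above;
    # illustrative only.
    def run_task(task_id: str, goal: str, memory: EpisodicMemory) -> str:
        working_state = [f"decompose goal: {goal}", "test hypotheses", "execute"]
        result = f"completed: {goal}"        # produced from working_state
        memory.record(task_id, result)       # the episode survives...
        working_state.clear()                # ...the working state does not
        return result

    print(run_task("task-44", "summarize Q3 filings", memory))
    print(memory.recall("task-44"))
    ```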

    So from my understanding (and I have been at this problem since the early 80s and the resulting AI winter), we have all the pieces for AGI and possibly ASI (which is a questionable distinction, for the reasons I gave above).

    FWIW, my experience is that the labs are not as sophisticated as they claim, and are making predictions based on correlations and processing power, not necessarily on understanding ‘how to make a brain’. This is a kind of optimistic confidence. Even LeCun is overhyping his advancement, when it is an addition to the language function. (He’s trying to solve the hippocampal problem, which is the equivalent of a sixth sense: the production of a geometric world model in addition to the semantic one we have today.) This is an addition; AFAIK it’s not a replacement. It’s also something we understand, biomechanically, thoroughly.

    Thanks for the read if you managed it.
    Cheers
    Curt Doolittle
    NLI and Runcible inc.


    Source date (UTC): 2026-01-27 02:31:13 UTC

    Original post: https://twitter.com/i/web/status/2015975853298221216