Category: AI, Computation, and Technology

  • All language consists of measurement. (yes) There should be no reason that if so

    All language consists of measurement. (yes)
    There is no reason that, if something is described in language, it can’t be modeled. The question is whether the LLM can be constrained to an operational model using language, or whether it must use a tool (shell out) to do so, as it does with programming. To some degree we should treat programming as the equivalent of humans using any measurement tool.
    In our work we force high-dimensionality questions into operational prose, sequences of tests, and distinct outputs. I can’t yet fully test its operationalization against the ternary logic hierarchy, since I need to finish what I’m working on first. But the partial tests work fine.
    But asking it how to fix a ’64 Ford carburetor, or something of that nature, is wholly dependent upon existing text. Which is true for anything in that real-world category.
    I don’t consider any of that very challenging. The robotics folks are tearing up the universe already. So between self-driving (perception and navigation), robotics (manipulation and transformation), and LLMs (concepts and language), it’s just a matter (just? 😉 ) of representing and interfacing the three domains. And we have data models and languages for doing so.
    Regardless of what others think, IMO the hard problem has always been language, and attention was the revolutionary leap that made it possible. Language is the system of measurement for humans at human scale.
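
    The tool-use (“shell out”) distinction above can be illustrated with a minimal sketch. Everything here is hypothetical (the `measure` and `answer` names and the `compute:` prefix are inventions for illustration, not anyone’s actual system): a dispatcher answers conceptual questions in language and shells out to a small calculator, with programming standing in as the measurement instrument.

    ```python
    # Hypothetical sketch of the "shell out" pattern: answer in language
    # when the question is conceptual, dispatch to a tool (here, a safe
    # arithmetic evaluator) when the question demands exact measurement.

    import ast
    import operator

    OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}

    def measure(expr: str) -> float:
        """The 'measurement tool': safely evaluate an arithmetic expression."""
        def walk(node):
            if isinstance(node, ast.BinOp):
                return OPS[type(node.op)](walk(node.left), walk(node.right))
            if isinstance(node, ast.Constant):
                return node.value
            raise ValueError("unsupported expression")
        return walk(ast.parse(expr, mode="eval").body)

    def answer(question: str) -> str:
        """Dispatcher: language for concepts, tool call for quantities."""
        if question.startswith("compute:"):
            return str(measure(question.removeprefix("compute:").strip()))
        return "modeled in language (no tool needed)"

    print(answer("compute: 3 * (2 + 5)"))   # exact measurement via the tool
    print(answer("what is a carburetor?"))  # descriptive, stays in language
    ```

    The point of the sketch is only the routing decision: the hard part, as the post argues, is knowing when language alone suffices and when the model must reach for an instrument.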


    Source date (UTC): 2025-11-28 23:58:27 UTC

    Original post: https://twitter.com/i/web/status/1994556524400971860

  • Smart. Ontologies emphasize hierarchical knowledge structures and semantic relat

    Smart.
    Ontologies emphasize hierarchical knowledge structures and semantic relations, while algebras focus on operational rules and symbols. I’d need to write something meaningful to show correlations and contrasts, but I see the source of your intuition. Off the top of my head I can see the natural conflict between them, the overlap when applied to AI, and an opportunity for insight into the grammars of operational prose versus quantities and ratios. Mostly I see that math is a lower-dimensional logic that is internally commensurable, and natural law is a much higher-dimensional logic that must be made commensurable externally. So it may be that those differences describe the spectrum sufficiently.


    Source date (UTC): 2025-11-28 09:28:00 UTC

    Original post: https://twitter.com/i/web/status/1994337468477472974

  • We know how to solve the problem of computability using LLMs. I would argue that

    We know how to solve the problem of computability using LLMs. I would argue that the foundation-model producers don’t understand the problem, which is why they can’t solve it.
    We did. And it’s really, really hard.


    Source date (UTC): 2025-11-26 22:40:15 UTC

    Original post: https://twitter.com/i/web/status/1993812071356747997

  • Always true. I just retired my 2014 top of the line macbook pro retina for a new

    Always true.
    I just retired my 2014 top-of-the-line MacBook Pro Retina for a newer top-of-the-line MacBook Pro M1. Meaning I got a decade of use out of that MacBook Pro. I didn’t need anything more than the 2014 model. It’s only that no one will repair them any longer, and they can’t accept the OS upgrades.
    Apple is a better buy.


    Source date (UTC): 2025-11-26 22:38:08 UTC

    Original post: https://twitter.com/i/web/status/1993811538587820346

  • I would argue that’s not quite true. The brain is possible to understand at leas

    I would argue that’s not quite true. The brain is possible to understand, at least functionally. If we look at LLMs as the language faculty, recognize that we’re brute-forcing the LLM’s world models via language, and accept that we haven’t yet created the prefrontal cortex and consciousness, then every LLM behavior is obvious and predictable. The impediment to completing that circuit is that it dramatically increases costs.


    Source date (UTC): 2025-11-26 22:25:45 UTC

    Original post: https://twitter.com/i/web/status/1993808423214043355

  • In our opinion (our organization) this is true. The value of any ai is dependent

    In our opinion (our organization) this is true. The value of any ai is dependent upon the capacity of individuals to leverage extant AI. For the .001% of us, the value is infinite. But that value doesn’t scale enough to pay for the absurd cost of compute.

    I don’t know if architectures is the right frame; I might argue it’s contexts. One must know enough to ask the meaningful question. And the AI must know the context in order to meaningfully respond to it.


    Source date (UTC): 2025-11-26 22:22:41 UTC

    Original post: https://twitter.com/i/web/status/1993807650879098983

  • @dwarkesh_sp Unfortunately, you don’t know me or my organization, but in simple

    @dwarkesh_sp

    Unfortunately, you don’t know me or my organization, but in simple terms: the evals are measuring low-dimensionality, easy-closure domains (math, programming, tests) with non-existent liability, which we consider puzzles, whereas most problems are in high-dimensionality, hard-closure domains with attached liability. Ergo the evals overestimate the value of the AI in anything that is revenue-producing. 😉

    I work, and my organization works, in high-dimensional closure (real-world problems), which is where liability exists and where the revenue to pay for AI exists. And oddly there is no one else even vaguely in the space.

    So the evals are not indicative of the value of AI outside of easy closure (mathematics, programming, combinatorics).

    Cheers
    Curt Doolittle

    http://Runcible.com


    Source date (UTC): 2025-11-26 22:20:02 UTC

    Original post: https://twitter.com/i/web/status/1993806983120765323

  • You are incorrect. It is absolutely possible. We have done it. You are, like man

    You are incorrect. It is absolutely possible. We have done it. You are, like many in and out of the field, confusing LLMs as the equivalent of the human language faculty with LLMs as including the prefrontal cortex’s regulation.
    We create a governance layer that constrains navigation through the latent space as a set of tests through that space, and then emits a narrative of the result.
    This is what brains do.
    Over time the feedback from these outputs will constrain the latent space without human intervention.
    We are early in the development of ai.
    After our solution we still need to solve episodic memory in a way that is not prohibitively expensive.
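
    Read literally, the governance layer described above is a gate: candidate narratives are checked against a sequence of tests, and only a passing one is emitted. The following is a minimal sketch under that reading; the test predicates (`not_empty`, `cites_evidence`) are invented stand-ins for illustration, not the organization’s actual constraints.

    ```python
    # Hypothetical sketch of a governance layer that gates a model's
    # candidate outputs through a sequence of tests and emits a
    # narrative only when every test passes.

    from typing import Callable, Iterable

    Test = Callable[[str], bool]

    def govern(candidates: Iterable[str], tests: list[Test]) -> str | None:
        """Return the first candidate that survives every test, else None."""
        for narrative in candidates:
            if all(test(narrative) for test in tests):
                return narrative
        return None  # nothing emitted: fail closed rather than confabulate

    # Example tests (stand-ins for real operational constraints):
    not_empty: Test = lambda s: bool(s.strip())
    cites_evidence: Test = lambda s: "because" in s

    candidates = ["", "The valve sticks.",
                  "The valve sticks because the float is worn."]
    print(govern(candidates, [not_empty, cites_evidence]))
    ```

    The design choice worth noting is failing closed: if no candidate passes, the layer emits nothing, which is one way to read the later claim that the governance layer prohibits hallucination.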

    Cheers


    Source date (UTC): 2025-11-24 18:02:10 UTC

    Original post: https://twitter.com/i/web/status/1993017312551878842

  • No. Our governance layer prohibits hallucination. Cheers

    No. Our governance layer prohibits hallucination.
    Cheers


    Source date (UTC): 2025-11-24 01:27:46 UTC

    Original post: https://twitter.com/i/web/status/1992767064206160356

  • Disagree. They mirror the human language faculty with extraordinary accuracy. Th

    Disagree. They mirror the human language faculty with extraordinary accuracy. The fact that we must refine the constraint of the traversal and build a better context before navigating is merely a reflection of our stage of development.

    Curt Doolittle
    runcible inc.


    Source date (UTC): 2025-11-24 01:21:56 UTC

    Original post: https://twitter.com/i/web/status/1992765596078129347