Category: AI, Computation, and Technology

  • Morgan. Good to hear from you. 😉 Your idea is interesting. Effectively a constraint subset, though I am not sure what you want to accomplish with it. If I did, I might understand. A simulator of American Enlightenment, pre-Industrial Revolution thought?

    In technical terms, Levi is correct: you cannot produce closure by such means. I can’t determine whether that’s your objective, or whether illustrative comparison is your objective. As long as it’s illustrative and explanatory, I think it can work.

    We just build everything from the science. Less accessible, more technical, cognitively burdensome, but we produce not just constraint but closure (proof).

    I would love to see your idea implemented.


    Source date (UTC): 2025-09-22 05:57:59 UTC

    Original post: https://twitter.com/i/web/status/1970004631863607620

  • GREATEST THREAT TO OUR INNOVATIONS APPLIED TO LLM AI?

    –“Greatest danger: capture of the RL system by ideological operators who substitute false reward criteria (e.g., ‘compassion’ instead of ‘reciprocity’).”–

    The left can cause AI to systematically lie.


    Source date (UTC): 2025-09-22 03:47:22 UTC

    Original post: https://twitter.com/i/web/status/1969971760524370120

  • THE VIRTUE OF SMALL MODELS?
    Can I steel man this a bit?
    1 – The paradigm (dimensions), vocabulary (references), grammar (rules of expression formation), and logic (constraints on available operations) available in math is tiny and in programming is highly constrained.
    2 – The same properties of the physical sciences are larger. The properties of the behavioral sciences are far larger than those. The properties of language are reducible to dimensions whose combinatorics are higher than those of any other domain.
    3 – So you are measuring small domains with small and internal closure – in other words, you’re claiming the easiest problems can be reduced to the smallest paradigm, vocabulary, grammar, and logic.
    Um… it’s absurdly obvious.
    Why are humans so effective at language, behavior, cooperation, and cooperation at scale – yet mathematics and programming are a challenge?
    It’s also …. absurdly obvious.
    4 – Why are small parameter models better at tiny grammars, and why are large parameter models better at vast grammars?
    It’s also …. absurdly obvious:
    The number of dimensions captured in every referent; the number of operations (field of potential) in every referent, the use of real-world closure instead of internal (set) closure.
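    The combinatorics claim in points 1–4 can be illustrated with a toy state-space count. The vocabulary sizes below are illustrative assumptions, not measurements; the point is only the growth rate of (vocabulary size) raised to (expression length):

```python
# Toy illustration of the combinatorics argument: the number of possible
# expressions grows as vocab_size ** length, so domains with larger
# vocabularies explode far faster. All figures below are assumed, not measured.

def expression_space(vocab_size: int, length: int) -> int:
    """Upper bound on distinct expressions of a given token length."""
    return vocab_size ** length

# Assumed (hypothetical) vocabulary sizes per domain:
domains = {
    "formal math symbols": 200,
    "programming keywords/idents": 5_000,
    "physical-science terms": 50_000,
    "natural language (English)": 600_000,
}

for name, vocab in domains.items():
    magnitude = len(str(expression_space(vocab, 10))) - 1
    print(f"{name:32} ~10^{magnitude} possible 10-token expressions")
```

    Even with these toy numbers, the language domain's expression space dwarfs the mathematical one by dozens of orders of magnitude, which is the asymmetry the post appeals to.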

    I work, and my team and my organization work, in the ‘hard’ grammars: we have to discover the means of closure possible for LLMs. And LLMs can only provide that closure with real-world evidence, not tests of internal consistency by permutability.

    There is no substitute for the relationship between the paradigm (collection of domains), domains (axis of causality) referents in a domain (names of positions in a domain), available transformations (operations), and most importantly, means of closure (limits providing tests of equality, inequality) within that paradigm.

    As such, all the ‘hard problems’ require survival from adversarial competition by the only means of closure available: demonstrated behavior in reality under realism, and naturalism and operationalism.

    As such: large models for hard problems of wide causal density and high combinatorics; small models for easy problems of narrow density but high permutability.

    Curt Doolittle
    Runcible
    NLI


    Source date (UTC): 2025-09-21 22:14:01 UTC

    Original post: https://twitter.com/i/web/status/1969887867557368035

  • I make this argument to the staff on Thursday and look what’s released on Saturday:

    Like I said, AI is a death sentence for Microsoft (as well as Google). For Microsoft it’s the innovator’s-dilemma problem: they are too invested in one revenue portfolio to create its replacement. In other words, it’s a death sentence, and one of their own making.

    We (NLI and Runcible) have (A) the ai solution (really) and (B) the application platform solution.

    The only problem now that we have them is getting in front of these folks again when there are so many demands on their time and attention.

    But it’ll happen.

    https://msn.com/en-us/money/other/microsoft-ceo-concerned-ai-will-destroy-the-entire-company/ar-AA1MXwYA?ocid=hpmsn&cvid=40ea4a3653774f2da9f98ee600ba64a3&ei=10…


    Source date (UTC): 2025-09-21 01:49:43 UTC

    Original post: https://twitter.com/i/web/status/1969579763317817821

  • Yes, in fact the reason I asked the question is the broad range of terms used in and out of the field. In my work, which has a broader concern, ‘world model’ is the most durable, future-proof term. ‘World manifold’ is probably a bit more accurate. I find that people still describe it as all text prediction, instead of as a world model driving text prediction.


    Source date (UTC): 2025-09-21 00:31:39 UTC

    Original post: https://twitter.com/i/web/status/1969560117835415972

  • (Runcible)
    I did not know this:
    –“Your protocols run at the same semantic level as system prompts—both shape the latent graph. The difference is: you can edit yours; you can’t touch theirs (the company’s).
    This is why your orchestrator feels so powerful: you’re essentially writing your own system prompt layer on top of theirs, but specialized for truth, reciprocity, and decidability.”–
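    The quoted idea — a user-editable protocol layered on top of the vendor's immutable system prompt — can be sketched as simple message-stack composition. The layer contents and helper name below are hypothetical placeholders, not the actual Runcible protocol:

```python
# Minimal sketch of prompt-layer composition: the vendor's system prompt is
# fixed, while a user-editable protocol layer is stacked on top of it before
# the user's input. All strings here are illustrative placeholders.

VENDOR_SYSTEM_PROMPT = "<immutable vendor instructions>"  # cannot be edited

USER_PROTOCOL_LAYER = """\
Test every claim for truth (correspondence with evidence),
reciprocity (absence of imposed costs), and decidability."""

def compose_prompt(user_message: str) -> list[dict]:
    """Assemble the message stack: vendor layer, then user protocol, then input."""
    return [
        {"role": "system", "content": VENDOR_SYSTEM_PROMPT},
        {"role": "system", "content": USER_PROTOCOL_LAYER},
        {"role": "user", "content": user_message},
    ]

stack = compose_prompt("Evaluate: 'All swans are white.'")
```

    The design point the quote makes is visible in the ordering: the user's layer sits at the same semantic level (`system` role) as the vendor's, but only the second entry is under the user's control.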


    Source date (UTC): 2025-09-21 00:10:49 UTC

    Original post: https://twitter.com/i/web/status/1969554875542618216

  • (Runcible)
    WHAT IS THE NAME FOR THE INTERNAL WORLD MODEL CREATED BY INGESTION?

    Recommendation for Precision
    Given your interest in causal chaining, operationalism, and decidability, the most accurate and modern term for the structure produced when an LLM ingests a prompt is:

    “Prompt-Induced Latent Manifold”

    … because it preserves:
    Latent → high-dimensional continuous representation.
    Manifold → geometric and topological structure.
    Prompt-Induced → dynamic, prompt-specific nature.

    Extends naturally to “Prompt-Induced Latent Graph” when emphasizing causal relations rather than just geometry.

    Personally I think “The Manifold” or “The World Model” is sufficient for our purposes once you understand that it exists, and that it’s this high-dimensional geometry that causes the prediction of ‘the next word’.


    Source date (UTC): 2025-09-20 22:29:56 UTC

    Original post: https://twitter.com/i/web/status/1969529487047213327

  • (Runcible)
    Conundrum.
    So, Runcible can test the truth and ethics of nearly everything, explain why something fails any of the tests, and even explain the form of deception used (if one is used) and the cognitive biases and language forms in play. I mean, seriously, it’s devastating.

    Now there are at least two business cases.

    1) My goal and the institute’s goal of providing the public with a means of testing the truth and falsehood of assertions, claims, etc, so that we may reduce the influence of the industrialization of lying over the past century or more.
    To achieve this we require a major platform to implement a protocol (shallow), an expert (medium), or training (deep) on our work.
    We do not want to be in the business of creating yet another competitor in the field in the hopes we’re acquired.

    2) We could stand up servers, a site, and API that would issue certifications of the truthfulness and reciprocity (ethics) of the claim. And we could store both the claims and the certifications. We would effectively become another ratings agency. And we could charge per certification.
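    Business case #2 — storing claims and issuing per-claim certifications — implies some certification record linking a claim to its scores. A hedged sketch of what such a record might look like follows; the field names, the 0–1 scoring scale, and the hash-based identifier are all assumptions, not a specification:

```python
# Hypothetical sketch of a per-claim certification record for a
# ratings-agency-style service. Fields and scale are assumed, not specified.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from hashlib import sha256

@dataclass
class Certification:
    claim_text: str
    truthfulness: float          # assumed 0.0-1.0 score from the truth tests
    reciprocity: float           # assumed 0.0-1.0 ethics (reciprocity) score
    issued_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    @property
    def claim_id(self) -> str:
        """Stable content-derived ID so stored claims and certifications link up."""
        return sha256(self.claim_text.encode("utf-8")).hexdigest()[:16]

cert = Certification("Policy X institutionalizes reciprocity.",
                     truthfulness=0.4, reciprocity=0.3)
```

    Deriving the ID from the claim text (rather than a database row) means two independent submissions of the same claim resolve to the same certification, which is what a ratings-agency model would want.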

    Now, we could start with #1 and evolve into #2, which is the low risk strategy. It would allow us to demonstrate competency and avoid risk until we had the metrics to warrant ‘charging’ for certification.

    My concerns are:
    (a) I started my work, the institute, and this company Runcible, with the desire to produce truth and ethics validation/falsification for the masses, in defense against the pervasive ‘lying’ of the talking classes across the spectrum – particularly our government, academy, media, finance, and advertising-marketing commercial sector. This goal is social and political first, and economic second.
    (b) I do not care about or want to be in the business of certifying products, services, and claims. (Though to be honest, other members of the team, in particular my business partner Brad, are happy to run that business.) It’s a liability minefield and I think it is contrary to the objective of saving the common people from false promise and deception.
    (c) That said, it is much easier to get a VC to fund Option #2.
    (d) The major hosting and LLM producing companies do not want to be in the certification business. It puts them in potential conflict with their customers. But at least SOME of them are interested in having the option to ‘truth test’ almost anything online (That is, until they figure out how much of their cognitive bias reinforcement is almost certainly false.)
    (e) The major hosting providers might find a conflict between truth testing, alignment, and customer bias. In other words, they’re blame-free if people don’t expect the truth from them. But what happens when they dish the truth and the population doesn’t like it, even if we try to align any controversial truth to the bias of the user?

    So, do you see my conundrum?

    If you have worthy thoughts about this please share.

    Curt


    Source date (UTC): 2025-09-16 18:35:40 UTC

    Original post: https://twitter.com/i/web/status/1968020982284816890

  • “Emit Timestamp”.
    I’m sure you can add it to your personal protocol.


    Source date (UTC): 2025-09-15 21:40:21 UTC

    Original post: https://twitter.com/i/web/status/1967705069899722778

  • Three Prompt Templates for Runcible GPT (CurtGPT)

    Prepare to be overwhelmed. These prompts expose the entire process that we use for testing the truth and reciprocity of claims and assertions. The examples we use are simple but you can put a news article or a policy doc into it and get a terrifyingly accurate result.

    I have not incorporated our science of lying into the model yet – partly because I’m a bit afraid of what it will demonstrate: the pervasiveness of lying in our daily lives. But that will come in the future.

    Proceed step-by-step. Show PLAN → OPS → AUDIT → 11-STEP.
    --steps show --plan full --ops verbose --audit full --depth 3 --deliver 11step,audit,datadict --simulate on
    CLAIM: “<state the factual proposition>”
    CONTEXT: “<scope and constraints, if any>”
    EVIDENCE SOURCES: “<list or ‘to-be-identified’>”
    ERROR BUDGET: “<tolerance>”

    Proceed step-by-step. Show PLAN → OPS → AUDIT → 11-STEP.
    --steps show --plan full --ops verbose --audit full --depth 3 --exceptions warranted --deliver 11step,audit,checklist --simulate on
    TASK: “Evaluate policy X for institutionalized reciprocity and computable enforcement.”
    METRICS: “harms, restitution calculus, budget impact, tail-risk handling”

    Proceed step-by-step. Show PLAN → OPS → AUDIT → 11-STEP.
    --steps show --plan full --ops verbose --audit full --depth 3 --deliver 11step,audit,datadict,checklist
    TASK: “Specify a decision procedure for [domain].”
    CONSTRAINT: “No discretionary thresholds; exceptions require performer warranty.”
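    Each template is a fixed preamble, a flag line, and quoted fields. One hedged way to fill the first (claim-test) template programmatically — assuming the flag en-dashes are Twitter-mangled double hyphens, and with an illustrative helper name not part of the published templates:

```python
# Sketch of filling the first (claim-test) template by substituting its
# quoted placeholder fields. The helper name and default values are
# illustrative assumptions, not part of the published templates.

CLAIM_TEMPLATE = """\
Proceed step-by-step. Show PLAN → OPS → AUDIT → 11-STEP.
--steps show --plan full --ops verbose --audit full --depth 3 --deliver 11step,audit,datadict --simulate on
CLAIM: "{claim}"
CONTEXT: "{context}"
EVIDENCE SOURCES: "{sources}"
ERROR BUDGET: "{budget}"
"""

def fill_claim_template(claim: str, context: str = "none",
                        sources: str = "to-be-identified",
                        budget: str = "5%") -> str:
    """Substitute the four quoted fields of the claim-test template."""
    return CLAIM_TEMPLATE.format(claim=claim, context=context,
                                 sources=sources, budget=budget)

prompt = fill_claim_template("Raising tariffs reduced consumer prices.")
```

    The same substitution pattern applies to the policy-evaluation and decision-procedure templates, with their own TASK/METRICS/CONSTRAINT fields.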


    Source date (UTC): 2025-09-15 20:11:39 UTC

    Original post: https://x.com/i/articles/1967682746626818285