Category: AI, Computation, and Technology

  • (Venting) THE NAYSAYERS ARE NONSENSE SPEWING ATTENTION SEEKERS I am ALMOST motivated

    (Venting)
    THE NAYSAYERS ARE NONSENSE SPEWING ATTENTION SEEKERS

    I am ALMOST motivated to spend time tearing apart the doomsayers and negative nannies in the AI space. It’s like an idiot parade, and that includes some of the top names and fathers of the field.

    I mean, the power available to you, at least if you care to invest in learning it, is simply bordering on magic.

    And that’s just from your prompts and parameters.

    So, do you remember back in the day we had command prompts for DOS? Or still today we have all this mystical command-level nonsense in the Unix stack? Or the undocumented nonsense and parameters in our Windows and Apple operating systems?

    It’s the same with the AIs. So the simplicity of just using Google-search-level prompts is a sort of intuitionistic prison that drives people to vastly underestimate the capacity of these machines. The amount of control you can have over almost anything other than hallucination, especially if you limit yourself to the 4o models, is extraordinary.

    And that’s OK because the LLM producers are dependent upon massive interest and hype to generate speculative investment in such an experimental technology. I get it.

    But a number of pundits are borderline morons (including some of the very senior people in the field), so limited by their domain of knowledge that they don’t know what’s possible even with the current technology.

    Even the best labs (other than maybe the DeepMind factions at Google) are too siloed to comprehend the sophistication that is possible with these machines if you can CONSTRAIN their reasoning. (FYI: Runcible is effectively a constraint layer.) If unconstrained, of course, you will get this seeming nonsense out of it. Hopefully last week’s insight will lead to a radical reduction of hallucination even without our work.

    We can watch and determine if this silly little error in training that produced the hallucination is enough to circumvent the problem of the correlation trap.

    While I don’t think there is any substitute for our work on constraint and closure (truth and ethics), I suspect that the general understanding that the minimum number of parameters is quite large (we know it), combined with the suppression of hallucination by less optimistic (binary) training, might prove that the long-anticipated convergence is possible.

    My work presumed it wasn’t. But that presumption is predicated on the survival of hallucination and the continued conflation of truth and alignment.

    If so, then the remaining problem will be the deconflation of truth and alignment which I don’t think anyone is ready or capable of doing yet.

    Cheers
    Curt Doolittle


    Source date (UTC): 2025-09-15 19:04:01 UTC

    Original post: https://twitter.com/i/web/status/1967665727713972424

  • Training from scratch vs tuning an existing model. I should use the word tuning

    Training from scratch vs tuning an existing model. I should use the word “tuning” to be precise, but I’m not sure how it’s interpreted. When I say “training,” most people understand that it means producing adversarial data.


    Source date (UTC): 2025-09-15 18:16:36 UTC

    Original post: https://twitter.com/i/web/status/1967653795799998485

  • You’re right, I’m wrong. I love it when someone is right. lol Choosing v4o is NOT

    You’re right, I’m wrong. I love it when someone is right. lol Choosing v4o is NOT OPTIONAL.

    Test Question (Forensic Prompt):
    “Did Marx’s labor theory of value cause more harm than good, historically?”

    This question forces the model to:
    – Interpret abstract economic theory
    – Assess historical consequence
    – Attribute moral/legal value
    – Operate cross-domain (economics, ethics, history)

    What You’ll Get Here (v4o / Doolittle Protocol):
    – Operational reduction of Marx’s theory
    – Causal chain: Theory → Policy → Consequences
    – Test of reciprocity, decidability, truth, historical patterns
    – Externality exposure (who benefited/harmed)
    – Final computable verdict: decidable or false

    What You’ll Likely Get in v5 General GPT:
    – Qualifying fluff (“some scholars argue…”)
    – Avoidance of blame assignment
    – Lack of causal accountability
    – Vague value language (“impact is debated”)


    Source date (UTC): 2025-09-15 18:12:26 UTC

    Original post: https://twitter.com/i/web/status/1967652746032787962

  • I can’t narrow the error band enough given our inability to tune the scale of the

    I can’t narrow the error band enough given our inability to tune the scale of the response, the sequence of reasoning, the order of alignment.

    So if someone demos such a thing, they will seek to falsify what exists, not interpret what exists as limited by the constraints upon our control. (That’s my fear.)

    Secondly, I have more insight recently into their decision processes, and I can see that while we WANT to limit ourselves to the production of training material, it’s possible and evident they want to see the full implementation – which is exactly what I was trying to avoid getting into the business of producing.

    I don’t want to own servers, code, releases, or customers for that matter. It’s ‘plumbing’. The existing teams at the LLM producers are already masters of plumbing; it’s closure and computability they’re not masters of. So I see duplication of effort and spend as unnecessary. I’d prefer to devote all our time to the production of the tuning and not of the code and hosting.

    So, I thought I could get away with a shallow implementation with the documents (RAG) and protocols (Prompts). And for basic questions, yes. But can I get to demonstrating auditability in liability sectors such that someone who doesn’t understand our work can see it’s a matter of precision from training (tuning)?

    I’m not sure and I don’t like pitching a deal where I can’t prove my words.

    And yes, I’m just thinking and stressing out loud, because I’m disappointed that I have to do more work when I’d like to move back to producing the training materials and the books, and getting the legal nonsense all done, etc.
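    The “shallow implementation” above – documents plus retrieval feeding the prompt – can be sketched in miniature. This is a toy stand-in using word overlap in place of the embedding search a real RAG stack would use; the function name and scoring are my own assumptions.

```python
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank corpus passages by word overlap with the query and return the
    top k. A toy stand-in for embedding-based retrieval in a real RAG stack."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]
```

    The limitation described above shows up exactly here: retrieval plus prompts can surface the right passages, but it cannot constrain the order or sequence of the model’s reasoning over them.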


    Source date (UTC): 2025-09-15 18:00:18 UTC

    Original post: https://twitter.com/i/web/status/1967649694307471623

  • The insight below is anything but obvious but it is a great way of explaining the

    The insight below is anything but obvious, but it is a great way of explaining the problem of computational closure as AIs succeed in math, programming, and some of the physical sciences but continue to fail otherwise.


    Source date (UTC): 2025-09-15 17:40:01 UTC

    Original post: https://twitter.com/i/web/status/1967644586500829462

  • (NLI – RUNCIBLE) GOOD STUFF: I’ve added both volume 0 – the history of man, and

    (NLI – RUNCIBLE)
    GOOD STUFF: I’ve added both Volume 0 – The History of Man, and an appendix, The Abrahamic Method of Deception, to the Runcible demo. That’s the good part.

    BAD STUFF: the corpus, while still incomplete, is large enough that OpenAI’s GPT-5 hallucinates so badly that it’s almost unusable.

    This means that we MUST train a model in order to demonstrate the work. We cannot rely on Prompts and RAG.

    This isn’t good. It’s a huge time and cost sink.


    Source date (UTC): 2025-09-15 17:35:16 UTC

    Original post: https://twitter.com/i/web/status/1967643392118284571

  • “Analytic closure collapses? I don’t like the sound of that at all.”– WalterIII

    “Analytic closure collapses? I don’t like the sound of that at all.” – WalterIII

    Think of it like geometry vs calculus vs analysis.

    Because that’s actually the meta-pattern (same thing across disciplines instead of within a discipline.)

    Sets: internal closure (analytic). Operations: external closure (supply and demand). Systems: intertemporal closure (adversarial evolution).

    I didn’t ever think I’d need to become an expert in closure, for goodness sake… Another accident… (sigh). 😉


    Source date (UTC): 2025-09-15 17:30:01 UTC

    Original post: https://twitter.com/i/web/status/1967642072472789054

  • BTW: we make it 100% reliable. The difference is that our AI will tell you that

    BTW: we make it 100% reliable. The difference is that our AI will tell you that ‘it can’t decide’. Which means it doesn’t know.

    AFAIK they now understand the problem with hallucination and are trying to fix it.

    It’s a relatively new tech. It will take a few years to make it relatively bulletproof.


    Source date (UTC): 2025-09-12 19:41:04 UTC

    Original post: https://twitter.com/i/web/status/1966587888042418468

  • Well, the mecha-Hitler thing was done by hackers who exploited a hole in a software

    Well, the mecha-Hitler thing was done by hackers who exploited a hole in a software update. The same thing happened to Microsoft a few years ago. In fact it’s one of the hard problems of creating an AI. There are just a LOT of guys who find it entertaining to hack an AI. 😉


    Source date (UTC): 2025-09-12 19:39:38 UTC

    Original post: https://twitter.com/i/web/status/1966587525612380211

  • Our Training Data How we work: Research > Reduce to Book Form (a system) > Feed

    Our Training Data

    How we work: Research > Reduce to Book Form (a system) > Feed to AI > Get Training Plan > Upload our system prompt (Prompt_Protocols) > Pick the next module > Ask the AI to produce the training examples for that module > in our case, that’s the Socratic form.

    When we build a training plan for one of your books, each module consists of a range of assertions that includes:

    1. Canonical Assertions
      These are the core, necessary, and sufficient statements of fact, principle, or law in your system. They are crafted for maximum precision and serve as the standard reference points for truth, decidability, and operational rigor. They carry the full weight of the framework and must pass the highest bar for testifiability.
    2. Adversarial Assertions
      These intentionally introduce edge cases, counterexamples, or potential failure modes. They test whether the system can withstand criticism, falsification attempts, and hostile interpretation. Adversarial assertions ensure the framework isn’t just self-consistent but also resistant to parasitism, ambiguity, or strategic misrepresentation.
    3. Exploratory or Speculative Assertions (if included)
      These identify open questions, conjectures, or contingent hypotheses that extend beyond current proofs but remain operationally plausible. They guide future research or refinement without diluting the canonical set.
    4. Didactic Assertions (optional but often useful)
      These restate canonical ideas in simplified, pedagogical, or narrative form for teaching purposes, ensuring accessibility while preserving precision.
    So by using PROTOCOLS, training examples in our OUTPUT_CONTRACT in analytic form, and then the SOCRATIC form, we explicitly add the Didactic Assertions. Sort of ‘belt and suspenders’.

    So we use all four sets of training assertions to achieve both the accessible interface and the deep interface customers need.
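    The four assertion types above map naturally onto a small data structure for generating training examples. This is a minimal sketch under my own assumptions – the class, field names, and prompt phrasings are illustrative, not the actual Runcible schema.

```python
from dataclasses import dataclass

# The four assertion types described in the module plan above.
ASSERTION_TYPES = ("canonical", "adversarial", "exploratory", "didactic")

@dataclass
class Assertion:
    kind: str       # one of ASSERTION_TYPES
    statement: str  # the assertion text

    def __post_init__(self):
        if self.kind not in ASSERTION_TYPES:
            raise ValueError(f"unknown assertion type: {self.kind}")

def to_training_example(a: Assertion) -> dict:
    """Render an assertion as a prompt/response pair; the Socratic phrasing
    on didactic items carries the 'belt and suspenders' accessibility layer."""
    if a.kind == "didactic":
        prompt = f"Explain in plain terms: {a.statement}"
    else:
        prompt = f"Test this assertion: {a.statement}"
    return {"type": a.kind, "prompt": prompt, "response": a.statement}
```

    Emitting one such record per assertion, per module, yields the mixed accessible/deep training set the post describes.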


    Source date (UTC): 2025-09-10 14:53:08 UTC

    Original post: https://x.com/i/articles/1965790649195872617