Overlay (“Governance Layer”)
Performs constraint, closure, decidability.
Source date (UTC): 2025-11-21 14:58:20 UTC
Original post: https://twitter.com/i/web/status/1991883884032909650
We have run Runcible on other models, but it doesn’t really matter, since our business model is to license it to foundation model producers rather than compete in the model space.
Runcible prefers a large number of parameters regardless of quality. So we spend most of our time testing it on OpenAI’s ChatGPT and Google’s Gemini. Grok is finally getting to where we will be able to run our layer on it, and we are most aligned with Grok’s vision. That said, we aren’t interested in entering the hardware space race. It’s a money game. I’d prefer to run on ‘other people’s money’, so to speak, and specialize in AI decidability (quality).
However, business pressures might force us to host models on Microsoft or Amazon, simply because one of our revenue models (certifications) might be better served by self-hosting, and so we’ve planned for the possibility. (We have done hundreds of millions of dollars of business with Microsoft and done shared investments with them, so we’d prefer to work with them simply because it’s easier for us. We understand them and how to get things done there.)
The primary difference is that some routers (OpenAI’s version 5) were inconsistent, even though 4o was flawless. 5.1 is much better, and with Gemini 3, Google has moved from absolutely useless to a competitor.
Thanks for asking
CD
Source date (UTC): 2025-11-21 04:27:14 UTC
Original post: https://twitter.com/i/web/status/1991725063977267584
WHY RUNCIBLE IS THE MISSING AI LAYER
Runcible does what LLMs cannot:
– imposes constraint and closure,
– forces adversarial falsification,
– evaluates reciprocity and coercion,
– models demonstrated interests,
– tests for constructability and possibility,
– produces warrantable, auditable, and accountable outputs.
Runcible converts a probabilistic engine into a decision system.
Runcible Governance is not optional.
It is structurally necessary.
http://Runcible.com
Source date (UTC): 2025-11-21 04:12:59 UTC
Original post: https://twitter.com/i/web/status/1991721479646638187
FRONTIER MODELS ARE A COMMODITY.
They are trying to add features to compensate for failing to produce constraint, closure, and decidability.
There is no alternative to our Governance Layer.
Run it, or be left behind.
http://Runcible.com
…
Source date (UTC): 2025-11-21 04:05:48 UTC
Original post: https://twitter.com/i/web/status/1991719672618840301
THE CONTRARIAN POINT INVESTORS MISS
Everyone else is trying to make LLMs more helpful.
We are making them correct.
Everyone else treats hallucination as a bug.
We treat it as a structural inevitability.
Everyone else tries to fix behavior with tuning.
We fix the architecture with governance.
This is why the majors will converge on our solution:
they have no path to safety, reliability, or liability without it.
http://Runcible.com
Source date (UTC): 2025-11-21 04:03:38 UTC
Original post: https://twitter.com/i/web/status/1991719126839243126
EVERY LLM REQUIRES OUR RUNCIBLE GOVERNANCE LAYER
–“LLMs cannot provide decidability in possibility, reciprocity, or truth because their architecture lacks the closure, constraints, falsification tests, and liability conditions required for adjudication. They generate statistically plausible language, not operational judgments grounded in constructability, reciprocity, or testifiability. Possibility requires physical and procedural constraint; ethics requires a grammar of costs, coercion, and demonstrated interests; truth requires adversarial testing, scope definition, and warrant. None of these functions exist inside a probabilistic model. Runcible supplies the missing governance layer—formal protocols, constraint logic, adversarial evaluation, and liability accounting—that converts generative output into warrantable, testifiable, and ethically reciprocal decisions.”–
Source date (UTC): 2025-11-21 04:02:22 UTC
Original post: https://twitter.com/i/web/status/1991718807098868218
The RAG service is the memory and rulebase behind the Runcible governance layer.
It transforms the books into a queryable, authoritative body of law and logic from which the AI must retrieve evidence before generating answers.
As a result:
– The AI cannot hallucinate because it cannot improvise outside the validated corpus.
– Every answer is traceable, auditable, and warrantable, because the source text is fixed.
– Updating any book or rule instantly updates the AI’s behavior globally.
This is the core of Runcible’s differentiation: LLMs are stochastic; Runcible is governed.
Functional summary for investors:
The RAG layer is the knowledge spine that makes governed AI possible. It converts your books into enforceable constraints.
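The retrieve-before-generate gating described above can be sketched in a few lines. This is a minimal illustrative sketch, not Runcible’s actual implementation: the corpus contents, the `Passage`/`retrieve`/`answer` names, and the toy keyword matcher are all assumptions introduced here; a production system would use embedding retrieval and a real generation step.

```python
# Illustrative sketch of retrieval-gated answering: the model may only
# answer when supporting evidence exists in a fixed, validated corpus,
# and every answer cites its source so it is traceable and auditable.
# All names and corpus contents here are hypothetical.
from dataclasses import dataclass

@dataclass
class Passage:
    source: str   # book / rule identifier, so each answer is traceable
    text: str     # the fixed, validated source text

# The "knowledge spine": editing a passage here changes behavior globally.
CORPUS = [
    Passage("book-1 §2.3", "Possibility requires physical and procedural constraint."),
    Passage("book-2 §1.1", "Ethics requires a grammar of costs coercion and interests."),
]

def retrieve(query: str, corpus: list[Passage]) -> list[Passage]:
    """Toy retriever: return passages sharing vocabulary with the query."""
    terms = set(query.lower().split())
    return [p for p in corpus if terms & set(p.text.lower().rstrip(".").split())]

def answer(query: str) -> str:
    """Refuse to generate unless evidence exists in the validated corpus."""
    evidence = retrieve(query, CORPUS)
    if not evidence:
        return "NO WARRANT: no supporting passage in the validated corpus."
    citations = "; ".join(p.source for p in evidence)
    return f"[{citations}] " + " ".join(p.text for p in evidence)
```

The design point the sketch makes concrete: the generator never improvises outside `CORPUS`, the citation prefix makes each output auditable, and updating a passage immediately changes every subsequent answer.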
Source date (UTC): 2025-11-20 22:31:21 UTC
Original post: https://twitter.com/i/web/status/1991635503955972502
@grok
please define ‘hierarchical recursive memory’.
Source date (UTC): 2025-11-20 17:19:58 UTC
Original post: https://twitter.com/i/web/status/1991557140058894550
Two things. (a) yes my work is included in most of the major LLMs by now. It’s not all current, but the gist of it is there. (b) I have uploaded my corpus to openai, grok and now google. So it calls the RAG (local store) whenever I ask something about my work.
Source date (UTC): 2025-11-20 03:28:34 UTC
Original post: https://twitter.com/i/web/status/1991347914028040270
Non-argument.
I’ve been involved in the field with the original people since 2009 and there still is little to no evidence that what we have done is durably meaningful.
Source date (UTC): 2025-11-18 20:00:12 UTC
Original post: https://twitter.com/i/web/status/1990872688177651902