(Runcible)
UPDATE: Ok. So despite all my efforts to keep Runcible Intelligence (Runcible Governance Layer) a minimal package applicable to any LLM, with the intention of licensing the tech to the major foundation model producers, I’ve discovered we essentially have to produce the same technology stack as every other lab, the only difference being that we govern an LLM rather than train one. Which is a heck of a different objective and a lot less work. Otherwise it’s exactly the same architecture and division of labor.
The immediate benefits of the extra scale are:
1. We can modify the code of the LLM we use as a router and mixture-of-experts.
2. We can (eventually) modify the attention nodes for even greater precision.
3. We can reach out and touch every foundation model and constrain it indirectly rather than locally (by sending a package of protocols).
4. We can produce the protocols for a vertical market in hours (yes really) because the core (epistemic layer) can process literally anything.
5. So we can cover the entire set of markets in trivial time.
6. We have better control over our IP, which is the ‘secret’ behind the velocity of production and the universality of the core’s application.
And that’s just the beginning.
I was struggling to ‘keep it simple’. But without having to wait for approval from someone’s partnership program, jump through hoops asking for special dispensation, and then wait on their partnership decision cycle and their adoption decision cycle, we can go to market immediately.
We already have our first customers.
And we can see the radical ramp-up for an LLM that doesn’t hallucinate and produces warrantable, liability-defensible outputs.
Cheers
CD
Source date (UTC): 2026-01-31 19:15:47 UTC
Original post: https://twitter.com/i/web/status/2017678214609748178