(Runcible) RE: LLM R&D
TL;DR:
Yeah, well, LLMs are really pretty dumb and I’ve sought and hit the limit of their abilities. And it’s been an exasperating journey of dashed hopes. 😉 (well, mostly).

Problem Space:
We produce:
a) the runcible governance layer which is my epistemology applied to computability. It is extraordinary. It overcomes the problems of LLMs but not the limitations. It’s a wrapper around the LLM that binds it to limited pathways through the latent space.

b) Control over this layer has forced us to create and modify one of the open-source models. We had hoped to 'help' OpenAI, but they are a bit 'unfocused' as a company, and getting access to discuss at a high enough level amidst their many pressures is almost impossible on one hand and more costly than doing it ourselves on the other. So in effect we have a router LLM and a set of target LLMs, functioning as a mixture of experts. But realistically it's just cost control.
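A minimal sketch of what the router-plus-targets setup in (b) might look like, if the routing criterion is "cheapest model judged adequate for the request." All model names, prices, and capability tiers here are invented for illustration; the real router is itself an LLM, not a lookup table.

```python
# Hypothetical cost-control router: pick the cheapest target model
# whose capability tier meets the request's estimated difficulty.
# Names, prices, and tiers are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class TargetModel:
    name: str
    cost_per_1k_tokens: float  # assumed pricing, drives cost control
    capability: int            # crude 1-5 capability tier

TARGETS = [
    TargetModel("small-local", 0.0001, 1),
    TargetModel("mid-tier",    0.0010, 3),
    TargetModel("frontier",    0.0100, 5),
]

def route(required_capability: int) -> TargetModel:
    """Return the cheapest target that meets the required tier."""
    eligible = [m for m in TARGETS if m.capability >= required_capability]
    return min(eligible, key=lambda m: m.cost_per_1k_tokens)

# A trivial request goes to the cheap model; a hard one to the frontier model.
print(route(1).name)  # small-local
print(route(4).name)  # frontier
```

In a real deployment the `required_capability` estimate would come from the router LLM classifying the prompt, which is what makes the ensemble behave like a mixture of experts while functioning, as the post says, mainly as cost control.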

c) We also produce Oversing, a universal mobile, tablet, and desktop application platform for running organizations of any scale. Basically, it's a host for contexts that you use to collaborate with one another and an LLM.

Result:
This is a crazy project of enormous scope but it’s the end point of what people need going forward. Cooperation at scale.

No one has done this yet. It’s partly because they don’t know how to, and it’s partly because it’s hard. 😉

Context:
While I have built many tech companies, consulted for the Fortune 400, and made the Inc. 500 three times, I work in many disciplines, and my fundamental skill is epistemology, especially in high-dimensional closure domains such as cognitive science, language, economics, law, and evolution.
The simple version: I don't make a mistake common in the STEM fields, which is trying to reduce to mathematics or algorithm that which is not reducible further than operational prose.

So this is why I can solve this problem for LLMs.
It’s just … let’s say… a little exasperating – just like every previous tech …until we understood it. 😉

CD


Source date (UTC): 2026-02-10 21:46:29 UTC

Original post: https://twitter.com/i/web/status/2021340016082034787
