We have run Runcible on other models but it doesn’t really matter since our business model is to license it to foundation model producers rather than compete in the model space.
Runcible prefers a large number of parameters regardless of quality, so we spend most of our time testing it on OpenAI’s ChatGPT and Google’s Gemini. Grok is finally getting to the point where we will be able to run our layer on it, and we are most aligned with Grok’s vision. That said, we aren’t interested in entering the hardware space race, so to speak. It’s a money game. I’d prefer to run on ‘other people’s money,’ so to speak, and specialize in AI decidability (quality).
However, business pressures might force us to host models on Microsoft or Amazon, simply because one of our revenue models (certifications) might be better served by self-hosting, and so we’ve planned for the possibility. (We have done hundreds of millions of dollars of business with Microsoft and done shared investments with them, so we’d prefer to work with them simply because it’s easier for us. We understand them and how to get things done there.)
The primary difference is that some routers (OpenAI’s GPT-5) were inconsistent, even though 4o was flawless. GPT-5.1 is much better, and with Gemini 3, Google has moved from absolutely useless to a competitor.

Thanks for asking
CD


Source date (UTC): 2025-11-21 04:27:14 UTC

Original post: https://twitter.com/i/web/status/1991725063977267584
