UPDATE:
We cannot tune (alter the weights of) a custom GPT like the one we are using to demo CurtGPT. It’s limited to prompts and RAG (Files).
So we will take it as far as we can today and tomorrow.
After that we will move to 4o-mini, with which we can use prompts, RAG, and training (fine-tuning).
Our training uses both a Socratic and an analytic example for every item, so we train the conversational and judicial models at the same time.
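As an illustration of what paired training items could look like, here is a minimal sketch in the JSONL chat format that OpenAI fine-tuning accepts. The `make_pair` helper, the system prompts, and the sample question and answers are all hypothetical, invented for this example; only the `messages`/role/content record shape comes from the fine-tuning format itself.

```python
import json

def make_pair(question, socratic_reply, analytic_reply):
    """Hypothetical helper: build two fine-tuning records for one item,
    one Socratic (conversational) and one analytic (judicial)."""
    user_turn = {"role": "user", "content": question}
    return [
        {"messages": [
            {"role": "system", "content": "Respond in a Socratic, conversational style."},
            user_turn,
            {"role": "assistant", "content": socratic_reply},
        ]},
        {"messages": [
            {"role": "system", "content": "Respond in an analytic, judicial style."},
            user_turn,
            {"role": "assistant", "content": analytic_reply},
        ]},
    ]

# Invented sample content for illustration only.
records = make_pair(
    "What is justice?",
    "What do you yourself mean when you call an act just?",
    "Justice, on this account, is giving each their due under a consistent rule.",
)

# Fine-tuning data is one JSON object per line (JSONL).
jsonl = "\n".join(json.dumps(r) for r in records)
print(len(records))
```

One source item yields two records, so a single volume trains both response styles in the same fine-tuning run.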
But again, we are training from our internally consistent and commensurable volumes, not from the arbitrary, incommensurable datasets that are the industry standard.
Our goal is a realistic demo, and the custom GPT does a fair job. But we want to demonstrate the full effect, so we need to train another model. And no other AI is even close to as capable.
Source date (UTC): 2025-09-09 17:33:35 UTC
Original post: https://twitter.com/i/web/status/1965468642503737852