In my case I have Apple hardware (and a lot of it), so I like that everything integrates in every way possible.
Source date (UTC): 2025-09-09 22:30:07 UTC
Original post: https://twitter.com/i/web/status/1965543268172923005
—“Runcible: Enterprises aren’t blocked by AI that can’t write — they’re blocked by AI they can’t trust.”— Brad Werrell
Source date (UTC): 2025-09-09 22:20:52 UTC
Original post: https://twitter.com/i/web/status/1965540937796661729
UPDATE:
We cannot tune (alter the weights of) a custom GPT like the one we are using to demo CurtGPT. It's limited to prompts and RAG (files).
So we will take it as far as we can today and tomorrow.
After that we will move to 4o-mini, with which we can use prompts, RAG, and training.
Our training uses both Socratic and analytic examples for every item. So we train both conversational and judicial models at the same time.
But again, we are training from our internally consistent and commensurable volumes, not from the arbitrary, incommensurable datasets that are the industry standard.
Our goal is a realistic demo. And the custom GPT does a fair job. But we want to demonstrate the full effect – so we need to train another model. And no other AI is even close to as capable.
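The "Socratic and analytic examples for every item" idea can be sketched as a training-data generator. This is a hypothetical illustration only: the question, answers, and system prompts below are invented, and the JSONL chat format shown is the one OpenAI's fine-tuning endpoint accepts, not a confirmed detail of this project's pipeline.

```python
import json

# Illustrative sketch: one curriculum "item" rendered as two fine-tuning
# records -- a Socratic (conversational) form and an analytic (judicial)
# form -- in JSONL chat format. All text content here is hypothetical.

def render_item(question: str, socratic_answer: str, analytic_answer: str) -> list[str]:
    """Return two JSONL records (Socratic + analytic) for a single item."""
    records = [
        {"messages": [
            {"role": "system", "content": "Answer conversationally, by guided questioning."},
            {"role": "user", "content": question},
            {"role": "assistant", "content": socratic_answer},
        ]},
        {"messages": [
            {"role": "system", "content": "Answer analytically, as a judicial finding."},
            {"role": "user", "content": question},
            {"role": "assistant", "content": analytic_answer},
        ]},
    ]
    return [json.dumps(r) for r in records]

lines = render_item(
    "What makes a claim decidable?",
    "What test could falsify it? If none exists, can it be decided?",
    "A claim is decidable iff a truth test over its terms terminates.",
)
print(len(lines))  # two training records per item
```

Training both forms on every item is what lets a single fine-tune serve the conversational and judicial modes at once.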
Source date (UTC): 2025-09-09 17:33:35 UTC
Original post: https://twitter.com/i/web/status/1965468642503737852
Tell it to lower the temperature to 0.20, then 0.35, then 0.65, then 0.90. Ask the same question at different temperatures. Around 0.20–0.25 you get very clear, clean answers, like before. If you raise it, you get creative answers that are open to interpretation and choice.
Tell it to set it back to the default when you're done.
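The same-question-at-several-temperatures experiment can be scripted rather than done by hand. A minimal sketch, assuming you have some completion function: `ask` is a stand-in you supply (the commented-out OpenAI client call is one untested option), so the sweep itself stays testable without an API key.

```python
# Sketch of the experiment above: ask one question at each temperature
# and collect the answers for side-by-side comparison.

def temperature_sweep(ask, question, temps=(0.20, 0.35, 0.65, 0.90)):
    """Return {temperature: answer} for the same question at each setting."""
    return {t: ask(question, temperature=t) for t in temps}

# With the official OpenAI client, `ask` might look like (untested sketch):
#   from openai import OpenAI
#   client = OpenAI()
#   def ask(q, temperature):
#       r = client.chat.completions.create(
#           model="gpt-4o-mini", temperature=temperature,
#           messages=[{"role": "user", "content": q}])
#       return r.choices[0].message.content

# Stand-in `ask` so the sweep runs without a network call:
answers = temperature_sweep(lambda q, temperature: f"[T={temperature}] {q}", "Define closure.")
print(sorted(answers))  # [0.2, 0.35, 0.65, 0.9]
```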
Source date (UTC): 2025-09-09 00:07:04 UTC
Original post: https://twitter.com/i/web/status/1965205275792867721
PM me your full name and email address and I'll send you a login/pw for http://runcible.com.
Source date (UTC): 2025-09-09 00:01:49 UTC
Original post: https://twitter.com/i/web/status/1965203956659093997
(NLI)
FYI: They changed something to do with the temperature. That’s why CurtGPT has been delivering fuzzier answers.
I’m lowering the temperature a bit to prefer clear answers.
However, we are adding commands to change the temperature:
– Forensic → T ≈ 0.05–0.20 (statute/precedent mode)
– Deliberative → T ≈ 0.35–0.55 (committee/compare mode)
– Exploratory → T ≈ 0.90–1.30 (think‑tank/innovation mode)
– Auto → Intent‑driven selection (see triggers below)
This allows you to experiment with either legislative clarity or options for legislation.
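The four commands above amount to a mode-to-temperature table plus an intent-driven selector for Auto. A minimal sketch, assuming simple keyword triggers (the trigger words and midpoint choice below are my illustrative assumptions, not the actual Auto protocol):

```python
# Hypothetical rendering of the command table: each mode maps to a
# temperature band; "auto" infers a mode from keywords in the prompt.

MODES = {
    "forensic":     (0.05, 0.20),  # statute/precedent mode
    "deliberative": (0.35, 0.55),  # committee/compare mode
    "exploratory":  (0.90, 1.30),  # think-tank/innovation mode
}

def select_temperature(mode: str, prompt: str = "") -> float:
    """Return the midpoint of a mode's band; 'auto' infers the mode."""
    if mode == "auto":
        text = prompt.lower()
        if any(w in text for w in ("statute", "precedent", "cite")):
            mode = "forensic"          # assumed trigger words, for illustration
        elif any(w in text for w in ("brainstorm", "imagine", "what if")):
            mode = "exploratory"
        else:
            mode = "deliberative"      # assumed default
    lo, hi = MODES[mode]
    return round((lo + hi) / 2, 2)

print(select_temperature("forensic"))  # 0.12
```

Picking the band midpoint is just one reasonable policy; the real commands presumably tune within each band.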
Source date (UTC): 2025-09-08 23:37:18 UTC
Original post: https://twitter.com/i/web/status/1965197785143361999
(RUNCIBLE)
(Update: for members and friends)
FYI: We have finished with the protocols and commands and the Runcible Demo is live.
… And it’s terrifying in depth and precision. 😉
While you can ask it questions in ordinary language, the commands are listed on the demo page. You can also turn the decidability matrix ('show me the work') on and off. By default it's on.
Though please remember:
1 – All we have implemented is the foundations and the ethics.
2 – It's working with RAG from the books only, with our constraint protocols – and is not yet trained. This means it will provide excellent answers to first-order questions, but on 'the side roads' and 'back roads', where externalities are involved, it might not address those 'edges'.
Source date (UTC): 2025-09-07 18:40:05 UTC
Original post: https://twitter.com/i/web/status/1964760600262955035
Wait until the new protocols are installed and we publish the available commands. OMFG. It's brutally truthful. Brad and I put six hours in this morning and we'd be done, but I crashed and my brain is mush. That said… almost there. 😉
Source date (UTC): 2025-09-06 19:19:51 UTC
Original post: https://twitter.com/i/web/status/1964408220304494897
Math and Programming require and depend upon internal closure (capacity to truth test assertions).
LLMs are statistical and have no means of closure – hence hallucinations and bad answers, and the inability to reason outside of primitive closure.
We produce the means of closure for the ‘universe’ so to speak and this includes LLMs. This is why it takes the ternary logic of evolutionary computation, operational prose, the first principles hierarchy, and the decidability criteria (protocols) to enable such closure and as such decidability.
Now, the LLMs don't really want to be that well behaved, so it requires a bit of system prompting to make them so, and training to make it easy for them.
But it works. ;)
In other words, we teach LLMs to construct proofs. Or, more correctly, we help them discover solutions and test them – the result is a proof or its failure.
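The discover-and-test loop described above has a familiar skeleton: a proposer suggests candidates and an independent verifier truth-tests each one; the first candidate that survives is the "proof", and exhausting the candidates without one is the failure. A toy sketch under that reading (names and the arithmetic example are illustrative, not the actual protocol stack):

```python
# Generate-and-test sketch of closure: propose candidates, truth-test each.

def closure_loop(propose, verify, max_attempts=10):
    """Return (candidate, True) for the first verified candidate, else (None, False)."""
    for attempt in range(max_attempts):
        candidate = propose(attempt)
        if verify(candidate):       # the "means of closure": a terminating truth test
            return candidate, True
    return None, False              # no proof found within the attempt budget

# Toy instance: find a non-negative integer whose square is 36.
result, ok = closure_loop(propose=lambda n: n, verify=lambda n: n * n == 36)
print(result, ok)  # 6 True
```

The point of the sketch is the separation of roles: the statistical model only has to propose; decidability comes from the external verifier.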
Source date (UTC): 2025-09-05 18:38:01 UTC
Original post: https://twitter.com/i/web/status/1964035304962347433
RE: Runcible Intelligence
—“Truth is the ultimate disrupter.”— Dr Brad.
(He says this with a grin.)
Source date (UTC): 2025-09-05 00:08:08 UTC
Original post: https://twitter.com/i/web/status/1963755992682041652