@OpenAI
I THINK I UNDERSTAND WHAT’S WRONG WITH GPT-5
Though I think it might take a presentation to explain it to the devs because I’m not sure I can do it justice off the top of my head in a tweet…
Explanation:
“As demand for certainty increases the demand for closure increases, but computational closure at current levels of ambiguity is unachievable without narrowing depth of association.” Ergo GPT-5 is ‘shallow’ compared to GPT-4o (very shallow, painfully so).
While we can constrain GPT-4 to operational prose, ternary logic, and a hierarchy of first principles of evolutionary computation — particularly in behavioral science and the humanities — with just a few thousand pages of text and a simple prompt protocol, we cannot constrain GPT-5 this way, since it is already over-constrained in association and closure and under-constrained in vocabulary and grammar.
i.e., I’m guessing GPT-5 is narrowing the wrong scope because the team is a victim of the education system’s endemic vulnerability to ‘mathiness’ — or what those of us whose first education is in economics understand as a failure to grasp the limits of mathematics. That failure is tolerable in programmatic logic, but it is intolerable in verbal ‘reasoning’ because of universal latent ambiguity in the absence of operational prose, canonical terms, a prohibition on the verb ‘to be’ and the promissory form, and writing in full sentences.
Now, we can tweak GPT-4 to reason deeply with a bit of effort. But GPT-5 is a step backward in reasoning. And FWIW: the personal, social, and political crises of the age are much more important for our future than our presumptions of the value of innovations in the physical sciences and the resulting technology.
That doesn’t mean that whatever difference in the foundation model you have implemented in GPT-5 cannot be tuned to restore verbal reasoning. It means that the present version is a step backward.
Cheers
Source date (UTC): 2025-08-09 17:57:50 UTC
Original post: https://twitter.com/i/web/status/1954240718802915665