
(Venting)
THE NAYSAYERS ARE NONSENSE SPEWING ATTENTION SEEKERS

I am ALMOST motivated to spend time tearing apart the doomsayers and negative nannies in the AI space. It’s like an idiot parade, and that includes some of the top names and fathers of the field.

I mean, the power available to you, at least if you care to invest in learning it, is simply bordering on magic.

And that’s just from your prompts and parameters.

So, do you remember back in the day when we had command prompts for DOS? Or, still today, all the mystical command-level nonsense in the Unix stack? Or the undocumented nonsense and parameters in our Windows and Apple operating systems?

It’s the same with the AIs. The simplicity of just using Google-search-level prompts is a sort of intuitionistic prison that drives people to vastly underestimate the capacity of these machines. The amount of control you can have over almost anything other than hallucination, especially if you limit yourself to the 4o models, is extraordinary.

And that’s OK because the LLM producers are dependent upon massive interest and hype to generate speculative investment in such an experimental technology. I get it.

But a number of the pundits are borderline morons (including some of the very senior people in the field), so limited by their domain of knowledge that they don’t know what’s possible even with the current technology.

Even the best labs (other than maybe the DeepMind factions at Google) are too siloed to comprehend the sophistication that is possible with these machines if you can CONSTRAIN their reasoning. (FYI: Runcible is effectively a constraint layer.) If unconstrained, of course, you will get this seeming nonsense out of them. Hopefully last week’s insight will lead to a radical reduction of hallucination even without our work.

We can watch and determine whether this silly little error in training that produced the hallucination is enough to circumvent the problem of the correlation trap.

While I don’t think there is any substitute for our work on constraint and closure (truth and ethics), I suspect that the general understanding that the minimum number of parameters is quite large (we know it), combined with the suppression of hallucination by less optimistic (binary) training, might prove that the long-anticipated convergence is possible.

My work presumed it wasn’t. But that presumption is predicated on the survival of hallucination and the continued conflation of truth and alignment.

If so, then the remaining problem will be the deconflation of truth and alignment, which I don’t think anyone is ready for or capable of doing yet.

Cheers
Curt Doolittle


Source date (UTC): 2025-09-15 19:04:01 UTC

Original post: https://twitter.com/i/web/status/1967665727713972424
