I am getting exhausted by these papers missing the point. This is like criticizing the human language faculty when it is disconnected from the prefrontal cortex. It's silly. As a language faculty, an LLM is fantastic: it's a hypothesis generator, just like the one in our brains. It works so similarly to our own language faculty that it's amazing.

My organizations work on creating that 'prefrontal cortex'. We treat LLMs as hypothesis generators, but then we constrain and govern their thinking the way our reasoning, conscious minds regulate our speech as we go along. Humans don't pre-calculate what we're going to say: we sense a 'direction', so to speak, and then figure out how to describe it as we go.

The difference is that we don't interrupt the LLMs and interpret them until they're finished; we don't continuously, recursively disambiguate their use of language as a path through their 'latent space' (world model).

That's not a bug. It's simply a fact that the LLM foundation-model producers, and frankly the entire academic side of the industry, are working their one-trick pony of 'attention is all you need': producing transformers without auditors (a frontal cortex).

They keep trying to get a hypothesis generator to audit itself, rather than using another LLM to audit its process and correct it.
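The generate-then-audit loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the author's actual system: the `generate` and `audit` functions are stubs standing in for two separate LLM endpoints, and the revision step is simulated directly rather than fed back through the generator.

```python
# Hypothetical sketch of a generator/auditor loop: one model proposes,
# a second model checks and requests corrections. All model calls are
# stubbed for illustration.

def generate(prompt: str) -> str:
    """Stub 'hypothesis generator' (the base LLM). Deliberately returns a
    flawed draft so the auditor has something to catch."""
    return f"Draft answer to: {prompt} [contains unsupported claim]"

def audit(draft: str) -> tuple[bool, str]:
    """Stub 'prefrontal cortex': a second model that checks the draft and
    returns (passed, feedback)."""
    if "[contains unsupported claim]" in draft:
        return False, "Remove the unsupported claim."
    return True, ""

def generate_with_auditor(prompt: str, max_rounds: int = 3) -> str:
    """Generate a hypothesis, audit it, and revise until it passes or the
    round budget is exhausted."""
    draft = generate(prompt)
    for _ in range(max_rounds):
        passed, feedback = audit(draft)
        if passed:
            return draft
        # In a real system `feedback` would be sent back to the generator
        # as a revision prompt; here we simulate the correction directly.
        draft = draft.replace(" [contains unsupported claim]", "")
    return draft
```

The key design point, per the post, is that the auditor is a *separate* model from the generator, so the generator is never asked to grade its own output.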

Why isn't that happening? Because it's already too damned expensive (really). So they keep twiddling with minor improvements to the algorithm because they don't know any better.

We do. But it's taking us time to finish the solution to the problem for them. (And while we're happy to chat in public like this, we aren't really interested in joining the hype game. It's all nonsense.)


Source date (UTC): 2026-01-02 06:17:09 UTC

Original post: https://twitter.com/i/web/status/2006973015620542793
