RE: “LLM users consistently underperformed at neural, linguistic, and behavioral levels”

1) The test results are obvious. My concern is that the presumption behind them might be a form of Luddism. Meaning: what patterns will we learn this way, versus what patterns did we learn under the ‘sciencing’ of education? And when we ‘scienced’ education, what patterns did we learn in the cognitive model before that (rational philosophy)? And what patterns before that (narrative wisdom and theology)?

So I haven’t been able to synthesize a future prediction out of this experience, but my presumption is that we will yet again divide the spectrum of human thinking by the GRAMMAR of the PATTERNS (paradigms) made possible by the capacity of AIs to SYNTHESIZE patterns that are more universal than today’s siloed division of cognition.

I mean, my work is the unification of the sciences and their reduction to first principles independent of silo (discipline). The AIs fundamentally do the same thing – by accident.

So what happens if we think in first principles the way we once thought in scientific laws? We got a standard deviation in demonstrated intelligence out of the last transition – even with people still stuck in theological, philosophical, and empirical grammars, and in ‘scientific silos’.

2) Regarding the paper, I don’t understand why the results would show anything other than the recall effort – which is what they show. They do NOT show the long-term memory effect of using LLMs – how users ‘think differently’ (recognize different patterns) over time. It might be (I assume it’s true) that we prefer LLMs to do ‘recall work’ the same way we prefer calculators over pencil and paper, and pencil and paper over doing math in our heads. It’s not clear this matters.
Or better said, I’m not YET clear it matters, because previous revolutions in using instrumentation to assist our calculation have merely moved us up a cognitive hierarchy of increasing complexity in causality, rather than mere depth in the practice of recall.
Conversely, it appears (China and India – math; the Anglosphere – law, commerce, tech, innovation; the Germanosphere – engineering, social engineering, continental philosophy) that depth in these different sets of grammars (logics) has a rather profound effect over populations and time.

I mean, for most of human history, some subset of people in the community literally memorized everything necessary for group survival. Are we smarter or dumber for relying on writing, reading, and mathematics?

We are smarter for generalizing the world into scientific laws.

Why won’t we be ‘smarter’ by generalizing more of the world, and the universe, into an even smaller set of first principles and rules?

CurtD
NLI


Source date (UTC): 2025-06-18 00:55:06 UTC

Original post: https://twitter.com/i/web/status/1935139174287556678
