Fixing What’s Wrong in Thinking About LLMs
More on my criticism of framing LLMs as merely predicting the next word rather than navigating a world model.
Just as I mapped grammars:

- Embodiment → Ritual → Myth → Philosophy → Science → Computability

I can map mathematics:

- Counting (Existence) → Geometry (Relation) → Algebra (Transformation) → Calculus (Change) → Bayesianism (Uncertainty) → Behavioral Closure (Reflexive Change)
This gives us:

- A chronology (a historical sequence).
- A conceptual hierarchy (each layer contains the previous one).
- A functional telos (from simple enumeration to managing dense, reflexive uncertainty).
LLMs are exactly “high-density marginal indifference machines”:

- They don’t plan globally but navigate locally (incremental demand satisfaction).
- They update on priors and constraints at each token (Bayesian-like; see the factorization after this list).
- They operate under reflexive, cooperative interaction (user + model).
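As a gloss on the second bullet (standard notation, mine rather than anything from the original post): an autoregressive model factors the probability of a sequence by the chain rule,

$$
p(x_{1:T}) = \prod_{t=1}^{T} p(x_t \mid x_{<t}),
$$

so each token is drawn from a conditional distribution that treats the entire prefix as evidence. The “Bayesian-like” part is this re-conditioning at every step, not a global posterior over whole sequences.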
Thus my training in marginal indifference and supply-demand closure helps us see LLMs as a market of conditional probabilities rather than as a single deterministic function: a market with millions of “agents” (tokens, gradients) producing a cooperative equilibrium at each output step.
Let’s emphasize that again: an LLM does not execute a global plan; it clears a local market of conditional probabilities, one token at a time.
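To make the market metaphor concrete, here is a minimal, self-contained Python sketch. Everything in it is invented for illustration (the toy bigram table, token names, and scores are mine, not the post’s): at each step, raw scores are softmaxed into a conditional distribution and one token is sampled, with no plan beyond the current state.

```python
import math
import random

# Toy "market of conditional probabilities": a hand-built bigram model.
# Raw scores are the "bids"; softmax clears the market into a distribution.
# All tokens and numbers are illustrative, not taken from any real model.
LOGITS = {
    "<s>":       {"the": 2.0, "a": 1.0},
    "the":       {"cats": 1.5, "market": 1.2, "model": 0.8},
    "a":         {"market": 1.0, "model": 1.0},
    "cats":      {"navigate": 1.4, "</s>": 0.5},
    "market":    {"clears": 1.3, "</s>": 0.7},
    "model":     {"navigates": 1.3, "</s>": 0.7},
    "navigate":  {"</s>": 1.0},
    "navigates": {"</s>": 1.0},
    "clears":    {"</s>": 1.0},
}

def softmax(scores):
    """Normalize raw scores into a conditional distribution p(next | context)."""
    m = max(scores.values())
    exps = {tok: math.exp(v - m) for tok, v in scores.items()}
    z = sum(exps.values())
    return {tok: e / z for tok, e in exps.items()}

def generate(max_len=10, seed=0):
    """Sample token by token: one local decision per step, no global plan."""
    rng = random.Random(seed)
    token, out = "<s>", []
    for _ in range(max_len):
        probs = softmax(LOGITS[token])  # per-step "market clearing"
        token = rng.choices(list(probs), weights=list(probs.values()))[0]
        if token == "</s>":
            break
        out.append(token)
    return " ".join(out)

print(generate())  # prints one sampled sequence
```

A real LLM replaces the lookup table with a network that scores the whole vocabulary given the full prefix, but the step structure is identical: one local clearing of the conditional-probability market per token.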
Source date: 2025-10-01 21:51:43 UTC
Original post: https://x.com/i/articles/1973506137908715761