An Archive of sketches developing ideas.
(Yes) Source date (UTC): 2025-09-30 15:00:09 UTC Original post: https://twitter.com/i/web/status/1973040176030269922
The fact that anyone even vaguely understands anything meaningful I say still amazes me. And it's only at this stage of writing volumes that I fully understand the historical trap of
Computability: The Constraint of Constraints (Natural Law Core) Civilizations rise by mastering scale. But scale is entropy. More people, more knowledge, more complexity—each adds friction to coordination and incentive to
From Prediction to Governance: Cognitive Hierarchies, LLM Evolution, and the Necessity of Constraint B. E. Curt Doolittle Natural Law Institute, Runcible Inc. Email: curt@runcible.com Author Note This research originates from
The Evolutionary Foundations and Computable Architecture of Law: A Natural Law Framework Author: Curt Doolittle (Analytic Reconstruction)
I’ve seen this process across my entire career, dating back to teletype machines. Every generation thinks they’ve made a novel discovery, which they describe in different terms, when the
The Next Word Fallacy in LLMs: It’s Still Wayfinding, But Neurological not Computational Ok, so in my understanding the processes of producing outputs in both LLMs and human speech are
Literary and Technical Influences Q: I am very aware of the influence of some authors (I am absolutely positively a product of Hayek) the more literary the author the more
Charles: I’ve read everything you’ve written since Losing Ground, and you were influential in inspiring my participation in the libertarian movement. While I consider Hayek, Popper, Kuhn, Quine, Gödel, Turing,
Diagram to Support “LLMs Don’t Just Predict The Next Word” Here is the full-page visual mapping the LLM processing pipeline to the human cognitive loop: Left column (LLM): Prompt →
Examples to Support “LLMs Don’t Just Predict The Next Word” Prompt: “Take the number of continents on Earth, multiply by the number of letters in the English alphabet, and divide
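The arithmetic in this example prompt can be traced step by step. A minimal sketch follows; the divisor is truncated in the archived excerpt, so only the first two operations are shown, and the counts (7 continents, 26 letters) are the conventional values the prompt assumes:

```python
# Stepwise trace of the example prompt's arithmetic.
# Note: the original excerpt is truncated, so the final division step
# is unknown and omitted here.
continents = 7          # conventional count of Earth's continents
alphabet_letters = 26   # letters in the English alphabet
product = continents * alphabet_letters
print(product)  # 182
```

The point of the example is that answering it requires holding intermediate results across steps, not just emitting a statistically likely next word.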
Sidebars to Support “LLMs Don’t Just Predict The Next Word” Constraint layers sit on top of the raw generative model and inject external demands into the incremental generation process. Types
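The idea of a constraint layer injecting external demands into incremental generation can be sketched minimally. Everything below is illustrative assumption, not the author's implementation: the toy model, its fixed candidate scores, and the no-repetition rule are stand-ins for a real generative model and real constraint types.

```python
# Hypothetical sketch: a "constraint layer" sits on top of a raw
# generative model and, at each incremental step, filters the model's
# candidate continuations against external demands.
from typing import Callable

def toy_model(prefix: list[str]) -> dict[str, float]:
    """Stand-in for a raw generative model: scores candidate next tokens."""
    return {"the": 0.4, "cat": 0.3, "sat": 0.2, "<stop>": 0.1}

def generate(constraints: list[Callable[[list[str], str], bool]],
             max_len: int = 5) -> list[str]:
    out: list[str] = []
    while len(out) < max_len:
        scores = toy_model(out)
        # Constraint layer: drop any candidate that violates a demand.
        allowed = {t: s for t, s in scores.items()
                   if all(c(out, t) for c in constraints)}
        if not allowed:
            break
        best = max(allowed, key=allowed.get)
        if best == "<stop>":
            break
        out.append(best)
    return out

# Example external demand: forbid immediate repetition of a token.
def no_repeat(prefix: list[str], tok: str) -> bool:
    return not prefix or prefix[-1] != tok

print(generate([no_repeat]))  # ['the', 'cat', 'the', 'cat', 'the']
```

The design point is that the constraint layer never touches the model itself; it reshapes the candidate set at every step of incremental generation, which is how external demands (format, safety, logic) get injected without retraining.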
cc: @Quillette Source date (UTC): 2025-09-28 00:33:17 UTC Original post: https://twitter.com/i/web/status/1972097245211504692
No. LLMs Don’t Just Predict The Next Word. They Do What Your Brain Does. The popular refrain that “large language models just predict the next word” is true in the
(Teasing) Yeah but it’s a GOOD kind of naive. ;). I mean, fundamentally, like conservatives, they just want to be left alone. ;). The silly stuff is still silly. But
Everything I see is within the Indian community, and less so in the Asian. My opinion, given that my company did the original tech sector research in the ’00s, is
–“There is chaos in companies right now. Terrified of losing jobs. Unable to execute. Budgets being slashed. The expectation is that FTE’s will work harder just to keep their jobs.”–
Nope. Source date (UTC): 2025-09-27 18:10:33 UTC Original post: https://twitter.com/i/web/status/1972000927000449242
I swear. There are a lot of people in the field whose knowledge is so narrow that they make ridiculous statements to a public for whom tech is magic. Source
–“Our work on Natural Law constructs a system of universally commensurable measurement from a game-theoretic optimum. We measure differences from that optimum as costs. And we deliver Alignment including legal,