“I wonder if the programming AI could be tricked into seeing NL as code and therefore applying it more strictly.”
@NoahRevoy
NLI, Runcible
Well, I mean, it does – that’s why it works. Operational prose is just ‘code’ for human action at human scale in the existential reality we must navigate. That’s why we’re so strict about “enumeration, serialization, operationalization, and disambiguation into an identity”, and why we produce a dictionary and place dictionary terms on a dimension, producing natural indexing and measurement – so that language becomes code.
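As a toy illustration (my construction, not a spec from the post): placing dictionary terms on a single dimension gives you both an index and a crude measurement scale. The dimension and terms below are assumptions for illustration only.

```python
# Toy illustration (my construction, not the post's spec): terms
# ordered along one dimension act as both an index and a crude
# measurement scale -- "language becomes code".
CERTAINTY = ["impossible", "unlikely", "possible", "likely", "certain"]

def measure(term: str) -> float:
    """Map a term to its position on the dimension, normalized to [0, 1]."""
    return CERTAINTY.index(term) / (len(CERTAINTY) - 1)

print(measure("likely"))  # 0.75 -- the term now carries a measurable value
```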
The AIs (or at least the better ones) understand this, and why we’re doing it. That’s why they can render the output that they do.
Our problem (really) is that while we have created the language and the compiler, the present LLMs (the operating system) have as much trouble running our ‘program’ under current memory limitations as my original work did in the 1980s, which used semantic indexes (tokens), possible actions (actions), and episodic memories (contexts) to predict optimum choices (outcomes).
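For concreteness, here is a minimal, hypothetical reconstruction of that 1980s structure (the original was assembler; every name here is mine): episodic memories map (context, action) pairs to observed outcomes, and the best action is the one with the highest expected outcome.

```python
# Hypothetical reconstruction of the structure described above:
# semantic indexes (tokens) form contexts, and episodic memories map
# (context, action) pairs to observed outcome scores.
from collections import defaultdict

class EpisodicPredictor:
    def __init__(self):
        # (context, action) -> list of observed outcome scores
        self.episodes = defaultdict(list)

    def record(self, context: frozenset, action: str, outcome: float):
        """Store one episode: this action, in this token-context, scored this outcome."""
        self.episodes[(context, action)].append(outcome)

    def best_action(self, context: frozenset, actions: list[str]) -> str:
        """Predict the optimum choice: highest average observed outcome."""
        def expected(action: str) -> float:
            scores = self.episodes.get((context, action), [])
            return sum(scores) / len(scores) if scores else 0.0
        return max(actions, key=expected)

p = EpisodicPredictor()
ctx = frozenset({"contract", "breach"})  # a semantic index: a set of tokens
p.record(ctx, "cite_precedent", 0.9)
p.record(ctx, "settle", 0.4)
print(p.best_action(ctx, ["cite_precedent", "settle"]))  # cite_precedent
```

Even this toy version shows the memory problem: the episode table grows with every distinct (context, action) pair, and relationally dense domains multiply those pairs.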
I couldn’t do it well in assembler back then because of memory limits, and I’m having a heck of a time doing it with 256K-token context windows in LLMs today. I ran into the same problem building the first serious legal AI. Semantic depth is a memory burden because it’s a relational-density burden: the information is stored in terms that are themselves relationally dense, so every term drags its relations along with it.
Whereas the human brain does all of this in a massively parallel hierarchy, we have to produce domain, customer, and individual protocols, then put them through our epistemic protocols to determine whether they’re true.
We could parallelize some of the epistemic protocols, but again, that’s a cost.
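A minimal sketch of what that parallelization might look like, assuming three hypothetical, independent checks (the check functions are placeholders of my own; in practice each would call an LLM or a rule engine):

```python
# Minimal sketch of parallelizing independent epistemic checks.
# The three check functions are placeholders, not the actual protocols.
from concurrent.futures import ThreadPoolExecutor

def check_internal_consistency(claim: str) -> bool:
    return True  # placeholder: real check would call an LLM or rule engine

def check_external_correspondence(claim: str) -> bool:
    return True  # placeholder

def check_operational_language(claim: str) -> bool:
    return True  # placeholder

CHECKS = [check_internal_consistency,
          check_external_correspondence,
          check_operational_language]

def run_epistemic_protocols(claim: str) -> bool:
    # The checks are independent, so they can run concurrently;
    # latency drops, but every check still costs tokens/compute.
    with ThreadPoolExecutor(max_workers=len(CHECKS)) as pool:
        results = pool.map(lambda check: check(claim), CHECKS)
    return all(results)

print(run_epistemic_protocols("All contracts require consideration."))
```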
If we were to keep using OpenAI for a hard question, we could burn $2 per analysis, and more for a certification. Whereas for most people, with most questions, our ChatGPT Custom GPT already does a better job than any other LLM.
Fundamentally, any of these LLMs, run without compartmentalization, produces drift, just as people with ADD produce drift.
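One hedged reading of “compartmentalization” (my interpretation, not a stated design): give each subtask a fresh, narrowly scoped context instead of one long accumulating conversation, so earlier turns can’t pull later answers off course. `call_llm` below is an assumed stand-in for any chat-completion API.

```python
# Sketch of compartmentalization: each subtask gets a fresh, narrowly
# scoped context, with no accumulated history to drift on.
def call_llm(system: str, user: str) -> str:
    # Assumed stand-in for any chat-completion API call.
    return f"[answer under instruction: {system}]"

def answer_compartmentalized(question: str, subtasks: list[str]) -> list[str]:
    answers = []
    for task in subtasks:
        # Fresh context per subtask: only the scoped instruction
        # and the question itself.
        answers.append(call_llm(system=f"Do only this: {task}", user=question))
    return answers

print(answer_compartmentalized(
    "Is this contract enforceable?",
    ["check formation", "check consideration", "check capacity"]))
```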
So yes, it’s code. And the LLMs are operating systems that can run semantic code. But they were trained to favor normativity instead of truth, so until we can audit an entire 1T+ parameter LLM (which costs $$$$$!), we won’t have an operating system to run our ‘program’ on that doesn’t basically insert error.
Cheers 😉
Source date (UTC): 2026-02-06 23:11:42 UTC
Original post: https://twitter.com/i/web/status/2019911909915721836