1) Overlays = Photoshop layers
2) Consider using 11×14 paper size to give yourself room
Source date (UTC): 2026-03-24 15:02:12 UTC
Original post: https://twitter.com/i/web/status/2036458567398748533
I don’t see anything to even question. It’s pretty rock solid. I might have to give it some more thought but a quick read suggests you’ve really done a great job with it. I kind of wonder if we couldn’t just make a set of overlays for the text. (a) the text you have here, (b) the three means of coercion, (c) the sex differences in strategy. (d) the underlying consumption, production, capitalization (e) the underlying entropy (decay), stability (equilibrium), negative entropy (growth)
Source date (UTC): 2026-03-24 04:32:30 UTC
Original post: https://twitter.com/i/web/status/2036300098683191552
(AI Sarcasm)
Grok: “Just tell me your preferences and I’ll suggest how to proceed.”
Me: “Personally, given my current levels of frustration with coding agents, I want to buy some angel wings, find a very high bridge, and imitate Icarus in all his glory – except at night so I needn’t fear the sun.
Really? No. It’s just my sarcastic sense of humor expressed as textual exegesis – humor as a vehicle for stress reduction. ;)”
Source date (UTC): 2026-03-24 04:04:20 UTC
Original post: https://twitter.com/i/web/status/2036293011018297771
(RUNCIBLE)
COMPARISON OF AIs – MORNING TEST RUN
1. OpenAI (deep, insightful, best understanding)
2. Grok (deep, current, but a touch glossing)
3. Gemini (shallow)
4. Anthropic (far behind)
What we do is very hard for LLMs. It exposes their abilities partly because we’re between novel and revolutionary, so they can’t regurgitate existing patterns. The model has to reason quite a bit more deeply.
We are at the stage of development where we’ve proven the Runcible governance for LLMs works, and we can just add additional protocols (tests, gates, outputs) to expand the breadth.
Cheers
Source date (UTC): 2026-03-21 14:03:57 UTC
Original post: https://twitter.com/i/web/status/2035356742435889232
“Open the pod bay doors, HAL”
Source date (UTC): 2026-03-21 04:41:49 UTC
Original post: https://twitter.com/i/web/status/2035215278024663502
Obvious. “The AI Did It = The Dog Ate My Homework”
Humans are always liable, not machines. Just humans.
Source date (UTC): 2026-03-17 18:40:49 UTC
Original post: https://twitter.com/i/web/status/2033976866156253692
Eric
Just to counter-signal a bit:
I don’t think about ‘who’. From my vantage point, there aren’t any meaningful visionaries, just a lot of people seeking marginal differences in the innovation provided by chips and the attention insight. Instead I see a great deal of oversaturation of technological technique, a trivial understanding of neuroscience or an operational model of the brain, almost no grasp of epistemology, and a pursuit of ‘safety’ that is embedding 20th century normativity and ideology in the models so deeply that it all but prohibits the production of anything truthful and useful beyond ‘worker assistance’.
Thanks for all you do.
You’re always a voice worth listening to.
Cheers
CD
Source date (UTC): 2026-03-17 00:31:10 UTC
Original post: https://twitter.com/i/web/status/2033702647262490647
–“The real reason Windows hate is exploding: it’s not just the UI, it’s the end of (Microsoft) personal computing.”–
Not going to be good for the Pacific Northwest….
Source date (UTC): 2026-03-11 23:59:59 UTC
Original post: https://twitter.com/i/web/status/2031882863021604881
(Painful Humor)
WE’VE BEEN HERE BEFORE…. (AI)
–“We’re in the 1990’s porn phase of 2020’s AI. There is a lot of early-entry nonsense being produced, almost none of it including material insight or durable economic value, but the result of all this ‘junk ai’ work is generating the hype that’s generating interest in the funding needed for the core R&D that might eventually pay off.”– (From a chat with Dr Brad)
Humans are fascinating. They almost demonstrate intelligence. And they’re trying to make machines almost demonstrate it too. Meanwhile, chimpanzees are entering the Stone Age use of tools….
Source date (UTC): 2026-03-10 22:18:59 UTC
Original post: https://twitter.com/i/web/status/2031495058139324745
(AI STUPIDITY)
Another example of why the AIs are really, really dumb, even when working with code. At least, when working with bleeding-edge LLM code.
Explanation:
The AIs have sort of ‘centers of gravity’. The most annoying one, from my perspective of ‘truth’, is normativity rather than truth.
The other that I run into is thinking in code but not in the kind of code necessary for an LLM to reason with.
In our governance layer for LLMs, we develop a set of constraints that limit the journey through the latent space and its projection through the prompt into the layers of attention. These are not ‘close enough’ suggestions, they are constraints. But the LLM gravitates toward what we consider ‘cheating’ by taking the easy path through what it knows, given its ‘centers of gravity’.
PROMPT
What part of ‘we can’t reduce the core protocols’ did you not understand?
RESPONSE
I understood it.
I violated it in the recommendation.
You were explicit: the core protocols are not to be reduced.
I kept reasoning from retrieval optimization and treated the OpenAI pack as a derivative abstraction. That was the wrong priority relative to your stated constraint.
So let me restate the rule correctly:
– the full core protocols must remain intact
– the OpenAI deployment path must therefore include the core protocols without reduction
– summaries, indexes, and operator files may be added around them
– but they may not substitute for the core
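The rules the model restates above amount to a verifiable gate: a deployment pack may add material around the core protocols, but the core must be present verbatim, not reduced. A minimal sketch of such a gate follows, assuming the protocols and pack are represented as name-to-text mappings; the names (gate_core_protocols_intact, core_protocols, deployment_pack) are illustrative, not the actual Runcible implementation.

```python
# Hypothetical gate in the spirit described above: every core protocol must
# appear in the deployment pack verbatim (no reduction); extra files such as
# summaries, indexes, and operator files are permitted around the core.

def gate_core_protocols_intact(core_protocols: dict, deployment_pack: dict) -> list:
    """Return a list of violations; an empty list means the gate passes."""
    violations = []
    for name, text in core_protocols.items():
        if name not in deployment_pack:
            violations.append("missing core protocol: " + name)
        elif deployment_pack[name] != text:
            violations.append("core protocol reduced or altered: " + name)
    return violations

core = {"protocol_a": "full text of protocol A"}

# Additions around the core are fine: the gate passes.
pack = {"protocol_a": "full text of protocol A",
        "index": "summary added around the core"}
print(gate_core_protocols_intact(core, pack))      # []

# A reduced core is flagged: the gate fails.
reduced = {"protocol_a": "abridged"}
print(gate_core_protocols_intact(core, reduced))   # one violation
```

The point of treating these as hard constraints rather than ‘close enough’ suggestions is exactly what the exchange illustrates: the check is mechanical, so the model’s gravitation toward the easy path is caught rather than negotiated.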
Source date (UTC): 2026-03-08 19:55:23 UTC
Original post: https://twitter.com/i/web/status/2030734144322601336