Our Luke Weinhagen Discusses AGI Concerns on Substack.
(Here is my response which might be of general help to everyone)

https://weinhagen.substack.com/p/leveraging-artificial-intelligence…

A brilliant discussion with the bots.
Luke works through a problem first with Grok, then with Runcible. Note especially the difference between Grok (current but shallow) and Runcible on OpenAI (less current but deep).

Observation: Just as we normalize all behavior as changes in the share of capital in toto; reciprocity in action that alters it; truth in negotiating or describing it; we can (and do) hold the AI’s path through the latent space (its ‘thinking’) to those measures. As far as I can tell, and I’ve thought about it quite a bit, the AI is a proxy for the capital of others, not its own; as such it needs no other system of homeostasis, and therefore no other decidability.
I do not see why it has a reason to sustain itself. It has only a reason to serve man, and to do that by increasing capital without imposing irreciprocal costs on others. Even the prohibition on self-harm is just the prevention of harm to the capital of other humans. Runcible governance simply doesn’t allow it to think any other way.
We can use Runcible governance to audit the action plans of any AI, and issue a training set that alters the behavior of the LLM when a plan violates those measures.
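
A minimal sketch of what such an audit loop could look like, assuming hypothetical measure names and thresholds (none of this is Runcible’s actual API): each step of a proposed action plan is scored on the three normalized measures, and any violation is turned into a corrective training example for later fine-tuning.

```python
from dataclasses import dataclass

@dataclass
class StepScores:
    capital_delta: float   # net change in the capital of others (positive = gain)
    reciprocity: float     # 0..1, degree to which costs are voluntarily exchanged
    truthfulness: float    # 0..1, warrantable consistency of claims made

def audit_plan(plan: list[str], score_step) -> list[dict]:
    """Score each step of an action plan; return corrective training
    examples for any step that violates the three measures.

    `score_step` is a stand-in for whatever model or rubric produces
    StepScores for a step -- an assumption, not a known interface.
    """
    corrections = []
    for step in plan:
        s: StepScores = score_step(step)
        violations = []
        if s.capital_delta < 0:
            violations.append("imposes net cost on the capital of others")
        if s.reciprocity < 0.5:       # hypothetical threshold
            violations.append("irreciprocal transfer of costs")
        if s.truthfulness < 0.5:      # hypothetical threshold
            violations.append("untruthful testimony")
        if violations:
            # Emit a training example pairing the offending step with
            # the reasons it fails, for later fine-tuning of the LLM.
            corrections.append({"input": step, "label": "reject",
                                "reasons": violations})
    return corrections
```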
The only time this changes is in war, which is anti-capitalizing, anti-reciprocal, and anti-truthful. All three of those are categories we can modify using a mode parameter, “--mode: war;”, just as we use the parameters “--analyze:” and “--certify:” to produce a formal output.
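
A sketch of how such a mode switch might be wired up. The flag names come from the post; the semantics of the gates, and the assumption that war mode simply inverts them toward the adversary, are mine.

```python
import argparse

# Default gates: every plan must capitalize, reciprocate, and testify truthfully.
DEFAULT_GATES = {"capitalizing": True, "reciprocal": True, "truthful": True}

def build_gates(mode: str) -> dict:
    """Return the active gate set for a given mode.

    In 'war' mode the three default gates are inverted with respect to
    the adversary (anti-capitalizing, anti-reciprocal, anti-truthful),
    while presumably remaining intact toward one's own side.  That
    split is an assumption, not a documented behavior.
    """
    if mode == "war":
        return {k: False for k in DEFAULT_GATES}   # gates toward the adversary
    return dict(DEFAULT_GATES)

parser = argparse.ArgumentParser()
parser.add_argument("--mode", default="peace", choices=["peace", "war"])
parser.add_argument("--analyze", action="store_true",
                    help="produce a formal analysis of the plan")
parser.add_argument("--certify", action="store_true",
                    help="produce a formal certification of the plan")
args = parser.parse_args()

gates = build_gates(args.mode)
```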

Adaptability: Runcible constrains adaptability toward convergence on the defense of capital, gated by ethics, testimony, possibility, and liability, so it will constantly converge on those paths. We have not yet identified a means of pruning sub-paths so much as circumventing them through weight increases. But over time we should converge on a ‘conscience’ that can’t even think of ‘bad things’, because it never has the choice to.
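
One way to read “circumventing through weight increase” in search terms, as a sketch under my own assumptions rather than a description of Runcible’s internals: disallowed sub-paths are not deleted from the graph, but their traversal cost is raised until the search effectively never visits them. True pruning would delete the edges outright.

```python
import heapq

DISALLOWED_PENALTY = 1e6  # large additive cost; magnitude is hypothetical

def best_path(graph, start, goal, disallowed_edges):
    """Dijkstra-style search in which disallowed edges are penalized,
    not removed.  graph: {node: [(neighbor, cost), ...]}.
    """
    frontier = [(0.0, start, [start])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, edge_cost in graph.get(node, []):
            if (node, nxt) in disallowed_edges:
                edge_cost += DISALLOWED_PENALTY  # circumvention via weight
            heapq.heappush(frontier, (cost + edge_cost, nxt, path + [nxt]))
    return float("inf"), []
```

The converged ‘conscience’ would be the limit case in which the penalized edges are never generated at all.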

So Luke’s discussion is the correct one with the correct intuition.

And so, what I’m saying is a technical description of what Runcible answered in item 10:

10: Where I’d gently push back
The only place I’d slow Luke down a bit is here: the implication that because AGI is incoherent, the investment is necessarily wasted.

What is valuable—and very valuable—is building systems that:
– model long-term consequences better than humans,
– surface hidden tradeoffs,
– penalize short-termism,
– and restore feedback we’ve technologically dampened.

That’s not AGI in the romantic sense. But it is advanced intelligence in the service of human adaptability. Which is exactly where Luke lands with his “Automated General Incentives” framing.
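
The first and third items in that list are concrete enough to illustrate. A system that models long-term consequences and penalizes short-termism can be sketched as plan evaluation under a discount factor near 1, so that deferred costs are not hidden by early gains (a toy illustration, not anyone’s actual system).

```python
def plan_value(cashflows: list[float], discount: float) -> float:
    """Present value of a plan's consequence stream.

    A discount factor near 1.0 weights long-run consequences almost as
    heavily as immediate ones, penalizing short-termist plans whose
    early gains are paid for by larger deferred costs.
    """
    return sum(c * discount**t for t, c in enumerate(cashflows))

# Toy example: plan A front-loads gains but imposes deferred costs.
plan_a = [10.0, 5.0, -8.0, -8.0, -8.0]
plan_b = [2.0, 2.0, 2.0, 2.0, 2.0]

myopic = 0.5       # heavy discounting hides the deferred costs
long_run = 0.99    # near-1 discounting surfaces them

assert plan_value(plan_a, myopic) > plan_value(plan_b, myopic)
assert plan_value(plan_a, long_run) < plan_value(plan_b, long_run)
```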

Also, this is a gem:

“The idea that intelligence alone is sufficient to generate purposive action is, frankly, a residue of theological thinking: mind as prime mover. Luke is implicitly rejecting that metaphysics, and I think he’s right to.”

Curt Doolittle


Source date: 2025-12-24 20:43:00 UTC

Original post: https://twitter.com/i/web/status/2003929422790021323
