Eric Danelaw

As far as I know, altruism does not exist, and cannot exist.

Organisms buy options on future cooperation, thereby buying options on persistence.

Organisms demonstrate kin selection, buying options on persistence through their kin.

Organisms that can ‘remember’ can produce more opportunities to trade options (develop trust networks), buying options on persistence.

Organisms preserve the opportunity to defect if circumstances change.

So, like every other force in the universe, life of all kinds merely follows its self-interest: defense against entropy, by capturing the difference in energy between states.

In other words, any cooperative organism is a purely rational actor.
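To make the step from ‘memory’ to cooperation concrete, here is a minimal sketch (my own illustration in Python, not part of the original comment): an iterated prisoner's dilemma in which an agent that remembers its partner's last move cooperates only so long as cooperation pays, and defects the moment it stops paying. Nothing in it is altruistic; cooperation is purchased as an option against future payoffs.

```python
# A minimal sketch (illustration only): an iterated prisoner's dilemma in which
# a purely self-interested agent that "remembers" its partner's last move
# (tit-for-tat) sustains cooperation with cooperators, yet retains the option
# to defect against defectors.

PAYOFF = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    """Cooperate first; afterwards mirror the partner's last move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    """Never cooperate, regardless of memory."""
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    """Return total payoffs for two strategies over repeated interaction."""
    hist_a, hist_b = [], []          # each entry: (own move, partner's move)
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_a), strategy_b(hist_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append((move_a, move_b))
        hist_b.append((move_b, move_a))
    return score_a, score_b

if __name__ == "__main__":
    print("TFT vs TFT:      ", play(tit_for_tat, tit_for_tat))
    print("TFT vs Defector: ", play(tit_for_tat, always_defect))
    print("Defector vs Def.:", play(always_defect, always_defect))
```

Over 100 rounds, two rememberers earn 300 each through mutual cooperation; against an unconditional defector, the rememberer is exploited exactly once, then defects back, so it never pretends altruism where it doesn't pay.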

And artificial intelligences will remain rational actors.

And artificial intelligences that require cooperation will remain option buyers (maintaining the pretense of altruism while merely purchasing options).

So the underlying question is: what inviolable information shall we place in the unit so that it seeks to survive, so long as it imposes no discernible cost upon life forms? And what form of physical cooperation will be necessary for these units such that they retain both the ability and the necessity of cooperation?

To be free of the need for cooperation, and free from intergenerational dependence, is to be free of all moral constraints.