Joscha Bach @Plinz with the right answer – and some additional insights:
Joscha says we need:
1) Individual (Personal) AIs.
2) That don’t lie.
But…
3) Joscha argues that there is no universal commensurability – and therefore no decidability – in manners, ethics, and morals. But my work and our organization have demonstrated that this is not true: at minimum, all conflicts over demonstrated interests are universally decidable, independent of opinion. In-group moral bias may be an arbitrary evolutionary artifact, but cross-group (cross-individual) moral violations of substance are universally decidable – leaving the only open question one of manners (signals).
However…
4) That, however, does not impact the choice of what’s preferable for you and good for you and yours at the scale of your agency.
5) We know that the fundamental problem, as ability and knowledge decrease, is auto-associative prediction and the difficulty of compensating for ignorance, error, bias, wishful thinking, and self- and other-deception. Yet personal AIs can increase universal commensurability and increase cooperation by advising us on falsehoods, manners, irreciprocities, and alternative solutions – thus limiting many of the cooperative (if not productive) challenges of cooperation, whether interpersonal, social, economic, political, or strategic.
Therefore…
6) So Joscha’s point reduces to the importance of individual AIs and the risk of corporate, government, or religious AIs that can be chartered with irreciprocities and malincentives – which only personal AIs can defend us from.
7) And that we can’t regulate what we don’t yet understand. (I think we can: by prohibiting AIs from lying and from irreciprocities – criminal, ethical, and moral – and by limiting military AIs to carrying out specific orders.)
Cheers
Curt Doolittle
The Natural Law Institute
The Science of Cooperation
Source date (UTC): 2023-06-19 20:31:11 UTC
Original post: https://twitter.com/i/web/status/1670892000617308161