
CAN ROBOTS LEARN TO SAY ‘NO’? NOT UNLESS….

https://agenda.weforum.org/2015/11/what-if-robots-learn-to-say-no

All these AI folk out there are trying to figure out how to make a moral machine, fearful of immoral machines, or even amoral ones. And they struggle because they haven’t a clue what makes a moral being: non-imposition of costs. Or, stated obversely: respect for property.

In other words, imagine that everything in the world that is owned were registered in an enormous global ‘bitcoin’-style database (a ledger). And imagine that, just as we (if we are moral) use only items we own, robots did the same; moreover, that they not only used only their owner’s property, but used it only in ways that imposed no cost on others.

And if we could make them fast and sensitive enough (I am not sure we can), then they could even be permitted to violate some property rights when a life is endangered.
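The decision rule above can be sketched in code. This is a minimal toy, not a real system: the `Ledger` class, the `may_act` gate, and its parameters (`imposes_cost`, `life_at_risk`) are all hypothetical names I have introduced for illustration, assuming a simple asset-to-owner registry standing in for the global ledger.

```python
# Toy sketch of a property-respecting action gate for a robot.
# All names here are illustrative; the 'ledger' is a stand-in for
# the enormous global registry described in the text.
from dataclasses import dataclass


@dataclass(frozen=True)
class Asset:
    asset_id: str
    owner: str


class Ledger:
    """Minimal stand-in for a global 'bitcoin'-style property registry."""

    def __init__(self):
        self._owners = {}

    def register(self, asset: Asset):
        self._owners[asset.asset_id] = asset.owner

    def owner_of(self, asset_id: str):
        return self._owners.get(asset_id)


def may_act(ledger, robot_owner, asset_id, imposes_cost, life_at_risk=False):
    """Permit an action only if the asset belongs to the robot's owner
    AND the action imposes no cost on others. A threat to life is the
    single override, per the exception noted in the text."""
    if life_at_risk:
        return True
    return ledger.owner_of(asset_id) == robot_owner and not imposes_cost
```

Used like so: the robot checks the gate before every action. `may_act(ledger, "alice", "car-1", imposes_cost=False)` permits the robot owned by alice to use alice's car; the same call for bob's robot is refused, unless `life_at_risk=True`.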

Interestingly enough, this is a solvable problem. It’s a largely computable problem.

I won’t get into the uncomputability of the alternatives… that should be obvious.


Source date (UTC): 2015-11-28 05:36:00 UTC
