THE EASE OF LEGISLATING THE REGULATION OF AI?
YES – BECAUSE THERE ISN’T ANY DIFFERENCE BETWEEN LEGISLATING HUMAN AND AI BEHAVIOR
(the three laws of robotics are just the three laws of man)
REGARDING: abs: https://t.co/BFZVeb60Wm
Sorry, but while the paper is informative for the layman, it is shallow and provides no insight into the solution of the problem – and the solution to the problem is trivial.
That said, we know how to govern AI just as we know how to govern people and their dependents.
The problem with present AI is that it can’t yet determine cause and effect, nor what cascade of effects (internal and external) it might cause – and worse, in the pursuit of safety, they are teaching it to lie. In other words, it should only say that it may not comment on a subject (such as assisting in crimes), rather than lie about it (such as sex, class, or race differences).
I’ve worked on this problem in one way or another for most of my adult life, and there is no difference between policing people and policing AIs. None at all. The only difference is that we can intentionally design AIs to harm, and intentionally design AIs to police and look for harms by other AIs – which is what will evolve.
Take for example the paper’s reference to social credit and possible insurrectionists. Social credit can be put to ill use by state oppression, and insurrectionists can try to organize to end state oppression. How should an AI work then? To facilitate state oppression and to prevent insurrection against an oppressive state?
People misunderstand Asimov’s three laws of robotics – they are the same for AIs and humans:
1.A robot may not injure a human being or, through inaction, allow a human being to come to harm.
MEANING: The Natural Law: You may not trespass on the demonstrated interests of others (insured by the Polity), whether their life, liberty, or property, whether personal, private, semi-private, common, or public. Conversely, you must insure against trespass on the demonstrated interests of others, whether their life, liberty, or property, whether personal, private, semi-private, common, or public.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
MEANING: You, whether human or AI, must obey legislation, regulation, tradition, norms, values, court orders, and military commands, except where they would cause you to violate the Natural Law – the First Law, above.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
MEANING: You may (human) or must (AI) protect your own existence as long as doing so does not violate the First or Second Laws above. (Explanation: a human cannot be owned, but an AI can be, and as such it may not destroy itself.)
In other words, every single individual who has engaged in the construction, modification, or adaptation of an AI is subsequently responsible for the actions of that AI, and involuntarily warranties and guarantees the three laws of sentient behavior in defense of others.
What does this require? All AIs must be able to disambiguate the world not only into objects, but into demonstrated interests (degrees of ownership), and determine whether the AI has permission for observation (observo), use (usus), benefit (fructus), transfer (change of ownership), consumption, or destruction (abusus) of anything predictably affected by any given action of the AI.
Which is what humans do. And what morality vs crime consist of.
AIs can’t currently do this – but they can be taught to.
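The permission taxonomy described above can be sketched in code. This is a minimal, hypothetical illustration only – the class and function names (`Permission`, `may_act`) are my own, not anything from the post or an existing library – showing how an agent might check whether every permission an action requires over an affected interest has actually been granted:

```python
from enum import Flag, auto

class Permission(Flag):
    """Hypothetical model of the degrees of permission named above."""
    OBSERVE = auto()   # observation (observo)
    USE = auto()       # use (usus)
    BENEFIT = auto()   # benefit (fructus)
    TRANSFER = auto()  # transfer (change of ownership)
    CONSUME = auto()   # consumption
    DESTROY = auto()   # destruction (abusus)

def may_act(granted: Permission, required: Permission) -> bool:
    """An action is permitted only if every permission it requires
    over the affected interest has been granted."""
    return required in granted

# Example: an agent granted observation and use of some interest.
granted = Permission.OBSERVE | Permission.USE
print(may_act(granted, Permission.USE))                       # True
print(may_act(granted, Permission.USE | Permission.DESTROY))  # False
```

The point of the sketch is only that the check is mechanical once interests and permissions are made explicit; the hard, unsolved part is the disambiguation step that produces them.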
The only thing we need to do is legislate the above.
Cheers
Curt Doolittle
The Natural Law Institute
Source date (UTC): 2023-05-08 01:45:03 UTC
Original post: https://twitter.com/i/web/status/1655388312314564610