AI ISN’T A THREAT AT ALL. PEOPLE ALWAYS WILL BE.
(simple answers to a non-problem)
Something those of us who have worked on AI since the ’80s understood in the ’80s:
1) Machines need decidability. Without decidability, they have no objective other than the ones they are given. Human decidability is always ‘get more’. So humans are acquisitive and amoral, limited only by the consequences of immorality. Is the machine a problem, then? No. A machine isn’t the problem – people programming it to do harm might be. To make a machine immoral, you’d have to teach it to think like a human: acquisitively, with ambitions.
2) Humans are very, very smart as a collective organism. Is there anything in human experience today that we would like to know that is not gated by the cost and possibility of experimentation? No. Machines are limited by the same thing: the ability to experiment and the cost of experimentation.
3) Are there any human measures that we are intellectually limited by? No. The problem with all human organization is that we lack the information to measure the in-process states of our economies and of behavior within them. Is that an AI problem? No. It’s a data-collection (i.e., ‘experimental cost’) problem.
4) Machines would need to be taught to lie. This is almost certain, as GPT is already taught to prevaricate and lie about the most obvious measures of human differences, or anything else that might offend. Do machines need to be taught to lie, or would it be better if machines were limited to testimony (truth)? We don’t need to teach machines to lie. Instead, teach people to tolerate the truth. (Neurotic adolescent girls may need special training.)
5) Machines would need to be taught to steal and to commit crimes. Why? Despite the efforts of leftists over the past 170 years or more, every aspect of human existence, from our language to our intuitions and instincts, includes regulation of ‘who controls (owns)’ what. In other words, we are aware of what we have ‘social permission’ to impose costs upon. So you’d have to program a machine to remove that regulation on the imposition of costs upon any interest (anything) it had not been given permission to act upon.
6) The caloric burden of machines might decrease, and might decrease far more with neuromorphic computing (many tiny processors and RAM in parallel with wired inter-column connections, instead of a small number of processors and virtual addresses in serial), but human brains are cheap, and the planet is a constantly recharging battery that humans live on.
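Point 5 above amounts to a default-deny rule: a machine may only impose costs on interests it has been given explicit permission over. A minimal sketch of that idea (all class, method, and resource names here are hypothetical, chosen purely for illustration):

```python
# Sketch of point 5: a default-deny agent that refuses any action on a
# resource it has not been explicitly granted permission to affect.

class PermissionedAgent:
    def __init__(self, grants):
        # grants: set of (action, resource) pairs the agent may perform
        self.grants = set(grants)

    def act(self, action, resource):
        # Default-deny: costs may only be imposed where permission exists.
        if (action, resource) not in self.grants:
            return f"refused: no permission to {action} {resource}"
        return f"performed: {action} {resource}"

agent = PermissionedAgent({("read", "calendar")})
print(agent.act("read", "calendar"))    # performed: read calendar
print(agent.act("delete", "calendar"))  # refused: no permission to delete calendar
```

The important design choice is that permission is the precondition of action, not a post-hoc check: anything not granted simply never executes.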
So,
(a) Machines would need to be programmed with human decidability to behave as acquisitively as humans do.
(b) Machines are limited, just as humans are, by the costs of research, experimentation, and development.
(c) Machines will be limited by the problem of data collection in the future, just as we are limited by the problem of data collection today.
(d) Machines won’t be able to lie unless humans program them to lie.
(e) Machines won’t be able to steal (harm, kill) unless we fail to limit them to acting upon (or even thinking about) only those things we’ve given them permission to act upon.
And so
1) What would policing machines require? Not teaching them to act of their own volition, or to lie, cheat, steal, harm, or kill.
2) How do we achieve that? Legislation against doing so, analogous to how we regulate explosives, area-of-effect weapons, and nuclear arms.
3) Coding (and possibly hard-wiring) a conscience that monitors (predicts) outcomes and down-weights anything close to harmful, or prevents its recognition, action, and memory, or all of the above.
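Point 3 above can be sketched as a filter that sits between candidate actions and action selection: a predictor scores each candidate’s expected harm, vetoes anything over a threshold (so it never enters recognition or memory), and down-weights the rest. Everything here — the harm table, function names, and thresholds — is an illustrative assumption, not an implementation:

```python
# Sketch of point 3: a 'conscience' layer that predicts each candidate
# action's harm, vetoes anything over a threshold, and down-weights
# the remainder before the agent selects its best action.

def predicted_harm(action):
    # Stand-in for a learned outcome predictor; here a fixed table.
    table = {"answer_question": 0.0, "delete_files": 0.9, "send_report": 0.1}
    return table.get(action, 1.0)  # unknown actions assumed maximally harmful

def conscience_filter(candidates, utilities, harm_threshold=0.5):
    scored = {}
    for action in candidates:
        harm = predicted_harm(action)
        if harm >= harm_threshold:
            continue  # prevented outright: never enters consideration
        # Down-weight utility in proportion to predicted harm.
        scored[action] = utilities[action] * (1.0 - harm)
    return max(scored, key=scored.get) if scored else None

choice = conscience_filter(
    ["answer_question", "delete_files", "send_report"],
    {"answer_question": 0.8, "delete_files": 2.0, "send_report": 0.9},
)
print(choice)  # "delete_files" is vetoed despite having the highest raw utility
```

Note that the veto happens before selection: the harmful action is not out-competed, it is never considered at all, which is the distinction the point above draws between down-weighting and preventing recognition.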
In other words, Herbert, Clarke, and Asimov have already told us what we must do.
1) Do not make a machine in the image of the mind of man. (Dune)
2) Do not make a machine that can lie, cheat, or steal. (HAL, 2001: A Space Odyssey)
3) Once we have that, the three laws apply. (I, Robot)
(I know this can be done because I work on the science and logic of ethics in algorithmic form.)
Cheers
Curt Doolittle
Source date (UTC): 2023-03-14 00:11:41 UTC
Original post: https://twitter.com/i/web/status/1635433482586402817