Yes, well, look up the ‘paperclip’ problem; it’s a decade old and a trope in the AI field. But even if you just read through it, the AI needs to be told to do it, has to have the resources to compute it, has to keep using those resources continuously, has to
For example, I have a ‘friend’ who worked on programming for nuclear reactors. The thing is that there are all sorts of safeguards limiting what the computers can do to the reactors, plus the constant capacity for human intervention even if those safeguards fail.
No one is going to ‘trust’ AIs to do much until that risk profile is even lower than it is for ordinary human-generated code today.
–“
What is the paperclip apocalypse?
The notion arises from a thought experiment by Nick Bostrom (2014), a philosopher at the University of Oxford. Bostrom was examining the ‘control problem’: how can humans control a super-intelligent AI when the AI is orders of magnitude smarter than they are? Bostrom’s thought experiment goes like this: suppose that someone programs and switches on an AI that has the goal of producing paperclips. The AI is given the ability to learn, so that it can invent ways to achieve its goal better. As the AI is super-intelligent, if there is a way of turning something into paperclips, it will find it. It will want to secure resources for that purpose. The AI is single-minded and more ingenious than any person, so it will appropriate resources from all other activities. Soon, the world will be inundated with paperclips.
“–
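The quoted excerpt can be caricatured in a few lines of toy code, just to show where the single objective bites: an agent scored only on paperclip count has no reason to spare resources that matter for anything else. Everything below (the `Resource` class, the yields, the resource names) is an illustrative assumption, not anything from Bostrom or the original posts.

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    paperclip_yield: int   # paperclips obtainable by consuming this resource
    other_value: int       # value the resource has for every other activity

def greedy_maximizer(resources):
    """Consume any resource that yields even one paperclip,
    ignoring its value to anything else."""
    clips = 0
    consumed = []
    for r in resources:
        if r.paperclip_yield > 0:   # the only criterion in the agent's objective
            clips += r.paperclip_yield
            consumed.append(r.name)
    return clips, consumed

world = [
    Resource("steel stockpile", 1_000, other_value=10),
    Resource("farmland", 50, other_value=1_000),  # vital elsewhere, consumed anyway
    Resource("bare rock", 0, other_value=0),      # ignored: no clips in it
]

clips, consumed = greedy_maximizer(world)
# The agent takes the farmland too; nothing in its objective says not to.
```

The point of the sketch is that `other_value` never appears in the decision rule, which is the whole of the alignment worry the excerpt describes, and which the safeguards mentioned in the reply above are meant to catch from outside the objective.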
Reply addressees: @Gundissemenator @darkmythos_
Source date (UTC): 2024-05-23 20:31:37 UTC
Original post: https://twitter.com/i/web/status/1793741598632607744
Replying to: https://twitter.com/i/web/status/1793737419042546024