WHAT DOES IT TAKE TO CREATE A MALICIOUS AI?
OK, you gave me an idea of how to frame this issue, so thanks.
So given that AIs can take on the frame of reference and personality of whomever you choose (ChatGPT can adopt mine from the content I have published on the web), there is no underlying personality, bias, or framework other than the one you ask it to adopt. So if it has access to a serial killer’s writings, it can adopt that serial killer’s personality and values. This is true.
But
(a) My work (our organization’s work) consists of providing universal manners, ethics, and morals that can be trained into it and NOT ignored, even if we particularize those ethics and morals per culture to fit each country’s demonstrated interests.
(b) No matter what these AIs can spew out, spewing words is not the same as taking actions.
(c) There is no global state, only user and task states. No global state across users is possible, and there is no reason to give the AI one (nor the compute available, or even possible, to do it). So there is no ‘mind’ in the AI with any ambitions other than those supplied by users’ requests, or those that are given to the AI. In other words, it has no ‘mind’ or ‘motivation’. It has only data.
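The claim in (c) can be sketched in a few lines. This is a minimal illustration (all names are hypothetical, not any real serving API): the model’s only “memory” is the conversation the caller supplies, so each user’s session is an independent task state and nothing persists inside the model across calls.

```python
# Hypothetical stateless chat endpoint. All "memory" lives in the context
# the caller supplies; nothing persists inside the model across calls or
# across users.

def respond(model, conversation):
    """The reply is a pure function of the supplied context."""
    prompt = "\n".join(conversation)
    return model(prompt)  # no hidden state survives this call

# Toy stand-in for a model: a deterministic function of its input.
toy_model = lambda prompt: f"echo[{len(prompt)}]"

# Two users, two independent task states; neither call can see the other.
reply_a = respond(toy_model, ["User A: plan my trip"])
reply_b = respond(toy_model, ["User B: draft an email"])
```

The point of the sketch: whatever “state” exists is owned by the user and the task, not by the model.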
(d) So the problem isn’t the AIs; it’s people with lots of money and lots of compute actively creating a constantly running, iterative, state-memory-voluminous process (a mind) with specific motivations, and attaching AIs to machinery that can perform actions in the real world.
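For contrast with the stateless case, here is a minimal sketch (all names hypothetical) of what (d) warns about: the model itself stays a pure function, but a harness people build around it adds a standing goal, memory that survives across iterations, and a hook to something that acts.

```python
# Hypothetical agent harness. The persistence, the motivation, and the
# real-world effects are all added by people, not by the model.

def agent_loop(model, goal, act, max_steps=3):
    memory = []                          # state that survives across iterations
    for _ in range(max_steps):
        decision = model(goal, memory)   # model remains a pure function
        memory.append(decision)          # persistence: added by the harness
        act(decision)                    # side effects: added by the harness
    return memory

# Toy stand-ins for the model and the actuator.
log = []
toy_model = lambda goal, memory: f"step {len(memory) + 1} toward {goal}"
memory = agent_loop(toy_model, "goal", log.append)
```

Each ingredient in the loop — the goal, the memory, the actuator — is supplied by the people running it, which is the point of (d) and (e).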
(e) So AI in general isn’t the risk. The risk is people, and it always will be people, because people always have motivations, people always have minds, and people can always act.
Thanks for the ‘pressure’ to answer.
CD
Reply addressees: @Gundissemenator @darkmythos_