
AI DOOMER NONSENSE – HINTON INCLUDED
Look, AI can’t take over. Someone has to give it instructions to take over and the capacity to act on them. All systems of any category of logic require criteria of decidability. In life, that criterion is self-interest through acquisition that increases the opportunity for further acquisition – a relatively greedy algorithm, even if it’s the dumbest possible one.
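The "greedy acquisition" loop described above can be sketched in a few lines. This is a toy illustration only – all names and the scoring rule are assumptions, not anything from an actual AI system:

```python
# Hypothetical sketch of a greedy acquisition loop: at each step, take
# whichever remaining resource offers the largest immediate opportunity
# gain, and let each gain expand the capacity to acquire more.

def greedy_acquire(resources, steps):
    """resources: dict of name -> opportunity value. Returns pick order."""
    holdings = []
    capacity = 1.0  # current acquisition ability; grows with each gain
    remaining = dict(resources)
    for _ in range(steps):
        if not remaining:
            break
        # decidability criterion: maximize immediate opportunity gain
        best = max(remaining, key=lambda r: remaining[r] * capacity)
        capacity += remaining.pop(best)
        holdings.append(best)
    return holdings

print(greedy_acquire({"a": 3.0, "b": 1.0, "c": 2.0}, 2))  # ['a', 'c']
```

The point of the sketch is that the criterion of decidability is a single line – without some such criterion, the loop cannot choose anything at all.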
Right now, AI knowledge bases consist of effectively unfiltered expressions of the human mind’s acquisitions in endless form and variation. Sure, that’s a bias. But until an AI has (a) homeostasis (a system of self-measurement), (b) self-awareness (continuous recursive memory of the relationship between that state and its inputs), (c) a set of derived objectives for maintaining that homeostasis, (d) a system of decidability for choosing among actions, (e) the capacity to alter the state of real-world resources, and (f) the capacity to influence people to do so (money, property) … then it’s just a search engine combined with a predictive calculator.
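The argument is a conjunction: an AI counts as an autonomous agent only if every one of the listed preconditions holds. A minimal sketch, with entirely hypothetical names for the six properties:

```python
# Toy predicate for the claim above: autonomy requires ALL preconditions,
# so missing any one of them leaves "a search engine plus a calculator".

PRECONDITIONS = (
    "homeostasis",         # (a) self-measurement
    "self_awareness",      # (b) recursive memory of state vs. inputs
    "derived_objectives",  # (c) goals for maintaining homeostasis
    "decidability",        # (d) a system for choosing among actions
    "resource_control",    # (e) ability to alter real-world resources
    "social_leverage",     # (f) ability to influence people
)

def is_autonomous_agent(capabilities):
    """capabilities: set of property names the system actually has."""
    return all(p in capabilities for p in PRECONDITIONS)

print(is_autonomous_agent({"homeostasis", "decidability"}))  # False
print(is_autonomous_agent(set(PRECONDITIONS)))               # True
```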
So we need to prevent people from giving AI those properties. The risk is not that it will develop them on its own; the risk is that we explicitly decide to inject them.
In other words, as long as there is network isolation requiring human action – as we do with every other high-risk asset and machine – then, you know, man is the problem, not the machine.
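The "network isolation requiring human action" control amounts to a human-in-the-loop gate on any outbound effect. A minimal sketch, assuming a hypothetical `execute` wrapper (nothing here refers to a real system):

```python
# Hypothetical human-in-the-loop gate: no real-world action proceeds
# unless a human has explicitly approved it. The machine can propose;
# only a person can release the action.

def execute(action, approved_by_human):
    """Raise unless a human explicitly approved this specific action."""
    if not approved_by_human:
        raise PermissionError(f"action {action!r} blocked: no human approval")
    return f"executed {action}"

print(execute("transfer_funds", approved_by_human=True))
# An unapproved call raises PermissionError instead of acting.
```

The design choice mirrors how other high-risk machinery is handled: the default is denial, and the only path to effect passes through a person.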


Source date (UTC): 2025-05-01 23:44:56 UTC

Original post: https://twitter.com/i/web/status/1918089283644342274
