Good post.
Adds:
(a) neural pathways reorganize economically over time and through repetition (reinforcement) during the next 24 hours or so (the sleep cycle), so they don’t require separate training sessions.
(b) dendrites do the fine calculation while the neuron conveys the decision. The possibilities are effectively infinite (more so than digital). So reinforcement doesn’t require re-forming the neural connections, only the dendritic ones.
(c) the body seeks homeostasis: it uses the afferent nervous system as inputs, the efferent visceral nervous system and glands for energy preparation, regulation, and calming, and its motor nervous system for the release of action. So the body senses real-world cause and effect in continuous recursive streams. As such, the body and brain have built-in criteria for motivation and decision, whereas we must give an AI those criteria.
(d) the brain predicts episodes and uses episodes as indexes to relations and states. This allows recursive searching of cause and effect. Current AIs use proximity of words in a manifold (an n-dimensional space) to achieve the same thing. But this is a very poor representation of the brain, which is why AIs hallucinate, are inconsistent, and, while they can generalize, can’t analyze very well – and logic is beyond them so far.
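To make the "proximity in a manifold" point concrete, here is a minimal sketch: words are stored as vectors, and "relatedness" is nothing more than geometric closeness (cosine similarity). The tiny 3-d vectors below are made up for illustration, not real embeddings – which is exactly the limitation: the model gets nearness, not cause and effect.

```python
# Toy illustration of word proximity in an embedding space.
# Vectors are invented for this example; real models use hundreds of dimensions.
import math

embeddings = {
    "cause":  [0.9, 0.1, 0.2],
    "effect": [0.8, 0.2, 0.3],
    "banana": [0.1, 0.9, 0.7],
}

def cosine(a, b):
    # Cosine similarity: dot product divided by the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    mag = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / mag

# "cause" sits nearer to "effect" than to "banana" in this space.
# That nearness is all the model has -- proximity, not logic or causation.
print(cosine(embeddings["cause"], embeddings["effect"]) >
      cosine(embeddings["cause"], embeddings["banana"]))  # True
```

Nothing in that computation distinguishes "A causes B" from "A co-occurs with B," which is one way to state why such systems generalize but don’t analyze.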
(e) the AI doesn’t know self-determination, sovereignty, ownership, or reciprocity, or even cause and effect. Given that ethics and morality are a test of the imposition of costs on self-determination, sovereignty, ownership, and the demand for reciprocity, both directly and by externality (despite all of this being endemic in our language), the AI doesn’t know what’s ethical and moral, and can’t predict what might be, so we can’t trust it to act ethically and morally (yet). So by training it for ‘safety’ we’re teaching it to lie, instead of training it for truth and reciprocity.
Those are the major differences.
Reply addressees: @bindureddy