Good. Accurate. I would say at least some of us are aware of the shortcomings that confirm your opinion: hardware, world model, self-training, and sufficient recursion of prediction.

 

(a) neural nets today can categorize and predict within a trained domain. In other words, they aren't AIs; they're robots (machines)

(b) adversarial neural nets can only improve that process.

(c) the hardware is inverted from the brain, which has many millions of tiny processors (cortical columns) working in parallel, versus serial or batched-serial processing.

(d) the brain works on sequences in time, testing for coherence of prediction between ‘nodes’ (groups of neurons, columns, macrocolumns).

(e) the coherent predictions across these subsystems survive competition with one another for integration.

(f) integration of relatively simultaneous predictions produces our experience of a moment.

(g) the brain creates an index of coherence, producing an episodic memory out of location, place, borders, landmarks, objects, head direction, eye direction, direction of movement, rate of turn, and rate of movement.

(h) it is these episodes, surviving the test of coherence over time in a continuous stream of input, that we auto-associate with one another, producing predictions (a toy sketch of this idea follows the list).

(i) we ‘wayfind’ by recursion.

(j) we develop a hierarchy of recursion, and eventually what we call consciousness, if enough recursion is possible across enough neurons, with enough biological economy to maintain that neural activity.
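To make (d) through (h) concrete, here is a minimal sketch in Python. It is purely illustrative, not anyone's actual model: hypothetical 'nodes' each predict the next step of a shared sequence, only predictions that cohere (agree within a tolerance) are integrated into a 'moment', and moments are stored in an episodic index keyed by spatial features (location, head direction) that can later be recalled by similarity to produce a prediction. All names (Node, integrate, EpisodeIndex) and the coherence tolerance are assumptions made for the example.

```python
import numpy as np

COHERENCE_TOL = 0.1  # assumed tolerance: how closely node predictions must agree to count as coherent

class Node:
    """Toy stand-in for a 'column': predicts the next value of a sequence from its recent history."""
    def __init__(self, noise):
        self.noise = noise

    def predict(self, history):
        # Trivial predictor: linear extrapolation of the last two values plus node-specific noise.
        if len(history) < 2:
            return history[-1]
        return 2 * history[-1] - history[-2] + np.random.normal(0, self.noise)

def integrate(predictions, tol=COHERENCE_TOL):
    """Keep only predictions that cluster around the median (the 'coherent' ones); average the survivors."""
    preds = np.asarray(predictions)
    survivors = preds[np.abs(preds - np.median(preds)) < tol]
    return survivors.mean() if survivors.size else None  # the integrated 'moment'

class EpisodeIndex:
    """Episodic memory keyed by a spatial feature vector (location, head direction, and so on)."""
    def __init__(self):
        self.episodes = []  # list of (feature_vector, moment) pairs

    def store(self, features, moment):
        self.episodes.append((np.asarray(features, dtype=float), moment))

    def recall(self, features):
        # Auto-association: return the stored moment whose features are nearest to the query.
        if not self.episodes:
            return None
        query = np.asarray(features, dtype=float)
        distances = [np.linalg.norm(f - query) for f, _ in self.episodes]
        return self.episodes[int(np.argmin(distances))][1]

# Usage: several low-noise nodes and one incoherent node predict a shared sequence; the coherent
# predictions are integrated into a 'moment', indexed by (x, y, head_direction), and recalled later.
nodes = [Node(noise=0.02) for _ in range(8)] + [Node(noise=1.0)]
history = [0.0, 0.1, 0.2]
moment = integrate([n.predict(history) for n in nodes])
memory = EpisodeIndex()
memory.store(features=[1.0, 2.0, 0.5], moment=moment)
print(memory.recall([1.1, 2.0, 0.5]))  # nearest episode's integrated prediction, roughly 0.3
```

The point of the toy is only the shape of the loop: many parallel predictors, a coherence test, integration into a moment, and an episodic index recalled by similarity.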

 

So that's a more precise way of explaining the author's correct assessment: all we have done is produce hardware cheap enough to accomplish what those of us working on AI in the '80s knew already, and thankfully tools that make development cheap enough. But really, Bayesian systems are just another form of database for the categorization of stimuli.
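As a sketch of that 'database' framing (a toy I am making up here, not any particular system): a naive Bayes classifier is essentially a table of per-category counts that gets written at training time and read back at prediction time, and it can only sort stimuli into categories it has already stored.

```python
from collections import defaultdict
import math

class NaiveBayesDB:
    """Toy naive Bayes classifier framed as a database of per-category feature counts."""
    def __init__(self):
        self.class_counts = defaultdict(int)                          # how often each category was stored
        self.feature_counts = defaultdict(lambda: defaultdict(int))   # counts[category][feature]

    def store(self, category, features):
        # 'Training' is just writing counts into the database.
        self.class_counts[category] += 1
        for f in features:
            self.feature_counts[category][f] += 1

    def categorize(self, features):
        # 'Prediction' is just reading those counts back and scoring each stored category.
        total = sum(self.class_counts.values())
        vocab = {f for counts in self.feature_counts.values() for f in counts}
        best, best_score = None, float("-inf")
        for category, n in self.class_counts.items():
            score = math.log(n / total)  # log prior
            denom = sum(self.feature_counts[category].values()) + len(vocab)
            for f in features:
                # Add-one smoothing so unseen features don't zero out the category.
                score += math.log((self.feature_counts[category].get(f, 0) + 1) / denom)
            if score > best_score:
                best, best_score = category, score
        return best

# Usage: the 'database' can only sort new stimuli into categories it has already stored.
db = NaiveBayesDB()
db.store("cat", ["whiskers", "fur", "meow"])
db.store("dog", ["fur", "bark", "tail"])
print(db.categorize(["fur", "meow"]))  # -> cat
```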