Apr 28, 2020, 9:44 AM

Yep. The problem is that the entire industry is built for central computation limited by frequency (heat), rather than distributed association and prediction limited only by numbers (cool). What we need is billions of trivial circuits whose primary difference from neurons (really dendrite-synapse connections) is that they create many local logical (address) connections rather than physical (dendritic-synaptic) ones, storing trivial (sparse) bits of memory in sequences. In addition, the context (episode) creates an index in time AND space, and each fragment of information is locally co-associated with those positions.
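
If it helps, here is a minimal toy sketch of that idea in Python. Everything in it (the class name, the address keys, the episode index) is my own illustrative assumption rather than an existing library or a finished design; it just shows sparse fragments being stored at logical addresses and co-associated with a position in time and space:

    # Toy sketch only (my own illustrative names, not an existing library or design):
    # each "circuit" stores a sparse fragment under a logical address, and every
    # fragment is co-associated with the episode's position in time and space.

    from collections import defaultdict

    class EpisodicSparseMemory:
        def __init__(self):
            # logical address -> sparse set of bits (addresses instead of physical synapses)
            self.fragments = defaultdict(set)
            # (time_index, place_index) -> addresses seen in that episode
            self.episode_index = defaultdict(set)

        def store(self, address, sparse_bits, time_index, place_index):
            # Store a trivial (sparse) fragment and co-associate it with its episode.
            self.fragments[address].update(sparse_bits)
            self.episode_index[(time_index, place_index)].add(address)

        def recall_episode(self, time_index, place_index):
            # Recall every fragment co-associated with that position in time and space.
            return {a: self.fragments[a] for a in self.episode_index[(time_index, place_index)]}

    # Two fragments stored in the same episode come back together:
    m = EpisodicSparseMemory()
    m.store("cup", {3, 17, 42}, time_index=5, place_index="kitchen")
    m.store("steam", {8, 42}, time_index=5, place_index="kitchen")
    print(m.recall_episode(5, "kitchen"))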

Every neuron, micro-column, macro-column, and region of the brain is trivially simple, but in concert they produce in parallel what cannot be done by increasing frequency (and heat).

Graphics processors are architected for parallel processing, and so we ‘hijacked’ them in the 2000s for AI use. And since the human brain uses triangles and hexagons to produce its world model, the graphics processor does solve HALF of the underlying problem: the neocortex (six layers) is a doubling (folding over) of the entorhinal cortex (three layers), dividing the responsibility between identity (top) and relative position (bottom), with the outputs passed forward in the cognitive hierarchy in a vast market competition for coherence.
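
Roughly what I mean by that last phrase, as a toy Python sketch (the names and the simple voting rule are my assumptions, not a claim about the actual cortical mechanism): each column contributes a paired identity/position guess, and "coherence" is just whichever pairing wins the competition.

    # Hedged sketch, not an established model: many simple columns each cast an
    # (identity, relative position) hypothesis, and the hierarchy keeps whichever
    # pairing the most columns agree on. All names are illustrative assumptions.

    from collections import Counter

    def compete_for_coherence(column_outputs):
        # column_outputs: list of (identity, relative_position) votes from columns.
        votes = Counter(column_outputs)
        (identity, position), count = votes.most_common(1)[0]
        return identity, position, count / len(column_outputs)

    # Example: most columns agree the object is a mug with the handle on the left.
    votes = [("mug", "handle-left"), ("mug", "handle-left"), ("bowl", "rim-top"),
             ("mug", "handle-left"), ("mug", "handle-right")]
    print(compete_for_coherence(votes))  # ('mug', 'handle-left', 0.6)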

So it’s not that we don’t (at least now, because this is all recent knowledge) know how to create general intelligence (I certainly do). It’s that (as Turing said) we built the machines for math (top down) instead of thinking (bottom up), with math as merely one of the grammars (logics) resulting from it.

So the current issue, as I understand it, is that we cannot achieve in software what we need hardware for. We need a Manhattan Project to produce thinking machines only because the industry is constructed for the opposite aim, and current (primitive) neural networks can categorize, but only with vast amounts of information and manual tuning.

In this illustration from the attached web page, we see the limit of what current AI is able to do: categorize, and only after lots of training and tuning. This means application-specific hardware, because the hardware is still constructed ‘incorrectly’ for the task of general intelligence.

Conversely, there are many functions where we do not want a general intelligence, which trades an increased possibility of error for a decreased cost of adaptation. Robots are dangerous because they’re not intelligent, but there are many cases where unintelligent danger and intelligent danger are a trade-off.

So we have market demand for (a) simple software problems, (b) application-specific AI problems, and (c) general AI problems.