As far as I know, the binding problem is a non-problem.

(a) There is no reason we cannot produce all three generations of artificial intelligence (Specific AI for tasks, General AI, Conscious AI) other than the capital investment in hardware and the heat problem.

(b) As far as I know, the storage of sparse data is a solvable problem, and solving it also helps with the hardware and heat problems. As far as I know, operational language provides the symbols necessary to store such data, and formal language provides the input, output, and grammar. The limit of imitating neurons is symbol production (semantics and grammar). N-dimensional manifolds allow the commensurability, storage, and searching of all possible concepts open to human consideration, albeit in a geometry we cannot imagine, but which a machine can make easy use of.
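A minimal sketch of the idea above, under my own assumptions: concepts stored as sparse high-dimensional vectors (only the non-zero coordinates are kept, which is the storage saving), made commensurable by a shared similarity measure and therefore searchable. The concept names, dimensions, and weights here are illustrative, not from the original text.

```python
import math

def sparse_cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(v * b.get(k, 0.0) for k, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy "concept space": each concept touches only a few of many
# possible dimensions, so only those coordinates are stored.
concepts = {
    "dog":  {"animal": 1.0, "domestic": 0.8, "loyal": 0.6},
    "wolf": {"animal": 1.0, "wild": 0.9, "pack": 0.7},
    "car":  {"machine": 1.0, "transport": 0.9},
}

def nearest(query: dict, space: dict) -> str:
    """Return the stored concept closest to the query vector."""
    return max(space, key=lambda name: sparse_cosine(query, space[name]))

print(nearest({"animal": 1.0, "wild": 0.5}, concepts))  # → wolf
```

Because every concept lives in the same coordinate system, any two are commensurable: the same distance function compares a dog to a wolf or to a car, which is the searchability the paragraph claims.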

(c) As far as I know, the "ethical/moral" AI problem is solvable through the combination of (i) competition (a firmware conscience), given that hardware memory, unlike wetware memory, is open to continuous inspection, right down to the register level; and (ii) the inclusion of a title (property) dimension in all grammars. (Our English grammar does so both explicitly and implicitly.)
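An illustrative sketch of those two mechanisms together: every proposed action carries a title (property) field, a toy rule checks that field before the act is permitted, and every attempt, permitted or refused, lands in a log that remains open to inspection. The class names and the rule itself are hypothetical stand-ins for whatever a real firmware conscience would enforce.

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    actor: str
    verb: str
    object_: str
    title_holder: str  # the "title dimension": who owns the object

@dataclass
class AuditedMachine:
    name: str
    log: list = field(default_factory=list)  # inspectable, append-only record

    def attempt(self, action: Action) -> bool:
        # Toy moral rule: act only on what you hold title to,
        # or on what is unowned ("commons").
        permitted = action.title_holder in (action.actor, "commons")
        self.log.append((action, permitted))  # refused attempts are logged too
        return permitted

m = AuditedMachine("agent-1")
print(m.attempt(Action("agent-1", "use", "tool", "agent-1")))  # → True
print(m.attempt(Action("agent-1", "take", "car", "agent-2")))  # → False
print(len(m.log))  # → 2
```

The point of the sketch is the asymmetry with wetware: the log and the rule are both ordinary data structures, so an outside auditor can read them at any time, which is what "open to continuous inspection" buys.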

(d) To say that we can duplicate the human mind is rather ridiculous, for the simple reason that it is the last thing we should want to do: humans are amoral, and choose moral and immoral actions pragmatically; it is merely pragmatic to act morally. Man is a super-predator, and there is no value in creating an artificially intelligent super-predator. Ergo, duplicating the human mind should be at best illegal, and at worst tantamount to a war crime. Instead, machines require a means of decidability, a symbolic system, and an auditing system, all rendered in human-readable grammars. And we provide the decidability. Not amoral decidability, but moral.

(from a previous post)

There are three different stages of Artificial Intelligence we have to discuss:

  1. Specific Artificial Intelligence (imitation intelligence)

SAI can perform routine tasks, often better than people can, but is bound by algorithmic limits.

Achieved by sufficient hardware and processing speed, algorithms, and existing software and databases.

vs

  2. General Artificial Intelligence (functional intelligence)

GAI can solve problems and make decisions, can be bound by limits and act morally.

Achieved by sufficient hardware, processing speed, and algorithms, plus, I suspect, new software and database structures (think video cards and geometry).

vs

  3. Conscious Artificial Intelligence (creative intelligence)

CAI can want, hypothesize, identify opportunities, theorize, create, invent, learn, and evolve, and can transcend and circumvent limits and morality.

Achieved by what I suspect will be new hardware and embedded software, along with new software and database structures (as above).