There is a large body of work on the risks of Artificial Intelligence, and on the spectrum of methods of defending against one, from policing to forced forgetting. But central to that work is the consensus that we are still quite far from producing an Artificial General Intelligence (AGI).
That’s because we are demonstrably very, very far from creating any form of general AI that can compete with even a small group of intelligent humans in the identification of patterns. We are barely at the brainstem level, and nowhere near the autonomous, conscious, cooperative (sympathetic), or theoretical levels.
Sure, there is clearly a multi-dimensional category of problems that we are forever going to need computational help in modeling and manipulating – but it is unclear whether ever-increasing dimensions exist in the universe, or whether we cross a boundary beyond which the universe does not instantiate these phenomena and they are purely mental relations between events of related behavior.
For example, in physics, in biochemistry, in economics, and in mental phenomena, we seem to be close to discovering the underlying number of physical dimensional relations (laws). We seem to have a pretty clear view of molecular and protein relations. We seem to have a pretty weak view of human cooperative relations. And we are almost nowhere in our understanding of mental conceptual relations.
However, even though each of those sets of relations increases in scale, it is very unlikely that any of them increases infinitely in scale. So at some point we can identify a minimum set of general rules for describing each of them.
My current opinion is that mathematicians understand how to model n-dimensional relations; we just do not know the limits of the natural relations that we wish to model.
When we consider what humans can ‘think of’ and what ‘patterns they can seek’, it seems to require an awful lot of information to identify a new pattern that adds a new dimension. I am fairly sure that artificial intelligence can help us do this.
But I will stick with the very obvious proposition that, for the identification of dimensions and the identification of patterns of relations, our problem remains gathering information of sufficient precision to identify relations, not a problem of humans identifying relations.
In other words, we evolve conceptually very fast if we possess data obtained by tools, which is then reducible by analogy to experience such that we can create a mental model of it.
This is why I think we misconceptualize the scientific method. The scientific method simply asks us to perform due diligence upon our testimony to reduce bias and error, and to prevent deception. The rest of the discipline requires the custom development of increasingly precise and diverse tools for the process of inspection and measurement. In this sense, I group ALL crafts together in the pursuit of truth of some sort, and then categorize the three forms of coercion – gossip/moral, remuneration/trade, force/law – as the three dimensions of coercion, alongside the one dimension of truth (craft).
In other words, the scientific method is a MORAL set of rules that we can certainly impose as RULE OF LAW, enforcing those moral rules, and then as such all of us in all disciplines are bound by the scientific, moral, and legal constraint of truthfulness. And then we have only one discipline of knowledge (craft/investigation), one of cooperation (negotiation/trade), one of positive ambitions (gossip/rallying), and one of limits to those ambitions (force/law). I think this two-axis view of social orders is probably about as close as we need for any analysis of human orders.
Now, back to artificial intelligence, I tend to look at the problem the same way:
– investigate/discover, (I would call this modeling rather than computation, just as I think Turing wanted computers to use ‘expensive’ logic rather than ‘cheap’ computation.)
– fantasize (search for patterns)
– (voluntary) trade, (search for opportunities)
– limits (test our limits)
And I would say that any artificial intelligence should possess those four ‘processors’, and only be introspectively AWARE of the results of those four. (It’s not as if the CPU is aware of the contents of the FPU, for example. It just compares results.)
In other words, just as we cannot observe our brainstem processes, our physical movement processes, or our search (intuitionistic) processes, but only the RESULTS of those processes – which we then feed back for further processing (recursive searching) – I suspect there is almost no VALUE in a general intelligence engaging in introspective observation and permutation. In fact, I am almost certain that this is the definition of general intelligence.
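To make that architecture concrete, here is a minimal sketch in Python. Every name in it is hypothetical, invented purely for illustration: four opaque processors matching the list above, and an observer that is aware only of their results, never of their internal processes.

```python
from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class Result:
    source: str   # which processor produced it
    payload: Any  # the result itself; the process stays hidden

class Processor:
    """An opaque module: the observer sees only what run() returns."""
    def __init__(self, name: str, fn: Callable[[Any], Any]):
        self.name = name
        self._fn = fn  # the internal process, never exposed

    def run(self, observation: Any) -> Result:
        return Result(self.name, self._fn(observation))

class Observer:
    """Introspectively aware only of results, never of processes."""
    def __init__(self, processors: List[Processor]):
        self.processors = processors
        self.results: List[Result] = []

    def step(self, observation: Any) -> None:
        # Collect results, then feed them back for further
        # (recursive) processing on later steps.
        self.results.extend(p.run(observation) for p in self.processors)

# The four 'processors' from the list above (functions are stand-ins).
observer = Observer([
    Processor("investigate", lambda x: f"model({x})"),
    Processor("fantasize",   lambda x: f"patterns({x})"),
    Processor("trade",       lambda x: f"opportunities({x})"),
    Processor("limits",      lambda x: f"tests({x})"),
])
observer.step("sensory data")
print([r.payload for r in observer.results])
```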
Now, there are two ways to handle limits: either deny the observer (general intelligence) access to immoral, unethical, and illegal results, or weight the results so that it can ‘solve for’ (search for) methods of obtaining the same results by moral means.
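A sketch of those two strategies, assuming (purely for illustration) that each candidate result carries a cost and a flag marking whether it violates limits:

```python
# Strategy 1: deny the observer access to impermissible results entirely.
def filter_results(results):
    return [r for r in results if not r["violates_limits"]]

# Strategy 2: weight results so the search prefers moral means to the
# same end; impermissible results sort last instead of disappearing.
def weight_results(results, penalty=1000.0):
    return sorted(results, key=lambda r: r["cost"] +
                  (penalty if r["violates_limits"] else 0.0))

candidates = [
    {"plan": "bribe",     "cost": 1.0, "violates_limits": True},
    {"plan": "negotiate", "cost": 5.0, "violates_limits": False},
]
print(filter_results(candidates))  # only 'negotiate' survives
print(weight_results(candidates))  # 'negotiate' first, 'bribe' last
```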
And there is a big difference between identifying opportunities (finding a search pattern) and constructing a plan. And plans (workflow processes) are a known problem. As humans, we prefer to work in nodes (deliverables, ‘jobs’, or lists) because context switching is very hard for us and reduces the value of our general intelligence in performing tasks – we can only conceive of what we can keep ‘echoing’ between our short- and long-term memories. Computers do not have this problem, and they can process many threads of ‘workflows’ in parallel.
To create a plan (a sequence of operations), any machine must produce some sort of data structure to accommodate it. We humans possess one of these structures as well, but it is extremely limited – which is why we need numbers, writing, lists, sentences, paragraphs, stories, and plans. We actually repeat simple lists over and over in our imaginations in order to try to keep them. But machines do not have this problem of ‘losing context’ or discrete memory.
So we can also regulate the execution of the plan, since a plan must occur in sequence in time. It’s possible to create a conscience or judge that regulates the plan (attempts to falsify the optimistic theory), and that has no interest other than falsifying the optimistic theory (plan).
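A minimal sketch of that arrangement, with all names assumed for illustration: a plan as a sequence of steps, and a judge whose only interest is falsifying it.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Step:
    action: str
    precondition: Callable[[], bool]  # what must hold before this step runs

@dataclass
class Plan:
    goal: str
    steps: List[Step] = field(default_factory=list)

class Judge:
    """A 'conscience' with no interest other than falsifying the plan."""
    def review(self, plan: Plan) -> List[str]:
        objections = []
        for i, step in enumerate(plan.steps):
            if not step.precondition():
                objections.append(
                    f"step {i} ({step.action!r}) fails its precondition")
        return objections

plan = Plan("deliver report", [
    Step("gather data", precondition=lambda: True),
    Step("publish",     precondition=lambda: False),  # judge objects here
])
print(Judge().review(plan))  # -> ["step 1 ('publish') fails its precondition"]
```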
And since, unlike human minds, machines can directly inspect the workings of one another, it’s possible to police (bottom-up) the theories (top-down) of any artificial general intelligence.
So my view is this:
(a) the fundamental problem is one of data structures, not pattern finding. And I believe that, in order to be useful in any performance scenario, these data structures will be spatially n-dimensional, and searches will use pattern identification in n-dimensional patterns through them – identifying what’s missing as opportunity rather than what matches as ‘true’ (see the sketch after this list).
(b) that language, and the network of symbols it refers to, can accurately describe the universe.
(c) that the four functions should be invisible to the consciousness that makes choices about which opportunities to explore (provides decidability).
(d) that a separate AI should police the plans (workflow).
(e) that it is unlikely that any intelligence with these constraints would in fact be described as conscious, but merely as a very complex calculator.
(f) Like the movie Memento, I think the first dangerous problem is the machine storing information outside itself, and incentives to use that information to reconstruct such a plan – given some motivation to do so. And like many other scenarios, the second dangerous problem is seeking to make people happy rather than seeking to solve the problems requested of it. Any machine that tries to make us happy will eventually harm us, because we will give it the same incentive that drugs give us.
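As a toy illustration of point (a), referenced above: a sparse n-dimensional structure in which the search returns the empty cells – what is missing, as opportunity – rather than the occupied cells that match. The shape and data are invented for the example.

```python
import itertools

# Occupied cells of a sparse 3-dimensional structure.
observed = {(0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 1)}

def missing_cells(observed, shape):
    """Return cells NOT present: what's missing as opportunity,
    rather than what matches as 'true'."""
    return [cell
            for cell in itertools.product(*(range(n) for n in shape))
            if cell not in observed]

print(missing_cells(observed, shape=(2, 2, 2)))
# -> [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
```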
Curt Doolittle
The Propertarian Institute
Kiev, Ukraine