ARTIFICIAL INTELLIGENCE AND THE ORIGIN OF PROPERTY-IN-TOTO
When I worked on AI in the early 1980s, I used a fairly simple storage mechanism: vectors consisting of the addresses of before, during, and after states. A state consisted of a table (array) of symbols (values), and symbols of actions, and … well, I won't get into that detail here, but they influenced subsystems (again, arrays), subsystems influenced emotions, and emotions had a half-life. In assembly language this little thing was very fast.
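As a rough illustration of that design, here is a minimal sketch in modern Python (the original was assembly; every name, field, and value here is illustrative, not from the original program): states link before/after snapshots by reference, and emotions decay exponentially by half-life.

```python
from dataclasses import dataclass

@dataclass
class State:
    symbols: dict                  # symbol table: name -> value
    before: "State | None" = None  # reference to the preceding state
    after: "State | None" = None   # reference to the following state

@dataclass
class Emotion:
    intensity: float
    half_life: float  # in ticks

    def decay(self, ticks: float) -> None:
        # exponential decay: intensity halves every `half_life` ticks
        self.intensity *= 0.5 ** (ticks / self.half_life)

# Chain two states together, as the address vectors did.
s0 = State(symbols={"enemy_visible": 1})
s1 = State(symbols={"enemy_visible": 0}, before=s0)
s0.after = s1

fear = Emotion(intensity=1.0, half_life=10.0)
fear.decay(10.0)  # one half-life elapses
print(round(fear.intensity, 3))  # 0.5
```

The half-life is what keeps emotional context from accumulating forever: absent new stimulus, it fades on its own.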
This effect was a very primitive version of what we think of as 3D collision detection today (which is the right way of doing it; I just didn't think of it back then. I have always thought a bit too 'textually'). The point being that most of our brains work by constructing symbols (objects, whatever).
But there is no reason to merge the hardware and software problems into a single system if hardware can produce symbols, "meaning". Basically, it was just a search engine that retained emotional context over so many recursions, and took actions if it got excited. (I think I had sixteen emotions at the time.)
I had a blast with it because I was working on tank AIs (for games) and trying to give them emotions. To simulate input (I was just building tests, not a full simulation), I simply fed in a bitmap. The problem I kept having is that if you make it exciting to kill things …. I sort of got this psychopathic behavior out of the program through iterative positive reinforcement. Which is obvious, right? lol Anyway. What I learned was that (a) you needed a lot of expensive hardware, which we are finally able to produce today; (b) you needed a lot of informational density to produce non-deterministic behavior; (c) you needed even more information density to make decisions possible; in other words, decisions are based upon information density (marginal differences between forecasts and perceptions), and it's differences that matter for decidability; and (d) there was no way any meaningful AI was going to happen until Moore's Law did its work for a few decades.
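The runaway-reinforcement problem is easy to reproduce in a toy model. This is not the original tank AI, just a sketch under simple assumptions: kills generate excitement, excitement feeds back into aggression, and decay is too weak to offset the reward, so behavior converges on maximal, single-minded aggression.

```python
def run(rounds: int, reward: float = 0.3, decay: float = 0.9) -> float:
    """Iterate a crude excitement/aggression feedback loop.

    decay  : per-round multiplicative fade of aggression (half-life analogue)
    reward : how exciting a kill is made to be
    """
    aggression = 0.1
    for _ in range(rounds):
        kill_chance = aggression           # more aggression -> more kills
        excitement = kill_chance * reward  # kills are made exciting
        # net gain per round (1.2x here) outweighs decay, so it runs away
        aggression = min(1.0, aggression * decay + excitement)
    return aggression

print(run(50))  # 1.0 — saturated, "psychopathic" single-mindedness
```

Because each round multiplies aggression by (decay + reward) = 1.2 > 1, no starting value short of zero avoids saturation; only lowering the reward or raising the decay does.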
In about 2005 or so (I can't remember exactly), I had been very ill and was working with the Half-Life engine after work again, after using the Quake engine in the late '90s. I understood that the vector/state problem was solvable with geometry and the processing power of video cards. Someone from MSFT who had been working on the B-2 bomber software talked to me about manifolds (topologies) as data stores. Another guy from MSFT talked to me about developing a new form of programming to do all of this. The problem was how to store the data in geometries.
I was struck immediately by the fact that the reason people can do so many things and store information so efficiently is that ‘man is the measure of all things to man’ – in other words, we store information that we can act upon. So the frame of reference is our actions. And this solves the symbol problem. In other words, symbol tables (meaning) could be constructed from combinations of possible actions. This solves the information density problem.
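The idea that meaning can be built from combinations of possible actions can be sketched in a few lines. This is a hypothetical illustration (the object names and actions are invented for the example): each object's "symbol" is simply the set of actions it affords the actor, so two objects that afford the same actions collapse into one symbol, and only actionable distinctions consume storage.

```python
# Symbol table as affordances: meaning = the actions an object makes possible.
AFFORDANCES = {
    "rock":   frozenset({"grasp", "throw", "strike"}),
    "pebble": frozenset({"grasp", "throw", "strike"}),
    "stick":  frozenset({"grasp", "throw", "strike", "point"}),
    "cliff":  frozenset({"climb", "fall"}),
}

def same_symbol(a: str, b: str) -> bool:
    # Objects are the "same thing" to the actor when they afford
    # identical actions; no other properties are stored at all.
    return AFFORDANCES[a] == AFFORDANCES[b]

print(same_symbol("rock", "pebble"))  # True
print(same_symbol("rock", "stick"))   # False
```

Storing only action-relevant distinctions is what keeps the information density tractable: the frame of reference is what the actor can do, not what the world is.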
This was when I started thinking about what we call ‘property in toto’ today. That is, we attribute value to useful property (objects of utility). And that property is an unlimited means of object definition.
So, aside from creating symbols for actions, I created symbols for property (things I can act upon to transform). And this meant that it was possible to use property as a test of morality for all actions of an artificial intelligence. In other words, I understood that the way we create moral AIs is the same way we create moral human beings: an AI can't even THINK about an object (a form of property) it doesn't have permission to act upon.
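A minimal sketch of that permission test, with invented names (this is not the original system): the planner can only obtain a symbol for an object through a gate that checks the property set, so an object outside the AI's property never becomes thinkable at all.

```python
class World:
    """Symbols are only constructed for objects the actor may act upon."""

    def __init__(self, permissions):
        self.permissions = set(permissions)

    def symbol(self, obj):
        # The moral constraint sits below planning: without permission,
        # no symbol exists, so the object cannot even enter a plan.
        if obj not in self.permissions:
            raise PermissionError(f"no permission to act on {obj!r}")
        return f"sym:{obj}"

w = World({"my_tank", "my_ammo"})
print(w.symbol("my_tank"))  # sym:my_tank
# w.symbol("your_tank") would raise PermissionError before any plan forms.
```

The design choice is that morality is enforced at symbol construction rather than at action selection, mirroring the claim that a moral agent cannot even conceive of acting on what is not its property.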
So you see. That is where all of this work you see in Propertarianism has come from. I think in terms of artificial intelligences that are not conscious. I have understood how consciousness is not necessary for most of what humans do.
And that was how I began to understand that we are mere riders (consciousness) on elephants (intuition), and that our ‘consciousness’ is merely the result of needing to empathize with and negotiate with other riders on behalf of our elephants.
But the elephant is influenced by a demon called our genes, and the elephant is happy to lie to us to get us to act as its agent even against our will.
As far as I know this is about as precise an understanding as you need to have.
I can tell you how fish, reptilian, mammalian, and 'human' brains work using very simple processes through the thalamus and short-term memory to create what we call 'experience' or 'consciousness'. But it doesn't matter. For the purposes of understanding human existence, it is very hard to train the rider to control the elephant, because the rider is pretty dumb, really, and easily fooled, and the elephant is an exceptional liar.
We can do it though. And that is what I am trying to accomplish when I use the words “Truth, Agency, and Transcendence.” Sovereignty is just a means of limiting our actions to the moral.
Curt Doolittle
The Propertarian Institute
Kiev, Ukraine
Source date (UTC): 2017-07-26 20:37:00 UTC