Category: AI, Computation, and Technology

  • MORE AI THOUGHTS

    MORE AI THOUGHTS

    What made my early-1980s Artificial Intelligence design different at the time was storing information (knowledge) as ‘stories’: before, during, and after, where ‘during’ consisted of sets of nested before-during-after stories of opportunity, action, and reward (cost). In other words, ‘during’ consisted of the actions that were possible and their costs. And the ‘root’, or lowest ‘during’, was a single action.

    Now, this is a very simple way of using three pointers to store full grammars in terms that were ‘understandable’ and ‘auditable’ to humans.

    And while that was a very simple method of creating symbolic networks (and, in assembly language, very fast), the principle is still the same: if AIs are limited to expressing grammars in human-language terms, they avoid the pitfalls of mathematical transformations and set transformations that are not symbolically auditable. In other words, you generate an AI that acts operationally instead of by formula. You would think it would be slower, but it isn't. If it needs to be faster and use ‘math’, then that is a problem of symbol creation in hardware, not symbol comparison in software.
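    The three-pointer ‘story’ structure above can be sketched in modern terms. This is a minimal reconstruction in Python (the original was assembly); the names (`Story`, `Action`, `audit`) and the example stories are my own illustration, not the original design:

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Action:
    name: str
    cost: float
    reward: float

@dataclass
class Story:
    """A 'story': a before state, a 'during', and an after state.

    'During' is either a single Action (the 'root' case) or a set of
    nested sub-stories of opportunity, action, and reward (cost).
    """
    before: str                          # prior state, in human terms
    during: Union[Action, List["Story"]]
    after: str                           # resulting state, in human terms

    def audit(self, depth: int = 0) -> List[str]:
        """Walk the story and emit a human-readable trace; because every
        node is named in human terms, the whole grammar is auditable."""
        pad = "  " * depth
        lines = [f"{pad}before: {self.before}"]
        if isinstance(self.during, Action):
            a = self.during
            lines.append(f"{pad}do: {a.name} (cost {a.cost}, reward {a.reward})")
        else:
            for sub in self.during:
                lines.extend(sub.audit(depth + 1))
        lines.append(f"{pad}after: {self.after}")
        return lines

# A root story is a single action; a higher story nests root stories.
boil = Story("cold water", Action("heat", cost=1.0, reward=0.0), "hot water")
tea = Story("thirsty",
            [boil, Story("hot water", Action("steep", 0.1, 1.0), "tea")],
            "satisfied")
trace = tea.audit()
```

    Each node costs only three pointers, yet the `audit` walk shows the whole grammar in terms a human can check.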

    Why does this matter? Because propertarianism provides the answer to moral ai design.


    Source date (UTC): 2017-07-31 20:19:00 UTC

  • ARTIFICIAL INTELLIGENCE AND THE ORIGIN OF PROPERTY-IN-TOTO

    ARTIFICIAL INTELLIGENCE AND THE ORIGIN OF PROPERTY-IN-TOTO

    When I worked on AI in the early 1980s I used a fairly simple storage mechanism: vectors consisting of addresses of before, during, and after states. A state consisted of a table (array) of symbols (values), symbols’ actions, and … well, I won’t get into that detail here, but they influenced subsystems (again, arrays), subsystems influenced emotions, and emotions had a half-life. In assembly language this little thing was very fast.

    This effect was a very primitive version of what we think of as 3D collision detection today (which is the right way of doing it; I just didn’t think of it back then. I have always thought a bit too ‘textually’). The point being that most of our brains work by constructing symbols (objects, whatever).

    But there is no reason to merge the hardware and software problems into a single system if hardware can produce symbols’ “meaning”. I mean, basically, it was just a search engine that retained emotional context over so many recursions, and took actions if it got excited. (I had, I think, sixteen emotions at the time?)
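    A minimal sketch of the emotions-with-a-half-life mechanism described above, in Python rather than the original assembly; the class names, the threshold value, and the particular half-lives are hypothetical illustrations:

```python
class Emotion:
    """An emotional register with a half-life: stimulation decays
    exponentially over time instead of being cleared each cycle."""
    def __init__(self, name: str, half_life: float):
        self.name = name
        self.half_life = half_life
        self.level = 0.0

    def stimulate(self, amount: float) -> None:
        self.level += amount

    def decay(self, dt: float) -> None:
        # after one half-life, the level falls to half its value
        self.level *= 0.5 ** (dt / self.half_life)

class Agent:
    """Acts when any emotion crosses an excitement threshold; the
    half-life retains emotional context across recursions."""
    THRESHOLD = 1.0

    def __init__(self, emotions):
        self.emotions = {e.name: e for e in emotions}

    def tick(self, stimuli: dict, dt: float = 1.0) -> list:
        actions = []
        for name, e in self.emotions.items():
            e.decay(dt)
            e.stimulate(stimuli.get(name, 0.0))
            if e.level >= self.THRESHOLD:
                actions.append(name)
        return actions

agent = Agent([Emotion("fear", half_life=2.0),
               Emotion("aggression", half_life=4.0)])
agent.tick({"fear": 0.6})         # 0.6 is below threshold: no action
acts = agent.tick({"fear": 0.6})  # residue plus new stimulus crosses 1.0
```

    The second tick acts because the first stimulus has only partly decayed; that carried-over context is the point of the half-life.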

    I had a blast with it because I was working on tank AIs (for games) and trying to give them emotions. To simulate input (I was just building tests, not a full simulation) I simply fed in a bitmap. The problem I kept having is that if you make it exciting to kill things … I sort of got psychopathic behavior out of the program through iterative positive reinforcement. Which is obvious, right? lol. Anyway. What I learned was that (a) you needed a lot of expensive hardware, which we are finally able to produce today; (b) you needed a lot of informational density to make non-deterministic behavior; (c) you needed even more informational density to make decisions possible. In other words, decisions are based upon informational density (marginal differences between forecasts and perceptions). It's differences that matter for decidability. And (d) there was no way any meaningful AI was going to happen until Moore's Law did its work for a few decades.
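    Point (c) can be illustrated with a toy decidability test. This is my own framing, not the original code: a ‘decision’ exists only when forecast and perception differ by more than the noise floor, and the numbers below are invented:

```python
def decidable(forecast: float, perception: float, noise: float) -> bool:
    """A decision is possible only when the marginal difference between
    what was forecast and what is perceived rises above the noise
    floor: with no difference, there is nothing to decide on."""
    return abs(forecast - perception) > noise

surprise = decidable(10.0, 14.0, noise=2.0)   # large difference: decidable
no_news = decidable(10.0, 10.5, noise=2.0)    # within noise: undecidable
```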

    In about 2005 or so (I can't remember exactly), I had been very ill and was working with the Half-Life engine after work again – after using the Quake engine in the late 90s. I understood that the vector/state problem was solvable with geometry and the processing power of video cards. Someone from MSFT who had been working on the B-2 bomber software talked to me about manifolds (topologies) as data stores. Another guy from MSFT talked to me about developing a new form of programming to do all of this. The problem was how to store the data in geometries.

    I was struck immediately by the fact that the reason people can do so many things and store information so efficiently is that ‘man is the measure of all things to man’ – in other words, we store information that we can act upon. So the frame of reference is our actions. And this solves the symbol problem. In other words, symbol tables (meaning) could be constructed from combinations of possible actions. This solves the information density problem.
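    The idea that symbols (meaning) can be constructed from combinations of possible actions can be sketched as follows. The affordance sets and the overlap measure here are my own hypothetical illustration, not the author's method:

```python
# Symbols defined not by intrinsic features but by the set of actions
# an agent can take upon them ('man is the measure of all things').
AFFORDANCES = {
    "chair": frozenset({"sit-on", "stand-on", "move"}),
    "stool": frozenset({"sit-on", "stand-on", "move"}),
    "cup":   frozenset({"drink-from", "fill", "move"}),
}

def similarity(a: str, b: str) -> float:
    """Jaccard overlap of action sets: two objects 'mean' nearly the
    same thing when nearly the same actions are possible on them."""
    sa, sb = AFFORDANCES[a], AFFORDANCES[b]
    return len(sa & sb) / len(sa | sb)

chair_stool = similarity("chair", "stool")  # identical action sets
chair_cup = similarity("chair", "cup")      # only 'move' is shared
```

    Because every symbol is a compact set of actions the agent could actually take, the frame of reference stays human-scale, which is what keeps the information density manageable.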

    This was when I started thinking about what we call ‘property in toto’ today. That is, we attribute value to useful property (objects of utility). And that property is an unlimited means of object definition.

    So, aside from creating symbols for actions, I created symbols for property (things one can act upon to transform). And this meant it was possible to use property as a test of morality for all actions of an artificial intelligence. In other words, I understood that the way we create moral AIs is the same way we create moral human beings: an AI can't even THINK about an object (a form of property) it doesn't have permission to act upon.
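    A minimal sketch of this permission-gated design, assuming a symbol table that simply refuses to resolve names outside the agent's property; the class and names are hypothetical, not from the original system:

```python
class MoralSymbolTable:
    """Every symbol lookup passes through a property (permission)
    check, so the agent cannot form a plan, or a 'thought', involving
    objects it has no right to act upon."""

    def __init__(self, permissions: set):
        self._symbols = {}
        self._permissions = permissions  # property the agent may touch

    def define(self, name: str, value) -> None:
        self._symbols[name] = value

    def lookup(self, name: str):
        # an unresolvable symbol can never enter a plan
        if name not in self._permissions:
            raise PermissionError(f"no right to act on {name!r}")
        return self._symbols[name]

table = MoralSymbolTable(permissions={"own_car"})
table.define("own_car", {"actions": ["drive", "sell"]})
table.define("neighbors_car", {"actions": ["drive", "sell"]})

car = table.lookup("own_car")        # permitted: resolves normally
try:
    table.lookup("neighbors_car")    # not our property: blocked
except PermissionError:
    blocked = True
```

    The morality test lives in the lookup, not in a later filter, so impermissible objects never become operands at all.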

    So you see, that is where all of this work you see in Propertarianism has come from. I think in terms of artificial intelligences that are not conscious. I have understood how consciousness is not necessary for most of what humans do.

    And that was how I began to understand that we are mere riders (consciousness) on elephants (intuition), and that our ‘consciousness’ is merely the result of needing to empathize with and negotiate with other riders on behalf of our elephants.

    But the elephant is influenced by a demon called our genes, and the elephant is happy to lie to us to get us to act as its agent even against our will.

    As far as I know this is about as precise an understanding as you need to have.

    I can tell you how fish, reptilian, mammalian, and ‘human’ brains work using very simple processes through the thalamus and short term memory to create what we call ‘experience’ or ‘consciousness’. But it doesn’t matter. For the purposes of understanding human existence, it is very hard to train the rider to control the elephant. Because the rider is pretty dumb really, and easily fooled – and the elephant is an exceptional liar.

    We can do it though. And that is what I am trying to accomplish when I use the words “Truth, Agency, and Transcendence.” Sovereignty is just a means of limiting our actions to the moral.

    Curt Doolittle

    The Propertarian Institute

    Kiev, Ukraine


    Source date (UTC): 2017-07-26 20:37:00 UTC

  • THE AI QUESTION AND THE ANSWER

    THE AI QUESTION AND THE ANSWER

    There are three different stages of Artificial Intelligence we have to discuss:

    1) Specific Artificial Intelligence (imitation intelligence)

    SAI can perform routine tasks and do so better than people, and is bound by algorithmic limits.

    Achieved by sufficient hardware and processing speed, algorithms, and existing software and databases.

    vs

    2) General Artificial Intelligence (functional intelligence)

    GAI can solve problems and make decisions, can be bound by limits and act morally.

    Achieved by sufficient hardware, processing speed, algorithms, and, I suspect, new software and database structures (think video cards and geometry).

    vs

    3) Conscious Artificial Intelligence (creative intelligence)

    CAI can want, hypothesize, identify opportunities, theorize, create, invent, learn, evolve, transcend, and circumvent limits and morality.

    Achieved by what I suspect will be new hardware with embedded software, along with new software and database structures (as above).

    CONSEQUENCES OF FIRST STAGE AI

    There are an awful lot of jobs that are currently done by hand that can be done better and automated.

    Certainly accounting (AR/AP/payroll) will fall to SAI rapidly and first.

    Certainly circuit board design and development can be automated.

    Certainly assembly of products can be automated and has been.

    Certainly packing and shipping are already being automated.

    Certainly delivery of goods can be automated.

    Certainly food service can be automated.

    Certainly forecasting can be automated.

    Certainly hiring can be automated (and firing) (My product will help with this)

    Certainly ad-buying can be automated (easily).

    Certainly on the job training for most functions can be automated (My product is built toward that end).

    Certainly research can be automated.

    Certainly stock purchasing can be automated.

    I don’t see organizing people into communities (businesses) being automated for a while.

    I don’t see strategic planning at the ceo level being automated for a while.

    I don’t see ‘outwitting competition’ as being automated for a while.

    I don’t see ‘selling people’ as being automated.

    I don’t see ‘serving’ people as being automated.

    The growth of the problem will be limited by the cost of financing such equipment versus the cost of moving production to cheaper labor.

    The primary economic problem is this: you have to produce something for a lot of people in order to pay for yourself but you can only serve so many people at a time.

    There will be too few ways of getting money in people’s hands to spend.

    There will be a ridiculous oversupply of people in the end.

    I have been trying to solve this problem for a while now and I think I understand the solution.

    Most of what we will do is provide each other with entertainment, while others perform research.

    In other words, there will exist a 10% or 20% of the population with employable advantage and the rest of the people will be effectively pets.

    Curt Doolittle

    The Propertarian Institute

    Kiev Ukraine


    Source date (UTC): 2017-07-26 19:02:00 UTC

  • ( Fun: Counter Strike )

    ( Fun: I used to play Counter Strike somewhat competitively under “Liberal!Nyulism” and you can still see my maps if you google the name. )


    Source date (UTC): 2017-07-23 14:26:00 UTC

  • ARTIFICIAL INTELLIGENCE: EASTERN MORAL CRITICISM VS ANGLO INTELLECTUAL CRITICISM

    ARTIFICIAL INTELLIGENCE: EASTERN MORAL CRITICISM VS ANGLO INTELLECTUAL CRITICISM

    (act in harmony with nature: eastern and western versions)

    I’m going to “anglicize” his statements because the Buddhism is hard for me to wade through.

    (a) The movement is unnatural because they didn’t account for costs. If the algorithm accounted for a maximum wattage of output in the human form (400 watts for short periods, 100 for maybe an hour, and 60–75 for an 8-hour work day, or roughly 1200–1500 watts per day before exhaustion or death), then, starting from a standing position, an algorithm ‘should’ eventually develop walking by trial and error and by hemispheric mirroring of successes. The general problem with mathematical analysis, logical analysis, philosophical analysis, and all models that derive from them is an insufficient allocation of costs, which nature does not and cannot tolerate without immediate failure.
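    The cost-accounting point can be sketched as a filter on candidate movements, using the wattage limits stated above; this is a toy illustration and the candidate data is invented:

```python
# Candidate movements are filtered by an energy budget before any
# reward is considered, so 'unnatural' high-cost gaits never survive,
# no matter how well they score on progress alone.
SUSTAINED_LIMIT_W = 75.0  # the post's ceiling for an 8-hour work day

def viable(candidates: list) -> list:
    """Keep only movements whose average power draw nature would
    tolerate; scoring happens only inside the feasible set."""
    return [c for c in candidates if c["avg_watts"] <= SUSTAINED_LIMIT_W]

candidates = [
    {"name": "flailing sprint", "avg_watts": 400.0, "progress": 3.0},
    {"name": "walk",            "avg_watts": 70.0,  "progress": 1.0},
]
feasible = viable(candidates)
best = max(feasible, key=lambda c: c["progress"] / c["avg_watts"])
```

    With costs charged first, walking wins by default; without the filter, the flailing sprint would have scored highest.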

    So where Hayao Miyazaki speaks in eastern ethics of shame, an anglo like myself speaks in anglo ethics of stupidity.

    This in itself is an interesting observation of our cultural differences.

    Not that one or the other is better.

    Ours just provides more actionable results.

    Both of us are saying the same thing: “ACT IN HARMONY WITH NATURE” in different ways.


    Source date (UTC): 2017-07-16 14:20:00 UTC

  • CAN AN AI TESTIFY?

    CAN AN AI TESTIFY?

    —“Can AI perform statements?”— Skye Stewart

    Brilliant question. The question is, who is speaking? The AI, or the developers, or the information providers, or the managers of it?

    In propertarian ethics, an AI is always owned like a pet. We may not harm it but that does not mean we grant it peerage. (I am not sure we can).

    But that said, even if we grant an AI rights by proxy of ownership like we do corporations (which is what we will do), then can we punish an AI for false testimony? Can an AI make false testimony? Can an AI speak without due diligence? Or would we have to punish the programmers that produce an AI that could lie or could speak without due diligence?

    As far as I know, you have to give an AI a means of decidability; and humans have many incentives to produce falsehoods while AIs have none of them. Our problem is instead reducing error in GENERAL AIs (remember that all current AI is not general AI). And to do that we need vast stores of information, and human-speed search and retrieval across all those domains.

    My personal view is that AIs can report but not testify. An AI can report, but it is for its producers and owners that it proxies.


    Source date (UTC): 2017-07-02 11:24:00 UTC

  • BLOCKCHAIN DREAMS

    BLOCKCHAIN DREAMS

    (from elsewhere)

    (It’d be interesting to talk about my position on the future of blockchain, and multiple currencies, and the unlikelihood of any ‘libertarian’ vision that circumvents payment for transactions that finance the insurer of last resort for finance and trade. There is value in saving outside of the fiat currency – particularly in fractional shares of commodity money (gold/silver/platinum). But fiat currency will not go away. And the near impossibility of converting digital currency to real-world goods, services, and information is only going to get worse. So I view the current status of blockchain as the state refraining from interference in order to fund off-book research and development of more effective poly-fiat currencies. After all, the central problem the state faces is that it needs to produce multiple fiat currencies that are tradeable for only subcategories of goods and services, and it needs to distribute them directly to individuals, bypassing the financial system, which serves unnecessarily as a distributor of fiat currencies.)

    Here is the deal: it won’t happen. Even the openness of the internet is going to end shortly, so the cowboy days of the internet wild west are coming to a close. We’re just funding the government’s R&D on post-hard-currency money issued direct from the treasury, bypassing the financial sector. And nothing more.


    Source date (UTC): 2017-06-26 10:06:00 UTC

  • APPARENTLY I NEED TO DO A QUICK OVERVIEW OF POSSIBLE PLOTS IN FICTION WRITING – AND THE NARRATIVE AS PROGRAMMING LANGUAGE

    APPARENTLY I NEED TO DO A QUICK OVERVIEW OF POSSIBLE PLOTS IN FICTION WRITING – AND THE NARRATIVE AS PROGRAMMING LANGUAGE.

    So, I guess I just took it for granted that you can’t get out of university without knowing that there is only one type of story (Transcendence) and only so many plots (>6, <30), only so many character types (>6, <12), so many archetypes/heroes (~12), so many virtues (>4, <20), so many emotions (~8), so many senses (5 or 6), so many gender strategies (female, young male, mature male), and only one purpose (acquisition).

    I thought it was fairly common knowledge that the thesaurus is organized by sense perception (grammar), and that human language consists of a very small set of analogies to experience and is very simple – it’s just possible to load and frame with increasing ‘color’.

    I suppose it’s obvious that the use of increasingly loaded language causes more associations between more senses and more memories and invokes greater free association (ideas) that we call thinking, imagination, or waking dream, or dream state depending upon the amount of focus we exercise over it.

    This is pretty well worn territory.

    On the other hand…

    I didn’t assume it was obvious that what we could acquire – Property In Toto – was understood by anyone.

    I didn’t assume it was obvious that this set of variables constitutes (literally) a complete programming language for the human mind.

    I myself didn’t understand then that all language consists of the communication of ‘measurements’, which do not differ substantially between humans other than perhaps in intelligence and experience; and consequently that man’s abilities constitute a consistent set of weights and measures, and that all language consists of the trading of measurements that are testable by our senses.

    I didn’t assume people understood that there is only one method of communication: suggestion using partial information (measurements). And that the method of suggestion can be used to convey honesty or deception. Or that the only way to convert honesty to truth is through subsequent establishment of limits by examples of falsification.

    I didn’t assume people understood that the difference between ‘good’ myth and literature, and ‘evil’ myth and literature was the use of suggestion in combination with idealism or supernaturalism, or omniscience, or omnipotence to claim knowledge of causality they did not possess (idealism), and claims to authority they did not possess (supernaturalism, omniscience, and omnipotence.)

    I have come to understand that there are sources of knowledge and sources of ignorance, and some ‘theories’ or ‘ideas’ actually create ignorance, and very few theories or ideas convey knowledge.

    I have come to understand that the training of a people for higher and higher trust and greater and greater agency is only so good as the elimination of people of low trust and low agency from the population.

    Like a lighter and lighter wheel spinning faster and faster, trust creates ever-increasing fragility without equal increases in intolerance for error (evil).

    Curt Doolittle

    The Propertarian Institute

    Kiev, Ukraine


    Source date (UTC): 2017-06-16 11:33:00 UTC

  • On computer science as the logic of operations

    Well, you just don’t grasp yet that the discipline we call computer science is more generally the logic of operations, and is superior to mathematics in that it is causal, while mathematics is merely descriptive.


    Source date (UTC): 2017-06-15 12:45:00 UTC

  • IMPORTANT FOR FELLOW AI THINKERS

    IMPORTANT FOR FELLOW AI THINKERS

    (great find)

    0) I would need to understand the operational descriptions of the eleven dimensions, or whether through modeling they have discovered that intelligence requires at least 11 dimensions (which is a bit creepy, because this is the same problem as string theory). I will work my way through their publications and see if I can contact anyone there for feedback.

    1) Um. Their technique is ‘the proper’ way of describing ‘pure relations’ as geometry (similar to E8, for example), and this is the only way I have discovered of constructing AIs: through intermediary phenomena in topological spaces (what we call Lie groups in mathematics). Or what this article refers to as geometry and holes.

    2) In the mid-2000s I was working with a few people (from the B-2 bomber software team, and Microsoft developer and tools) on the use of topology (Euclidean spaces) to create software that would spawn processes (agents) that would search topologies (relations in algebraic geometry) of different manifolds (topical stores) to produce artificial intelligence (defeating Google).

    I was not in the health, financial, or mental condition to launch that huge an effort. It would have been too expensive. But the theory must, in fact, work. And it is, as far as I know, the solution to the problem.

    3) This work was helpful in my development of Acquisitionism (and later all of Propertarianism and Testimonialism), because to make comparisons possible across all the various topologies one needed a semantic system to provide consistent categories of measure. That system is “PROPERTY”.

    4) This is why I am not afraid of AIs. We can create ‘consciences’ for AIs just as easily as we do in humans, and it is the MARKET FOR SOLUTIONS serving both desire (acquisition) and conscience (non-aggression) that allows us to produce non-dangerous artificial intelligences.

    Curt Doolittle

    The Propertarian Institute

    Kiev, Ukraine


    Source date (UTC): 2017-06-12 17:57:00 UTC