Theme: AI

  • you can do all these things because you are, frankly, irrelevant. …

    you can do all these things because you are, frankly, irrelevant. Those of us who are relevant need AUDIENCES. And we cannot get AUDIENCES on Gab when the features necessary for serious discussion are not available.


    Source date (UTC): 2017-08-18 16:28:33 UTC

    Original post: https://gab.com/curtd/posts/5148891310886609

  • Because I want to nag the people behind GAB out of being a tenth-rate twitter clone …

    Because I want to nag the people behind GAB out of being a tenth-rate twitter clone, and into a platform that can compete with the mainstream.


    Source date (UTC): 2017-08-18 16:25:38 UTC

    Original post: https://gab.com/curtd/posts/5148873810886422

  • The only value in nn’s is in symbol detection …

    The only value in NNs is in symbol detection; from there on out, the use of manifolds (geometry of relations) is so much cheaper, faster, more powerful, more controllable, and more auditable that it will defeat training in every discipline that has any limitations on closure.

    So, much like our brains, it is more logical to create hardware that produces a limited number of symbols, and then use software to synthesize them. Our brains do this at very low cost by making profound discounts to cost, relying on memory substitution rather than input retention. This is why we are able to act quickly in some circumstances, but why we have such high error rates in anything of any complexity.

    So AI will be beneficial above and below human scale, but not so much *at human scale*, because NNs can defeat us in perception, and GAIs may defeat us at reason. But of those two technologies, NN/ML and Algorithmic Artificial Intelligence, the algorithmic will perform better than NNs unless there is some tremendous innovation in hardware that allows NNs to compete with the 20 watts of power consumption of the human brain, or the 200 watts of an ordinary processor processing symbolic data (akin to language) fed to it by extraordinarily good measurements, provided by some variation of NN and Bayesian hardware.


    Source date (UTC): 2017-08-17 10:59:00 UTC

  • THE CONSEQUENCE OF CRYPTO-CURRENCIES: PYRAMID STOCKS

    THE CONSEQUENCE OF CRYPTO-CURRENCIES: PYRAMID STOCKS

    I love you all, and I know we all seek power by means we understand. But you know, cryptocurrencies are bullshit. All you are doing is issuing growth stocks on speculation, and the government can easily suppress it. Now, to say that there are a lot of suckers who will chase a bullshit growth stock is nothing but saying that there are suckers for every possible pyramid scheme.

    All you are doing is self-financing the next fiat currency system (which is fine) – the world needs a method of circumventing the financial system.

    Now, all fiat currency is stock in the state. And all fiat currency depends upon growth of some sort (or at least, relative stability) – in the sense that it is more stable than other market assets because its demand-price can be regulated by the state.

    But you gotta understand. There is no future for private currencies. Just the opposite. The future will largely consist of even more fiat money exchange, with commodity money, like today, a more volatile insurance against rapid depreciation of the fiat currency.

    If for some reason the blockchain (ledger) system succeeds, and all the technical problems are solved (very questionable), the state will merely regulate it such that it is first an extension of the fiat currency, and then a partial replacement for it, and then a full replacement for it.

    So, in effect, crypto-currency research and development is assisting the state in developing the technology with which to deprive us of hard currency and of our ability to conduct exchanges invisibly, outside of government interference.

    All little boys want secret boxes – but little boys have nothing to put in them. Big boys don’t use secret boxes. They leave the stuff for all to see, because they possess the arms to defend it.

    The palaces in Minos, like Sparta, had no walls.


    Source date (UTC): 2017-08-16 10:25:00 UTC

  • Do you think that people write 750 word arguments in little chunks …

    Do you think that people write 750-word arguments in little chunks, or do they sketch and edit, then post a completed and cogent idea?

    Gab will not succeed as a twitter clone no matter how well intentioned.


    Source date (UTC): 2017-08-11 16:19:39 UTC

    Original post: https://gab.com/curtd/posts/5088357910615015

  • Untitled

    https://hackernoon.com/the-ai-hierarchy-of-needs-18f111fcc007

    Source date (UTC): 2017-08-09 06:46:00 UTC

  • MORE AI THOUGHTS

    MORE AI THOUGHTS

    What made my early 80’s Artificial Intelligence design different at the time was storing information (knowledge) as ‘stories’: before, during, and after, where ‘during’ consisted of sets of before-during-after stories of opportunity, action, and reward (cost); where ‘during’ consisted of the actions that were possible and their costs; and where the ‘root’, or lowest, ‘during’ was a single action.

    Now, this is a very simple way of using three pointers to store full grammars in terms that were ‘understandable’ and ‘auditable’ to humans.

    And while that was a very simple method of creating symbolic networks, and in assembly language very fast, the principle is still the same: if AIs are limited to the expression of grammars in human-language terms, they avoid the pitfalls of mathematical transformations and set transformations that are not symbolically auditable. In other words, you generate an AI that acts operationally instead of by formula. You would think it would be slower, but it isn’t. If it needs to be faster and use ‘math’, then that is a problem of symbol creation in hardware, not symbol comparison in software.
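    A minimal sketch, in Python, of the three-pointer ‘story’ structure described above. The class name, the cost accounting, and the door-opening example are my illustrative assumptions, not a reconstruction of the original assembly-language design:

```python
from dataclasses import dataclass


@dataclass
class Story:
    """A unit of knowledge: a 'before' state, a 'during', and an 'after' state.

    'during' is either a single primitive action (the root case) or a list
    of nested before-during-after stories, each carrying a cost (reward).
    """
    before: str
    during: "list[Story] | str"  # nested stories, or a primitive action at the root
    after: str
    cost: float = 0.0

    def is_root(self) -> bool:
        # The lowest 'during' is a single action, not a nested story.
        return isinstance(self.during, str)

    def total_cost(self) -> float:
        # Costs accumulate up the nesting, keeping the grammar auditable.
        if self.is_root():
            return self.cost
        return self.cost + sum(s.total_cost() for s in self.during)


# Hypothetical example: 'open door' decomposed into auditable sub-stories.
grasp = Story(before="at door", during="grasp handle", after="handle held", cost=1.0)
turn = Story(before="handle held", during="turn handle", after="latch open", cost=1.0)
push = Story(before="latch open", during="push door", after="door open", cost=2.0)
open_door = Story(before="at door", during=[grasp, turn, push], after="door open")
print(open_door.total_cost())  # 4.0
```

    Because each ‘during’ bottoms out in a single named action, any plan the structure produces can be read back as a sequence of human-language steps.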

    Why does this matter? Because propertarianism provides the answer to moral ai design.


    Source date (UTC): 2017-07-31 20:19:00 UTC

  • ARTIFICIAL INTELLIGENCE AND THE ORIGIN OF PROPERTY-IN-TOTO

    ARTIFICIAL INTELLIGENCE AND THE ORIGIN OF PROPERTY-IN-TOTO

    When I worked on AI in the early 1980’s I used a fairly simple storage mechanism: vectors consisting of addresses of before, during, and after states. A state consisted of a table (array) of symbols (values), symbols’ actions, and … well, I won’t get into that detail here, but they influenced subsystems (again, arrays), and subsystems influenced emotions, and emotions had a half-life. In assembly language this little thing was very fast.

    This effect was a very primitive version of what we think of as 3D collision detection today (which is the right way of doing it; I just didn’t think of it back then. I have always thought a bit too ‘textually’). The point being that most of our brains work by constructing symbols (objects, whatever).

    But there is no reason to merge the hardware and software problems into a single system if hardware can produce symbols (“meaning”). I mean, basically, it was just a search engine that retained emotional context over so many recursions, and took actions if it got excited. (I had, I think, sixteen emotions at the time?)

    I had a blast with it because I was working on tank AIs (for games) and trying to give them emotions. To simulate input (I was just building tests, not a full simulation) I simply fed in a bitmap. The problem I kept having is that if you make it exciting to kill things … I sort of got this psychopathic behavior out of the program through iterative positive reinforcement. Which is obvious, right? lol. Anyway. What I learned was that (a) you needed a lot of expensive hardware, which we are finally able to produce today; (b) you needed a lot of informational density to make behavior non-deterministic; (c) you needed even more information density to make decisions possible. In other words, decisions are based upon information density (marginal differences between forecasts and perceptions). It’s differences that matter for decidability. And (d) there was no way any meaningful AI was going to happen until Moore’s Law did its work for a few decades.
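    A toy sketch of the ‘emotions with a half-life’ idea described above: an accumulator that decays exponentially each recursion and triggers an action once it crosses a threshold. The half-life length, threshold, and stimulus values are my assumptions; the original was a sixteen-emotion assembly-language system:

```python
HALF_LIFE_STEPS = 10          # assumed: an emotion's level halves every 10 recursions
DECAY = 0.5 ** (1 / HALF_LIFE_STEPS)
ACT_THRESHOLD = 5.0           # assumed trigger level


class Emotion:
    """An emotional accumulator that decays with a fixed half-life."""

    def __init__(self, name: str):
        self.name = name
        self.level = 0.0

    def stimulate(self, amount: float) -> None:
        self.level += amount

    def step(self) -> None:
        # Exponential decay: after HALF_LIFE_STEPS steps, the level is halved.
        self.level *= DECAY


excitement = Emotion("excitement")
for _ in range(20):               # quiet recursions: emotional context fades
    excitement.step()
excitement.stimulate(4.0)         # a perceived target
excitement.stimulate(3.0)         # a second stimulus before decay wins
if excitement.level > ACT_THRESHOLD:
    print("act")                  # the agent 'takes action if it got excited'
```

    The psychopathic-tank failure mode falls out directly: if ‘kill’ events keep stimulating the accumulator faster than the half-life drains it, the agent is permanently above threshold.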

    In about 2005 or so (I can’t remember exactly) I had been very ill and working with the Half-Life engine after work again – after using the Quake engine in the late 90’s. I understood that the vector/state problem was solvable with geometry and the processing power of video cards. Someone from MSFT who had been working on the B2 Bomber software talked to me about Manifolds (Topologies) as data stores. Another guy from MSFT talked to me about developing a new form of programming to do all of this. The problem was how to store the data in geometries.

    I was struck immediately by the fact that the reason people can do so many things and store information so efficiently is that ‘man is the measure of all things to man’ – in other words, we store information that we can act upon. So the frame of reference is our actions. And this solves the symbol problem. In other words, symbol tables (meaning) could be constructed from combinations of possible actions. This solves the information density problem.

    This was when I started thinking about what we call ‘property in toto’ today. That is, we attribute value to useful property (objects of utility). And that property is an unlimited means of object definition.

    So, aside from symbols for actions, there were symbols for property (things I can act upon to transform). And this meant that it was possible to use property as a test of morality for all actions of an artificial intelligence. In other words, I understood that the way we create moral AIs is the same way we create moral human beings: an AI can’t even THINK about an object (form of property) it doesn’t have permission to.
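    A minimal sketch of the two ideas above: a symbol’s ‘meaning’ as the set of actions possible on it, and property as a hard permission filter applied before any planning. The objects, actions, and permission scheme are my illustrative assumptions, not the original design:

```python
# A symbol's 'meaning' is the set of actions the agent could take upon it.
AFFORDANCES = {
    "hammer": {"grasp", "strike", "trade"},
    "fence": {"repair", "paint"},
    "car": {"drive", "refuel", "trade"},
}

# Property as permissions: the objects this agent may act upon at all.
PERMISSIONS = {"hammer", "fence"}


def thinkable(objects: dict, owned: set) -> dict:
    """Filter the symbol table before planning: objects the agent has no
    permission to never even become candidates for thought (action search)."""
    return {name: acts for name, acts in objects.items() if name in owned}


print(sorted(thinkable(AFFORDANCES, PERMISSIONS)))  # ['fence', 'hammer']
```

    Because the filter runs before the action search, the morality test is structural rather than a penalty applied afterward: the ‘car’ and its affordances simply do not exist for this agent.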

    So you see. That is where all of this work you see in Propertarianism has come from. I think in terms of artificial intelligences that are not conscious. I have understood how consciousness is not necessary for most of what humans do.

    And that was how I began to understand that we are mere riders (consciousness) on elephants (intuition), and that our ‘consciousness’ is merely the result of needing to empathize with and negotiate with other riders on behalf of our elephants.

    But the elephant is influenced by a demon called our genes, and the elephant is happy to lie to us to get us to act as its agent even against our will.

    As far as I know this is about as precise an understanding as you need to have.

    I can tell you how fish, reptilian, mammalian, and ‘human’ brains work using very simple processes through the thalamus and short term memory to create what we call ‘experience’ or ‘consciousness’. But it doesn’t matter. For the purposes of understanding human existence, it is very hard to train the rider to control the elephant. Because the rider is pretty dumb really, and easily fooled – and the elephant is an exceptional liar.

    We can do it though. And that is what I am trying to accomplish when I use the words “Truth, Agency, and Transcendence.” Sovereignty is just a means of limiting our actions to the moral.

    Curt Doolittle

    The Propertarian Institute

    Kiev, Ukraine


    Source date (UTC): 2017-07-26 20:37:00 UTC

  • THE AI QUESTION AND THE ANSWER

    THE AI QUESTION AND THE ANSWER

    There are three different stages of Artificial Intelligence we have to discuss:

    1) Specific Artificial Intelligence (imitation intelligence)

    SAI can perform routine tasks and do so better than people, and is bound by algorithmic limits.

    Achieved by sufficient hardware and processing speed, algorithms, and existing software and databases.

    vs

    2) General Artificial Intelligence (functional intelligence)

    GAI can solve problems and make decisions, can be bound by limits and act morally.

    Achieved by sufficient hardware, processing speed, and algorithms, and, I suspect, new software and database structures (think video cards and geometry).

    vs

    3) Conscious Artificial Intelligence (creative intelligence)

    CAI can want, hypothesize, identify opportunities, theorize, create, invent, and learn, evolve, transcend, and circumvent limits and morality.

    Achieved, I suspect, by new hardware with embedded software, along with new software and database structures (as above).

    CONSEQUENCES OF FIRST STAGE AI

    There are an awful lot of jobs that are currently done by hand that can be done better and automated.

    Certainly accounting, AR/AP/PR will fall to SAI rapidly and first.

    Certainly circuit board design and development can be automated.

    Certainly assembly of products can be automated and has been.

    Certainly packing and shipping are already being automated.

    Certainly delivery of goods can be automated.

    Certainly food service can be automated.

    Certainly forecasting can be automated.

    Certainly hiring can be automated (and firing) (My product will help with this)

    Certainly ad-buying can be automated (easily).

    Certainly on the job training for most functions can be automated (My product is built toward that end).

    Certainly research can be automated.

    Certainly stock purchasing can be automated.

    I don’t see organizing people into communities (businesses) being automated for a while.

    I don’t see strategic planning at the ceo level being automated for a while.

    I don’t see ‘outwitting competition’ as being automated for a while.

    I don’t see ‘selling people’ as being automated.

    I don’t see ‘serving’ people as being automated.

    The growth of the problem will be limited by the cost of financing such equipment versus the cost of moving production to labor cheaper than that financing.

    The primary economic problem is this: you have to produce something for a lot of people in order to pay for yourself, but you can only serve so many people at a time.

    There will be too few ways of getting money in people’s hands to spend.

    There will be a ridiculous oversupply of people in the end.

    I have been trying to solve this problem for a while now and I think I understand the solution.

    Most of what we will do is provide each other with entertainment, while others perform research.

    In other words, there will exist a 10% or 20% of the population with employable advantage and the rest of the people will be effectively pets.

    Curt Doolittle

    The Propertarian Institute

    Kiev Ukraine


    Source date (UTC): 2017-07-26 19:02:00 UTC

  • ARTIFICIAL INTELLIGENCE: EASTERN MORAL CRITICISM VS ANGLO INTELLECTUAL CRITICISM

    ARTIFICIAL INTELLIGENCE: EASTERN MORAL CRITICISM VS ANGLO INTELLECTUAL CRITICISM

    (act in harmony with nature: eastern and western versions)

    I’m going to “anglicize” Hayao Miyazaki’s statements because the Buddhism is hard for me to wade through.

    (a) The movement is unnatural because they didn’t account for costs. If the algorithm accounted for a maximum wattage of output in the human form (400 W for short periods, 100 W for maybe an hour, and 60-75 W for an 8-hour work day, or roughly 1,200-1,500 watt-hours per day before exhaustion, or death), then, starting from a standing position, an algorithm ‘should’ eventually develop walking by trial and error and hemispheric mirroring of successes. The general problem with mathematical analysis, logical analysis, philosophical analysis, and all models that derive from them, is an insufficient allocation of costs, which nature does not and cannot tolerate without immediate failure.
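    A toy sketch of cost-constrained trial and error in the spirit of the argument above: candidate gaits are sampled at random, any candidate exceeding the sustainable power budget is rejected outright (nature does not tolerate unpaid costs), and the fastest survivor wins. The cost model and parameter ranges are invented for illustration; only the wattage figures come from the post:

```python
import random

random.seed(0)

POWER_BUDGET_W = 75.0  # sustainable output over a work day (from the post)


def gait_cost_watts(stride: float, cadence: float) -> float:
    # Hypothetical cost model: power grows with stride length and cadence.
    return 20.0 + 40.0 * stride ** 2 + 0.02 * cadence ** 2


def speed_kmh(stride: float, cadence: float) -> float:
    # Metres per step x steps per minute x 60 min, converted to km/h.
    return stride * cadence * 60 / 1000.0


best = None
for _ in range(10_000):  # trial and error over candidate gaits
    stride = random.uniform(0.2, 1.5)   # metres per step
    cadence = random.uniform(30, 180)   # steps per minute
    watts = gait_cost_watts(stride, cadence)
    if watts > POWER_BUDGET_W:
        continue  # cost not accounted for: immediate failure, discard
    speed = speed_kmh(stride, cadence)
    if best is None or speed > best[0]:
        best = (speed, stride, cadence, watts)

speed, stride, cadence, watts = best
print(f"best gait: {stride:.2f} m @ {cadence:.0f} steps/min, "
      f"{speed:.1f} km/h at {watts:.0f} W")
```

    Without the budget check, the search converges on an absurd sprinting gait; with it, the surviving gaits look like sustainable walking, which is the point of the criticism.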

    So where Hayao Miyazaki speaks in eastern ethics of shame, an anglo like myself speaks in anglo ethics of stupidity.

    This in itself is an interesting observation of our cultural differences.

    Not that one or the other is better.

    Ours just provides more actionable results.

    Both of us are saying the same thing: “ACT IN HARMONY WITH NATURE” in different ways.


    Source date (UTC): 2017-07-16 14:20:00 UTC