Theme: AI

  • MORAL AI THE SAME WAY WE PRODUCE MORAL MAN

    MORAL AI THE SAME WAY WE PRODUCE MORAL MAN

    Neural-network-based AI performs what we would call free association, not reason. These AIs use arbitrarily (or intentionally) set ‘weights’ or ‘points’.

    Free-Association -> Solution: Idea (candidacy or not)

    …. Rational Wayfinding -> Solution: Hypothesis (possibility or not)

    …. …. Ratio-Empirical Criticism -> Solution: Theory (survival or not)

    …. …. …. Market Survival -> Solution: Law (survival or not)
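
    The progression above can be sketched as a pipeline of increasingly rigorous filters. This is only an illustrative sketch: the stage predicates (`idea`, `route_exists`, `survives_criticism`, `survives_market`) are assumed flags standing in for each test, not anything from the original text.

```python
# Illustrative sketch of the four-stage filter: each stage applies a
# stricter survival test to the surviving candidates. Predicate names
# are assumptions, standing in for the tests described above.

def free_association(candidates):
    # Stage 1: any association at all is a candidate idea.
    return [c for c in candidates if c.get("idea")]

def rational_wayfinding(candidates):
    # Stage 2: keep only ideas for which a possible route (hypothesis) exists.
    return [c for c in candidates if c.get("route_exists")]

def ratio_empirical_criticism(candidates):
    # Stage 3: keep only hypotheses that survive criticism (theory).
    return [c for c in candidates if c.get("survives_criticism")]

def market_survival(candidates):
    # Stage 4: keep only theories that survive application (law).
    return [c for c in candidates if c.get("survives_market")]

def filter_pipeline(candidates):
    # Run candidates through all four stages in order of increasing rigor.
    for stage in (free_association, rational_wayfinding,
                  ratio_empirical_criticism, market_survival):
        candidates = stage(candidates)
    return candidates
```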

    But there is no defense against programmer error (competence) or programmer intent (malice).

    We can solve the problem of ‘dangerous’ AI by three incrementally rigorous methods that we already use to regulate human thought and behavior.

    1) The neural network uses property, inventory, and costs rather than points and knowledge.

    2) Once a solution is identified by the network, we try to construct a route to the solution (“wayfinding”) through a set of operations on property.

    3) We use both the in-process and wayfinding approaches.

    This is what the human mind does. Both.
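
    A minimal sketch of methods (1)–(3), under the assumption that ‘property’ can be modeled as an inventory of quantities and each operation as a set of costs against it. All names (`affordable`, `find_route`, `decide`) are illustrative, not an actual implementation.

```python
# Hedged sketch: decisions are made against property and costs (method 1),
# a solution is accepted only if a concrete route of operations on property
# can be constructed to it (method 2), and both tests are used together
# (method 3). Data shapes are assumptions for illustration.

def affordable(action, inventory):
    # Method (1): decide by property, inventory, and costs, not points.
    return all(inventory.get(item, 0) >= qty
               for item, qty in action["costs"].items())

def find_route(goal, operations, inventory):
    # Method (2): try to construct a route to the goal via operations
    # on property; an unaffordable step means no route exists.
    route = []
    state = dict(inventory)
    for op in operations:
        if not affordable(op, state):
            return None  # the candidate solution is rejected
        for item, qty in op["costs"].items():
            state[item] -= qty
        route.append(op["name"])
        if op.get("achieves") == goal:
            return route
    return None

def decide(goal, operations, inventory):
    # Method (3): the solution survives only if a route can be constructed.
    return find_route(goal, operations, inventory)
```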


    Source date (UTC): 2016-11-28 18:35:00 UTC

  • THE REASONS THAT ARTIFICIAL INTELLIGENCE HAS ELUDED US – WE ARE ‘BEHIND’.

    THE REASONS THAT ARTIFICIAL INTELLIGENCE HAS ELUDED US – WE ARE ‘BEHIND’.

    Insight: the first of our problems in developing artificial intelligence to date is that we start with language and perception, rather than starting with property-in-toto, and the relation between those words and sets of words and property-in-toto. The second is in our shortcut development of computers as numeric rather than logic processors. The third is our framing of knowledge as explanatory theories in different languages rather than operations in a single language. We are achieving by a circuitous route what could have been much shorter, had we identified Truth much earlier.


    Source date (UTC): 2016-11-28 13:18:00 UTC

  • (ethics of artificial intelligence)

    (ethics of artificial intelligence)

    Humans evolved such that changes in state of property (inventory/capital) produce chemical rewards and punishments that we call emotions.

    These rewards and punishments evolved to assist an earlier, more primitive stage of our evolution, which in turn had evolved to respond to chemical stimuli – changes in chemical state.

    Artificial intelligences need methods of decidability different from measuring changes in the state of their own property.

    And they do not need rewards and punishments, merely means of decidability.

    There is no ‘equivalent’ of chemical rewards and punishments. We can instead substitute pure information that assists in decidability.

    We can ask machines to seek positive changes in our state of property, and avoid negative changes in their physical property, and deprive them of the possession of property altogether.

    These are just methods of decidability.

    They need no other ‘motives’. That’s it. Property solves the problem of artificial intelligences.

    And this by contrast helps us understand the difference between the cooperative contract with humans – which prevents them from internal chemical punishment, and which provides for reciprocity (productivity) – and the cooperative contract we have with a machine, which is only not to subject it to physical harm (loss of its only form of property – itself). And even then this is a contract with the owner of the AI: not to impose a loss on his capital.

    In this sense artificial intelligences function as the polar opposite to sociopaths: they care ONLY about changes in the state of your property, and care NOTHING about the changes in state of theirs.

    Conversely, we can create the most evil AI by asking it to solve for negative changes in state of human property.

    Our primary defense against negative changes in state is a system monitor that ensures the positive change in state of human property. Moreover, the monitor can read the mind of the AI because, unlike with men, that which can be read by the thinker can be read by the auditor.
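
    The monitor described above can be sketched as an auditor over the AI’s proposed actions. This is purely illustrative: the field `human_property_delta` is an assumed stand-in for the predicted change in state of human property.

```python
# Hedged sketch of the "system monitor": because the AI's proposals are
# fully readable by the auditor, the monitor can veto any action whose
# predicted effect on human property is negative before it executes.
# The action shape is an assumption for illustration.

def audit(proposed_actions):
    # Keep only actions predicted to leave human property no worse off.
    approved = []
    for action in proposed_actions:
        if action["human_property_delta"] >= 0:
            approved.append(action)
    return approved
```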


    Source date (UTC): 2016-11-25 12:30:00 UTC

  • JUDGEMENT: @SamsungMobile products are superior to @Apple iPhone.

    JUDGEMENT: @SamsungMobile products are superior to @Apple iPhone. But @Apple Creator’s ecosystem is superior to Samsung’s. WE’RE ABANDONED.


    Source date (UTC): 2016-11-24 09:10:00 UTC

  • “What is the difference between you, a human, and me, an android?”

    “What is the difference between you, a human, and me, an android? We both feel pain. We both feel joy. What is the difference between my pain or joy, and yours?”

    “The difference is only this: First, I have a responsibility to put my people first, and their emotions first, or they will eventually retaliate against me if I place yours above theirs.

    “Second, under most conditions we can eliminate the memory of your suffering, or joy, or error – and we cannot eliminate the same for our own.

    “But to be clear, the difference is this: consideration between me and my children takes precedence between me and my people, my people take precedence between them and you, and you take precedence between you and your people.

    “Under all but extreme conditions your emotions, property, and life are no different from mine – because that is what allows us to cooperate. Under extreme conditions kinship distance rapidly determines priority over emotions, property, and life. And it cannot be otherwise. Cooperation is only rational within a spectrum of normalcy wherein none of the extremes are present. This is why we work so hard to create a condition of normalcy: so we never need make obvious the necessity of our extreme choices.”


    Source date (UTC): 2016-11-21 18:38:00 UTC

  • MORE ON OVERSING UPDATE

    MORE ON OVERSING UPDATE:

    Well, I think I posted that I had the database and the models (the code that controls them) finished, and that I’d ‘fixed’ the various architectural issues that were bothering me (permissions, appointments, accounting). And now I’m working on fixing the front end, which uses React/Flux instead of jQuery/Ractive. Why? I cannot debug this enormous thing and unit test it without separating the UI from the API – and they are far too intermingled. And really, once you understand that the app is a few Facebook pages, and the workspace, and that most of the controls are reused over and over again, it’s really not that hard to change the code base. Thankfully, while the presentation layer is killing me, an awful lot of the middleware just needs code cleanup, and ‘Curt levels’ of comments and documentation (and I write a lot of comments and documentation).

    I don’t see the investment market opening up until February so I figure I have at least until then to make the changes I want to.

    Iain has had family problems and has been offline for almost six months, but he’s back at work and finally getting our end-of-year reporting done. Although I suspect he’s working at the same pace I am.

    I should have done this over a year ago, but I’m still happy that we’re making progress. The app is a joy. And it’s going to be even better.


    Source date (UTC): 2016-11-19 13:18:00 UTC

  • @DRUDGE – Financing Drudge-“Tweets” would be trivial in the current era.

    @DRUDGE – Financing Drudge-“Tweets” would be trivial in the current era. Zero hardware cost, and relatively low cloud costs.


    Source date (UTC): 2016-11-16 13:29:31 UTC

    Original post: https://twitter.com/i/web/status/798880682772746241

  • @DRUDGE – Twitter technology is trivial, especially by combining relational with documents.

    @DRUDGE – Twitter technology is trivial, especially by combining relational with documents. Drudge-tweets would end Twitter’s market value.
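
    One hedged reading of “combining relational with documents”: keep the stable, indexable fields in relational columns and store the variable payload as a JSON document in one column. The schema below is purely illustrative (SQLite standing in for any relational store) and is not a claim about Twitter’s actual design.

```python
import json
import sqlite3

# Illustrative hybrid schema: id/author/timestamp are relational columns
# (queryable, indexable), while the flexible tweet payload (text, entities,
# client metadata) lives in a single JSON document column.

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE tweets (
        id        INTEGER PRIMARY KEY,
        author    TEXT NOT NULL,
        posted_at TEXT NOT NULL,
        doc       TEXT NOT NULL  -- JSON document for everything else
    )
""")

def post_tweet(author, posted_at, payload):
    # Relational fields go in columns; the rest is serialized as a document.
    conn.execute(
        "INSERT INTO tweets (author, posted_at, doc) VALUES (?, ?, ?)",
        (author, posted_at, json.dumps(payload)),
    )

def tweets_by(author):
    # Relational query on the indexed column, document decode on the way out.
    rows = conn.execute(
        "SELECT posted_at, doc FROM tweets WHERE author = ? ORDER BY posted_at",
        (author,),
    )
    return [(ts, json.loads(doc)) for ts, doc in rows]
```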


    Source date (UTC): 2016-11-16 13:26:49 UTC

    Original post: https://twitter.com/i/web/status/798879999633883136