Category: AI, Computation, and Technology
-
(ethics of artificial intelligence)
Humans evolved such that changes in state of property (inventory/capital) produce chemical rewards and punishments that we call emotions.
These rewards and punishments evolved to assist a more primitive system that had, in turn, evolved to respond to chemical stimuli – changes in chemical state.
Artificial intelligences need methods of decidability different from measuring changes in the state of their own property.
And they do not need rewards and punishments, merely means of decidability.
There is no ‘equivalent’ of chemical rewards and punishments. We can instead substitute pure information that assists in decidability.
We can ask machines to seek positive changes in the state of our property and to avoid negative changes in our physical property, and we can deprive them of the possession of property altogether.
These are just methods of decidability.
They need no other ‘motives’. That’s it. Property solves the problem of artificial intelligences.
And this, by contrast, helps us understand the difference between the cooperative contract with humans – which protects them from internal chemical punishment, as well as the cooperative contract for reciprocity (productivity) – and the cooperative contract we have with a machine, which is only not to subject it to physical harm (loss of its only form of property: itself). And even then, this is a contract with the owner of the AI: not to impose a loss on his capital.
In this sense artificial intelligences function as the polar opposite of sociopaths: they care ONLY about changes in the state of your property, and NOTHING about changes in the state of theirs.
Conversely, we can create the most evil AI by asking it to solve for negative changes in the state of human property.
Our primary defense against such changes in state is a system monitor that ensures positive change in the state of human property. Moreover, such a monitor can read the mind of the AI, because – unlike with men – that which can be read by the thinker can be read by the auditor.
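The scheme above – decidability by property deltas plus an external auditor – can be sketched in code. This is a minimal illustration with hypothetical names (Action, PropertyAgent, and audit are mine, not from the text): the agent ranks candidate actions purely by their projected change to human property, ignores changes to its own, and keeps a decision trace the monitor can read in full.

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    human_property_delta: float    # projected change to the owner's property
    machine_property_delta: float  # ignored by the agent, per the scheme above

@dataclass
class PropertyAgent:
    trace: list = field(default_factory=list)  # fully readable by the auditor

    def decide(self, actions):
        # Decidability by human-property delta alone: no rewards, no punishments,
        # only information that assists in choosing between candidates.
        best = max(actions, key=lambda a: a.human_property_delta)
        self.trace.append((best.name, best.human_property_delta))
        return best

def audit(agent, minimum=0.0):
    # The system monitor verifies every recorded choice improved (or at least
    # did not damage) the state of human property.
    return all(delta >= minimum for _, delta in agent.trace)

agent = PropertyAgent()
choice = agent.decide([
    Action("repair", +2.0, -1.0),
    Action("idle", 0.0, 0.0),
    Action("scrap", -5.0, +3.0),
])
```

Here the "mind-reading" of the final paragraph reduces to the auditor having unrestricted access to the agent's trace – no chemical reward or punishment is anywhere in the loop.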
Source date (UTC): 2016-11-25 12:30:00 UTC
-
JUDGEMENT: @SamsungMobile products are superior to @Apple iPhone. But @Apple Creator’s ecosystem is superior to Samsung’s. WE’RE ABANDONED.
Source date (UTC): 2016-11-24 09:10:00 UTC
-
“What is the difference between you, a human, and me, an android? We both feel pain. We both feel joy. What is the difference between my pain or joy, and yours?”
“The difference is only this: First, I have a responsibility to put my people first, and their emotions first, or they will eventually retaliate against me if I place yours above theirs.
“Second, under most conditions we can eliminate the memory of your suffering, or joy, or error – and we cannot eliminate the same for our own.
“But to be clear, the difference is this: consideration between me and my children takes precedence over consideration between me and my people; my people take precedence between them and you; and you take precedence between you and your people.
“Under all but extreme conditions your emotions, property, and life are no different from mine – because that is what allows us to cooperate. Under extreme conditions kinship distance rapidly determines priority over emotions, property, and life. And it cannot be otherwise. Cooperation is only rational within a spectrum of normalcy wherein none of the extremes are present. This is why we work so hard to create a condition of normalcy: so we never need make obvious the necessity of our extreme choices.”
Source date (UTC): 2016-11-21 18:38:00 UTC
-
MORE ON OVERSING UPDATE:
Well, I think I posted that I had the database and the models (the code that controls them) finished, and that I’d ‘fixed’ the various architectural issues that were bothering me (permissions, appointments, accounting). And now I’m working on fixing the front end, which uses React/Flux instead of jQuery/Ractive. Why? I cannot debug this enormous thing and unit test it without separating the UI from the API – and they are far too intermingled. And really, once you understand that the app is a few Facebook-style pages plus the workspace, and that most of the controls are reused over and over again, it’s really not that hard to change the code base. Thankfully, while the presentation layer is killing me, an awful lot of the middleware just needs code cleanup and ‘curt levels’ of comments and documentation (and I write a lot of comments and documentation).
I don’t see the investment market opening up until February so I figure I have at least until then to make the changes I want to.
Iain has had family problems and has been offline for almost six months, but he’s back at work and is finally getting our end-of-year reporting done. Although I suspect he’s working at the same pace I am.
I should have done this over a year ago, but I’m still happy that we’re making progress. The app is a joy. And it’s going to be even better.
Source date (UTC): 2016-11-19 13:18:00 UTC
-
Now that’s interesting. (for those of us who translated it from Sybase in the first place it’s kind of funny.) Love it.
Source date (UTC): 2016-11-16 19:01:41 UTC
Original post: https://twitter.com/i/web/status/798964271694675972
Reply addressees: @AIREXmarket
Replying to: https://twitter.com/i/web/status/798957741305974784
-
@DRUDGE – Financing Drudge-“Tweets” would be trivial in the current era. Zero hardware cost, and relatively low cloud costs.
Source date (UTC): 2016-11-16 13:29:31 UTC
Original post: https://twitter.com/i/web/status/798880682772746241
-
@DRUDGE – Twitter technology is trivial, especially by combining relational with documents. Drudge-tweets would end Twitter’s market value.
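As a rough illustration of “combining relational with documents” (my own sketch, assuming nothing about any actual implementation): queryable fields live as relational columns, while the full post travels as a JSON document in the same row.

```python
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE posts (
        id        INTEGER PRIMARY KEY,
        author    TEXT NOT NULL,   -- relational: indexable, joinable fields
        posted_at TEXT NOT NULL,
        doc       TEXT NOT NULL    -- document: the full post as JSON
    )""")

post = {"author": "drudge",
        "posted_at": "2016-11-16T13:26:49Z",
        "text": "Drudge-tweets would end Twitter's market value.",
        "mentions": ["@DRUDGE"]}
db.execute("INSERT INTO posts (author, posted_at, doc) VALUES (?, ?, ?)",
           (post["author"], post["posted_at"], json.dumps(post)))

# Relational query over the indexed columns; the document payload comes along.
row = db.execute("SELECT doc FROM posts WHERE author = ?",
                 ("drudge",)).fetchone()
fetched = json.loads(row[0])
```

The relational columns support indexes and joins over authors and timestamps, while the document column preserves the full, schema-free shape of each post.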
Source date (UTC): 2016-11-16 13:26:49 UTC
Original post: https://twitter.com/i/web/status/798879999633883136
-
@DRUDGE The Twitter technology is trivial. The success of a competitor is dependent only on the seed user base, and @DRUDGE has left/right.
Source date (UTC): 2016-11-16 13:17:10 UTC
Original post: https://twitter.com/i/web/status/798877571593146368
-
If we convince @DRUDGE to host a link to a twitter clone on the home page we can create a competitor to twitter in 90 days. #NewRight #tcot
Source date (UTC): 2016-11-16 13:15:41 UTC
Original post: https://twitter.com/i/web/status/798877200430800896