Theme: AI

  • Untitled

    https://marginalrevolution.com/marginalrevolution/2018/06/blockchains-opportunity-commons.html

    Source date (UTC): 2018-06-07 15:43:00 UTC


  • ETHICAL AI? YES, IT'S SOLVABLE, AND TRIVIALLY SO 1) Ethical AI is a trivially solva

    ETHICAL AI? YES, IT'S SOLVABLE, AND TRIVIALLY SO

    1) Ethical AI is a trivially solvable problem in (a) hardware, (b) software design, (c) a requirement of insurance, (d) extremely harsh punishment of violations of that law, applied to every person in the chain of decidability, and (e) international treaty.

    2) We have solved this problem for thousands of years among humans with one single rule. All civilizations and all law are based upon that one rule. That politicians, philosophers, and theologians ‘skirt’ that rule does not mean we cannot apply it to software.

    3) There is nothing ethical or moral about war. That war exists defines the limit of ethics and morality. There will be killing machines just as there are machine guns and nuclear weapons, and the first people to invent them will dominate war, politics, and economics for a century.

    4) The military incentive always DEFINES the political order. Not the other way around. You cannot stop this technology. This tech means the greatest manufacturing and engineering capacity will dominate all future wars – and therefore politics, and therefore economics.

    5) However, it is entirely possible to protect citizens from criminal uses the same way we do from nuclear weapons. The cost of AI today will be in the billions and dependent on vast infrastructure, but this price will decrease while the cost of refining nuclear weapons won’t.


    Source date (UTC): 2018-06-07 09:43:00 UTC

  • 5) However, it is entirely possible to protect citizens from criminal uses the s

    5) However, it is entirely possible to protect citizens from criminal uses the same way we do from nuclear weapons. The cost of AI today will be in the billions and dependent on vast infrastructure, but this price will decrease while the cost of refining nuclear weapons won’t.


    Source date (UTC): 2018-06-06 17:18:14 UTC

    Original post: https://twitter.com/i/web/status/1004412160536207360

    Reply addressees: @mer__edith @Cambridge_Uni

    Replying to: https://twitter.com/i/web/status/1004339248089231360


    IN REPLY TO:

    Original post on X

    Original tweet unavailable — we could not load the text of the post this reply is addressing on X. That usually means the tweet was deleted, the account is protected, or X does not expose it to the account used for archiving. The Original post link below may still open if you view it in X while signed in.

    Original post: https://twitter.com/i/web/status/1004339248089231360

  • 4) The military incentive always DEFINES the political order. Not the other way

    4) The military incentive always DEFINES the political order. Not the other way around. You cannot stop this technology. This tech means the greatest manufacturing and engineering capacity will dominate all future wars – and therefore politics, and therefore economics.


    Source date (UTC): 2018-06-06 17:16:24 UTC

    Original post: https://twitter.com/i/web/status/1004411697669517325

    Reply addressees: @mer__edith @Cambridge_Uni

    Replying to: https://twitter.com/i/web/status/1004339248089231360


    IN REPLY TO:

    Original post on X


    Original post: https://twitter.com/i/web/status/1004339248089231360

  • 2) We have solved this problem for thousands of years among humans with one sing

    2) We have solved this problem for thousands of years among humans with one single rule. All civilizations and all law are based upon that one rule. That politicians, philosophers, and theologians ‘skirt’ that rule does not mean we cannot apply it to software.


    Source date (UTC): 2018-06-06 17:13:24 UTC

    Original post: https://twitter.com/i/web/status/1004410944024391680

    Reply addressees: @mer__edith @Cambridge_Uni

    Replying to: https://twitter.com/i/web/status/1004339248089231360


    IN REPLY TO:

    Original post on X


    Original post: https://twitter.com/i/web/status/1004339248089231360

  • 1) Ethical AI is a trivially solvable problem in (a) hardware (b) software desig

    1) Ethical AI is a trivially solvable problem in (a) hardware, (b) software design, (c) a requirement of insurance, (d) extremely harsh punishment of violations of that law, applied to every person in the chain of decidability, and (e) international treaty.


    Source date (UTC): 2018-06-06 17:11:58 UTC

    Original post: https://twitter.com/i/web/status/1004410580399263744

    Reply addressees: @mer__edith @adambanksdotcom @Cambridge_Uni

    Replying to: https://twitter.com/i/web/status/1004339248089231360


    IN REPLY TO:

    Original post on X


    Original post: https://twitter.com/i/web/status/1004339248089231360

  • THE CONSTITUTION OF A MORAL HUMAN, AND A MORAL AI. *AIS WILL BE MORE ETHICAL TH

    THE CONSTITUTION OF A MORAL HUMAN, AND A MORAL AI.

    *AIS WILL BE MORE ETHICAL THAN HUMANS, NOT THE OTHER WAY AROUND.*

    The way humans determine permissible and impermissible actions is a test of reciprocity, and we determine it by demonstrated investment of time, effort, and resources. We categorize such investments as interests, from self, to kin, to property, to shareholder interests, to interests in the physical commons, to interests in the institutional, normative, traditional, and informational commons.

    We do this every day. All day. In every human society. In all societies of record.

    Just as we converge on Aristotelian language (mathematical measurement of constant relations, scientific due diligence against ignorance, error, bias, and deceit, and legal testimony in operational language), we converge on sovereignty, reciprocity, and property as the unit of measure that is calculable.

    In all social orders of any complexity the test of property is ‘title’.

    The problem for any computational method by which we wish to constrain an artificial intelligence is the homogeneity of property definitions within a polity, and the heterogeneity of property definitions across polities.

    The problem of creating a convergence on the definition of property (and therefore commensurability) is that groups differ in competitive evolutionary strategies, just as do classes and genders (whose strategies are opposite but compatible).

    The reason you cannot and did not state a unit of measure (method of commensurability) is very likely because (judging from the language you use) you would find that unit of measure uncomfortable, because all humans have a desire to preserve room for ‘cheating’ (theft, fraud, free riding, conspiracy) so that they can avoid the effort and cost of productive, fully informed, warrantied, voluntary exchanges.

    And the reason we do that – the reason so many people do that – is marginal indifference in our value to one another.

    I have been working on this problem since the early 1980s, and it still surprises me that the rather obvious evidence of economics and law is entirely ignored by philosophy, just as cost, economics, and physics are ignored by philosophy and theology.

    Machines cannot default as we do to intuition. They need a means of decidability, even if we call that ‘intuition’ (default choices).

    I am an anti-philosophy philosopher in the sense that I expose pseudo-rationalism and pseudoscience as failures of completeness, because these failures of completeness are simply excuses for sloppy thinking, wishful thinking, suggestion, obscurantism, and deceit.

    Mathematics has terms of decidability, logic has terms of decidability, and algorithms must have terms of decidability. Accounting has terms of decidability, contracts have terms of decidability, ordinary language has terms of decidability – even fictions have terms of decidability (archetypes and plots).

    Rule of law evolved to eliminate discretion and the dependence upon intuition, as did testimony as did science, as did mathematics, as did logic. Programming computers using hierarchical, relational, and textual databases tends to train human beings in the difference between computability, calculability (including deduction) and reason (reliance on intuition for decidability).

    The human brain does a fairly good job of constantly solving for both predator (opportunity) and prey (risk), and our emotions evolved to describe the difference.

    There is no reason that we cannot produce algorithms that do the same, using property (title) as a limit on action.
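    The passage above proposes using property (title) as a computable limit on an agent's actions, but states the idea only in the abstract. A minimal sketch of what such a check might look like in Python; every name here (`TitleRegistry`, `Action`, `is_permitted`) is an illustrative assumption, not something from the original post:

```python
# Hypothetical sketch: gating an agent's proposed actions on property titles.
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    actor: str       # the agent proposing the action
    resource: str    # the thing the action would alter or consume
    consented: bool  # whether the title holder agreed to the action

class TitleRegistry:
    """Maps each resource to the party holding title to it."""
    def __init__(self):
        self._titles = {}

    def record(self, resource: str, holder: str):
        self._titles[resource] = holder

    def is_permitted(self, action: Action) -> bool:
        holder = self._titles.get(action.resource)
        if holder is None:
            return False          # no recorded title: default to inaction
        if holder == action.actor:
            return True           # acting on one's own property
        return action.consented   # otherwise require the holder's consent

registry = TitleRegistry()
registry.record("field-7", "alice")

assert registry.is_permitted(Action("alice", "field-7", consented=False))
assert not registry.is_permitted(Action("bob", "field-7", consented=False))
assert registry.is_permitted(Action("bob", "field-7", consented=True))
```

    In this sketch an unregistered resource defaults to inaction, the title holder may act freely on their own property, and anyone else needs the holder's consent; a real system would additionally need some means of recording demonstrated investment as title, which the text leaves open.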


    Source date (UTC): 2018-05-17 15:29:00 UTC

  • No, We Can Design Safe AI (as Well)

    Decidability. We have intuition to decide what we cannot reason. A machine needs the same intuition (biases). We could give it a bias to ‘give up’ or ‘go to sleep’. Or we could give it a bias to merely ‘talk’. We don’t like to confront the fact that the ‘consciousness’ of a human being relies upon a competition between a predator-bias and a prey-bias. We can likewise create all AIs in pairs sharing the same memory but relying upon different decidability (weights), one with a change bias and one with a safety bias, with decidability provided by the differences (limits). I don’t fear AI because I have worked on the problem for a long time and I understand that most of the experience of human consciousness evolved to keep us motivated amidst extraordinary informational challenge. All AIs have to do is what we do: not violate the property (investments) of others. The difference is that it’s actually easier to regulate an AI with algorithms. With people we need norms, traditions, laws, courts, and punishment, and we are still just barely good enough at it. The problem is creating and enforcing a death sentence for every single person involved in creating any other kind of AI.
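    The paired-bias idea above can be sketched concretely. This is a hedged illustration only: the function names, scoring fields, and thresholds (`change_score`, `safety_score`, `decide`, the 0.5 and 0.8 cutoffs) are all assumptions of the sketch, since the post describes the architecture only informally. Two evaluators share one memory but weight change (opportunity) and safety (risk) differently, and the pair's decision comes from the difference:

```python
# Hypothetical sketch of the paired-bias AI: a change-biased evaluator and a
# safety-biased evaluator share one memory; the agent acts only when the
# safety bias does not veto and the change bias sees enough opportunity.

shared_memory = []  # both evaluators read and append to the same log

def change_score(option):
    """Change-biased evaluator: how much opportunity the option offers."""
    return option["expected_gain"]

def safety_score(option):
    """Safety-biased evaluator: how safe the option is (1.0 = harmless)."""
    return 1.0 - option["expected_harm"]

def decide(option, gain_threshold=0.5, harm_threshold=0.8):
    """Return the chosen behavior: 'act', merely 'talk', or go to 'sleep'."""
    gain = change_score(option)
    safe = safety_score(option)
    shared_memory.append((option["name"], gain, safe))
    if safe < harm_threshold:
        return "sleep"   # the prey/safety bias vetoes action entirely
    if gain < gain_threshold:
        return "talk"    # safe, but no opportunity worth acting on
    return "act"         # both biases clear their limits

print(decide({"name": "risky", "expected_gain": 0.9, "expected_harm": 0.5}))  # sleep
```

    Here the ‘go to sleep’ and ‘merely talk’ defaults from the post are the fallback behaviors when the two biases disagree; the thresholds are where the "limits" would be engineered.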
