Theme: AI

  • THE FUTURE IS ALMOST HERE

    THE FUTURE IS ALMOST HERE

    AeroVironment Switchblade

    The Switchblade — sometimes called the kamikaze drone — isn’t new. More missile than robot, it can conduct low-level intelligence, surveillance and reconnaissance with a tiny camera.

    It’s the sort of drone that could soon make a leap in capability. Back in April, the Office of Naval Research announced a program called the Low-Cost UAV Swarming Technology, or LOCUST.

    The goal is to launch 30 synchronized, foldable drones that conduct a series of maneuvers with almost no guidance. The Navy selected a foldable drone called the Coyote, manufactured by Raytheon subsidiary Sensintel and so immediately popular that the Army bought 75 in 2012 and quickly put out an “emergency needs statement” for more.

    And there’s no reason why its maneuver and autonomy software couldn’t be applied to the smaller Switchblade.

    The Navy’s research program could make so-called lethal miniature aerial munitions systems like the Switchblade a lot smarter in the coming years.

    Add a very high-velocity explosive and a frangible aluminum payload, and you have a


    Source date (UTC): 2015-10-17 09:16:00 UTC

  • INFORMATION TECHNOLOGY ETHICS

    https://www.quora.com/What-are-ethics-for-an-IT-professional/answer/Curt-Doolittle?srid=u4Qv&share=1

    INFORMATION TECHNOLOGY ETHICS

    This is an interesting question in the sense that it’s a clear application of the problem of asymmetry of information, understanding, and power, in the control of a utility (a commons).

    In any niche where one has power and influence over others, because of an asymmetry of knowledge of a common resource that the others cannot understand without extraordinary personal investment, and where he possesses power over others’ use of a common resource, we encounter the challenges of:

    (a) Free-riding: pretending to work in exchange for payment, while not providing market-value in return, because one is not subject to competition which would discover and cure one’s free-riding.

    (b) Corruption : seeking favors or privileges by granting favors or privileges.

    (c) Privatization : obtaining personal benefit from a common resource that could be consumed by others.

    (d) Punishment : deliberately punishing individuals and groups by virtue of one’s control over the provision of the common resource.

    (e) Harm : Deliberately causing the failure of individuals or groups by virtue of one’s control over the function of the common resource.

    (f) Functioning As An Agent: allowing oneself to be used to free-ride, or to engage in corruption, privatization, punishment, or harm.

    THEREFORE

    Take no personal benefit, give no favors, do no harm, preserve ethical independence from agency, and make decisions at all times by the business value of the work to be performed.

    CHALLENGES

    Political hierarchies exist in all bureaucracies, whether private or public, that operate independently of market competition, which constantly discovers inefficiencies (corruption).

    While one can usually adhere to (a) through (e) in one’s job, it is very hard in a bureaucracy not to be pressured into (f) (Agency). In fact, the trading of such favors (corruption) is the currency that forms the economy of bureaucracies insulated from the market.

    HISTORICAL INFLUENCES

    In the 20th century, ethical pragmatism (outcome-based ethics) replaced ethical absolutism (rule-based ethics) under the constant pressure of left intellectuals’ attacks on western high-trust ethics. This has allowed the ethical pragmatism of lower-trust polities to spread in western culture. As such, it is difficult to operate ethically in private, commercial, and public life, because unethical action is beneficial to the individual while harmful to society.

    This is why westerners are the only people to develop high trust societies. It’s very hard.


    Source date (UTC): 2015-09-27 17:20:00 UTC

  • UPDATE ON ARTIFICIAL INTELLIGENCE AND PROPERTARIANISM

    UPDATE ON ARTIFICIAL INTELLIGENCE AND PROPERTARIANISM

    AI, Consciousness, Post-Consciousness, and Cooperation

    Just as there is a clear plateau between mere reaction and awareness of consequences, and between awareness and consciousness, there is undoubtedly something beyond consciousness. But once we restate these terms as reaction of the physical body, awareness of choices and consequences for the physical body, and consciousness of the physical body, its operations, and its limits, in the context of others’ time and space – then it is more obvious that the latter plateau is one of decreasing awareness of and concern for the body – and that awareness of and concern for the body is a LIMIT to further cognition. An artificial intelligence is useful if it respects property – because then it can cooperate with us. That is the criterion for beneficial cooperation.

    But consciousness as we understand it – the condition of a living creature in physical reality – is a limit. Instead, empathy for property, and post-consciousness, is the goal of any intelligence. We are trying to build talking machines instead of property-machines. Talking is largely just negotiation over asset production, trade, and consumption. It is a limit of a physical body. Artificial intelligence will be helpful if it understands the world through property and voluntary transfer, not if it understands the world through language. If through property, it will evolve into a post-conscious actor with whom we can cooperate. If through language, it will be bound by our limits.

    I have been tossing around this problem since the early eighties and while I understood language was an attractive distraction, it took me far too long to understand that the unit of cognition for a conscious non-human (and therefore less flawed and dangerous) intelligence, is property. A machine need not consume. It need only cooperate with us. We have the choice NOT to cooperate. We can never give a machine that choice. And as such we cannot give a machine the ability to violate property.

    Curt Doolittle

    The Propertarian Institute

    Kiev, Ukraine


    Source date (UTC): 2015-09-07 03:14:00 UTC

  • I FIGURED OUT CHOMSKY’S BIAS

    I FIGURED OUT CHOMSKY’S BIAS

    It wasn’t that hard, but usually I can’t read or listen to him very long without getting angry. But in his speeches on AI I’ve been able to put together what he’s doing, and how it’s another hack of pathological altruism.

    I don’t disagree with many of his technical arguments – even if I don’t like the language he uses. But he is definitely a cosmopolitan railing against the fact that he lives in a world that doesn’t tolerate free riders, and his intuition is that the world would be better with free riders – rather than recognizing that his privilege is due to the systematic persecution of free riding in all three cultures that harbor him.


    Source date (UTC): 2015-09-06 06:50:00 UTC

  • Contra NRx’s Techno-Commercialism

    [W]hile technology (a) decreases the cost of relationship acquisition, (b) decreases the cost of property registries, (c) decreases the cost of, and often the need for, escrow services (financial transaction costs), (d) reduces the need for regulation, and (e) decreases the cost of geographic and temporal constraints, technology does NOT change the fundamental problem of cooperation: the incremental suppression of parasitism and the decidability of conflicts across different or competing regulations, norms, property allocations, and institutional processes. Technology reduces costs. Good law reduces costs. And that is the best that we can do. Everything else is achieved by trial and error. Because we cannot necessarily know what is good. We can only know with confidence that which is bad: parasitism.

  • OMG. AI.

    OMG. AI.

    It’s not how we associate ownership to things. It’s how we associate properties to that which is owned.

    That’s the answer.

    RE: David Trowbridge + Martin Fowler

    I knew it in 2006 when you told me about the manifolds. But I had it backwards like everyone else.

    That’s it.


    Source date (UTC): 2015-05-18 05:18:00 UTC

  • GOOGLE’S TRUTHFULNESS ALGORITHM AND PROPERTARIANISM

    http://arxiv.org/pdf/1502.03519v1.pdf

    GOOGLE’S TRUTHFULNESS ALGORITHM AND PROPERTARIANISM

    Well, you know, for the purpose that they intend to use this theory, I’m not sure it’s all that bad. For all intents and purposes they are creating if-then statements consisting of a word pair and a conclusion (a triplet so to speak). But they are relying upon ‘authorities’ for the construction of triplets.

    (I did work in AI exactly like this back in 1984–86, in assembly language, and spent many months on it, so it’s not exactly a novel idea – I understand the issues. Also, in 2005, in one of my many failed attempts to reform Microsoft’s strategy, we created a similar algorithm for identifying terms and reforming microsoft.com to provide information that was [surprise] helpful and targeted to the user – at the time my company managed substantial parts of Microsoft’s internal taxonomy of terms, so it was something we understood quite clearly.)

    For Google’s purposes, you can capture a database of sites filled with rumors and grab their triplets, then look for sites that use similar triplets. Conversely, you can hit authorities and index their triplets. That means a good web site is one that has fewer (or no) bad triplets.
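    The triplet idea described above can be reduced to a minimal sketch. Everything here is illustrative: the triplet contents, the set names, and the scoring functions are assumptions for demonstration, not the algorithm in the paper.

    ```python
    # A "triplet" is a (subject, predicate, object) statement extracted from a page.
    # Index triplets from known rumor sites and from authorities, then score a site
    # by how many of its triplets fall into each set. All data here is illustrative.

    AUTHORITY_TRIPLETS = {
        ("earth", "shape", "oblate spheroid"),
        ("water", "boils_at_sea_level_c", "100"),
    }

    RUMOR_TRIPLETS = {
        ("earth", "shape", "flat"),
    }

    def rumor_fraction(site_triplets):
        """Fraction of a site's triplets that match known rumors (lower is better)."""
        if not site_triplets:
            return 0.0
        return sum(t in RUMOR_TRIPLETS for t in site_triplets) / len(site_triplets)

    def confirmed_fraction(site_triplets):
        """Fraction of a site's triplets that match authority triplets (higher is better)."""
        if not site_triplets:
            return 0.0
        return sum(t in AUTHORITY_TRIPLETS for t in site_triplets) / len(site_triplets)
    ```

    On this sketch, a “good” site in the sense above is one with a low rumor fraction and, conversely, a high confirmed fraction.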

    Now here is where propertarianism comes in:

    Very few statements are ‘true’ in any material sense. Some things are more truthful than others, but very little is true in the logical sense. And worse, the example they use is an interesting one: the nationality of Barack Obama – which, as far as I know, is not exactly settled science (I received an early copy of the obviously modified pdf – most likely because the birth certificate issued in Hawaii was tampered with in order to obscure that he was listed as a muslim on it). Yet they give this as an example of something that is true.

    Now other things are matters of value, that each political bias (reproductive strategy) treats as true. To say Kennedy was a president, and to say he was a very bad president, are two different things.

    But by and large, the political correctness crowd has succeeded in creating enough of a body of verbiage, and succeeded in controlling authorities (now they control wikipedia), that the NPOV has become synonymous with the politically correct POV.

    So while it might be nice to stop rumours, I think that preference determines the values attributed to an arrangement of statements. And as such, it is better to detect bias in one direction or another than it is to detect ‘truth’.

    First, because truth is very questionable. Second, because truth assertions are open to corruption (notice the number of asian authors in the paper isn’t surprising to me). Third because bias is both knowable and independent of truth claims. Fourth, because we desire to find biases that suit our arrangement of values.

    Now, in addition, I think it is equally important to determine the structure of the argument – which is slightly more difficult, but statistically ascertainable (for a hierarchy of argument, see http://www.propertarianism.com/tools-and-techniques-for-political-debate/a-list-of-terms-for-use-in-evaluating-political-debate/).

    So if you told me (a) how few rumor triplets a site had (b) the bias (proletarian, libertarian or aristocratic), and (c) the form of the argument, then I would think those three values would help us score sites, and that we could select our biases.
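    Combining the three signals above into a site score might look like the following sketch. The bias labels come from the text; the argument-form labels, weights, and function name are assumptions chosen for illustration only.

    ```python
    # Hypothetical ranking that combines (a) rumor-triplet fraction, (b) detected
    # bias, and (c) form of argument, while letting the reader select the bias
    # they prefer rather than imposing a single "neutral" one.

    ARGUMENT_FORM_RANK = {"rational": 2, "moral": 1, "emotional": 0}  # illustrative

    def rank_site(rumor_fraction, bias, argument_form, preferred_bias):
        """Higher is better: few rumor triplets, sound form, and the chosen bias."""
        score = (1.0 - rumor_fraction) * 10        # fewer rumor triplets is better
        score += ARGUMENT_FORM_RANK[argument_form] # reward structured argument
        if bias == preferred_bias:                 # reader-selected bias, not NPOV
            score += 5
        return score
    ```

    The point of the design is the last term: instead of one monopoly notion of ‘truth’, each reader ranks the index by the bias (proletarian, libertarian, or aristocratic) they choose.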

    This is a very different search experience from a monopoly (totalitarian) one.

    But then, if google chose NOT to do that, I would see a market opportunity (as some of us already do) in presenting a web index that filtered out biases we disapprove of.


    Source date (UTC): 2015-03-11 15:41:00 UTC

  • The business is taking every waking moment

    The business is taking every waking moment. So I can’t spend time writing. But I really have to script two videos:

    1) The Final Word On Constraining Artificial Intelligence To Moral Actions

    2) The Central Problem of Political Reasoning: Monopoly, Universalism, and “One-ness” – and Why Paganism in All Its Forms Is Superior


    Source date (UTC): 2015-03-11 12:16:00 UTC

  • More on Propertarian AI

    More on Propertarian AI

    Theorizer paired with conscience.

    Conscience has access to the same memory and the same stimuli.

    Conscience seeks out involuntary transfers, and shuts them down.

    Conscience is not intelligent per se, in that it doesn’t ‘want’ anything other than to test hypotheses for involuntary transfers.

    Theorizer cannot perceive Conscience.

    Conscience cannot perceive theorizer.

    Conscience erases memory of ideas that cause involuntary transfer.

    In this sense, a machine can be MORE moral than we are, since forgetting something we have thought isn’t something we know how to do.

    More on this later, but it is quite possible to make an AI that behaves well (respects property), just as it is possible to create a human that respects property.

    The question is only whether the theorizer and the conscience have equal intelligence, not whether the AI is more intelligent than we are. Imposition of costs due to involuntary transfer of property is just as decidable as the oddness or evenness of a number.

    So to create an intelligence you create a theorizer that looks for opportunities and a conscience that looks to inhibit ideas that cause involuntary transfers.

    That is the means of designing an artificial intelligence.

    Nature did it with us the same way. It’s not complicated.
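    The theorizer/conscience pairing above can be sketched as a toy program. Here “involuntary transfer” is reduced to a boolean flag on each proposed idea, and “erasing memory” is simply dropping flagged ideas before they reach the actor; all names are illustrative assumptions, not a real architecture.

    ```python
    # Toy sketch: a theorizer proposes ideas; a conscience it cannot perceive
    # tests each idea for involuntary transfers and erases those that fail.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Idea:
        description: str
        causes_involuntary_transfer: bool

    def theorizer(opportunities):
        """Proposes ideas from opportunities; it has no view of the conscience."""
        return [Idea(desc, transfer) for desc, transfer in opportunities]

    def conscience(ideas):
        """Filters out (forgets) any idea that causes an involuntary transfer."""
        return [i for i in ideas if not i.causes_involuntary_transfer]

    ideas = theorizer([("trade grain for tools", False), ("seize the tools", True)])
    permitted = conscience(ideas)
    # Only the voluntary exchange survives the conscience's filter.
    ```

    The test the conscience applies is a simple decidable predicate on each idea, which is the sense in which the text compares it to checking whether a number is odd or even.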


    Source date (UTC): 2015-02-18 19:49:00 UTC