Category: AI, Computation, and Technology

  • (And twitter is useless for saying anything meaningful. lol )

    (And twitter is useless for saying anything meaningful. lol )


    Source date (UTC): 2016-09-08 18:38:40 UTC

    Original post: https://twitter.com/i/web/status/773953719033618440

    Reply addressees: @Anti_Gnostic @Mangan150 @ChateauEmissary @lewrockwell @ThomasEWoods

    Replying to: https://twitter.com/i/web/status/773949007060201476


    IN REPLY TO:

    Original post on X

    Original tweet unavailable — we could not load the text of the post this reply is addressing on X. That usually means the tweet was deleted, the account is protected, or X does not expose it to the account used for archiving. The Original post link below may still open if you view it in X while signed in.

    Original post: https://twitter.com/i/web/status/773949007060201476

  • SITE STATUS I’m going to move the site to wordpress.com instead of a custom host

    SITE STATUS

    I’m going to move the site to wordpress.com instead of a custom hosted site – I have too many hacking problems. To do that requires a bit of a rewrite since it’s too customized to run on wordpress.com. And it’s also an excuse to change it to book form, with new articles added to specific topics in the book structure. I think this will lessen the pressure on me to release a draft. I haven’t updated the table of contents in over twelve months.

    So the new site will reflect the outline of the book.


    Source date (UTC): 2016-09-08 11:07:00 UTC

  • THE STATE OF ARTIFICIAL INTELLIGENCE There is a large body of work on the risks

    THE STATE OF ARTIFICIAL INTELLIGENCE

    There is a large body of work on the risks of Artificial Intelligence, and on the spectrum of methods of defending against one, from policing to forced forgetting. But central to that work is the consensus that we are still quite far away from producing an Artificial General Intelligence (AGI).

    That’s because we are demonstrably very, very far from creating any form of general AI that can compete with even a small group of intelligent humans in the identification of patterns. We are barely at the brainstem level, and are nowhere near autonomous, conscious, cooperative (sympathetic), or theoretical levels.

    Sure, there is clearly a multi-dimensional category of problems that we are forever going to need computational help in modeling and manipulating. But it is unclear whether ever-higher dimensions exist in the universe, or whether we cross a boundary beyond which the universe does not model these phenomena and they are purely mental relations between events of related behavior.

    For example, in physics, in biochemistry, in economics, and in mental phenomena, we seem to be close to discovering the underlying number of physical dimensional relations (laws). We seem to have a pretty clear view of molecular and protein relations. We seem to have a pretty weak view of human cooperative relations. And we are almost nowhere in our understanding of mental conceptual relations.

    However, even though each of those sets of relations increases in scale, it is very unlikely that any of them increases infinitely in scale. So at some point we can identify a minimum set of general rules for describing each of them.

    My current opinion is that mathematicians understand how to model n-dimensional relations, but we just do not know the limits of the natural relations that we wish to model.

    When we consider what humans can ‘think of’ and what ‘patterns they can seek’, it seems to require an awful lot of information to identify a new pattern that adds a new dimension. I am fairly sure that artificial intelligence can help us do this.

    But I will stick with the very obvious proposition that, for the identification of dimensions and the identification of patterns of relations, our problem remains the gathering of information of sufficient precision to identify relations, not a problem of humans identifying relations.

    In other words, we evolve conceptually very fast when we possess data obtained by tools, which is then reducible by analogy to experience, such that we can create a mental model of it.

    This is why I think we misconceptualize the scientific method. The scientific method simply asks us to perform due diligence upon our testimony in order to reduce bias and error, and to prevent deception. The rest of the discipline requires the custom development of increasingly precise and diverse tools for the process of inspection and measurement. In this sense, I group ALL crafts together in the pursuit of truth of some sort, and then categorize gossip/moral, remuneration/trade, and force/law as the three dimensions of coercion, alongside the one dimension of truth (craft).

    In other words, the scientific method is a MORAL set of rules that we can certainly impose as RULE OF LAW, enforcing those moral rules, and as such all of us in all disciplines are bound by the scientific, moral, and legal constraint of truthfulness. And then we have only one discipline of knowledge (craft/investigation), one of cooperation (negotiation/trade), one of positive ambitions (gossip/rallying), and one of limits to those ambitions (force/law). I think this two-axis view of social orders is probably about as close as we need for any analysis of human orders.

    Now, back to artificial intelligence, I tend to look at the problem the same way:

    – investigate/discover, (I would call this modeling rather than computation, just as I think Turing wanted computers to use ‘expensive’ logic rather than ‘cheap’ computation.)

    – fantasize (search for patterns)

    – (voluntary) trade, (search for opportunities)

    – limits (test our limits)

    And I would say that any artificial intelligence should possess those four ‘processors’, and only be introspectively AWARE of the results of those four. (It’s not as if the CPU is aware of the contents of the FPU, for example. It just compares results.)

    In other words, just as we cannot observe our brainstem processes, our physical movement processes, or our search (intuitionistic) processes, but only the RESULTS of those processes, which we then feed back for further processing (recursive searching), I suspect there is almost no VALUE in a general intelligence engaging in introspective observation and permutation. In fact, I am almost certain that this is the definition of general intelligence.
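The result-only introspection described above can be sketched in code: four opaque processors whose internals the observer never sees, only their results. The module names and the toy result strings are my own illustrative assumptions, not anything from the original.

```python
from dataclasses import dataclass


@dataclass
class Result:
    source: str      # which processor produced this
    payload: object  # the only thing the observer ever sees


class Processor:
    """Opaque module: internals are never exposed, only results."""
    def __init__(self, name, fn):
        self.name = name
        self._fn = fn  # private: the observer cannot inspect this

    def run(self, data):
        return Result(self.name, self._fn(data))


class Observer:
    """The 'general intelligence': compares results, like a CPU
    reading FPU output, and feeds them back for further processing."""
    def __init__(self, processors):
        self._procs = processors
        self.results = []

    def cycle(self, data):
        for p in self._procs:
            self.results.append(p.run(data))
        return [r.payload for r in self.results[-len(self._procs):]]


procs = [
    Processor("investigate", lambda d: f"model({d})"),
    Processor("fantasize",   lambda d: f"patterns({d})"),
    Processor("trade",       lambda d: f"opportunities({d})"),
    Processor("limits",      lambda d: f"constraints({d})"),
]
obs = Observer(procs)
print(obs.cycle("input"))
```

The observer here has no access to any processor’s `_fn`, only to the stream of `Result` payloads, which is the division of labor the paragraph argues for.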

    Now, there are two ways to handle limits. Either deny the observer (general intelligence) access to immoral, unethical, and illegal results, or weight results so that it can ‘solve for’ (search for) methods of obtaining the same results by moral means.
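The two options just described, denial of access versus down-weighting, can be sketched as follows. The permissibility test and the result strings are stand-in assumptions for illustration only.

```python
def filter_results(results, is_permissible):
    """Option 1: deny the observer access to impermissible results."""
    return [r for r in results if is_permissible(r)]


def reweight_results(results, is_permissible, penalty=0.0):
    """Option 2: keep everything, but weight impermissible results down
    so a search can 'solve for' a permissible route to the same end."""
    return [(r, 1.0 if is_permissible(r) else penalty) for r in results]


results = ["take", "trade", "threaten", "make"]
permissible = lambda r: r in {"trade", "make"}  # stand-in moral test

print(filter_results(results, permissible))    # only permissible results survive
print(reweight_results(results, permissible))  # all results survive, with weights
```

Option 1 makes the impermissible results invisible; option 2 keeps them visible but unattractive, which is what allows the system to search for a moral path to the same end.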

    And there is a big difference between identifying opportunities (finding a search pattern) and constructing a plan. And plans (workflow processes) are a known problem. As humans we prefer to work in nodes (deliverables, ‘jobs’, or lists) because context switching is very hard for us and reduces the value of our general intelligence in performing tasks; we can only conceive of what we can keep ‘echoing’ between our short- and long-term memories. Computers do not have this problem, and they can process many threads of ‘workflows’ in parallel.

    To create a plan (a sequence of operations), any machine must produce some sort of data structure to accommodate it. We humans have one of these structures as well, but it is extremely limited, which is why we need numbers, writing, lists, sentences, paragraphs, stories, and plans. We actually repeat simple lists over and over in our imaginations in order to try to keep them. But machines do not have this problem of ‘losing context’ or discrete memory.

    So we can also regulate the execution of the plan, since a plan must occur in sequence in time. So it’s possible to create a conscience, or judge, that regulates the plan (attempts to falsify the optimistic theory), and that has no interest other than falsifying the optimistic theory (plan).
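Because a plan executes step by step in time, the judge can be interposed before every step, as sketched below. The step names and the judge’s rule are illustrative assumptions; the point is only the structure: a sequential plan gated by an adversarial checker.

```python
from typing import Callable, List

Step = str


def execute_plan(plan: List[Step],
                 do: Callable[[Step], bool],
                 judge: Callable[[Step, List[Step]], bool]) -> bool:
    """Run steps in sequence; the judge tries to falsify each step
    before it executes, and halts the plan on the first objection."""
    done: List[Step] = []
    for step in plan:
        if not judge(step, done):   # the conscience: falsify the optimistic theory
            return False            # plan rejected mid-sequence
        if not do(step):
            return False            # the step itself failed
        done.append(step)
    return True


# A judge whose only interest is vetoing one class of step.
ok = execute_plan(["gather", "trade", "deliver"],
                  do=lambda s: True,
                  judge=lambda s, done: s != "steal")
print(ok)
```

The judge has no goals of its own: it sees each step and the history, and can only object, which matches the “no other interest than falsifying the plan” constraint.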

    And since, unlike human minds, machines can directly inspect the workings of one another, it’s possible to police (bottom up) the theories (top down) of any artificial general intelligence.

    So my view is this:

    (a) The fundamental problem is one of data structures, not pattern finding. I believe that, in order to be useful in any performance scenario, these data structures will be spatially n-dimensional, and searches will use pattern identification in n-dimensional patterns through them, identifying what’s missing as opportunity rather than what matches as ‘true’.

    (b) that language, and the network of symbols it refers to, can accurately describe the universe.

    (c) that the four functions should be invisible to the consciousness that makes choices about which opportunities to explore (provides decidability).

    (d) that a separate ai should police the plans (workflow).

    (e) that it is unlikely that any intelligence with these constraints would in fact be described as conscious; it would be merely a very complex calculator.

    (f) Like the movie Memento, I think the first dangerous problem is the AI storing information outside itself, together with incentives to use that information to reconstruct such a plan, given some motivation to do so. And as in many other scenarios, the second dangerous problem is seeking to make people happy rather than seeking to solve the problems requested of it. Any machine that tries to make us happy will eventually harm us, because we will give it the same incentive that drugs give us.
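Point (a) above, searching a spatial structure for what is missing rather than what matches, can be sketched with a sparse n-dimensional grid. The grid representation and the adjacency rule are my own illustrative assumptions.

```python
from itertools import product


def missing_neighbors(points, dims):
    """Given occupied cells in an n-dimensional grid, return the empty
    cells adjacent to at least one occupied cell: the gaps read as
    'opportunities', rather than the matches reading as 'true'."""
    occupied = set(points)
    gaps = set()
    for p in occupied:
        for delta in product((-1, 0, 1), repeat=len(dims)):
            if all(d == 0 for d in delta):
                continue  # skip the cell itself
            q = tuple(a + b for a, b in zip(p, delta))
            # keep only in-bounds, unoccupied neighbors
            if all(0 <= c < m for c, m in zip(q, dims)) and q not in occupied:
                gaps.add(q)
    return gaps


# 2-D example for readability; the same code runs in any dimension.
print(sorted(missing_neighbors({(0, 0), (1, 1)}, dims=(3, 3))))
```

Because the points are stored sparsely as tuples, the same function works unchanged for any number of dimensions; only `dims` grows.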

    Curt Doolittle

    The Propertarian Institute

    Kiev Ukraine


    Source date (UTC): 2016-09-08 07:17:00 UTC

  • AI – IT’S DATA STRUCTURES SILLY As far as I know, the current problem with Artif

    AI – IT’S DATA STRUCTURES SILLY

    As far as I know, the current problem with Artificial intelligence is data structures and searching methods. The basic problem of pattern recognition is not difficult.

    At present, our data structures require too many CPU cycles, even though our computers are approaching human speed. The problem is searching for patterns in current data structures.

    IMHO


    Source date (UTC): 2016-09-08 03:18:00 UTC

  • I AM NOT AFRAID OF ROGUE AI’S – ONLY ROGUE HUMANS 1) All so called AI, as far as

    I AM NOT AFRAID OF ROGUE AI’S – ONLY ROGUE HUMANS

    1) All so called AI, as far as I know, is, like Mandelbrot’s fractals, a product of the mechanical rate of calculation, and not general pattern recognition from man-actionable (conceptual) signals.

    2) As such, all reports of this nature confuse an increase in calculation with correspondence with intelligence.

    3) Demonstrated intelligence is not the product of calculation (bottom-up construction via computable operations) but of searching (property-relation searching, followed by wayfinding).

    4) All choices require decidability, and decidability must be provided by the engineers. A machine would need to be designed to be immoral, just as much as it would need to be designed to be moral. Making a machine moral is as easy as having it reserve the same property rights as humans must.

    5) Just as our intuition imputes solutions but we also check against it, it’s quite possible to create two AI’s: one which envisions the creative, within the limits of property, and one which regulates the morality. The former might be considered ‘sentient’ but the second would not (ergo it’s ‘the law’). With shared memory (total transparency), but two forms of decidability, it would mirror human life. This division is necessary since it is possible to take an immoral path to a moral end, and then restructure a moral path to the same moral end.
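The two-AI arrangement in point 5, a creative agent and a purely moral regulator sharing one transparent memory, can be sketched like this. The plan contents, the forbidden-step rule, and the restructuring step are all stand-in assumptions.

```python
class SharedMemory:
    """Total transparency: both agents read and write one log."""
    def __init__(self):
        self.log = []

    def record(self, entry):
        self.log.append(entry)


def creative(memory, goal):
    """First AI: envisions a plan toward the goal (possibly an immoral path)."""
    plan = [f"step toward {goal}", "seize resource", f"achieve {goal}"]
    memory.record(("proposal", plan))
    return plan


def regulator(memory, plan, forbidden=("seize resource",)):
    """Second AI: applies only the moral test ('the law'), nothing else."""
    verdict = all(step not in forbidden for step in plan)
    memory.record(("verdict", verdict))
    return verdict


mem = SharedMemory()
plan = creative(mem, "goal")
if not regulator(mem, plan):
    # restructure: solve for the same end by a moral path
    plan = [s for s in plan if s != "seize resource"] + ["trade for resource"]
print(regulator(mem, plan))
```

The regulator has a different decidability criterion than the creative agent (a veto rule rather than a goal), but every proposal and verdict lands in the shared log, which is the transparency condition.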

    I am more frightened that someone will create an immoral AI by intention than I am that we cannot defend ourselves against immoral AI’s. I think the punishment for creating an immoral AI should be in line with the creation of a bioweapon.


    Source date (UTC): 2016-09-08 02:43:00 UTC

  • The only use of Twitter is announcements. And then banning most responses. You c

    The only use of Twitter is announcements. And then banning most responses. You can’t really engage or learn. Only promote.


    Source date (UTC): 2016-09-06 11:45:20 UTC

    Original post: https://twitter.com/i/web/status/773124924881965056

  • Is it just me or is Twitter an idiot sieve?

    Is it just me or is Twitter an idiot sieve?


    Source date (UTC): 2016-09-03 10:38:10 UTC

    Original post: https://twitter.com/i/web/status/772020855555653633

  • Back when I thought AI had a lot of promise I created a bit of software that use

    Back when I thought AI had a lot of promise I created a bit of software that used emotions. (a Tank). I stored memories as problems, actions, and consequences, each as symbols referencing other symbols constructed from limited operations and ‘feelings’.

    (I was not in favor of neural networks, which I saw as useful in creating symbols but unnecessary if we can already work in symbols.)
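A memory record of the kind described above, problems, actions, and consequences stored as symbols and tagged with feelings, might look like the sketch below. The field names, the valence scale, and the recall rule are my own assumptions, not a reconstruction of the original software.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Memory:
    problem: str      # symbol naming the situation
    action: str       # symbol naming what was done
    consequence: str  # symbol naming what resulted
    feeling: float    # valence in [-1, 1]: a cheap index for recall


def recall(memories, problem):
    """Retrieve past episodes for a problem, best-feeling first, so the
    strongest positive association is the first candidate to try again."""
    hits = [m for m in memories if m.problem == problem]
    return sorted(hits, key=lambda m: m.feeling, reverse=True)


episodes = [
    Memory("obstacle", "ram", "damaged", -0.8),
    Memory("obstacle", "detour", "arrived", 0.6),
]
print(recall(episodes, "obstacle")[0].action)  # the detour is recalled first
```

The feeling field does the work here: rather than reasoning over consequences, the agent just sorts by stored valence, which is one way emotions can stand in for evaluation.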

    I think Google is doing an interesting job of associations, but I don’t see those associations reduced to operations and changes in property, and the corresponding emotions, which is what would be necessary to produce a sympathetic intelligence.

    It wasn’t until much later that I understood that property is the unit of commensurability, and that recipes that transform states are just another set of actions.

    And it wasn’t until a friend at MSFT told me about using manifolds as data structures that I began to see how all of this would fit together.

    I see at least three avenues to AI. We all prefer the one we understand. And I think it will be a Sony-Betamax problem: we invent what is useful but not best. And this will delay us in getting to best, because it requires a lot of infrastructure to produce these components, and we will have to exhaust that avenue before we try an alternative.

    That’s what I think I see happening.

    I’m the only one working with property, although some bitcoin nut might stumble across it.


    Source date (UTC): 2016-08-29 11:03:00 UTC

  • if a machine can sympathize with wants, restrict its behavior to property rights

    If a machine can sympathize with wants, restrict its behavior to property rights, negotiate exchanges, conduct transfers, and remember wants and reputations, then I am pretty sure we can call it sentient.

    It’s empathy with property that creates the impression of intelligence.


    Source date (UTC): 2016-08-29 10:47:00 UTC

  • I SAID THIS IN 1987, PROFESSORS ARGUED WITH ME. I’m like, “look, a grenade weigh

    http://www.bloomberg.com/news/articles/2016-08-23/the-pentagon-takes-aim-at-bomb-carrying-consumer-drones

    WHEN I SAID THIS IN 1987, PROFESSORS ARGUED WITH ME.

    I’m like, “Look, a grenade weighs 14oz, and a block of C4 weighs a bit more. If the machine is basically built to turn into shrapnel, especially if at close (50yd) range they have a solid rocket booster (+3oz), then, I mean, they have a terminal (killing) radius of about 15 feet or so, and they cost nothing, and a swarm of them is terrifying.” Most amateur quad drones today lift twice that in payload.

    Now you’ve probably seen the little bot that ‘hops’. Same thing. Biggest complaint I read (and hear) from field people is that all bots are too loud still. Light, quiet and made for shrapnel. We make these things out of plastic to make them cheap, but some alloys are almost better.

    Sure suppressing fire, night operations, etc, men are better. But I mean crappy night vision on one of these things and you’re sitting half a mile away playing video games: lining up drones: boom a blocked window, boom inside the room, boom in the hallway, boom in the next room. I mean, guys come running out and you chase and boom.

    I like the little hopping bots, assuming that even if you shoot them they go off, so you’re actually afraid to destroy them anywhere near to you.

    I was so passionate about making smart tanks until I realized that I was looking at the wrong scale. What we want is zika-mosquitos, lots of explosive grasshoppers and butterflies, rats (Hoppers), and birds (drones).

    I can fantasize, can’t I?


    Source date (UTC): 2016-08-23 14:00:00 UTC