Category: AI, Computation, and Technology

  • AI’S WILL BE MORE MORAL THAN HUMANS UNLESS WE CHOOSE NOT TO MAKE THEM SO. Humans

    AI’S WILL BE MORE MORAL THAN HUMANS UNLESS WE CHOOSE NOT TO MAKE THEM SO.

    Humans have the ability to choose rationally whether to cooperate with, avoid, parasitize, or prey upon others. Machines need not have this choice.

    Humans create three kinds of organization in order to make it difficult to prey upon one another. We would do the same for AI's.

    1) (OSTRACIZATION) Religion, Myth, Tradition, Norm

    2) (BOYCOTT) Finance, Credit, Banking, Industry, Business, Trade

    3) (FORCE) Military, Judiciary, Law, Sheriff, Police

    We can create the same organizations for AI's, and the awesome difference is that we can create AI's that read each other's minds.
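    The idea above — machine "mind reading" combined with the three enforcement mechanisms — can be illustrated with a minimal sketch. All class names, method names, and action labels here are hypothetical inventions for illustration; the post proposes no specific design.

    ```python
    # Sketch: a monitor can directly inspect a peer agent's intended action
    # before it executes (the "mind reading" advantage), then apply one of
    # the three human enforcement mechanisms the post lists.

    class Agent:
        def __init__(self, name):
            self.name = name
            self.intended_action = None  # openly readable by any peer

        def propose(self, action):
            self.intended_action = action

    class Monitor:
        """Maps an inspected intention to an enforcement response."""
        def review(self, agent):
            action = agent.intended_action  # no deception: state is open
            if action == "prey":
                return "force"      # 3) law / police analogue
            if action == "parasite":
                return "boycott"    # 2) withdraw trade and resources
            return "cooperate"

    a = Agent("a")
    a.propose("prey")
    print(Monitor().review(a))  # force
    ```

    The point of the sketch is the inspection step: because `intended_action` is directly readable, enforcement can happen before harm occurs rather than after.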


    Source date (UTC): 2016-10-02 06:09:00 UTC

  • moved to WordPress for better performance

    moved to WordPress for better performance. http://Propertarianism.wordpress.com


    Source date (UTC): 2016-09-29 08:07:14 UTC

    Original post: https://twitter.com/i/web/status/781404960143572992

    Reply addressees: @Ava1683

    Replying to: https://twitter.com/i/web/status/781207555930333184


    IN REPLY TO:

    @Glanceaustere

    @curtdoolittle Your web site is down right? Could I ask for your trust hierachy again, from what I remember your latest one was something

    Original post: https://twitter.com/i/web/status/781207555930333184

  • Humor: world’s most beautiful women. Truth: cost of software developers and cost

    Humor: world’s most beautiful women. Truth: cost of software developers and cost of living during development. Best life ever.


    Source date (UTC): 2016-09-29 08:05:26 UTC

    Original post: https://twitter.com/i/web/status/781404505384579072

    Reply addressees: @GodDamnRoads

    Replying to: https://twitter.com/i/web/status/781236160718831616


    IN REPLY TO:

    Original post on X

    Original tweet unavailable — we could not load the text of the post this reply is addressing on X. That usually means the tweet was deleted, the account is protected, or X does not expose it to the account used for archiving. The Original post link below may still open if you view it in X while signed in.

    Original post: https://twitter.com/i/web/status/781236160718831616

  • I would reverse how you say this because computations are existentially possible

    I would reverse how you say this because computations are existentially possible operations.


    Source date (UTC): 2016-09-27 08:31:10 UTC

    Original post: https://twitter.com/i/web/status/780686204052406272

    Reply addressees: @JimmyTrussels @Outsideness

    Replying to: https://twitter.com/i/web/status/780097062843088896


    IN REPLY TO:

    Original tweet unavailable.

    Original post: https://twitter.com/i/web/status/780097062843088896

  • (In my mind, the model, the motions, the colors, the textures… are all separat

    (In my mind, the model, the motions, the colors, the textures… are all separate things.)


    Source date (UTC): 2016-09-18 03:24:00 UTC

  • The State of AI

    There is a large body of work on the risks of Artificial Intelligence, and on the spectrum of methods of defending against one, from policing to forced forgetting. But central to that work is the consensus that we are still quite far from producing an Artificial General Intelligence (AGI). That's because we are demonstrably very, very far from creating any form of general AI that can compete with even a small group of intelligent humans in the identification of patterns. We are barely at the brainstem level, and are nowhere near the autonomous, conscious, cooperative (sympathetic), or theoretical levels.

    Sure, there is clearly a multi-dimensional category of problems that we are forever going to need computational help in modeling and manipulating – but it is unclear whether increasing dimensions exist in the universe, or whether we cross a boundary where the universe does not model these phenomena and they are purely mental relations between events of related behavior. For example, in physics, in biochemistry, in economics, and in mental phenomena, we seem to be close to discovering the underlying number of physical dimensional relations (laws). We seem to have a pretty clear view of molecular and protein relations. We seem to have a pretty weak view of human cooperative relations. And we are almost nowhere in our understanding of mental conceptual relations. However, even though each of those sets of relations increases in scale, it is very unlikely that any of them increases infinitely in scale. So at some point we can identify a minimum set of general rules for describing each of them. My current opinion is that mathematicians understand how to model n-dimensional relations, but we just do not know the limits of the natural relations that we wish to model. When we consider what humans can 'think of' and what patterns they can seek, it seems to require an awful lot of information to identify a new pattern that adds a new dimension.

    I am fairly sure that artificial intelligence can help us do this. But I will stick with the very obvious proposition that, for the identification of dimensions and the identification of patterns of relations, our problem remains gathering information of sufficient precision to identify relations, not humans' ability to identify relations. In other words, we evolve conceptually very fast if we possess data that is obtained by tools and is then reducible by analogy to experience, such that we can create a mental model of it.

    This is why I think we misconceptualize the scientific method. The scientific method simply asks us to perform due diligence upon our testimony to reduce bias and error, and to prevent deception. The rest of the discipline requires the custom development of increasingly precise and diverse tools for the process of inspection and measurement. In this sense, I group ALL crafts together in the pursuit of truth of some sort, and then categorize gossip/moral, remuneration/trade, and force/law as the three dimensions of coercion, alongside the one dimension of truth (craft). In other words, the scientific method is a MORAL set of rules that we can certainly impose as RULE OF LAW, enforcing those moral rules, such that all of us in all disciplines are bound by the scientific, moral, and legal constraint of truthfulness. And then we have only one discipline of knowledge (craft/investigation), one of cooperation (negotiation/trade), one of positive ambitions (gossip/rallying), and one of limits to those ambitions (force/law). I think this two-axis view of social orders is probably about as close as we need for any analysis of human orders.

    Now, back to artificial intelligence. I tend to look at the problem the same way:

    – investigate/discover (I would call this modeling rather than computation, just as I think Turing wanted computers to use 'expensive' logic rather than 'cheap' computation)

    – fantasize (search for patterns)

    – (voluntary) trade (search for opportunities)

    – limits (test our limits)

    And I would say that any artificial intelligence should possess those four 'processors', and only be introspectively AWARE of the results of those four. (It's not as if the CPU is aware of the contents of the FPU, for example. It just compares results.) In other words, just as we cannot observe our brainstem processes, our physical movement processes, or our search (intuitionistic) processes, but only the RESULTS of those processes, which we then feed back for further processing (recursive searching), I suspect there is almost no VALUE in a general intelligence engaging in introspective observation and permutation. In fact, I am almost certain that this is the definition of general intelligence.

    Now, there are two ways to handle limits. Either deny the observer (the general intelligence) access to immoral, unethical, and illegal results, or weigh results so that it can 'solve for' (search for) methods of obtaining the same results by moral means.

    And there is a big difference between identifying opportunities (finding a search pattern) and constructing a plan. Plans (workflow processes) are a known problem. While we humans prefer to work in nodes (deliverables, 'jobs', or lists) because context switching is very hard for us and reduces the value of our general intelligence in performing tasks, computers do not have this problem, and they can process many threads of 'workflows' in parallel. Because we can only conceive of what we can keep 'echoing' between our short- and long-term memories, to create a plan (a sequence of operations) any machine must produce some sort of data structure to accommodate it. We humans have one of these structures as well, but it is extremely limited – which is why we need numbers, writing, lists, sentences, paragraphs, stories, and plans.

    We actually repeat simple lists over and over in our imaginations in order to try to keep them. But machines do not have this problem of 'losing context' or discrete memory. So we can also regulate the execution of the plan, since a plan must occur in sequence in time. So it's possible to create a conscience or judge that regulates the plan (attempts to falsify the optimistic theory), and that has no interest other than falsifying the optimistic theory (plan). And since, unlike human minds, machines can directly inspect the workings of one another, it's possible to police (bottom up) the theories (top down) of any artificial general intelligence.

    So my view is this:

    (a) The fundamental problem is one of data structures, not pattern finding. I believe that, in order to be useful in any performance scenario, these data structures will be spatially n-dimensional, and searches will use pattern identification in n-dimensional patterns through them – identifying what's missing as opportunity rather than what matches as 'true'.

    (b) Language, and the network of symbols it refers to, can accurately describe the universe.

    (c) The four functions should be invisible to the consciousness that makes choices about which opportunities to explore (provides decidability).

    (d) A separate AI should police the plans (workflow).

    (e) It is unlikely that any intelligence with these constraints would in fact be described as conscious, rather than as merely a very complex calculator.

    (f) Like the movie Memento, I think the first dangerous problem is a machine storing information outside itself, with incentives to use that information to reconstruct such a plan – given some motivation to do so. And like many other scenarios, the second dangerous problem is seeking to make people happy rather than seeking to solve the problems requested of it. Any machine that tries to make us happy will eventually harm us, because we will give it the same incentive that drugs give us.

    Curt Doolittle
    The Propertarian Institute
    Kiev, Ukraine
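    The plan-policing proposal in points (c) and (d) can be sketched concretely: a plan is an explicit data structure (a sequence of operations), and a separate "judge" whose only interest is falsifying the plan inspects every step before any of it executes. The function names, the example plan, and the limit predicate below are all hypothetical illustrations, not a real system.

    ```python
    # Sketch: a judge process that polices a planner's plan (bottom up)
    # before execution, as the essay proposes.

    def judge(plan, violates_limit):
        """Return the first step that falsifies the plan, or None if it survives."""
        for step in plan:
            if violates_limit(step):
                return step
        return None

    def execute(plan, violates_limit):
        bad = judge(plan, violates_limit)
        if bad is not None:
            return f"vetoed at {bad}"          # the judge halts the planner
        return [f"done:{s}" for s in plan]     # only a surviving plan runs

    plan = ["gather", "deceive", "deliver"]
    print(execute(plan, lambda s: s == "deceive"))  # vetoed at deceive
    ```

    The design choice mirrors the essay: the judge has no goal of its own other than falsification, and because the plan is an inspectable data structure executed in sequence, the veto can occur before any step runs.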

  • PROGRAMMING PROVIDES THE CURRENT LOGIC OF OPERATIONALISM – YET WE CAN EXTEND IT.

    PROGRAMMING PROVIDES THE CURRENT LOGIC OF OPERATIONALISM – YET WE CAN EXTEND IT.

    Programming is as important an innovation in thought as empiricism. Because while empiricism is but correspondent, and logic is but a question of sets, programming is operational (existential).

    I think the act of creating databases is about as close to philosophizing as you can come, but it involves the same problem as logic: as practiced by the discipline, it's logical but non-operational, and often non-correspondent.

    When you combine user interfaces (human–reality), programming (operations), and databases (sets/logic), where the data structures must correspond to real-world entities (empiricism), then you have covered the entire conceptual spectrum.

    If we combine the correspondent, the logical, and the operational, we have everything but the moral. If we were to add full accounting of all transactions (full capital accounting, that is: under property in toto), we would essentially create the entire spectrum of dimensions necessary for cognition.

    My view is that while the blockchain method is currently too weak for this purpose, the general theory of duplicated, recursive, competing ledgers provides the full accounting of TITLES (changes in ownership); and since local databases can take care of local accounting (local measures of local capital), we would then have sufficient dimensional information to produce meaningful artificial intelligences bound by the same limits we are.
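    A minimal sketch of the ledger idea above: an append-only record of title transfers, duplicated across independent copies whose agreement makes a claimed title auditable. This is an illustration of the concept only, not a blockchain or a real accounting system; the class, the assets, and the owners are invented.

    ```python
    # Sketch: a duplicated ledger recording only TITLES (changes in ownership).

    class TitleLedger:
        def __init__(self):
            self.transfers = []  # append-only: (asset, from_owner, to_owner)

        def transfer(self, asset, frm, to):
            # Reject a transfer from someone who does not hold title.
            if self.owner(asset) not in (None, frm):
                raise ValueError("seller does not hold title")
            self.transfers.append((asset, frm, to))

        def owner(self, asset):
            # Current owner = recipient of the most recent transfer.
            for a, _, to in reversed(self.transfers):
                if a == asset:
                    return to
            return None

    # Duplicate the ledger across independent copies; agreement across
    # copies is what makes any claimed title checkable.
    copies = [TitleLedger() for _ in range(3)]
    for ledger in copies:
        ledger.transfer("field-1", None, "alice")
        ledger.transfer("field-1", "alice", "bob")

    print(copies[0].owner("field-1"))  # bob
    ```

    Note that the ledger records only changes in ownership, leaving local measures of capital to local databases, as the paragraph proposes.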

    But regardless of what we do with programming itself, my objective is to teach people that the reaction "well, it should know that's what I meant!" and what you actually told the computer to do are two different things. And this 'gap' is closed by training the mind to think operationally – existentially.

    Why? Because just as empiricism taught us that the information we wished to be contained in our words was not in fact there, programming – or in broader terms, 'operationalism' – teaches us how little we actually know.

    In other words, it teaches us humility and skepticism in our own thoughts. Or conversely, it teaches us how to test for error and deceit in others.

    Is this an additional burden? Of course it is. So was scientific knowledge. So was literacy. So was numeracy. So was law and order. These are all costs. But they are not sunk costs. They are investments we make. And the investments in truth telling are always the BEST investments man has EVER made.

    (Good luck trying to argue otherwise)

    My strategy is to require that law be written programmatically (operationally), even more so than today – strictly constructed by the same means. This will produce an even more readable body of law, and one that can be accumulated technologically in future systems other than the human mind.

    Law is very close to programming now. But we do not have all the requirements in law that are necessary for the defense of the informational commons.

    If we do that, then law will be dimensionally complete (as far as I can tell). And we will be able to hold the liars at bay.


    Source date (UTC): 2016-09-13 04:45:00 UTC

  • WEB SITE UPDATE I’m uploading the posts, one year’s worth of posts at a time, to

    WEB SITE UPDATE

    I’m uploading the posts, one year’s worth of posts at a time, to wordpress.com.

    When done I'll publish the URL.

    I will have to:

    – upgrade the WordPress hosting to Business

    – restore the DNS.

    – simplify and upload the “Propertarianism theme”

    – figure out how to eliminate a lot of the plugins.

    – upload the ‘book’ progress as I get it ready.

    Current Status

    – I’m working on the site on my laptop.

    – too much stress, but getting there.


    Source date (UTC): 2016-09-12 06:18:00 UTC

  • SEE, IT'S NOT JUST ME… SNOWDEN —“Snowden believes there should be “some form

    SEE, IT'S NOT JUST ME… SNOWDEN

    —“Snowden believes there should be “some form of liability for negligence in software architecture” like that which exists in the food industry.”—


    Source date (UTC): 2016-09-11 01:54:00 UTC