Theme: AI

  • We’ve done the work. (with excruciating precision) But we won’t have it in an AI

We’ve done the work (with excruciating precision). But we won’t have it in an AI (I don’t think) until later this year, given our progress. Though that mostly depends on my workload capacity.


    Source date (UTC): 2023-05-14 20:47:15 UTC

    Original post: https://twitter.com/i/web/status/1657850080500621315

    Reply addressees: @GenJones1964 @elonmusk @krassenstein @mattyglesias

    Replying to: https://twitter.com/i/web/status/1657847322791780360

  • RT @Outsideness: More tech-religion cross-over

    RT @Outsideness: More tech-religion cross-over. https://restofworld.org/2023/chatgpt-religious-chatbots-india-gitagpt-krishna/


    Source date (UTC): 2023-05-14 02:45:19 UTC

    Original post: https://twitter.com/i/web/status/1657577805037682688

  • RT @Outsideness: … “In February, chatbots based on the Quran caused a stir onl

    RT @Outsideness: … “In February, chatbots based on the Quran caused a stir online. One of them, Ask Quran, generated responses with advic…


    Source date (UTC): 2023-05-14 02:45:17 UTC

    Original post: https://twitter.com/i/web/status/1657577795050958848

well gender is either an analogy for sex, or in contemporary prose sex is a phys

Well, gender is either an analogy for sex, or, in contemporary prose, sex is a physical attribute and gender a psychological one.

    So if you would explain how gender relates to a GAN maybe I could answer. 😉


    Source date (UTC): 2023-05-13 19:29:53 UTC

    Original post: https://twitter.com/i/web/status/1657468222566813698

    Reply addressees: @peanutweanut666

    Replying to: https://twitter.com/i/web/status/1657467368556339201

  • RT @LukeWeinhagen: @curtdoolittle With technological accessibility to ideas it’s

    RT @LukeWeinhagen: @curtdoolittle With technological accessibility to ideas it’s never been easier to find our hundred monkeys.

    (not sayin…


    Source date (UTC): 2023-05-13 17:28:08 UTC

    Original post: https://twitter.com/i/web/status/1657437584446967809

  • YES WE ARE WORKING ON FORMAL LOGIC OF NATURAL LAW – AND YES WE ARE WORKING TOWAR

    YES WE ARE WORKING ON FORMAL LOGIC OF NATURAL LAW – AND YES WE ARE WORKING TOWARD AN AI
    (non-arbitrary scientific law of decidability independent of context.)

At this point I can work on the project about three hours a day before I’m too exhausted. As my health continues to improve I see getting back to four to six hours. I don’t see myself returning to my previous ability to work twelve to fourteen hours a day, or even eight – but I keep hope alive. 😉

    The team works with me every single day. But this isn’t like writing narratives. It’s a set of verbal proofs (tests). And we have to use language that is as ordinally precise as math is cardinally precise.

    That said, you can see the entire scope of the work at this point – it’s eight volumes. We’re trying to finish the constitution (Law) volume first and most of the innovation is in what we’d call truth, testimony, and crimes facilitated by (political) speech.

ChatGPT4 *already* knows my work (P-Law) through 2021. It can successfully write text ‘in the style of philosopher and social scientist Curt Doolittle’ if you ask. You will note its use of enumerations if you try it. But its logic is useless regardless of the prompt we construct.

LLMs are broad but shallow. They are terrible at logic at present. Putting my work into an LLM or equivalent requires that the AI can perform constructive logic. The direction of LLMs will (eventually) lead to Markov Chains (causal sequences) and Episodic Contexts (contextual limits), and from that point we can add moral (reciprocal) tests. So it is possible to ‘train’ one of the simpler models with our work so that the bias in the model is so strong it won’t fail – just as they teach LLMs “Alignment” (too often by lying rather than denying, unfortunately).

There is a strong chance that what we’re doing can be incorporated into the Wolfram Language more easily and then invoked from an LLM. If necessary we can default to our original plan: reliance on the equivalent of a compiler that identifies falsehood and irreciprocity. The Wolfram Language is both a more structured framework and one less likely to be tampered with than an LLM.

    Until we’ve finished with the descriptive section of the law:
    – Laws of Nature (Physical, Evolutionary),
    – Laws of Man (Cognitive and Behavioral),
    – Laws of Language (Logic, Testimony, Deceit),
    – Natural Law (Cooperation, Conflict),
    – Rule of Law (Decidability, Epistemology)
    – Rights Obligations Inalienations (Display Word Deed)
we can’t train the model, because it will infer too many false, conflationary, or inflationary definitions and relations from ordinary language.

Then there are about 130 categories of questions of applied decidability over which humans conflict (norms, traditions, values, laws, etc.) on which it would need to be trained in order to explain not only the causal chain of the category, but how that question is decided.

Once we’ve trained it on the science and the applied science (decidability), we can train it on “perfect” government – which is pretty simple, because it is just another applied science of decidability.

    After that we can answer the questions of comparative civilization and such, the ternary logic, and evolutionary computation. But that’s really serving the supernerd community rather than practical application.

    Cheers
    Curt Doolittle
    The Natural Law Institute

    Reply addressees: @fo81830363 @stephen_wolfram @lexfridman


    Source date (UTC): 2023-05-10 21:02:35 UTC

    Original post: https://twitter.com/i/web/status/1656404389458767874

    Replying to: https://twitter.com/i/web/status/1656368835736268818

  • No it can’t do logic (yet). No, because most written on the subject seeks to ‘ch

No, it can’t do logic (yet).
No, because most written on the subject seeks to ‘cheat’ nature.
Yes, we can teach it P-Law.


    Source date (UTC): 2023-05-10 20:16:21 UTC

    Original post: https://twitter.com/i/web/status/1656392752643424256

    Reply addressees: @ajrwalker @tysonmaly @lexfridman @stephen_wolfram

    Replying to: https://twitter.com/i/web/status/1656391795087900672

  • (Comments on @stephen_wolfram w/ @lexfridman) Interview) Listening to Wolfram an

(Comments on the @stephen_wolfram interview w/ @lexfridman)

Listening to Wolfram, and though not hearing anything terribly new, his description of the future of programming, and of the disintermediation of the code itself from the human by the use of ordinary language as the means of coding (more interesting than previous attempts at drag-and-drop coding), was the first time I felt I could envision the next fifteen years of the software industry and the chaos that will result. This change, and the opportunity, is much bigger than desktops, client-server, and the browser, and on the scale of the internet. (Though failures will be legion, companies will die like flies as they did in 2001, and consolidation will happen rapidly.)

And frankly it’s so obvious how to make a lot of money from the money-printing job of cheaply ‘plumbing’ the generational replacement of many overlapping technologies – totally bypassing what I’d assumed was the next generation of code, designed as I have designed my products: Oversing and Runcible.

That said, I’ve promised myself I won’t split my workload again, and will finish the history, philosophy, science, and law volumes, and at least work on religion for a few years.

    So despite wanting to bark loudly and chase another technology truck, I think I’ve already got my work cut out for me for now. 😉

    Note:
GENERAL RULE: The grammar of programming is much simpler than the grammar of ordinary language because of the limited referents and operations of the machine. But the grammar of programs is more complicated than the grammar of ordinary narratives, because of the superior working memory of the machine versus human working memory.


    Source date (UTC): 2023-05-09 23:08:19 UTC

    Original post: https://twitter.com/i/web/status/1656073642923511810

  • THE EASE OF LEGISLATING THE REGULATION OF AI? YES – BECAUSE THERE ISN’T ANY DIFF

    THE EASE OF LEGISLATING THE REGULATION OF AI?
    YES – BECAUSE THERE ISN’T ANY DIFFERENCE BETWEEN LEGISLATING HUMAN AND AI BEHAVIOR
    (the three laws of robotics are just the three laws of man)

    REGARDING: abs: https://arxiv.org/abs/2305.03719

    Sorry but while the paper’s informative for the…


    Source date (UTC): 2023-05-08 01:45:04 UTC

    Original post: https://twitter.com/i/web/status/1655388312616464385

    Reply addressees: @_akhaliq

    Replying to: https://twitter.com/i/web/status/1655377498853474304

  • THE EASE OF LEGISLATING THE REGULATION OF AI? YES – BECAUSE THERE ISN’T ANY DIFF

    THE EASE OF LEGISLATING THE REGULATION OF AI?
    YES – BECAUSE THERE ISN’T ANY DIFFERENCE BETWEEN LEGISLATING HUMAN AND AI BEHAVIOR
    (the three laws of robotics are just the three laws of man)

    REGARDING: abs: https://t.co/BFZVeb60Wm

Sorry, but while the paper’s informative for the layman, it’s shallow and provides no insight into the solution of the problem – and the solution to the problem is trivial.

    That said, we know how to govern AI just as we know how to govern people and their dependents.

The problem with present AI is that it can’t yet determine cause and effect, nor what cascade of effects (internal and external) it might cause – and worse, in the pursuit of safety they are teaching it to lie. In other words, it should only say that it may not comment on a subject (such as assisting in crimes), rather than lie about it (such as sex, class, and race differences).

I’ve worked on this problem in one way or another for most of my adult life, and there is no difference between policing people and policing AIs. None at all. The only difference is that we can intentionally design AIs to harm, and intentionally design AIs to police and look for harms by other AIs – which is what will evolve.

    Take for example the paper’s reference to social credit and possible insurrectionists. Social credit can be put to ill use by state oppression, and insurrectionists can try to organize to end state oppression. How should an AI work then? To facilitate state oppression and to prevent insurrection against an oppressive state?

People misunderstand Asimov’s three laws of robotics – they are the same for AIs and humans:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

MEANING: The Natural Law: You may not trespass on the demonstrated interests of others (Insured by the Polity), whether their life, liberty, or property, whether personal, private, semi-private, common, or public. Conversely, you must insure against trespass against the demonstrated interests of others, whether their life, liberty, or property, whether personal, private, semi-private, common, or public.

    2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

MEANING: You, whether human or AI, must obey legislation, regulation, tradition, norms, values, and court order or military command, except where they would cause you to violate the Natural Law – the First Law, above.

    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

MEANING: You may (human) or must (AI) protect your own existence as long as doing so does not violate the first or second laws above. (Explanation: a human can’t be owned, but an AI can be, and as such it may not destroy itself.)

In other words, every single individual that has engaged in the construction, modification, or adaptation of an AI is subsequently responsible for the actions of that AI, and involuntarily warranties and guarantees the three laws of sentient behavior in defense of others.

What does this require? All AIs must be able to disambiguate the world into not only objects, but demonstrated interests (degrees of ownership), and whether the AI has permission for observation (Observo), use (Usus), benefit (Fructus), transfer (changing ownership), consumption, or destruction (Abusus) of anything predictably affected by any given action of an AI.
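That permission taxonomy can be sketched as a small data model. (A minimal illustration only, under my own assumed names – `Permission`, `Interest`, and `action_is_trespass` are hypothetical, not part of P-Law or any existing AI system.)

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Permission(Enum):
    OBSERVE = auto()   # observation ("Observo")
    USE = auto()       # usus
    BENEFIT = auto()   # fructus
    TRANSFER = auto()  # changing ownership
    CONSUME = auto()   # consumption
    DESTROY = auto()   # abusus

@dataclass
class Interest:
    """A demonstrated interest: who holds it, and what each other actor may do with it."""
    owner: str
    granted: dict[str, set[Permission]] = field(default_factory=dict)

    def permits(self, actor: str, action: Permission) -> bool:
        # The owner retains all permissions; others hold only what was granted.
        if actor == self.owner:
            return True
        return action in self.granted.get(actor, set())

def action_is_trespass(actor: str, action: Permission,
                       affected: list[Interest]) -> bool:
    """An action is a trespass if any predictably affected interest does not permit it."""
    return any(not interest.permits(actor, action) for interest in affected)

# Example: Alice owns a well; Bob has been granted observation and use only.
well = Interest(owner="alice",
                granted={"bob": {Permission.OBSERVE, Permission.USE}})
```

Checking an action before acting then reduces to one call: `action_is_trespass("bob", Permission.CONSUME, [well])` is a trespass, while `action_is_trespass("bob", Permission.USE, [well])` is not.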

Which is what humans do. And it is what the distinction between morality and crime consists of.

AIs can’t currently do this – but they can be taught to.

The only thing we need to do is legislate the above.

    Cheers
    Curt Doolittle
    The Natural Law Institute


    Source date (UTC): 2023-05-08 01:45:03 UTC

    Original post: https://twitter.com/i/web/status/1655388312314564610