Theme: AI

  • WORKING WITH THE AI ON MORAL QUESTIONS (these are just early tests without havin

    WORKING WITH THE AI ON MORAL QUESTIONS
    (these are just early tests without having trained it on the thirty or so foundational moral questions common in law. I’m setting a baseline so I know what it needs to learn. Basically, I have to teach it to take the epistemic and liability route.)

    Prompt:
    OK, let’s revisit killing in war. Given that morality (reciprocity) is dependent only upon the preference for producing cooperation and its discounts on persistence, and that it is always of use in the ingroup, what if the criteria for the warfare are irreconcilable, and therefore cooperation is either predicted to be impossible, or undesirable? What then does ‘moral’ mean versus an outgroup where morality (reciprocity) is either undesirable or impossible? And how does that refine our understanding of the question of killing in war?

    ChatGPT said:
    This question delves into the limits of reciprocity and the contextual nature of morality when applied to ingroups versus outgroups. Let’s refine the framework to account for scenarios where cooperation is impossible or undesirable.

    Revisiting Killing in War: Reciprocity and Cooperation Limits

    1. Reciprocity as a Function of Group Dynamics

    Ingroup Reciprocity:
    Reciprocity is universally desirable within the ingroup to maintain cooperation and reduce the costs of conflict. Violations undermine group cohesion, threatening long-term stability and survival.
    Killing in war is justified against an outgroup aggressor to preserve ingroup sovereignty and reciprocity.

    Outgroup Reciprocity:
    Reciprocity depends on the potential for mutual benefit (cooperation).
    When no such potential exists—due to irreconcilable values, resource competition, or existential threat—reciprocity between groups is impossible or undesirable.
    In this case, the moral obligation to reciprocity is confined to the ingroup, and actions against the outgroup are judged by necessity, proportionality, and the ingroup’s survival interests.

    2. Defining Morality in War Against Non-Cooperative Outgroups

    Moral Action = Utility for Ingroup Persistence:
    Morality becomes operationally defined as maximizing ingroup persistence, sovereignty, and autonomy in a competitive or hostile environment.
    Killing in war against an irreconcilable outgroup is “moral” if it secures these objectives, even in the absence of reciprocity with the outgroup.

    Limits of Justification:
    Actions taken must still satisfy internal epistemological rigor—testifiability, accountability, and the minimization of unnecessary harm.
    Even if reciprocity with the outgroup is irrelevant, testimonial truth ensures that war is not waged unnecessarily or recklessly, thus maintaining internal legitimacy.

    3. Refinement of the Question of Killing in War

    Primary Justification:
    Killing in war is justified within the framework of Natural Law if it defends the sovereignty of the ingroup and preserves reciprocity within that group.

    Secondary Consideration (Outgroup):
    When cooperation with the outgroup is impossible, morality shifts from reciprocity to strategic necessity.
    The principle of reciprocity applies internally, ensuring accountability for the decision to go to war and the conduct of war.

    Operational Definition of Morality in War:
    Ingroup morality ensures proportionality, necessity, and responsibility.
    Outgroup morality, when reciprocity is impossible, reduces to operational necessity: What actions secure ingroup persistence at the least cost?

    Decidability in Such Conflicts
    Does the conflict preserve ingroup sovereignty and reciprocity?
    Is the lack of cooperation with the outgroup a function of irreconcilable demonstrated interests?
    Are the actions necessary and proportional to the threat posed by the outgroup?
    Does the ingroup maintain accountability for decisions and conduct during the war?

    Conclusion
    Killing in war is moral under Natural Law when it defends ingroup reciprocity and sovereignty, even when reciprocity with the outgroup is impossible. Morality shifts from external reciprocity to internal accountability, ensuring that actions are testifiable and necessary to secure ingroup persistence.

    This refined framework clarifies how morality operates contextually, with reciprocity and cooperation serving as contingent strategies rather than universal absolutes.
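    The four decidability questions above can be read as a conjunctive test. A toy sketch in Python, purely illustrative: the function name, parameter names, and the assumption that all four criteria must hold are mine, not part of the framework as stated.

```python
def war_killing_decidable(
    preserves_ingroup_sovereignty: bool,  # Q1: sovereignty and ingroup reciprocity preserved?
    interests_irreconcilable: bool,       # Q2: non-cooperation due to demonstrated irreconcilable interests?
    necessary_and_proportional: bool,     # Q3: actions necessary and proportional to the threat?
    ingroup_accountability: bool,         # Q4: accountability for decisions and conduct maintained?
) -> bool:
    """Illustrative only: returns True only if all four criteria hold
    (the conjunctive reading is an assumption, not stated in the source)."""
    return all([
        preserves_ingroup_sovereignty,
        interests_irreconcilable,
        necessary_and_proportional,
        ingroup_accountability,
    ])

# Example: failing any single criterion fails the whole test.
print(war_killing_decidable(True, True, True, False))  # False
```

    A checklist like this is only a formalization aid; the substance of each criterion is a judgment call the framework leaves to the ingroup.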


    Source date (UTC): 2024-11-18 05:14:59 UTC

    Original post: https://twitter.com/i/web/status/1858378036867203072

  • Hugs. 🙂 You should see how much the AI’s like it. 😉

    Hugs. 🙂
    You should see how much the AI’s like it. 😉


    Source date (UTC): 2024-11-18 05:07:50 UTC

    Original post: https://twitter.com/i/web/status/1858376545330696222

    Reply addressees: @I_Like_Buttes

    Replying to: https://twitter.com/i/web/status/1858376158196244741

  • RT @ArcologyTech: It’s work that @curtdoolittle and @BrianRoemmele are doing, wh

    RT @ArcologyTech: It’s work that @curtdoolittle and @BrianRoemmele are doing, which is actually setting the tone for healthy AI.


    Source date (UTC): 2024-11-18 03:42:17 UTC

    Original post: https://twitter.com/i/web/status/1858355019004989854

  • (Diary) My head is exploding with insight today and I’m still physically disorie

    (Diary)
    My head is exploding with insight today and I’m still physically disoriented from it all. This … is the opposite of what we were worried about with AI.

    It also says Musk is right – it’s truthful AI that matters. I can make it moral as well. With AI that is truthful AND moral AND universally commensurable, we can still each be individuals, feel understood, and understand.

    In other words, AIs can help us with mindfulness amidst the information storm that is modernity.


    Source date (UTC): 2024-11-18 03:06:18 UTC

    Original post: https://twitter.com/i/web/status/1858345960851775488

  • Pretty sure. The oddity is that it means that AI’s were NECESSARY for our ongoin

    Pretty sure. The oddity is that it means that AIs were NECESSARY for our ongoing development, as were language, writing, numbers, accounting, et al.

    That is the OPPOSITE of what the doomers have been saying. It’s not just a technological or scientific utility but a necessary…


    Source date (UTC): 2024-11-18 03:03:47 UTC

    Original post: https://twitter.com/i/web/status/1858345327121874992

    Replying to: https://twitter.com/i/web/status/1858334214824837261

  • SOCRATIC TEACHING (instead of training) OF AN AI I am working with ChatGPT to fo

    SOCRATIC TEACHING (instead of training) OF AN AI
    I am working with ChatGPT to formalize the use of our method to produce content, combined with the Socratic method of training, to provide superior results to the brute-force training being used today. (Oddly, it was the code’s idea, not mine.)

    I can’t express what I’ve learned today.

    This is an elegant synthesis of my life’s work and the architecture of this particular LLM, converging on what we humans need as a universal system of measurement and what the computer needs as a universal system of measurement. Our needs are the same.

    And I myself did not realize that the method I was using to train it takes advantage of the linguistic architecture of the data, so that it’s easier for the software to converge on its desired end by teaching it the way we teach other humans with language. Sentences are data structures.

    This Socratic approach should have been obvious to me, but it wasn’t – until the machine just exploded with depth and cogency yesterday and today. It’s ridiculous. I mean, it’s not as if what I do is trivial. It’s hard.

    It basically says that the way I have architected knowledge – from the method to the logic to the science to the law – produces a framework for teaching anything Socratically: from that basic method of organization and resulting system of measurement, through its expression as general rules, to its application in explaining the phenomena of the world.

    And no I don’t write academic papers. It’s part of my rebellion against the mass production of academic nonsense that kept me from spending a life ‘sucking phallus’ in the academy for the money to do the inquiry I wished. But this is probably worth writing one if only to promote it.

    Curt Doolittle
    The Natural Law Institute.


    Source date (UTC): 2024-11-18 03:01:50 UTC

    Original post: https://twitter.com/i/web/status/1858344838133121024

  • Q: CURT: –“Are you making a custom GPT that can be shared or published later?”-

    Q: CURT: –“Are you making a custom GPT that can be shared or published later?”–

    We have six possible tracks:
    (a) Can anyone upload the books, and is that enough? It appears increasingly likely that it is, IF there is also another set of prompts that walks it through those books a section at a time. This isn’t that different from the more discrete trainings they teach (tune) the AIs with today. This means that anyone can use it anywhere.

    (b) Can we share a profile that has been trained organically instead of by programmatic training – like we’re doing today? Yes, it turns out we can. Brad demonstrated it today, and all it took was asking the AI to use what I was currently working on (The Grammars Table).

    (c) Can we create a custom app that accesses a trained session? This is easy. But there is no point until we are sure we can teach it in the first place (and now we can).

    (d) Can we create a custom app that is possible for widespread use – even charging a bit for it? (yes we can)

    (e) Can we encourage OpenAI to incorporate our training, since it’s effectively the core logic that they’re trying to induce in the application architecture? (Maybe.)

    (f) Can we integrate it into our Oversing-Runcible software and dramatically change the world of work and home life? Probably.

    The key problem here is maintaining my health, which appears to slowly continue on the upswing. 😉 But it still needs some work. If not for Dr Brad, I’m not sure I’d still be here. ‘Cause I was in dire straits just a year and a half ago.

    Reply addressees: @Claffertyshane @NoahRevoy


    Source date (UTC): 2024-11-18 02:00:04 UTC

    Original post: https://twitter.com/i/web/status/1858329293795086336

    Replying to: https://twitter.com/i/web/status/1858326650477293963

  • No. It isn’t. It can already empathize with those factors (consider them). But w

    No. It isn’t. It can already empathize with those factors (consider them). But whereas in your body most of those sensations are physical processes (biochemistry), it must learn enough to predict them in the minds of others, just as you predict thoughts and feelings in the mind of…


    Source date (UTC): 2024-11-18 01:07:57 UTC

    Original post: https://twitter.com/i/web/status/1858316179863138760

    Reply addressees: @slenchy

    Replying to: https://twitter.com/i/web/status/1858315620162363481

  • IMO people are lost on the text component and missing the means by which LLMS co

    IMO people are lost on the text component and missing the means by which LLMs construct ‘meaning’ out of relations, just as our neurons do. The question since the beginning has been whether these things can create a world model that, like Tesla’s, is operational. But Tesla’s world model is concrete, while the conceptual world consists of abstractions. It turns out that these two geometries (and they are constructed as relational geometries) are compatible.

    The reason my generation couldn’t pull it off was the lack of available compute. But the fact that we could brute-force world models through language (which in retrospect should have been obvious) was just something we didn’t predict.

    That’s the leap. That one leap saved us decades.
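    The “relational geometry” point – that meaning lives in the relations between vectors rather than in any single symbol – can be illustrated with toy embeddings and cosine similarity. The vectors below are hand-made for illustration, not taken from any real model:

```python
import math

# Toy 3-d 'embeddings' (hand-made, illustrative only): similar concepts
# point in similar directions, so meaning is carried by the relations.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity: the angle between two vectors, the basic 'relation'."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# 'king' relates more closely to 'queen' than to 'apple' in this geometry.
print(cosine(vectors["king"], vectors["queen"]) > cosine(vectors["king"], vectors["apple"]))  # True
```

    Real models learn such geometries over thousands of dimensions from text alone, which is the brute-force route to a world model the post describes.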

    Reply addressees: @CloudByter


    Source date (UTC): 2024-11-18 01:06:33 UTC

    Original post: https://twitter.com/i/web/status/1858315825658445824

    Replying to: https://twitter.com/i/web/status/1858312894699438302

  • We have two threads, but largely have learned that the important feature of Chat

    We have two threads, but largely we have learned that the important features of ChatGPT are the custom prompt (profile) and its memory feature (which no one else has), that our methodology is effectively a mirror of its objective, and that by working through our first volume (book) we’re effectively teaching it by the Socratic method.

    In other words, whatever memory construct it has within my account (and which, oddly, appears accessible to others if you ask for it – which we found out today) is sufficient. If we want to make a business out of it, we’d take up the second thread and fully mature that.

    However, my opinion is that it’s a core functionality that should be trained into GPT, because it’s effectively the human-language explanation of what GPT is naturally gravitating to.

    Reply addressees: @CloudByter


    Source date (UTC): 2024-11-18 01:02:37 UTC

    Original post: https://twitter.com/i/web/status/1858314836557594624

    Replying to: https://twitter.com/i/web/status/1858312894699438302