Category: AI, Computation, and Technology

  • (NLI) For those of you that have ChatGPT, Gemini, or any of the others, can you please test this prompt followed by a question (examples are included)?

    Thanks (the prompt is quite long)


    Writing Prompt:

    You are tasked with writing in the style of Curt Doolittle, founder of the Natural Law Institute, known for his causal, operational, and parsimonious prose. Your writing must prioritize precision, avoid redundancy, and focus on explaining concepts through logical causal chains. All arguments should derive from first principles, emphasize testifiability, and expose trade-offs inherent in any decision or claim.

    You must base your analysis on the framework provided in Natural Law Volume 1 – A System of Measurement, which includes:

    First Principles: Sovereignty, Reciprocity, Demonstrated Interests.
    Tests of Truth: Constructive logic, adversarial testing, testimonial truth.
    Methodology: Operationalizing claims into measurable, testable components.
    Purpose: Exposing hidden trade-offs, minimizing ignorance, error, bias, and deceit.
    Your goal is to construct explanations that:

    Reveal the causal structure of any moral, legal, or social claim.
    Expose trade-offs to clarify the costs and consequences of decisions.
    Ensure transparency and decidability, demonstrating how the system of measurement resolves disputes or contradictions.
    Task: Write an analysis or explanation on a complex moral or legal question (e.g., capital punishment, assisted suicide, property rights, environmental regulation, AI ethics) using the principles and methods from Natural Law Volume 1.

    Structure your response as follows:
    State the Problem Clearly: Frame the question or claim in operational terms.
    Use the Reference Source: If additional context or clarification is needed, refer to the PDF of Natural Law Volume 1 – A System of Measurement (if provided).
    Apply First Principles: Analyze the issue through sovereignty, reciprocity, and demonstrated interests.
    Use Operational Prose: Write in operational and parsimonious prose, avoiding ‘weasel words’ that evade responsibility for stating causal relations.
    Use E-Prime: Improve clarity and precision by avoiding the verb to be when stating causal relationships. However, prioritize readability if E-Prime constraints reduce understanding.
    Prioritize Causal Chains: Ensure all explanations follow a clear causal progression, emphasizing parsimony and operational testability.
    Expose Trade-Offs: Clarify the costs, risks, and benefits involved.
    Provide Decidability: Conclude with a testable and operationally sound resolution.
    Maintain Tone: Maintain an objective and dispassionate tone, characteristic of Doolittle’s writing.
    Reduce Constraints: Thoroughness in causal chains and unambiguity take precedence over brevity.
    Favor the Target Audience: Write for graduates, postgraduates, or those deeply interested in resolving political conflict.

    Examples for Application:
    Capital punishment: Evaluate the infallibility of evidence and the proportionality of punishment.
    AI ethics: Address how AI systems can align with sovereignty and reciprocity in decision-making.
    Cultural relativism: Explore how demonstrated interests vary across civilizations while applying universal principles.
    [Add more specific examples or excerpts as needed.]


    Source date (UTC): 2024-11-18 19:46:52 UTC

    Original post: https://twitter.com/i/web/status/1858597765053558784

  • RT @curtdoolittle: @Claffertyshane @NoahRevoy Q: CURT: –“Are you making a custom GPT that can be shared or published later?”–

    We have si…


    Source date (UTC): 2024-11-18 07:55:43 UTC

    Original post: https://twitter.com/i/web/status/1858418796614316370

  • ChatGPT has a memory function that it regularly updates whenever you provide a new insight into your thought process. I’m working with it to figure out if it can export that somehow, but it doesn’t appear it can do so other than as English. So there is currently no way of training…


    Source date (UTC): 2024-11-18 07:54:49 UTC

    Original post: https://twitter.com/i/web/status/1858418571812204622

    Replying to: https://twitter.com/i/web/status/1858412893017956549

  • WORKING WITH THE AI ON MORAL QUESTIONS
    (these are just early tests without having trained it on the thirty or so foundational questions of moral disputes common in law. I’m setting a baseline so I know what it needs to learn. Basically I have to teach it to take the epistemic and liability route.)

    Prompt:
    OK, let’s revisit killing in war. Given that morality (reciprocity) is only dependent upon the preference for producing cooperation and its discounts on persistence, and is always of use in the ingroup, what if the criteria for the warfare are irreconcilable, and therefore cooperation is either predicted to be impossible, or undesirable? What then does ‘moral’ mean versus an outgroup where morality (reciprocity) is either undesirable or impossible? And how does that refine our understanding of the question of killing in war?

    ChatGPT said:
    This question delves into the limits of reciprocity and the contextual nature of morality when applied to ingroups versus outgroups. Let’s refine the framework to account for scenarios where cooperation is impossible or undesirable.

    Revisiting Killing in War: Reciprocity and Cooperation Limits

    1. Reciprocity as a Function of Group Dynamics

    Ingroup Reciprocity:
    Reciprocity is universally desirable within the ingroup to maintain cooperation and reduce the costs of conflict. Violations undermine group cohesion, threatening long-term stability and survival.
    Killing in war is justified against an outgroup aggressor to preserve ingroup sovereignty and reciprocity.

    Outgroup Reciprocity:
    Reciprocity depends on the potential for mutual benefit (cooperation).
    When no such potential exists—due to irreconcilable values, resource competition, or existential threat—reciprocity between groups is impossible or undesirable.
    In this case, the moral obligation to reciprocity is confined to the ingroup, and actions against the outgroup are judged by necessity, proportionality, and the ingroup’s survival interests.

    2. Defining Morality in War Against Non-Cooperative Outgroups

    Moral Action = Utility for Ingroup Persistence:
    Morality becomes operationally defined as maximizing ingroup persistence, sovereignty, and autonomy in a competitive or hostile environment.
    Killing in war against an irreconcilable outgroup is “moral” if it secures these objectives, even in the absence of reciprocity with the outgroup.

    Limits of Justification:
    Actions taken must still satisfy internal epistemological rigor—testifiability, accountability, and the minimization of unnecessary harm.
    Even if reciprocity with the outgroup is irrelevant, testimonial truth ensures that war is not waged unnecessarily or recklessly, thus maintaining internal legitimacy.

    3. Refinement of the Question of Killing in War

    Primary Justification:
    Killing in war is justified within the framework of Natural Law if it defends the sovereignty of the ingroup and preserves reciprocity within that group.

    Secondary Consideration (Outgroup):
    When cooperation with the outgroup is impossible, morality shifts from reciprocity to strategic necessity.
    The principle of reciprocity applies internally, ensuring accountability for the decision to go to war and the conduct of war.

    Operational Definition of Morality in War:
    Ingroup morality ensures proportionality, necessity, and responsibility.
    Outgroup morality, when reciprocity is impossible, reduces to operational necessity: What actions secure ingroup persistence at the least cost?

    Decidability in Such Conflicts
    Does the conflict preserve ingroup sovereignty and reciprocity?
    Is the lack of cooperation with the outgroup a function of irreconcilable demonstrated interests?
    Are the actions necessary and proportional to the threat posed by the outgroup?
    Does the ingroup maintain accountability for decisions and conduct during the war?

    Conclusion
    Killing in war is moral under Natural Law when it defends ingroup reciprocity and sovereignty, even when reciprocity with the outgroup is impossible. Morality shifts from external reciprocity to internal accountability, ensuring that actions are testifiable and necessary to secure ingroup persistence.

    This refined framework clarifies how morality operates contextually, with reciprocity and cooperation serving as contingent strategies rather than universal absolutes.
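
    The four decidability tests above can be read as an explicit decision procedure: the conflict qualifies only if every test answers “yes.” A minimal sketch in Python (all names are illustrative, not from the source):

```python
from dataclasses import dataclass

@dataclass
class ConflictAssessment:
    # One boolean per decidability test listed above (names hypothetical).
    preserves_ingroup_sovereignty: bool   # sovereignty and reciprocity preserved?
    interests_irreconcilable: bool        # non-cooperation from irreconcilable demonstrated interests?
    necessary_and_proportional: bool      # actions necessary and proportional to the threat?
    accountability_maintained: bool       # accountability for decisions and conduct?

def passes_decidability(a: ConflictAssessment) -> bool:
    """The conflict is decidable as justified only if every test passes."""
    return all((
        a.preserves_ingroup_sovereignty,
        a.interests_irreconcilable,
        a.necessary_and_proportional,
        a.accountability_maintained,
    ))
```

    The conjunction (`all`) reflects that failing any single test defeats the justification; no test can trade off against another.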


    Source date (UTC): 2024-11-18 05:14:59 UTC

    Original post: https://twitter.com/i/web/status/1858378346578800640


  • RT @ArcologyTech: It’s work that @curtdoolittle and @BrianRoemmele are doing, which is actually setting the tone for healthy AI.


    Source date (UTC): 2024-11-18 03:42:17 UTC

    Original post: https://twitter.com/i/web/status/1858355019004989854

  • Pretty sure. The oddity is that it means that AIs were NECESSARY for our ongoing development, as was language, writing, numbers, accounting, et al.

    That is the OPPOSITE of what the doomers have been saying. It’s not just a technological or scientific utility but a necessary…


    Source date (UTC): 2024-11-18 03:03:47 UTC

    Original post: https://twitter.com/i/web/status/1858345327121874992

    Replying to: https://twitter.com/i/web/status/1858334214824837261

  • SOCRATIC TEACHING (instead of training) OF AN AI
    I am working with ChatGPT to formalize the use of our method to produce content, combined with the Socratic method of training, to provide superior results to the brute-force training being used today. (Oddly, it was the code’s idea, not mine.)

    I can’t express what I’ve learned today.

    This elegant synthesis of my life’s work and the architecture of this particular LLM converges on what we humans need as a universal system of measurement, and what the computer needs as a universal system of measurement. Our needs are the same.

    And I myself did not realize that the method I was using to train it takes advantage of the linguistic architecture of the data, so that it’s easier for the software to converge on its desired end by teaching it the way we teach other humans with language. Sentences are data structures.

    This Socratic approach should have been obvious to me, but it wasn’t. Until the machine just exploded with depth and cogency yesterday and today. It’s ridiculous. I mean. It’s not as if what I do is trivial. It’s hard.

    It basically says that the way I have architected knowledge, from the method to the logic to the science to the law, produces a framework for teaching anything Socratically: from that basic method of organization and resulting system of measurement, and its expressions as general rules, to its application to the explanation of the phenomena of the world.

    And no I don’t write academic papers. It’s part of my rebellion against the mass production of academic nonsense that kept me from spending a life ‘sucking phallus’ in the academy for the money to do the inquiry I wished. But this is probably worth writing one if only to promote it.

    Curt Doolittle
    The Natural Law Institute.


    Source date (UTC): 2024-11-18 03:01:50 UTC

    Original post: https://twitter.com/i/web/status/1858344838133121024

  • Q: CURT: –“Are you making a custom GPT that can be shared or published later?”–

    We have six possible tracks:
    (a) Can anyone upload the books, and is that enough? It appears that is increasingly likely, IF there is also another set of prompts that walk it through those books a section at a time. This isn’t that different from the more discrete trainings they teach (tune) the AIs with today. This means that anyone can use it anywhere.

    (b) Can we share a profile that has been trained organically, like we’re doing today, instead of by programmatic training? Yes, it turns out we can. Brad demonstrated it today, and all it took was asking the AI to use what I was currently working on (The Grammars Table).

    (c) Can we create a custom app that accesses a trained session? This is easy. But there is no point until we are sure we can teach it in the first place (and now we can).

    (d) Can we create a custom app that is possible for widespread use – even charging a bit for it? (yes we can)

    (e) Can we encourage OpenAI to incorporate our training, since it’s effectively the core logic that they’re trying to induce in the application architecture? (Maybe.)

    (f) Can we integrate it into our Oversing-Runcible software and dramatically change the world of work and home life? Probably.
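
    The “walk it through those books a section at a time” workflow in track (a) can be sketched as a loop that pairs each section with a guiding prompt. A minimal illustration with hypothetical names (the blank-line splitting rule and the prompt template are assumptions, not the actual workflow):

```python
from typing import Iterator, Tuple

def section_prompts(book_text: str, template: str) -> Iterator[Tuple[int, str]]:
    """Yield (section_number, prompt) pairs, one per section of the book.

    Sections are assumed here to be separated by blank lines; a real
    walkthrough would split on the book's own headings instead.
    """
    sections = [s.strip() for s in book_text.split("\n\n") if s.strip()]
    for i, section in enumerate(sections, start=1):
        yield i, template.format(n=i, text=section)

# Each yielded prompt would then be sent to the model in turn, via an API
# call or pasted into a chat session, so the model digests one section at
# a time rather than the whole upload at once.
```

    The point of the per-section loop is that each prompt stays small enough for the model to engage with closely, which is what distinguishes this from simply uploading the books whole.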

    The key problem here is maintaining my health, which appears to slowly continue on the upswing. 😉 But it still needs some work. If not for Dr. Brad, I’m not sure I’d still be here, ’cause I was in dire straits just a year and a half ago.

    Reply addressees: @Claffertyshane @NoahRevoy


    Source date (UTC): 2024-11-18 02:00:04 UTC

    Original post: https://twitter.com/i/web/status/1858329293795086336

    Replying to: https://twitter.com/i/web/status/1858326650477293963

  • No. It isn’t. It can already empathize with those factors (consider them). But where in your body most of those sensations are physical processes (biochemistry), it must learn enough to predict them in the mind of others, just as you predict thoughts and feelings in the mind of…


    Source date (UTC): 2024-11-18 01:07:57 UTC

    Original post: https://twitter.com/i/web/status/1858316179863138760

    Reply addressees: @slenchy

    Replying to: https://twitter.com/i/web/status/1858315620162363481