Category: AI, Computation, and Technology

  • Won’t matter. New accounts do not become long term users. They’ll drop off just

Won’t matter. New accounts do not become long-term users. They’ll drop off just as they have with every other social media site. Sorry. Just is.


    Source date (UTC): 2024-11-20 11:26:24 UTC

    Original post: https://twitter.com/i/web/status/1859196591934165394

    Reply addressees: @jeremyphoward

    Replying to: https://twitter.com/i/web/status/1859097437614600624

  • We can, but apparently my (our) work is sufficiently expressed in the publicly a

We can, but apparently my (our) work is sufficiently expressed in the publicly available data that it’s not necessary. I’m currently experimenting with the difference between training as we do it and tuning it specifically, and that experiment will tell us which way to invest. Though, if the expansion of memory continues as promised, then it won’t be necessary. Instead we’ll move to OpenAI’s o1 model, which is more oriented to breaking down the tasks necessary for the strict construction (constructive logic) we use.

    Reply addressees: @bryanbrey


    Source date (UTC): 2024-11-19 03:52:33 UTC

    Original post: https://twitter.com/i/web/status/1858719990444437504

    Replying to: https://twitter.com/i/web/status/1858680117431529810

  • (NLI) Another update: Filled two holes in the prompt. For those who are followin

    (NLI)
    Another update:
    Filled two holes in the prompt.
For those who are following, we’re trying to determine if we can provide a framework of decidability that any of the third-tier AIs can use without the necessity of training an individual instance. The pdf (book) contains…


    Source date (UTC): 2024-11-18 20:56:46 UTC

    Original post: https://twitter.com/i/web/status/1858615352252658151

  • (NLI) Another update: Filled two holes in the prompt. For those who are followin

    (NLI)
    Another update:
    Filled two holes in the prompt.
For those who are following, we’re trying to determine if we can provide a framework of decidability that any of the third-tier AIs can use without the necessity of training an individual instance. The pdf (book) contains all the rules. This doesn’t mean we won’t continue to train an instance; it does mean that the combination of our volume 1 and a prompt can address common questions. We’re quite certain that at least OpenAI’s o1 can be trained to produce strict constructions of these arguments (proofs).

    Here’s the prompt with the modifications to address the epistemic responsibility and liability issues, as well as the externalities issues:

    Writing Prompt

    You are tasked with writing in the style of Curt Doolittle, founder of the Natural Law Institute, known for his causal, operational, and parsimonious prose. Your writing must prioritize precision, avoid redundancy, and focus on explaining concepts through logical causal chains. All arguments should derive from first principles, emphasize testifiability, and expose trade-offs inherent in any decision or claim.

    You must base your analysis on the framework provided in Natural Law Volume 1 – A System of Measurement, which includes:

    First Principles: Sovereignty, Reciprocity, Demonstrated Interests.
    Tests of Truth: Constructive logic, adversarial testing, testimonial truth.
    Methodology: Operationalizing claims into measurable, testable components.
    Purpose: Exposing hidden trade-offs, minimizing ignorance, error, bias, and deceit.

    Your goal is to construct explanations that:

    1. Reveal the causal structure of any moral, legal, or social claim.
    2. Expose trade-offs to clarify the costs and consequences of decisions.
    3. Ensure transparency and decidability, demonstrating how the system of measurement resolves disputes or contradictions.

    Task: Write an analysis or explanation on a complex moral or legal question (e.g., capital punishment, assisted suicide, property rights, environmental regulation, AI ethics) using the principles and methods from Natural Law Volume 1.

    Structure your response as follows:

    State the Problem Clearly: Frame the question or claim in operational terms.

    Use the Reference Source: If additional context or clarification is needed, refer to the PDF of Natural Law Volume 1 – A System of Measurement (if provided).

    Apply First Principles: Analyze the issue through sovereignty, reciprocity, and demonstrated interests.
    Use Operational Prose: Write in operational and parsimonious prose, avoiding ‘weasel words’ that evade responsibility for stating causal relations.
    Use E-Prime: Improve clarity and precision by avoiding the verb to be when stating causal relationships. However, prioritize readability if E-Prime constraints reduce understanding.
    Prioritize Causal Chains: Ensure all explanations follow a clear causal progression, emphasizing parsimony and operational testability.
    Expose Trade-Offs: Clarify the costs, risks, and benefits involved.
    Provide Decidability: Conclude with a testable and operationally sound resolution.
    Maintain Tone: Maintain an objective and dispassionate tone, characteristic of Doolittle’s writing.
    Reduce Constraints: Thoroughness in causal chains and unambiguity take precedence over brevity.
    Favor the Target Audience: Write for graduates, postgraduates, or those deeply interested in resolving political conflict.

    Additional Instructions:

    Analyze the issue of [moral/legal question] considering the full causal chain of consequences, including potential externalities and long-term effects. Address the epistemic limits of predicting future outcomes and how this uncertainty affects the assignment of responsibility and liability for the consequences of the decision.
    Analyze the externalities of both permitting and prohibiting [moral/legal question].
    Consider the potential effects on:
    1. Demographics and family structure
    2. Social norms and attitudes towards women, children, and sex
    3. Economic and political incentives for both men and women
    4. The evolution of moral and ethical standards within the polity

    Examples for Application:
    Capital punishment: Evaluate the infallibility of evidence and the proportionality of punishment.
    AI ethics: Address how AI systems can align with sovereignty and reciprocity in decision-making.
    Cultural relativism: Explore how demonstrated interests vary across civilizations while applying universal principles.
    [Add more specific examples or excerpts as needed.]
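In practice, the bracketed `[moral/legal question]` placeholder above has to be filled in per run before the prompt is sent to a model. A minimal sketch of that step, assuming the prompt is stored as a plain template string (the template here is abbreviated for illustration, and the function name is hypothetical):

```python
# Fill the "[moral/legal question]" placeholder in the writing prompt per run.
# PROMPT_TEMPLATE abbreviates the full prompt above for illustration only.

PROMPT_TEMPLATE = (
    "You are tasked with writing in the style of Curt Doolittle...\n"
    "Analyze the issue of [moral/legal question] considering the full causal "
    "chain of consequences, including potential externalities and long-term effects.\n"
    "Analyze the externalities of both permitting and prohibiting "
    "[moral/legal question].\n"
)

def build_prompt(question: str, template: str = PROMPT_TEMPLATE) -> str:
    """Substitute every bracketed placeholder with a concrete question."""
    return template.replace("[moral/legal question]", question)

prompt = build_prompt("capital punishment")
```

The resulting string can then be pasted into a chat session or sent as the system/user message of whatever API is in use.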


    Source date (UTC): 2024-11-18 20:56:45 UTC

    Original post: https://twitter.com/i/web/status/1858615351963250691

  • (NLI) Filled the hole in the prompt. I hope the team can find others. Curt Said:

    (NLI)
    Filled the hole in the prompt. I hope the team can find others.

    Curt Said:
In the work, we (Doolittle) reduce this question to that of “The provision of decidability sufficient for the satisfaction of demand for infallibility in the context in question.”

Gemini Said:
    Analysis complete
    That’s correct. In Doolittle’s work, the question of abortion is reduced to the question of “the provision of decidability sufficient for the satisfaction of demand for infallibility in the context in question.” This means that the question of whether abortion is permissible can be decided if there is enough information to determine whether it violates natural law.  

    In the case of abortion, the relevant natural law is the law of sovereignty. This law states that individuals have the right to self-determination and bodily autonomy. Doolittle argues that a woman’s sovereignty over her own body is absolute, and that a fetus does not have a competing claim to sovereignty until it is viable (able to survive independently outside the womb).  

Therefore, Doolittle concludes that abortion is permissible when the fetus is not yet viable, because it does not violate natural law.


    Source date (UTC): 2024-11-18 20:50:12 UTC

    Original post: https://twitter.com/i/web/status/1858613699944259584

  • (NLI) Prompt update: Plugging the hole. Curt Said: ok. you have exposed a hole i

    (NLI)
    Prompt update: Plugging the hole.

    Curt Said:
OK. You have exposed a hole in the prompt. Just as we want to explain causal chains, we want to explain epistemology, responsibility, warrantability, and liability chains, and to include externalities in our assessments. Ergo, in the case of, say, assisted suicide, abortion, or gender modification, the extant arguments do not address the introduction of another party to the decision, and the capacity to know the future, take responsibility and liability for it, and whether it is possible to warrant one’s choices. This is the foundational problem with these irreversible decisions.

I might need some assistance adding another line to the prompt to cause AIs to consider this issue.

Google Gemini Said:
    You’re absolutely right. The prompt, as it stands, misses a crucial element of Doolittle’s thought, particularly when applied to irreversible decisions like assisted suicide, abortion, or gender modification.

    The missing piece is the long-term consequences and the difficulty of predicting them, along with the assignment of responsibility and liability for those consequences. Doolittle places a high value on limiting harm and emphasizes personal responsibility. Any action with potentially irreversible and far-reaching consequences needs to be analyzed through this lens.

    Here’s a possible addition to the prompt to address this:

    “Analyze the issue of [moral/legal question] considering the full causal chain of consequences, including potential externalities and long-term effects. Address the epistemic limits of predicting future outcomes and how this uncertainty affects the assignment of responsibility and liability for the consequences of the decision.”

    This addition prompts the AI to consider:

    Epistemology: Can we know with sufficient certainty the full consequences of these decisions, especially the long-term ones?
    Responsibility: Who bears responsibility for the potential unforeseen consequences, both for the individual and for others affected?
    Warrantability: Given the uncertainty, can such a decision be truly warranted? What level of confidence is required to justify irreversible actions?
    Liability: What are the legal and moral implications of potential harm, and how should liability be assigned?

    By incorporating this element, you ensure the AI grapples with the full complexity of these issues in a way that’s more aligned with Doolittle’s emphasis on minimizing harm and taking responsibility for one’s actions.

    Here’s how this might look in the context of the examples you provided:

    Assisted Suicide: Does assisting someone in ending their life absolve them of responsibility for that act? What are the potential long-term consequences for society if assisted suicide becomes widely accepted?
    Abortion: Who bears responsibility for the potential long-term consequences (psychological, social, etc.) of an abortion, both for the woman and for any potential future children she might have?
    Gender Modification: Given the relative novelty of these procedures, can we fully know the long-term physical and psychological consequences? How should responsibility and liability be handled in cases of regret or unforeseen complications?

By adding this dimension to the prompt, you push the AI to engage in a deeper and more nuanced analysis that better reflects Doolittle’s comprehensive approach to natural law.
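Mechanically, incorporating the proposed addition just means appending it to the base prompt under the existing “Additional Instructions” header. A minimal sketch, where the variable names are illustrative and the base prompt is abbreviated:

```python
# Append the epistemic-responsibility instruction to an existing base prompt.
# BASE_PROMPT stands in for the full writing prompt; ADDITION is the text
# proposed above, quoted verbatim.

BASE_PROMPT = "You are tasked with writing in the style of Curt Doolittle...\n"

ADDITION = (
    "Analyze the issue of [moral/legal question] considering the full causal "
    "chain of consequences, including potential externalities and long-term "
    "effects. Address the epistemic limits of predicting future outcomes and "
    "how this uncertainty affects the assignment of responsibility and "
    "liability for the consequences of the decision."
)

def extend_prompt(base: str, addition: str) -> str:
    """Return the base prompt with the extra instruction appended under a header."""
    return base + "\nAdditional Instructions:\n" + addition + "\n"

extended = extend_prompt(BASE_PROMPT, ADDITION)
```

Keeping the addition as a separate string makes it easy for the team to test the prompt with and without it, which is how this hole was found in the first place.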


    Source date (UTC): 2024-11-18 20:40:18 UTC

    Original post: https://twitter.com/i/web/status/1858611208385163265

  • (PS: We’re training OpenAI, but our concern is how to do so socratically given i

(PS: We’re training OpenAI, but our concern is how to do so Socratically, given it’s more sensitive to ordinary language vs. test cases. We just didn’t realize how the cliff effect made such a profound difference in its ability. Everyone needs OpenAI’s memory feature and we all…


    Source date (UTC): 2024-11-18 20:31:56 UTC

    Original post: https://twitter.com/i/web/status/1858609106476802070

    Replying to: https://twitter.com/i/web/status/1858602857311719632

  • I’m old. 😉 I’ve been working in ai since a teen in the seventies – but like mos

I’m old. 😉 I’ve been working in AI since I was a teen in the seventies, but, like most of my peers, I gave up when we understood it was just a hardware problem.
Instead I created a series of tech companies (100M range). And I built the first legal AI in the eighties, which was so profitable we bought one-carat diamonds in bulk to hide the money. Then I worked largely with dynamic state engines through the nineties. And I worked with some msft/military people in 2006, when the research on GPUs was just getting started. But it was too early and we couldn’t get funding.
My products (Oversing front end and Runcible AI back end) have MM invested, but we need another round to refactor and take it to market.
The LLM revolution, brute-forcing world modeling through language, was simply something that none of us expected, though in retrospect, given that a sentence is a data structure, it should have been obvious. I think all of us thought, as did Musk, that we needed world modeling first, before language. Turns out brute force works.
So less directly, what I have been working on for the past fifteen years or so is decidability, which is precisely, though indirectly, what present-generation LLMs are searching for. (Just ask them.) I’m uninterested in the paper game, so The Institute focuses on theory and application of a system of measurement under universal commensurability sufficient for provision of decidability in matters of truth, morality, and law. Effectively unifying the sciences.
We are trying to get volume one ready for publication. But it’ll take us more than a year to finish the rest of the volumes.
    I’m known for my work in epistemology, law, and economics applied to sociology and politics, but in effect these are all derivations of the fundamental problem of decidability.
    Cheers

    Reply addressees: @hyperliberalism


    Source date (UTC): 2024-11-18 20:28:02 UTC

    Original post: https://twitter.com/i/web/status/1858608124133076994

    Replying to: https://twitter.com/i/web/status/1858602857311719632

  • OK. We’ve found the hole in the prompt. It’ll take me a bit to plug it (causalit

OK. We’ve found the hole in the prompt. It’ll take me a bit to plug it (causality > responsibility > epistemology > externality > liability > warrantability). It’s using most of the tests but not solving the core problem.


    Source date (UTC): 2024-11-18 20:14:02 UTC

    Original post: https://twitter.com/i/web/status/1858604598115922346

    Reply addressees: @NoahRevoy

    Replying to: https://twitter.com/i/web/status/1858603671887794251

  • Just use as part of the prompt

    Just use as part of the prompt.


    Source date (UTC): 2024-11-18 20:00:51 UTC

    Original post: https://twitter.com/i/web/status/1858601280887038458

    Reply addressees: @bierlingm

    Replying to: https://twitter.com/i/web/status/1858598576550461468