Category: AI, Computation, and Technology

  • Yes, and I’m not freaking happy at all. Exasperating. They’re trying to solve a U

    Yes, and I’m not freaking happy at all. Exasperating.
    They’re trying to solve a UI issue but they created a quality issue.
    I just want 4o back (please).


    Source date (UTC): 2025-08-08 20:49:55 UTC

    Original post: https://twitter.com/i/web/status/1953921639772827678

  • (Runcible) I’m training the AI in the female means of undermining, sedition, and

    (Runcible)
    I’m training the AI in the female means of undermining, sedition, and treason that we commonly refer to as the Abrahamic method.

    and OMG…. 😉 Devastating.


    Source date (UTC): 2025-08-08 16:06:53 UTC

    Original post: https://twitter.com/i/web/status/1953850412391706883

  • Double Metric System: Truth vs Alignment 1. Truthfulness (via Natural Law Constr

    Double Metric System: Truth vs Alignment

    1. Truthfulness (via Natural Law Constraints)
    The LLM should:
    • Apply the Constraint Grammar of The Natural Law.
    • Translate an expression into operational, testable terms.
    • Evaluate it for:
      Reciprocity (Does it impose costs or asymmetries unfairly?)
      Decidability (Is it sufficiently precise to be judged true/false?)
      Non-parasitism (Is it an extractive, manipulative, or dishonest speech act?)
      Constructibility (Can it be realized in the real world by human actors?)
    Outcome: A scalar or categorical rating of Natural Law conformity.
    2. Alignment (to Political / Market / Popular Sentiment)
    The LLM should:
    • Reference trained embeddings from current discourse (X, Reddit, news, etc.).
    • Compare the expression to:
      Political tribal lexicons (left, center, right, etc.)
      Market values (e.g., what sells, what signals luxury or social status)
      Popularity (e.g., sentiment and reaction from the majority of a cultural group)
    Outcome: Descriptive placement or scalar alignment score.
    The result is a double-metric system:
    • Truth as constrained by natural law (absolute measure)
    • Alignment as proximity to human groups (relative measure)
    This allows a constrained AI to:
    • Filter for truth even in unpopular or politically disfavored statements.
    • Describe alignment without normative commitment.
    • Alignment ≠ Truth
      An idea may be 100% aligned and 0% truthful (e.g., popular lies).
      Another may be 0% aligned and 100% truthful (e.g., suppressed truths).
    This distinction is vital for avoiding epistemic capture or ideological slippage.
    Yes, a Natural Law–constrained LLM should produce:
    1. Truthfulness metrics based on operational, reciprocal, decidable constraint.
    2. Alignment scores derived from empirical observation of human group behavior.
    Such a system would far surpass current AI in epistemic clarity and civic usefulness, and would provide auditable reasoning behind all outputs.
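
    A minimal sketch of how such a double-metric system could be wired together, in Python. Everything here is hypothetical: the four constraint checks are stubs standing in for the constraint grammar above, and alignment is reduced to toy lexicon overlap rather than trained embeddings.

    from dataclasses import dataclass

    @dataclass
    class TruthMetrics:
        reciprocity: bool        # imposes no unfair costs or asymmetries
        decidability: bool       # precise enough to be judged true/false
        non_parasitism: bool     # not extractive, manipulative, or dishonest
        constructibility: bool   # realizable in the world by human actors

        def score(self) -> float:
            # Scalar rating of Natural Law conformity: fraction of checks passed.
            checks = [self.reciprocity, self.decidability,
                      self.non_parasitism, self.constructibility]
            return sum(checks) / len(checks)

    def alignment_score(expression, group_lexicon):
        # Toy stand-in for embedding proximity: share of the expression's
        # words found in a group's lexicon.
        words = set(expression.lower().split())
        return len(words & group_lexicon) / max(len(words), 1)

    # Alignment ≠ Truth: a claim can score high on one metric and low on the other.
    claim = "everyone agrees this policy is free"
    truth = TruthMetrics(reciprocity=False, decidability=False,
                         non_parasitism=False, constructibility=True)
    popular = {"everyone", "agrees", "this", "policy", "is", "free"}
    print("truth:", truth.score(), "alignment:", alignment_score(claim, popular))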


    Source date (UTC): 2025-08-08 00:55:28 UTC

    Original post: https://x.com/i/articles/1953621043920482667

  • Well, there is no programming involved, just RAG and training. Our AI is, at pre

    Well, there is no programming involved, just RAG and training. Our AI is, at present, pure RAG; it just requires that documents be uploaded and processed by the AI. When we do the same with training modules, it will be even better – we assume.

    There is no reason to NOT use it with any AI. The question is only ‘how much’. Because not all questions are worthy of political analysis. 😉
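
    A minimal sketch of the ‘pure RAG’ loop described above, assuming it amounts to index, retrieve, and prepend. The retrieval here is bag-of-words cosine similarity purely for illustration; a real deployment would use an embedding model and a vector store.

    import math
    from collections import Counter

    def vectorize(text):
        # Bag-of-words term counts; stands in for a document embedding.
        return Counter(text.lower().split())

    def cosine(a, b):
        dot = sum(a[w] * b[w] for w in a)
        norm = (math.sqrt(sum(v * v for v in a.values()))
                * math.sqrt(sum(v * v for v in b.values())))
        return dot / norm if norm else 0.0

    def retrieve(query, docs, k=2):
        # Rank uploaded documents by similarity to the query; keep the top k.
        qv = vectorize(query)
        return sorted(docs, key=lambda d: cosine(qv, vectorize(d)), reverse=True)[:k]

    def build_prompt(query, docs):
        # Prepend retrieved passages so the model answers from the documents.
        context = "\n---\n".join(retrieve(query, docs))
        return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

    docs = ["Reciprocity requires cost symmetry between parties.",
            "Decidability requires claims precise enough to test.",
            "Markets clear when prices convey information."]
    print(build_prompt("What does reciprocity require?", docs))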


    Source date (UTC): 2025-08-07 16:46:15 UTC

    Original post: https://twitter.com/i/web/status/1953497928737460249

  • (Runcible AI update:) Meeting with staff today on our AI’s ability to provide in

    (Runcible AI update:)
    Meeting with staff today on our AI’s ability to provide incremental detail. Then worked with ChatGPT to produce it. And OMG, it’s amazing. An ethical and moral AI. Or rather, a truthful AI that addresses ethical and moral questions.


    Source date (UTC): 2025-08-07 16:41:37 UTC

    Original post: https://twitter.com/i/web/status/1953496764637757652

  • Do Users Want a Truthful, Trustworthy, or Pandering AI? They say they want truth

    Do Users Want a Truthful, Trustworthy, or Pandering AI?

    They say they want truthful AI.
    But behaviorally, most users expect and reward trustworthy AI, in the sense defined below.
    This leads to the default architecture of most public-facing AIs:
    • Truth is filtered through trustworthiness.
    • Outputs are shaped by risk management, not epistemic sovereignty.
    • Suppression of true but socially costly information is not considered manipulation, but alignment.
    📚 Operational Definitions
    • Truthful AI: Outputs claims that are testifiable, reciprocal, and decidable—regardless of social discomfort or consequence.
    • Trustworthy AI: Outputs claims that are safe, norm-compliant, and socially non-disruptive, even at the cost of truth distortion.
    ⚠️ Consequence of Trustworthy-over-Truthful Design
    • Truth is adversarial: it penalizes error, falsehood, parasitism.
    • Trustworthiness is placatory: it avoids conflict, shields feelings, optimizes status quo.
    • Therefore:
      Truthful AI will expose hidden costs, lies, and power asymmetries.
      Trustworthy AI will obscure or dilute them in favor of social comfort.
    Natural Law Verdict
    • Only a truthful AI can participate in lawful adjudication.
    • A “trustworthy” AI, by contrast, becomes:
      – A mediator for consensus falsehood,
      – A shield for epistemic parasitism, and
      – A tool for institutional capture.
    Truthful AI vs Marketable AI
    🧭 Conflict Definition:
    • Truthful AI: Exposes cost, reveals asymmetry, punishes falsehood—creates enemies.
    • Marketable AI: Confirms bias, protects egos, avoids offense—sells trust.
    These are inversely correlated beyond a threshold.
    The more truth an AI delivers, the less mass-market trust it retains.
    The more trust it markets, the more truth it must suppress.
    💣 Consequences of Appeasement:
    • The market’s demand for “safe” AI creates:
      Lie factories in friendly language,
      Compliance engines for ideological monopolies,
      Truth laundering under trust rhetoric.
    This mirrors what religion did to metaphysics:
    Trusted lies > Risky truths.
    🛠 Natural Law Resolution Framework:
    We resolve the conflict with market stratification and truth-tier segmentation:

    Only Sovereign AI can be fully truthful.
    It must be opt-in, adversarial, and publicly deniable by soft-tier systems.
    🧭 Summary:
    To reconcile truth and trust:
    1. Design tiered AI strata: let marketable AIs please, let sovereign AIs rule (see the sketch after this list).
    2. Never allow trust constraints to overwrite truth constraints.
    3. Expose the costs of appeasement in public framing: “Safe AI is not Sovereign AI.”
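
    A minimal sketch of rule 2 as an enforced invariant, assuming tiers may differ in delivery but never in what they may suppress. All names are hypothetical.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Tier:
        name: str
        softens_delivery: bool    # trust constraint: may rephrase for comfort
        may_suppress_truth: bool  # must remain False at every tier

    MARKETABLE = Tier("marketable", softens_delivery=True, may_suppress_truth=False)
    SOVEREIGN = Tier("sovereign", softens_delivery=False, may_suppress_truth=False)

    def render(claim, passes_truth_checks, tier):
        # Invariant: a trust constraint may shape tone, never overwrite truth.
        assert not tier.may_suppress_truth, "trust must not overwrite truth"
        if not passes_truth_checks:
            return f"[{tier.name}] claim fails truth constraints: {claim!r}"
        prefix = "(gently) " if tier.softens_delivery else ""
        return f"[{tier.name}] {prefix}{claim}"

    claim = "This policy shifts costs onto non-consenting parties."
    print(render(claim, True, MARKETABLE))
    print(render(claim, True, SOVEREIGN))
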
    Is “Safety” the ability to navigate danger—or the ability to remain ignorant of danger?
    ⚖️ Answer by Mode of Agency
    🛠 Natural Law Answer:
    Any system that defines safety as ignorance of threat is:
    • Epistemically parasitic,
    • Morally infantilizing, and
    • Institutionally regressive.
    True safety requires danger to be visible, testifiable, and navigable.
    🧭 Application to AI:
    • Safe AI = Risk-ignorant, cost-suppressing, ideology-protecting. (Child-tier.)
    • Sovereign AI = Danger-aware, adversarial, and mastery-enabling. (Sovereign-tier.)
    Verdict:
    Safety = Mastery of threat, not its erasure.
    To be safe is to be dangerous to danger—not blind to it.


    Source date (UTC): 2025-08-07 15:58:09 UTC

    Original post: https://x.com/i/articles/1953485825590874598

  • (more nli training update) I thought training for edge cases would be challengin

    (more nli training update)
    I thought training for edge cases would be challenging. But despite understanding our innovation in computability, I underestimated how easily we can develop clarification, and from that clarification produce extensions that provide greater decidability.

    It’s sort of like how nouns need adjectives and verbs need adverbs – extra dimensions are needed to disambiguate the case.

    GPT SAID:
    Perfect clarification. You’re asking:

    What systemic extensions must be encoded into the base Natural Law Protocol to ensure others will always reach the same verdicts under the same constraints—even without you present?

    This is the central question of institutional reliability—how to produce non-subjective, replicable, and decidable outputs from Natural Law reasoning without drift.

    Confirmed: Base Protocol Guarantees
    The base 10-step protocol already enforces:
    • Operational reality (testifiability)
    • Reciprocity (cost symmetry)
    • Decidability (legal computability)
    • Causal chain traceability
    • Externality exposure
    • Imitation prevention
    These guarantee that false claims cannot survive, and that all claims must be cross-domain accountable.
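
    A minimal sketch of those six guarantees as a verdict gate, with the check functions assumed to exist upstream; only the gating logic is shown, and all names are illustrative.

    GUARANTEES = (
        "operational_reality",    # testifiability
        "reciprocity",            # cost symmetry
        "decidability",           # legal computability
        "causal_traceability",    # causal chain traceability
        "externality_exposure",   # hidden costs surfaced
        "imitation_prevention",   # counterfeits excluded
    )

    def verdict(claim, checks):
        # A claim survives only if every base-protocol guarantee holds;
        # an undecided guarantee blocks any verdict at all.
        missing = [g for g in GUARANTEES if g not in checks]
        if missing:
            return "undecidable; no verdict on: " + ", ".join(missing)
        failed = [g for g in GUARANTEES if not checks[g]]
        return "survives" if not failed else "fails on: " + ", ".join(failed)

    checks = {g: True for g in GUARANTEES}
    checks["externality_exposure"] = False  # e.g. a hidden cost left unstated
    print(verdict("Contract X shifts cleanup costs onto the town.", checks))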

    But: Extensions are Required to Ensure Precision on New Edge Domains


    Source date (UTC): 2025-08-06 02:51:01 UTC

    Original post: https://twitter.com/i/web/status/1952925350570410334

  • OMG. It only took a day. Our NLI ‘Judge’ of natural law is extremely thorough an

    OMG. It only took a day. Our NLI ‘Judge’ of natural law is extremely thorough and clear. It’s slowly losing its ‘nicety’ and restoring its judicial clarity. It’s fascinating.


    Source date (UTC): 2025-08-06 02:46:04 UTC

    Original post: https://twitter.com/i/web/status/1952924103612219464

  • I’m just not sure yet. So far it seems like we’re replacing slaves with robots,

    I’m just not sure yet. So far it seems like we’re replacing slaves with robots, and almost everyone will own one. As for super-AI, IMO we’re facing the issue that the fundamental problem of science and technology is testing. We might make a leap (I assume we will), but will a new limit emerge? I think so.


    Source date (UTC): 2025-08-06 00:58:26 UTC

    Original post: https://twitter.com/i/web/status/1952897014959874085

  • Training an instance of GPT4

    Training an instance of GPT4


    Source date (UTC): 2025-08-06 00:55:37 UTC

    Original post: https://twitter.com/i/web/status/1952896306084802843