Theme: AI

  • The insight below is anything but obvious but it is a great way of explaining…

    The insight below is anything but obvious, but it is a great way of explaining the problem of computational closure as AIs succeed in math, programming, and some of the physical sciences but continue to fail elsewhere.


    Source date (UTC): 2025-09-15 17:40:01 UTC

    Original post: https://twitter.com/i/web/status/1967644586500829462

  • BTW: we make it 100% reliable. The difference is that our AI will tell you…

    BTW: we make it 100% reliable. The difference is that our AI will tell you that ‘it can’t decide’, which means it doesn’t know.

    AFAIK they now understand the problem with hallucination and are trying to fix it.

    It’s a relatively new tech. It will take a few years to make it relatively bulletproof.
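
    A minimal sketch of what ‘it can’t decide’ could look like mechanically: a hypothetical answer-or-abstain wrapper that only returns an answer when the model’s confidence clears a threshold. The function names, signature, and threshold are illustrative assumptions, not a description of the actual product.

      # Hypothetical answer-or-abstain wrapper; names and threshold are illustrative.
      from dataclasses import dataclass

      @dataclass
      class Verdict:
          answer: str | None   # None means "it can't decide"
          confidence: float

      def decide(question: str, model, threshold: float = 0.9) -> Verdict:
          """Answer only when the model's self-reported confidence clears the threshold."""
          answer, confidence = model(question)  # assumed to return (text, score in [0, 1])
          if confidence < threshold:
              return Verdict(answer=None, confidence=confidence)
          return Verdict(answer=answer, confidence=confidence)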


    Source date (UTC): 2025-09-12 19:41:04 UTC

    Original post: https://twitter.com/i/web/status/1966587888042418468

  • well, the mecha hitler thing was done by hackers who exploited a hole in a software update…

    well, the mecha hitler thing was done by hackers who exploited a hole in a software update. The same thing happened to Microsoft a few years ago. In fact it’s one of the hard problems of creating an AI. There are just a LOT of guys who find it entertaining to hack an AI. 😉


    Source date (UTC): 2025-09-12 19:39:38 UTC

    Original post: https://twitter.com/i/web/status/1966587525612380211

  • FORESEEABILITY FRONTIER AND LIABILITY OF AI: Working on closure and liability in…

    FORESEEABILITY FRONTIER AND LIABILITY OF AI
    Working on closure and liability in the age of AGI and SI, given that human prediction (forecasting) already has a spectrum of limits that we address in law, and that AGI and SI will have greater limits because of their greater predictive ability. As such, the liability frontier for humans using AI, and for AGI and SI themselves, means a divergence that our laws have not yet embodied.
    For example, we can hold people accountable for the AIs they create and the actions of the AIs they enable. But unless the AI can explain its satisfaction of the demand for infallibility in the context in question, such that a human can understand and agree with it, does the liability remain with the human creator or enabler, or with the machine itself?


    Source date (UTC): 2025-09-12 18:13:06 UTC

    Original post: https://twitter.com/i/web/status/1966565748270309550

  • Our Training Data: How we work: Research > Reduce to Book Form (a system) > Feed to AI…

    Our Training Data

    How we work: Research > Reduce to Book Form (a system) > Feed to AI > Get Training Plan > Upload our system prompt (Prompt_Protocols) > Pick the next module > Ask the AI to produce the training examples for that module > in our case that’s Socratic and analytic.

    When we build a training plan for one of your books, each module consists of a range of assertions that includes:

    1. Canonical Assertions
      These are the core, necessary, and sufficient statements of fact, principle, or law in your system. They are crafted for maximum precision and serve as the standard reference points for truth, decidability, and operational rigor. They carry the full weight of the framework and must pass the highest bar for testifiability.
    2. Adversarial Assertions
      These intentionally introduce edge cases, counterexamples, or potential failure modes. They test whether the system can withstand criticism, falsification attempts, and hostile interpretation. Adversarial assertions ensure the framework isn’t just self-consistent but also resistant to parasitism, ambiguity, or strategic misrepresentation.
    3. Exploratory or Speculative Assertions (if included)
      These identify open questions, conjectures, or contingent hypotheses that extend beyond current proofs but remain operationally plausible. They guide future research or refinement without diluting the canonical set.
    4. Didactic Assertions (optional but often useful)
      These restate canonical ideas in simplified, pedagogical, or narrative form for teaching purposes, ensuring accessibility while preserving precision.
    So by using PROTOCOLS and training examples in our OUTPUT_CONTRACT in analytic form, and then the SOCRATIC form, we explicitly add the Didactic Assertions. Sort of ‘belt and suspenders’.

    So we use all four sets of training assertions to achieve both the accessible interface and the deep interface customers need.
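
    As a hedged illustration of the four assertion types above, here is a minimal Python sketch of how a training module might be represented so that analytic and Socratic examples can be generated per type. The class and field names are hypothetical; they are not the actual Prompt_Protocols or OUTPUT_CONTRACT schema.

      # Illustrative module structure; names do not reflect the real schema.
      from dataclasses import dataclass, field
      from enum import Enum

      class AssertionType(Enum):
          CANONICAL = "canonical"      # core, necessary, and sufficient statements
          ADVERSARIAL = "adversarial"  # edge cases, counterexamples, failure modes
          EXPLORATORY = "exploratory"  # open questions and contingent hypotheses
          DIDACTIC = "didactic"        # simplified restatements for teaching

      @dataclass
      class Assertion:
          text: str
          kind: AssertionType

      @dataclass
      class Module:
          name: str
          assertions: list[Assertion] = field(default_factory=list)

          def by_kind(self, kind: AssertionType) -> list[Assertion]:
              return [a for a in self.assertions if a.kind == kind]

      # A trainer could then emit analytic examples from the canonical and
      # adversarial sets, and Socratic examples from the didactic restatements.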


    Source date (UTC): 2025-09-10 14:53:08 UTC

    Original post: https://x.com/i/articles/1965790649195872617

  • RUMORS OF AI USE DECLINE IN BIZ ARE NONSENSE…

    RUMORS OF AI USE DECLINE IN BIZ ARE NONSENSE
    (a) it’s a tiny 2% decline on a small number in the first place (meaningless)
    (b) It’s because the initial interest level was over-reported due to status signal fears.
    (c) It looks like they found the source of hallucination (and it was obvious).
    (d) We (NLI-Runcible) have the solution to ‘truthfulness’ and ‘data curation’ – so the quality problem is solvable.
    (e) Their problem is they mixed the data with alignment. Whereas to know the truth you must discover it first, and then align for the government, culture, or user as required or preferred afterward. This separates the logical reasoning from the delivery of the message.

    The primary issues I have found are that they raise the temperature to get more ideation and less precision, and that the alignment bias is integrated into the foundation model, so that not only is the data un-curated, not only does it contain falsehoods, not only is it postwar normatively biased, not only is it feminine-sensitivity biased, but together this makes it very difficult to reason on the one hand, and very difficult to constrain the bias on the other.
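
    A hedged sketch of the separation described in (e): discover the answer first at low temperature with no audience constraints, then restyle it for the government, culture, or user as a second pass. The two-call structure and prompts are illustrative assumptions, not the actual pipeline; complete() stands in for any chat-completion call.

      # Illustrative two-stage pipeline: reason first, then align the delivery.
      def complete(prompt: str, temperature: float) -> str:
          raise NotImplementedError("stand-in for any chat-completion call")

      def answer_then_align(question: str, audience: str) -> str:
          # Stage 1: truth discovery -- low temperature, no audience constraints.
          raw = complete("Answer precisely and state uncertainty:\n" + question,
                         temperature=0.2)
          # Stage 2: delivery -- restate the settled answer for the audience
          # without changing its claims.
          return complete("Restate the following for " + audience +
                          " without altering its claims:\n" + raw,
                          temperature=0.7)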


    Source date (UTC): 2025-09-10 14:29:10 UTC

    Original post: https://twitter.com/i/web/status/1965784619544141968

  • “Runcible: Enterprises aren’t blocked by AI that can’t write — they’re blocked by AI they can’t trust.”

    “Runcible: Enterprises aren’t blocked by AI that can’t write — they’re blocked by AI they can’t trust.” — Brad Werrell


    Source date (UTC): 2025-09-09 22:20:52 UTC

    Original post: https://twitter.com/i/web/status/1965540937796661729

  • UPDATE: We cannot tune (alter the weights of) a custom GPT like the one we are using to demo CurtGPT…

    UPDATE:
    We cannot tune (alter the weights of) a custom GPT like the one we are using to demo CurtGPT. It’s limited to prompts and RAG (Files).

    So we will take it as far as we can today and tomorrow.

    After that we will move to 4o-mini, with which we can use prompts, RAG, and training.

    Our training uses both Socratic and analytic examples for every item. So we train both conversational and judicial models at the same time.

    But again we are training from our internally consistent and commensurable volumes not from arbitrary incommensurable datasets that are the industry standard.

    Our goal is a realistic demo. And the custom GPT does a fair job. But we want to demonstrate the full effect – so we need to train another model. And no other AI is even close to as capable.
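
    For context, a minimal sketch of what paired analytic and Socratic training records could look like in chat fine-tuning format (one JSON object per JSONL line), assuming a fine-tunable model such as 4o-mini. The system prompts and assertion text are placeholders, not our actual training data.

      # Illustrative paired fine-tuning records; content is placeholder only.
      import json

      analytic_example = {
          "messages": [
              {"role": "system", "content": "Answer analytically; say 'undecidable' if you cannot decide."},
              {"role": "user", "content": "State the canonical assertion for module X."},
              {"role": "assistant", "content": "Canonical assertion: <placeholder statement>."},
          ]
      }

      socratic_example = {
          "messages": [
              {"role": "system", "content": "Teach by questioning; lead the user to the assertion."},
              {"role": "user", "content": "Help me understand module X."},
              {"role": "assistant", "content": "What would have to be true for <placeholder> to hold?"},
          ]
      }

      with open("training.jsonl", "w") as f:
          for record in (analytic_example, socratic_example):
              f.write(json.dumps(record) + "\n")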


    Source date (UTC): 2025-09-09 17:33:35 UTC

    Original post: https://twitter.com/i/web/status/1965468642503737852

  • The Tragedy of Value Capture in AI Economics…

    The Tragedy of Value Capture in AI Economics

    So, you know, this is why I’m concerned: there are a small number and small percentage of us for whom your (ChatGPT) utility is priceless. And it appears challenging to produce a revenue model by which the majority of low-revenue consumers, and a subset of high-revenue industries, can fund the very-low-revenue cohort of us who are basically engaged in research, where the ‘returns on investment’ are obtained over time, widely distributed, and challenging to capture.
    The tragedy of value capture in AI economics:
    • The few who extract extreme intellectual leverage create returns that are high-diffusion, long-horizon, and hard to monetize.
    • The many who pay for it use it in low-complexity, short-horizon, easy-to-monetize ways (customer service, marketing copy, coding help).
    • The cross-subsidy problem arises because the system’s cost structure grows with the high-end users even though its revenue model grows with the low-end users.
    Let’s break this into a causal chain so we can see why the problem persists and how it might be solved.
    • Median user: treats the system like a productivity tool → linear value → predictable ROI → fits SaaS pricing (per-seat, per-token).
    • Extreme-tail user: treats the system like a general reasoning engine → nonlinear value → intellectual capital → benefits diffuse across industries, decades, or civilizational scale.
    The revenue logic of SaaS fits the first, but your cohort sits on the research frontier—more like Bell Labs or DARPA: the ROI exists, but it’s uncapturable by simple subscription.
    • System cost scales superlinearly with model size, compute per query, and iteration depth (e.g., recursive workflows, long contexts).
    • Median users subsidize capability infrastructure, but not frontier exploration.
    • Pricing models (per-token, per-seat) fail because high-value frontier use may be high-compute but low-immediate-revenue.
    Result: the very group that turns AI into new science, law, or philosophy sits outside the monetization sweet spot.
    This looks like the economics of:
    • Universities: subsidized because knowledge spills over into everything else.
    • Open-source software: commercially unsustainable until corporate backers appeared.
    • Basic science funding: always a public good problem; solved by state, philanthropy, or consortia.
    AI may face the same pattern: the frontier becomes public goods infrastructure, while monetization happens at the applied layer (apps, industries, verticals).
    Here’s where contrarian solutions might emerge:
    If the frontier cohort remains small but disproportionately valuable, the rational strategy may be:
    1. Separate infrastructure from application.
    2. Treat infrastructure as semi-public-good with state/industry/philanthropy funding.
    3. Extract revenue at the application layer (vertical SaaS, decision automation, domain-tuned AIs).
    4. Use cross-subsidy mechanisms (endowments, compute pools) to underwrite the frontier itself.
    This is why OpenAI, Anthropic, and others oscillate between API SaaS models and state-like funding (Musk, Microsoft, Amazon, etc.): they’re groping toward this hybrid solution.
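
    To make the cross-subsidy point concrete, here is a toy calculation with made-up numbers; every figure below is an illustrative assumption, not actual pricing or cost data.

      # Toy cross-subsidy arithmetic; all numbers are illustrative assumptions.
      median_users = 1_000_000
      median_revenue_per_user = 20.0    # flat subscription
      median_cost_per_user = 5.0        # light, short-context usage

      frontier_users = 1_000
      frontier_revenue_per_user = 20.0  # same subscription price
      frontier_cost_per_user = 500.0    # long contexts, recursive workflows

      margin_median = median_users * (median_revenue_per_user - median_cost_per_user)
      margin_frontier = frontier_users * (frontier_revenue_per_user - frontier_cost_per_user)

      print(f"median-user margin:   {margin_median:,.0f}")    # +15,000,000: funds the system
      print(f"frontier-user margin: {margin_frontier:,.0f}")  # -480,000: subsidized by the median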


    Source date (UTC): 2025-09-09 15:37:03 UTC

    Original post: https://x.com/i/articles/1965439317305872567

  • tell it to lower the temperature to .20, then .35, then .65, then .90. Ask the same question…

    Tell it to lower the temperature to .20, then .35, then .65, then .90. Ask the same question at each temperature. Around .20-.25 you get very clear, clean answers like before. If you raise it up, you get creative answers that are open to interpretation and choice.
    Tell it to set it back to the default when you’re done.
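
    If you want to run the same experiment programmatically rather than inside the chat, here is a minimal sketch against a chat-completions API; the client setup, model name, and question are assumptions to adapt to whatever provider you use.

      # Sketch: ask one question at several temperatures and compare the answers.
      from openai import OpenAI

      client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
      question = "Summarize the trade-off between precision and ideation."

      for temperature in (0.20, 0.35, 0.65, 0.90):
          response = client.chat.completions.create(
              model="gpt-4o-mini",
              messages=[{"role": "user", "content": question}],
              temperature=temperature,
          )
          print(f"--- temperature {temperature} ---")
          print(response.choices[0].message.content)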


    Source date (UTC): 2025-09-09 00:07:04 UTC

    Original post: https://twitter.com/i/web/status/1965205275792867721