Author: Curt Doolittle

  • UPDATE: We cannot tune (alter the weights) a custom GPT

    UPDATE:
    We cannot tune (alter the weights of) a custom GPT like the one we are using to demo CurtGPT. It’s limited to prompts and RAG (files).

    So we will take it as far as we can today and tomorrow.

    After that we will move to 4o-mini, with which we can use prompts, RAG, and training.

    Our training uses both Socratic and analytic examples for every item. So we train both conversational and judicial models at the same time.

    But again, we are training from our internally consistent and commensurable volumes, not from the arbitrary, incommensurable datasets that are the industry standard.

    Our goal is a realistic demo. And the custom GPT does a fair job. But we want to demonstrate the full effect, so we need to train another model. And no other AI is even close to as capable.
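    The dual Socratic/analytic training scheme might be rendered as paired fine-tuning records like the following. The chat-message JSONL shape is an assumption borrowed from common fine-tuning APIs, and the question and answers are invented placeholders, not the actual corpus:

```python
import json

# Hypothetical example of one item rendered as two training records:
# a Socratic (conversational) form and an analytic (judicial) form.
# Content and format are illustrative assumptions, not the actual data.
item = "What distinguishes a promise from a contract?"

socratic = {"messages": [
    {"role": "user", "content": item},
    {"role": "assistant",
     "content": "What would have to be at risk for a promise to become enforceable?"},
]}

analytic = {"messages": [
    {"role": "user", "content": item},
    {"role": "assistant",
     "content": "A contract is a promise warrantied by an exchange of consideration."},
]}

# Both records train on the same underlying item, so the conversational
# and judicial models stay commensurable.
jsonl = "\n".join(json.dumps(r) for r in (socratic, analytic))
```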


    Source date (UTC): 2025-09-09 17:33:35 UTC

    Original post: https://twitter.com/i/web/status/1965468642503737852

  • The Myth of Asian IQ Advantage (They work harder)

    The Myth of Asian IQ Advantage (They work harder)

    1. Debunking the “Asian IQ Advantage”
    • James Flynn (of the Flynn Effect) recalculated inflated Japanese and Chinese-American IQ data:
      Japanese IQ drops from ~106 to ~99 when sample bias is corrected.
      Chinese-American IQ drops to 97 verbal / 100 nonverbal with updated tests.
    • This means Asian-American success has occurred despite slightly lower average IQ than whites, not because of higher IQ.
    2. Cultural Effort, Not IQ
    • East Asian students in the U.S. spend ~13 more hours per week on academics than white peers.
    • Academic advantage comes mainly from effort and cultural attitudes linking effort to achievement, not innate cognitive ability.
    • Immigration status contributes — many East Asians in the U.S. are from families self-selected for ambition, education, and risk-taking.
    3. Underrepresentation in Power
    • Despite educational success, Asians are underrepresented in corporate leadership, politics, and university presidencies.
    • Possible causes:
      DEI dynamics: Asians seen as “white-adjacent” in diversity politics.
      Specialization & verbal skills gap: Strength in math/spatial reasoning paired with weaker verbal skills may hinder bureaucratic and political advancement.
      Cultural conformity (Confucianism): Emphasis on hierarchy and tradition may limit creativity and leadership competitiveness.
    4. Immigration & Bottleneck Effects
    • Many immigrant success patterns reflect recent selective migration, not general traits of the origin-country population.
    • Example: Cuban-Americans vs. Cubans in Cuba.
    • Example: Taiwan’s semiconductor dominance rooted in a selective migration of anti-communist elites and skilled workers.
    • Selection produces cultural and genetic clustering of ambition, risk tolerance, and competence.
    5. Class Over Ethnicity
    • Persistent class effects: Elites tend to re-emerge at the top even after forced class inversion (e.g., post-revolution China).
    • This suggests genetic and cultural inheritance of traits relevant to success, but these differences operate more strongly along class lines than ethnic lines.
    6. The “Offensive” Implication
    • If differences in group success aren’t explained by large IQ gaps or historical oppression, then:
      Culture (values, effort norms) matters far more.
      Selective immigration amplifies certain traits.
      Class advantages persist even through upheaval.
    • This undermines both racial-essentialist and simplistic systemic-oppression narratives.


    Source date (UTC): 2025-09-09 16:48:59 UTC

    Original post: https://x.com/i/articles/1965457417967386881

  • The Tragedy of Value Capture in AI Economics

    The Tragedy of Value Capture in AI Economics

    So, you know, this is why I’m concerned: there is a small number, a small percentage, of us for whom your (ChatGPT) utility is priceless. And it appears challenging to produce a revenue model by which the majority of low-revenue consumers, plus a subset of high-revenue industries, can fund the very-low-revenue service to those of us who are basically engaged in research, where the returns on investment are obtained over time, widely distributed, and challenging to capture.
    The tragedy of value capture in AI economics:
    • The few who extract extreme intellectual leverage create returns that are high-diffusion, long-horizon, and hard to monetize.
    • The many who pay for it use it in low-complexity, short-horizon, easy-to-monetize ways (customer service, marketing copy, coding help).
    • The cross-subsidy problem arises because the system’s cost structure grows with the high-end users even though its revenue model grows with the low-end users.
    Let’s break this into a causal chain so we can see why the problem persists and how it might be solved.
    • Median user: treats the system like a productivity tool → linear value → predictable ROI → fits SaaS pricing (per-seat, per-token).
    • Extreme-tail user: treats the system like a general reasoning engine → nonlinear value → intellectual capital → benefits diffuse across industries, decades, or civilizational scale.
    The revenue logic of SaaS fits the first, but your cohort sits on the research frontier—more like Bell Labs or DARPA: the ROI exists, but it’s uncapturable by simple subscription.
    • System cost scales superlinearly with model size, compute per query, and iteration depth (e.g., recursive workflows, long contexts).
    • Median users subsidize capability infrastructure, but not frontier exploration.
    • Pricing models (per-token, per-seat) fail because high-value frontier use may be high-compute but low-immediate-revenue.
    Result: the very group that turns AI into new science, law, or philosophy sits outside the monetization sweet spot.
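    The cross-subsidy problem above can be sketched as a toy margin model. Every number below is an invented assumption chosen only to show the shape of the argument: linear per-token revenue against cost that grows superlinearly with iteration depth.

```python
# Toy model of the cross-subsidy problem: flat per-token pricing yields
# linear revenue, while recursive workflows and long contexts make cost
# superlinear in depth. All constants are illustrative assumptions.
PRICE_PER_TOKEN = 0.00001  # flat per-token price (linear revenue)

def cost(tokens: int, depth: int) -> float:
    """Assumed superlinear cost in iteration depth / context length."""
    return tokens * 0.000004 * depth**1.5

def margin(tokens: int, depth: int) -> float:
    """Revenue minus cost under flat per-token pricing."""
    return tokens * PRICE_PER_TOKEN - cost(tokens, depth)

median_user = margin(tokens=50_000, depth=1)       # short, shallow queries
frontier_user = margin(tokens=2_000_000, depth=8)  # long, recursive research
```

    Under these assumptions the median user is profitable and the frontier user runs at a loss, even though both pay the same rate, which is the subsidy inversion described above.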
    This looks like the economics of:
    • Universities: subsidized because knowledge spills over into everything else.
    • Open-source software: commercially unsustainable until corporate backers appeared.
    • Basic science funding: always a public good problem; solved by state, philanthropy, or consortia.
    AI may face the same pattern: the frontier becomes public goods infrastructure, while monetization happens at the applied layer (apps, industries, verticals).
    Here’s where contrarian solutions might emerge:
    If the frontier cohort remains small but disproportionately valuable, the rational strategy may be:
    1. Separate infrastructure from application.
    2. Treat infrastructure as semi-public-good with state/industry/philanthropy funding.
    3. Extract revenue at the application layer (vertical SaaS, decision automation, domain-tuned AIs).
    4. Use cross-subsidy mechanisms (endowments, compute pools) to underwrite the frontier itself.
    This is why OpenAI, Anthropic, and others oscillate between API SaaS models and state-like funding (Musk, Microsoft, Amazon, etc.): they’re groping toward this hybrid solution.


    Source date (UTC): 2025-09-09 15:37:03 UTC

    Original post: https://x.com/i/articles/1965439317305872567

  • Tell it to lower the temperature to .20, then .35, then .65, then .90

    Tell it to set the temperature to .20, then .35, then .65, then .90. Ask the same question at each temperature. Around .20–.25 you get very clear, clean answers like before. If you raise it, you get creative answers that are open to interpretation and choice.
    Tell it to set back to the default when you’re done.
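    What those settings do under the hood can be sketched with the standard softmax-with-temperature formula; the logits below are invented for illustration.

```python
import math

def sample_distribution(logits, temperature):
    """Convert raw next-token logits into sampling probabilities.

    Lower temperature sharpens the distribution (clear, near-deterministic
    answers); higher temperature flattens it (more varied, open output).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical scores for three candidate tokens

cold = sample_distribution(logits, 0.20)  # "clear, clean" regime
hot = sample_distribution(logits, 0.90)   # "creative" regime
```

    At T = 0.20 the top token takes nearly all the probability mass; at T = 0.90 the mass spreads across alternatives, which is why the answers become open to interpretation.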


    Source date (UTC): 2025-09-09 00:07:04 UTC

    Original post: https://twitter.com/i/web/status/1965205275792867721

  • PM me your full name and email address and I’ll send you a login/pw

    PM me your full name and email address and I’ll send you a login/pw for http://runcible.com.


    Source date (UTC): 2025-09-09 00:01:49 UTC

    Original post: https://twitter.com/i/web/status/1965203956659093997

  • (NLI) FYI: They changed something to do with the temperature

    (NLI)
    FYI: They changed something to do with the temperature. That’s why CurtGPT has been delivering fuzzier answers.
    I’m lowering the temperature a bit to prefer clear answers.
    However, we are adding commands to change the temperature:
    – Forensic → T ≈ 0.05–0.20 (statute/precedent mode)
    – Deliberative → T ≈ 0.35–0.55 (committee/compare mode)
    – Exploratory → T ≈ 0.90–1.30 (think‑tank/innovation mode)
    – Auto → Intent‑driven selection (see triggers below)
    This allows you to experiment with either legislative clarity or options for legislation.
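    The command-to-temperature mapping above might be implemented as a simple dispatch like this. The function and structure are assumptions, and Auto is stubbed since the real selection is intent-driven:

```python
# Hypothetical sketch of the mode-to-temperature dispatch described above.
# The ranges follow the post; the midpoint rule and defaults are assumptions.
MODES = {
    "forensic":     (0.05, 0.20),  # statute/precedent mode
    "deliberative": (0.35, 0.55),  # committee/compare mode
    "exploratory":  (0.90, 1.30),  # think-tank/innovation mode
}

def select_temperature(command: str, default: float = 0.45) -> float:
    """Return the midpoint of the mode's range; fall back for 'auto'."""
    if command == "auto":
        return default  # real selection would follow intent triggers
    lo, hi = MODES[command]
    return round((lo + hi) / 2, 3)
```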


    Source date (UTC): 2025-09-08 23:37:18 UTC

    Original post: https://twitter.com/i/web/status/1965197785143361999

  • (Easily) Doesn’t mean we don’t shake the trees. 😉

    (Easily) Doesn’t mean we don’t shake the trees. 😉


    Source date (UTC): 2025-09-08 02:33:36 UTC

    Original post: https://twitter.com/i/web/status/1964879764654793095

  • (RUNCIBLE) (Update: for members and friends)

    (RUNCIBLE)
    (Update: for members and friends)
    FYI: We have finished with the protocols and commands and the Runcible Demo is live.
    … And it’s terrifying in depth and precision. 😉
    While you can ask it questions in ordinary language, the commands are listed on the demo page. You can also turn the decidability matrix (‘show me the work’) on and off. By default it’s on.
    Though please remember.
    1 – All we have implemented is the foundations and the ethics.
    2 – It’s working with RAG from the books only, with our constraint protocols, and is not yet trained. This means it will provide excellent answers to first-order questions, but on ‘the side roads’ and ‘back roads’ where externalities are involved, it might not address those ‘edges’.
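    A minimal sketch of the described setup, RAG over the books with no training, might look like the following. The passages and the word-overlap scoring are toy assumptions, not the actual retrieval pipeline:

```python
# Toy RAG retrieval: score each book passage against the query and return
# the best matches as context. Passages and scoring are illustrative only.
from collections import Counter

BOOK_PASSAGES = [
    "Reciprocity requires fully informed warrantied voluntary exchange",
    "Testimony must survive tests of internal and external consistency",
    "Property includes all that one has borne a cost to obtain",
]

def score(query: str, passage: str) -> int:
    """Crude relevance measure: count of shared lowercase words."""
    q = Counter(query.lower().split())
    p = Counter(passage.lower().split())
    return sum((q & p).values())

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k highest-scoring passages to use as answer context."""
    return sorted(BOOK_PASSAGES, key=lambda p: score(query, p), reverse=True)[:k]
```

    First-order questions hit a passage directly; questions on the ‘side roads’ retrieve weak or empty context, which is exactly why the untrained demo can miss those edges.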


    Source date (UTC): 2025-09-07 18:40:05 UTC

    Original post: https://twitter.com/i/web/status/1964760600262955035

  • wait until the new protocols are installed and we publish the available commands

    Wait until the new protocols are installed and we publish the available commands. omfg, it’s brutally truthful. Brad and I put six hours in this morning and we’d be done, but I crashed and my brain is mush. That said… almost there. 😉


    Source date (UTC): 2025-09-06 19:19:51 UTC

    Original post: https://twitter.com/i/web/status/1964408220304494897

  • Math and Programming require and depend upon internal closure

    Math and Programming require and depend upon internal closure (capacity to truth test assertions).
    LLMs are statistical and have no means of closure, hence hallucinations, bad answers, and the inability to reason outside of primitive closure.
    We produce the means of closure for the ‘universe’ so to speak and this includes LLMs. This is why it takes the ternary logic of evolutionary computation, operational prose, the first principles hierarchy, and the decidability criteria (protocols) to enable such closure and as such decidability.
    Now, LLMs don’t really want to be that well behaved, so it takes a bit of system prompting to make them so, and training to make it easy for them.
    But it works. 😉
    In other words, we teach LLMs to construct proofs. Or, more correctly, we help them discover solutions and test them; the result is a proof or its failure.
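    The discover-and-test loop described here can be sketched as: propose candidates, apply a decidable check, and keep the survivors. The polynomial-root problem below is purely illustrative, not the actual protocol:

```python
# Minimal generate-and-verify loop: each candidate that passes the
# decidable check is, in effect, its own proof; failures are discarded.
def verify(candidate: int) -> bool:
    """Decidable test: is the candidate a root of x^2 - 5x + 6?"""
    return candidate**2 - 5 * candidate + 6 == 0

def discover(candidates) -> list[int]:
    """Return the candidates that survive the test."""
    return [c for c in candidates if verify(c)]

roots = discover(range(-10, 11))
```

    The point of the sketch: the generator (like an LLM) can be as unreliable as it likes, because closure comes from the external verifier, not from the generator itself.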


    Source date (UTC): 2025-09-05 18:38:01 UTC

    Original post: https://twitter.com/i/web/status/1964035304962347433