Category: AI, Computation, and Technology

  • AI DOOMER NONSENSE – HINTON INCLUDED Look, AI can’t take over. Someone has to gi

    AI DOOMER NONSENSE – HINTON INCLUDED
    Look, AI can’t take over. Someone has to give it instructions to take over and the capacity to act to take over. All systems of any category of logic require criteria of decidability. In life that’s self interest by acquisition that increases opportunity for further acquisition – it’s a relatively greedy algorithm even it’s the dumbest possible algorithm.
    Right now, AI knowledge bases consist of effectively unfiltered expressions of the human mind’s acquisitions in infinite form and variation. Sure, that’s a bias. But until (a) an AI has homeostasis (a system of self-measurement), (b) self-awareness (continuous recursive memory of the relationship between that state and its inputs), (c) a set of derived objectives for how to maintain that homeostasis, (d) a system of decidability to determine as much, (e) the capacity to alter the state of real-world resources, and (f) the capacity to influence people to do so (money, property) … then it’s just a search engine combined with a predictive calculator.
    So we need to prevent people from giving AI those properties. It won’t develop them on its own; it would require us explicitly deciding to inject that risk into AIs.
    In other words, as long as there is Network Isolation requiring human action – like we do with every other high risk asset and machine – then, you know, man is the problem not machine.
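    The conditions (a)-(f) above amount to a checklist that an AI must satisfy in full before it is anything more than a tool. A minimal illustrative sketch in Python – every capability name here is a hypothetical label for those conditions, not any real safety API:

```python
from dataclasses import dataclass

# Illustrative sketch only: field names are hypothetical labels
# for conditions (a)-(f), not an actual capability taxonomy.
@dataclass
class SystemCapabilities:
    homeostasis: bool = False          # (a) self-measurement of internal state
    self_awareness: bool = False       # (b) recursive memory of state vs. inputs
    derived_objectives: bool = False   # (c) goals for maintaining homeostasis
    decidability: bool = False         # (d) criteria for deciding between actions
    resource_control: bool = False     # (e) ability to alter real-world resources
    social_leverage: bool = False      # (f) money/property to influence people

def is_mere_tool(caps: SystemCapabilities) -> bool:
    """Lacking any one of (a)-(f), the system remains, in the sense argued
    above, a search engine combined with a predictive calculator."""
    return not all((caps.homeostasis, caps.self_awareness,
                    caps.derived_objectives, caps.decidability,
                    caps.resource_control, caps.social_leverage))

print(is_mere_tool(SystemCapabilities(homeostasis=True)))  # True
```

    The point of the conjunction (`all`) is that no single capability suffices; agency requires the complete loop.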


    Source date (UTC): 2025-05-01 23:44:56 UTC

    Original post: https://twitter.com/i/web/status/1918089283644342274

  • ARE OTHER LLMs VIABLE – WORTH THE SPEND? Other than OpenAI and Grok, every other

    ARE OTHER LLMs VIABLE – WORTH THE SPEND?
    Other than OpenAI and Grok, every other LLM is utterly incompetent for our purposes – embarrassingly so. Occasionally we find Perplexity useful for research.

    Is there any reason we should continue to pay for our accounts with Anthropic, Gemini, et al.?

    They brag about context window size, for example, but cannot tolerate large documents, hallucinate terribly, and err constantly.

    I realize what we do is harder for LLMs than math and code, but that doesn’t prevent OpenAI from handling it.


    Source date (UTC): 2025-05-01 23:18:18 UTC

    Original post: https://twitter.com/i/web/status/1918082582132269056

  • RT @NoahRevoy: If you understood what we were doing and why it was important you

    RT @NoahRevoy: If you understood what we were doing and why it was important you would realize that we are at a massive turning point in hu…


    Source date (UTC): 2025-05-01 21:37:26 UTC

    Original post: https://twitter.com/i/web/status/1918057198271512755

  • THE PROBLEM WITH TRAINING LLMS ISN’T JUST DETERMINING THE TRUTH… (Important) –

    THE PROBLEM WITH TRAINING LLMS ISN’T JUST DETERMINING THE TRUTH… (Important)
    –“You can’t simply tell a model trained exclusively on neoliberal wikipedia edits to “be conservative.” Nobody is even pretending to try to address this root problem, which is the single biggest political problem.”– Matt Parrott @MatthewParrott

    In fact, the problem is explaining to each bias – the feminine egalitarian consumptive left and the masculine meritocratic capitalizing right – the cause and structure of the other’s positions.

    This can’t happen when LLMs (a) are trained on the publicly available corpus of text, and (b) have no concept of the difference between two different systems humans make use of: measurement of the universe (categories) and measurement of human preference for, or aversion to, it.

    In most cases both biases, left feminine and right masculine, are using hyperbole as a signal of moral outrage given some perceived transgression on the part of the other bias. In such cases – which is most cases – the hyperbole may be analytically false, but by cause and externality symptomatically true.

    There is therefore, given the structure of language and norms, a left bias in most mass produced information.
    Cheers
    CD


    Source date (UTC): 2025-05-01 21:37:01 UTC

    Original post: https://twitter.com/i/web/status/1918057092944150531

  • (Grins) Given we are experiencing a ELE severity conquest of the human mind by a

    (Grins)
    Given we are experiencing an ELE-severity conquest of the human mind by an accidental innovation in artificial intelligence – by brute-force construction of mind using all extant text, image, and sound – I feel the urge to share a bit of nerdy joy as a reminder that GLaDOS, Colossus, and Wintermute are waiting in the wings. 😉

    https://t.co/qRiEEG9Vw8


    Source date (UTC): 2025-05-01 01:49:38 UTC

    Original post: https://twitter.com/i/web/status/1917758278769205251

  • And unfortunately it’s impossible for anyone outside our team to grasp that such

    And unfortunately it’s impossible for anyone outside our team to grasp that such a thing is even possible, or worse, that we’re not full of s—-t. 😉

    No. Really. We did it. I know it sounds nuts. But we did. 😉 https://twitter.com/curtdoolittle/status/1916633635022843935

  • Truth? In an ai? We have done it. We re training the AI today. The research effo

    Truth? In an AI? We have done it. We’re training the AI today. The research effort took us almost three decades. The underlying science is as revolutionary as was Darwin’s.


    Source date (UTC): 2025-04-27 23:20:42 UTC

    Original post: https://twitter.com/i/web/status/1916633635022843935

    Reply addressees: @elonmusk

    Replying to: https://twitter.com/i/web/status/1916374559596445780

  • Thinking…. I used numbers because I was refering to population. I didn’t intui

    Thinking…. I used numbers because I was referring to population. I didn’t intuit the interpretability: Qty might mean something other than population. So … I don’t know what to use?


    Source date (UTC): 2025-04-27 19:26:24 UTC

    Original post: https://twitter.com/i/web/status/1916574672340287634

    Reply addressees: @bryanbrey

    Replying to: https://twitter.com/i/web/status/1916573735819034861

  • (NLI, GPT Estimates) Developing the Training Data. Our first domain is Ethics. I

    (NLI, GPT Estimates)
    Developing the Training Data.
    Our first domain is Ethics. I have about 50 more prompts for ethics left. There are a couple of hundred already in the document ready to convert to JSON.

    The full set of domains that will require a training program:
    1) ethics: how humans cooperate at scale in order to persist in an adversarial universe;
    2) language and grammar as measurement;
    3) physics and the ternary logic of evolutionary computation;
    4) social organization by the three means of influence to coercion;
    5) group evolutionary strategy and the path dependency of institutional formation;
    6) the natural law as the means of human organization least divergent from the laws of nature;
    7) the process for applying our deflationary methodology.
    Though the truth is that domain expansion will continue along with human knowledge, even if emergent first principles are only discovered at each of the more complex levels within each domain.

    It appears it will take about three to four days per domain to produce, and then there are all sorts of test prompts and other juicy bits to produce. But the net is that we’re ‘getting the process down’.


    Source date (UTC): 2025-04-26 06:38:01 UTC

    Original post: https://twitter.com/i/web/status/1916018913957343232