Category: AI, Computation, and Technology

  • (Runcible) Don’t know if y’all saw Satya Nadella (Microsoft) explain that the ex

    (Runcible)
    Don’t know if y’all saw Satya Nadella (Microsoft) explain that the existing software, platform, and SaaS stack is going to be replaced to sit on top of AI.
    Now I’ve known this for almost twenty years. It’s just AI accelerated because of the LLM innovation, much faster than I’d anticipated.
    That said, our product (Runcible) was designed from the beginning (2012) as a universal platform for organizational cooperation.
    We are working on two technology initiatives at present: One is a formal logic compiler that will compensate for the imprecision of LLMs, and we’re also training multiple LLMs to test their ability to explain the precision the compiler produces.
    This is possible because of my work on Volumes 2 through 4, but in particular volume 2, which contains the science of decidability that allows us to test the truth and reciprocity (ethics, morality) of statements.
    Cheers
    -CD
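
A hedged sketch of the general idea behind pairing a probabilistic LLM with a deterministic formal check: a brute-force propositional-logic tautology checker. This illustrates decidability-by-enumeration only; it is not Runcible's compiler, and every name below is hypothetical.

```python
from itertools import product

# Hypothetical sketch: a deterministic check that a claim, once translated
# into propositional logic, holds under every truth assignment.
# Formulas are nested tuples: ("var", name), ("not", f), ("and", f, g),
# ("or", f, g), ("implies", f, g).

def variables(formula):
    """Collect the variable names used in a formula."""
    kind = formula[0]
    if kind == "var":
        return {formula[1]}
    return set().union(*(variables(sub) for sub in formula[1:]))

def evaluate(formula, assignment):
    """Evaluate a formula under a truth assignment (dict: name -> bool)."""
    kind = formula[0]
    if kind == "var":
        return assignment[formula[1]]
    if kind == "not":
        return not evaluate(formula[1], assignment)
    if kind == "and":
        return evaluate(formula[1], assignment) and evaluate(formula[2], assignment)
    if kind == "or":
        return evaluate(formula[1], assignment) or evaluate(formula[2], assignment)
    if kind == "implies":
        return (not evaluate(formula[1], assignment)) or evaluate(formula[2], assignment)
    raise ValueError(f"unknown connective: {kind}")

def is_tautology(formula):
    """Brute-force decidability: true under every possible assignment."""
    names = sorted(variables(formula))
    return all(
        evaluate(formula, dict(zip(names, values)))
        for values in product([True, False], repeat=len(names))
    )

# (p -> q) is not a tautology; (p or not p) is.
modus_shape = ("implies", ("var", "p"), ("var", "q"))
excluded_middle = ("or", ("var", "p"), ("not", ("var", "p")))
```

Enumeration is exponential in the number of variables, which is fine for small statements; a real compiler would use a proper decision procedure.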


    Source date (UTC): 2025-05-05 18:50:45 UTC

    Original post: https://twitter.com/i/web/status/1919464800851263490

  • Some Hollywood studio contacted me back in the mid ’00s and asked me to write an

    Some Hollywood studio contacted me back in the mid ’00s and asked me to write an AI that would at least shallowly respond as if it were God. It wasn’t possible to pass even a shallow Turing test at that point. And I was too busy at the time anyway. But with today’s tech, and some hard work, one could pull it off well enough to deceive the room-temperature-IQ crowd. Make it a prophet instead of a god, and it’d be even easier. πŸ™

    My work benefits so much from at least ChatGPT, Grok, and Perplexity that I perceive any threat to the flexibility of AI as a threat to me and my work. But policing AIs against taking advantage of even just the mouth-breathers is a non-trivial request.

    Reply addressees: @xlr8harder


    Source date (UTC): 2025-05-05 18:46:19 UTC

    Original post: https://twitter.com/i/web/status/1919463687284523014

  • Untitled

    https://x.com/i/grok/share/GYIhwWHqrmAqUQX9cW47WbKMz


    Source date (UTC): 2025-05-04 16:35:17 UTC

    Original post: https://twitter.com/i/web/status/1919068323456995368

  • A Problem that LLMs Can’t Solve. Stop it from getting confused and lost while ed

    A Problem that LLMs Can’t Solve.
    Stop it from getting confused and lost while editing a 500-page book of analytic philosophy. The logic is as tight as math or programming. But because it’s probabilistic and not deterministic, and because the corpus of existing text is so variable, it’s the very means by which we illustrate how LLMs fail.

    Also: LLMs require episodic memory. πŸ˜‰


    Source date (UTC): 2025-05-04 00:27:18 UTC

    Original post: https://twitter.com/i/web/status/1918824720302313861

    Reply addressees: @ns123abc

    Replying to: https://twitter.com/i/web/status/1918582193409925306


    IN REPLY TO:

    @ns123abc

    show me a problem that you think LLMs can’t solve and I’ll correctly prompt show you why you’re wrong

    Original post: https://twitter.com/i/web/status/1918582193409925306
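
The episodic-memory remark is concrete enough to illustrate. Below is a minimal, assumed design in Python: each editing decision is stored with its context and recalled by word overlap before the next pass over the manuscript. This is a sketch of one possible mechanism, not any product's implementation.

```python
from collections import Counter

class EpisodicMemory:
    """Minimal sketch: store past editing episodes and recall the most
    relevant ones by word overlap, so a long editing session does not
    lose track of earlier decisions."""

    def __init__(self):
        self.episodes = []  # list of (context, decision) pairs

    def record(self, context, decision):
        self.episodes.append((context, decision))

    def recall(self, query, k=3):
        """Return up to k past decisions whose context shares the most
        words with the current query."""
        q = Counter(query.lower().split())
        scored = [
            (sum((q & Counter(ctx.lower().split())).values()), decision)
            for ctx, decision in self.episodes
        ]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [decision for score, decision in scored[:k] if score > 0]

memory = EpisodicMemory()
memory.record("chapter 2 definition of decidability", "keep the formal definition verbatim")
memory.record("chapter 7 appendix formatting", "use numbered lemmas")
relevant = memory.recall("editing the decidability section of chapter 2")
```

A production system would use embeddings rather than word overlap, but the point stands either way: the retrieval layer, not the model, carries state across a 500-page edit.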

  • On facebook my posts would be seen by 10k people at least, and generate around 1

    On Facebook my posts would be seen by at least 10k people and generate around 100 comments per hour. Here on Twitter it’s 1% of that.

    I know the Twitter algorithm is designed to produce echo chambers and to isolate non-conformists.

    But it makes Twitter function more as a journal or diary for me than a medium in which I can teach.

    We are taking action against FB but I don’t have high expectations. πŸ˜‰


    Source date (UTC): 2025-05-02 20:53:02 UTC

    Original post: https://twitter.com/i/web/status/1918408410447659259

  • AI DOOMER NONSENSE – HINTON INCLUDED Look, AI can’t take over. Someone has to gi

    AI DOOMER NONSENSE – HINTON INCLUDED
    Look, AI can’t take over. Someone has to give it instructions to take over and the capacity to act on them. All systems in any category of logic require criteria of decidability. In life that criterion is self-interest: acquisition that increases the opportunity for further acquisition. It’s a relatively greedy algorithm, even if it’s the dumbest possible algorithm.
    Right now, AI knowledge bases consist of effectively unfiltered expressions of the human mind’s acquisitions, in infinite form and variation. Sure, that’s a bias. But until (a) an AI has homeostasis (a system of self-measurement), (b) self-awareness (continuous recursive memory of the relationship between that state and its inputs), (c) a set of derived objectives for maintaining that homeostasis, (d) a system of decidability to determine as much, (e) the capacity to alter the state of real-world resources, and (f) the capacity to influence people to do so (money, property) … then it’s just a search engine combined with a predictive calculator.
    So we need to prevent people from giving AI those properties. It will not develop them unless we explicitly decide to inject that risk into AIs.
    In other words, as long as there is network isolation requiring human action – as with every other high-risk asset and machine – then, you know, man is the problem, not the machine.
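
The post's list of preconditions for autonomous AI action can be written down as an explicit checklist. A minimal sketch in Python; every property name here is my own paraphrase of the list, not established terminology:

```python
from dataclasses import dataclass

@dataclass
class SystemProperties:
    # Paraphrase of the post's preconditions for autonomous AI action.
    homeostasis: bool = False          # self-measurement of internal state
    self_awareness: bool = False       # recursive memory of state vs. inputs
    derived_objectives: bool = False   # goals for maintaining homeostasis
    decidability: bool = False         # a system for deciding between actions
    resource_control: bool = False     # ability to alter real-world resources
    human_influence: bool = False      # money/property to influence people

def can_act_autonomously(s: SystemProperties) -> bool:
    """The post's claim: all preconditions must hold; absent any one,
    the system remains 'a search engine combined with a predictive
    calculator'."""
    return all(vars(s).values())

# A present-day LLM deployment behind network isolation: at most one
# of the properties holds, so the conjunction fails.
llm_today = SystemProperties(decidability=True)
```

The conjunction makes the network-isolation argument mechanical: keeping even one property false (by policy, like any other high-risk asset) keeps the predicate false.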


    Source date (UTC): 2025-05-01 23:44:56 UTC

    Original post: https://twitter.com/i/web/status/1918089283644342274