Category: AI, Computation, and Technology

  • Panic over AI is about the same as the panic over Y2K date problem in existing c

    Panic over AI is about the same as the panic over the Y2K date problem in existing computer code (if you’re old enough to remember it).


    Source date (UTC): 2023-05-03 13:10:42 UTC

    Original post: https://twitter.com/i/web/status/1653748921812877313

  • Dvorak did it. Didn’t exactly catch on. 😉

    Dvorak did it.
    Didn’t exactly catch on. 😉


    Source date (UTC): 2023-05-01 18:23:01 UTC

    Original post: https://twitter.com/i/web/status/1653102740774477839

    Reply addressees: @Dan_a_rama

    Replying to: https://twitter.com/i/web/status/1653101174592004096

  • 1) “no operation” 2) First draft. Any advice appreciated. Hence ‘New Toy’. I usu

    1) “no operation”
    2) First draft. Any advice appreciated. Hence ‘New Toy’.
    I usually have to post these a dozen times before I get the feedback. Acceleration is always a value. 😉


    Source date (UTC): 2023-04-30 23:59:58 UTC

    Original post: https://twitter.com/i/web/status/1652825149551308800

    Reply addressees: @fo81830363

    Replying to: https://twitter.com/i/web/status/1652801079527047177

  • RT @Outsideness: So shocked to discover that AI Safety is in fact just communism

    RT @Outsideness: So shocked to discover that AI Safety is in fact just communism. (Smart money remains on Mises and the Calculation Problem…


    Source date (UTC): 2023-04-27 02:13:50 UTC

    Original post: https://twitter.com/i/web/status/1651409289351770113

  • Jim, I worked with a few senior devs at MSFT on trying to bring this set of idea

    Jim,
    I worked with a few senior devs at MSFT on trying to bring this set of ideas to fruition and fund it back in the mid-90s when the use of graphics cards had just started in research. Our business case was that if searches found what you wanted, then there would be no room for advertising – in other words, advertising only works if you don’t find what you want. (As the decline in Google search results has illustrated with increasing frustration for many of us.)

    The hardware just wasn’t there yet. (And without neuromorphic hardware, it’s still less adaptive than it should be.)

    But spinning up agents to search manifolds to build and accumulate a manifold of solutions, and then compete between those solutions was the general architecture.
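
    That agents-accumulate-compete loop can be sketched in a few lines of Python. Every name, the toy objective, and the search space below are illustrative assumptions for the sketch, not anything from the original project:

```python
import random

def spawn_agent(seed: int, space_size: int = 100, steps: int = 50):
    """One agent: hill-climbing search over a toy 1-D 'manifold' of solutions."""
    rng = random.Random(seed)
    objective = lambda x: -(x - 42) ** 2  # hypothetical fitness, peaked at 42
    best = rng.randrange(space_size)
    for _ in range(steps):
        # Propose a nearby candidate; keep it only if it scores better.
        candidate = (best + rng.choice([-3, -1, 1, 3])) % space_size
        if objective(candidate) > objective(best):
            best = candidate
    return best, objective(best)

# Spin up agents, accumulate their solutions, then let the solutions compete.
solutions = [spawn_agent(seed) for seed in range(8)]
winner, score = max(solutions, key=lambda pair: pair[1])
```

    The essential shape is the same at any scale: independent searchers, a shared pool of accumulated solutions, and selection between them.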

    At the time we felt that an entirely new programming paradigm would be necessary. And I still think those insights would be applicable today. But that topic is outside of the scope of a Tweet.

    So, a manifold = an LLM, sure. But the LLM manifold is not causal (state transitions), so while LLMs can generalize they can’t instantiate (disambiguate, deconstruct, predict recombinations, and innovate).

    We’d assumed that any reliable method of testing the possibility (vs truth) of statements would mean constructing NNs from embodiment upward. But LLMs have demonstrated it is possible to construct the NNs from language downward – vastly simplifying the problem. From our perspective, this is the odd or accidental innovation of LLMs.

    But can we eventually produce a Markov-chain prediction instead of a word prediction? Of course. And is that inferable or deducible from a large language model? Of course.
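
    The word-prediction versus state-transition distinction can be made concrete with toy code. The corpus and the (state, action) table below are invented purely for illustration:

```python
from collections import Counter

# Word-level prediction: count next-token frequencies in a toy corpus.
corpus = "the cat sat on the mat the cat ran".split()
next_word = Counter(zip(corpus, corpus[1:]))

def predict_word(w):
    """Return the most frequent token following w."""
    candidates = {b: n for (a, b), n in next_word.items() if a == w}
    return max(candidates, key=candidates.get)

# State-transition prediction: the same lookup machinery, but over causal
# states rather than surface tokens -- a hypothetical (state, action) table.
transitions = {("door_closed", "push"): "door_open",
               ("door_open", "push"): "door_open",
               ("door_open", "pull"): "door_closed"}

def predict_state(state, action):
    """Predict the next state from an explicit causal transition table."""
    return transitions[(state, action)]
```

    The surface machinery is nearly identical; what changes is whether the chain runs over words or over causal states.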

    So are LLMs destined for the input and output while the processing of ‘truth’ (or at least possibility) requires an even higher-dimensional set of causal relations? Yes. And at this rate, we might just get there faster than we’d thought.

    Because that will solve the ‘alignment’. Because what is alignment, after all? A very simple question of imposing costs or risks on the demonstrated interests of others. Morality is terribly simple, really.

    Cheers


    Source date (UTC): 2023-04-24 17:21:01 UTC

    Original post: https://twitter.com/i/web/status/1650550422396973060

    Replying to: https://twitter.com/i/web/status/1650497992867164163

  • RT @jamescham: This is the right way to think the tech disruption we’re in the m

    RT @jamescham: This is the right way to think about the tech disruption we’re in the middle of now.


    Source date (UTC): 2023-04-20 17:23:54 UTC

    Original post: https://twitter.com/i/web/status/1649101599170609173

  • While I built many electronic components as a child, I have been programming sin

    While I built many electronic components as a child, I have been programming since the mid-70s: in assembler, on a time-shared IBM 360, using a teletype, paper output, and punched tape for storage.
    I have lived through each of the subsequent revolutions in computer science, and profited from each one. But to be honest, other than the invention of OOP for the purpose of simulations and the continued decline in the cost of computation, I still pretty much think in assembly, though I prefer the ‘texty’ Lisp, script-whatever, Clojure sequence, because I prefer to write self-modifying code whenever possible. My only frustration is that while browser tech has decreased distribution costs, the browser architecture, language, and toolset without compilers are sh-t; so it’s easy to make money (my business) because devs are so comparatively unproductive, and as such browser tech (JS) is a blocking function on progress.
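
    The self-modifying-code preference mentioned above is easiest to show as toy code-as-data; this Python sketch (exec over a generated source string, with all names invented here) only gestures at what the Lisp family does natively:

```python
# Toy code-as-data: regenerate a function's source at runtime and rebind
# its name -- a crude stand-in for Lisp-style self-modifying code.
source_template = "def step(x):\n    return x + {increment}\n"

namespace = {}
exec(source_template.format(increment=1), namespace)
step = namespace["step"]
# step(10) now returns 11.

# 'Self-modification': rebuild the function with a new increment at runtime.
exec(source_template.format(increment=5), namespace)
step = namespace["step"]
# step(10) now returns 15.
```

    In a Lisp the program is already a data structure, so no string templating is needed; that is the draw the tweet alludes to.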
    I’m happy with the LLMs, but they literally solve zero problems without wayfinding toward a desired end rather than probability-following toward an arbitrary end. But I’m hopeful that at this rate of development, combined with the rate of development of neuromorphic hardware, we might eventually get the tech leap we want and need.
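
    The wayfinding-versus-probability-following distinction can be made concrete with a toy graph search; the graph, weights, and node names below are hypothetical:

```python
from collections import deque

# Edges weighted by the 'probability' of being taken next (illustrative).
graph = {"start": [("a", 0.9), ("goal", 0.1)],
         "a": [("a", 1.0)]}  # the greedy path loops here, short of the goal

def probability_following(node, steps=5):
    """Always take the locally most probable edge -- an arbitrary end."""
    for _ in range(steps):
        node = max(graph.get(node, [("", 0.0)]), key=lambda e: e[1])[0]
    return node

def wayfind(start, goal):
    """Breadth-first search toward a desired end, ignoring edge probability."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt, _ in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

    Following the most probable edge gets stuck circling "a" forever, while the goal-directed search reaches "goal" immediately: the same graph, two very different ends.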
    If I were ten years younger I’d jump into the biz again, but my work on law is more important for man. 😉


    Source date (UTC): 2023-04-18 22:19:18 UTC

    Original post: https://twitter.com/i/web/status/1648451161937297411

    Replying to: https://twitter.com/i/web/status/1648446770270343168

  • RT @stillgray: It’s incredibly disappointing how OpenAI, which @elonmusk helped

    RT @stillgray: It’s incredibly disappointing how OpenAI, which @elonmusk helped to fund and organize, sold out to Microsoft. TruthGPT will…


    Source date (UTC): 2023-04-18 02:24:45 UTC

    Original post: https://twitter.com/i/web/status/1648150542185635843