Theme: AI

  • (regarding the AI revolution)
    As a former serial tech entrepreneur with early career origins in AI before its multiple winters, this is the first time in a long while that it’s been really fun to be alive. Because this tech is like giving machine guns to hunter-gatherers. The Wild West metaphor pales by comparison. The field of opportunities is growing faster than anyone can seize any of them. 😉


    Source date (UTC): 2023-04-06 20:23:32 UTC

    Original post: https://twitter.com/i/web/status/1644073372840611844

  • ChatGPT4 may not be terribly good at reasoning, but the rate at which the brain trust is throwing IQ points at innovation on it is something I’ve never seen before in my life. So at the very least the recent LLMs have broken the threshold of minimum competency upon which those without profound amounts of capital can innovate. Just the stuff I’ve seen roll out this week is insane. I remember waiting for code to come out printed in magazines that we could type into the console and save to punch tape… lol


    Source date (UTC): 2023-04-05 02:14:19 UTC

    Original post: https://twitter.com/i/web/status/1643436875384750082

  • Taleb is even more overly sensitive than I am. Look, he had a great idea. I can explain why his project failed – so far. But it’s not going to fail in the future when AIs produce a ‘unit of measure’ that he’s missing.

    As for his Fat Tony character vs the credentialists, he’s only half right, and the part he’s wrong about he can’t face for personal reasons.

    Debating him is impossible.
    It’s too bad because, at least to me, he’s a hero for the same reasons most of us came to the same conclusions at about the same times: “Everything interesting, worse, everything MEANINGFUL, happens at the edges (outliers).” So anti-fragility is more important than maximizing utility.

    What I got right and he didn’t (because I studied AI, economics, and law) is that you can only outlaw categories of bad behavior. You can’t calculate it in advance.

    Reply addressees: @MartinC86461960 @DanAnde23836316 @nntaleb


    Source date (UTC): 2023-04-03 19:48:28 UTC

    Original post: https://twitter.com/i/web/status/1642977386554695680

    Replying to: https://twitter.com/i/web/status/1642976007219998720

  • Me: “Well, the LLMs are contextually aware, but not self-aware, and certainly not self-vs-world aware.”

    Martin: “Many people aren’t self-aware either.”

    See how he does that? You see?
    Now, a proper English intellectual would say that with a bit of wry humor. But a proper Czech intellectual just states it as an obvious expressionless fact, as if you should know it already. 😉

    You should follow Martin (@TheAutistocrat) 😉


    Source date (UTC): 2023-04-03 18:00:36 UTC

    Original post: https://twitter.com/i/web/status/1642950239819538441

  • EMBEDDING P-LAW IN LLMs: TURNS OUT, YES.
    All;
    So I’m getting down in the weeds on the new LLMs, particularly GPT4. And the emergent properties are fascinating. It’s not self-aware, and so not self-falsifying or moral-testing, but it *is* producing the equivalent of ‘framing’ (paradigming) a context. The addition of reflection and falsification (recursion) appears to work. And it’s clear the community understands the problem at this point, but I’m not really sure why the solution to GPT’s ‘bad hypothesis’ problem isn’t obviously adversarial falsification and recursion. I mean… what do you think the social function of our brains is? Predicting others’ behavior so that we falsify our own intuitionistic behaviors that are more selfish.
    This turns out to be another problem of CRD (continuous recursive disambiguation), in that the knowledge needed to ask an unambiguous question may not be present prior to the production of an ambiguous answer. 😉 Which is obvious – and that’s why discourse is necessary (and why we can be a bit ‘mad’ if we don’t have others to test our ideas against).
    So what does this mean for NLI and our formal algorithmic natural law of cooperation, economic ethics, morality, politics, et al?
    It means that I SHOULD take time out to work on integrating NLI’s method, definitions, science, first principles, etc. into one of the LLMs.
    But to do that means I have to complete the work on the first few articles of the constitution that formalize the rest of the ‘sciences’.
    I’m pretty sure that I could make GPT-X write law. And even decide law – incrementally, recursively, until it could decide unambiguously.
    #AI


    Source date (UTC): 2023-04-03 15:18:20 UTC

    Original post: https://twitter.com/i/web/status/1642909406642708482
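
    The “adversarial falsification and recursion” loop in the last post can be sketched in code. This is a minimal illustration, not the author’s implementation: `draft_answer` and `falsify` are hypothetical stand-ins for LLM calls, and the point is the control flow – keep revising an answer until a critic can no longer falsify it, up to a recursion limit.

    ```python
    from typing import Optional

    def draft_answer(question: str, critique: Optional[str] = None) -> str:
        # Hypothetical generator (an LLM call in practice): produce a
        # first draft, or revise the draft in light of a critique.
        if critique is None:
            return f"draft answer to: {question}"
        return f"revised ({critique}) answer to: {question}"

    def falsify(answer: str) -> Optional[str]:
        # Hypothetical adversarial critic (a second LLM call in practice):
        # return an objection, or None if the answer survives.
        # Toy rule: object once to any unrevised first draft.
        return "too vague" if answer.startswith("draft") else None

    def disambiguate(question: str, max_rounds: int = 5) -> str:
        """Continuous recursive disambiguation: draft, falsify, revise."""
        critique = None
        answer = ""
        for _ in range(max_rounds):
            answer = draft_answer(question, critique)
            critique = falsify(answer)
            if critique is None:
                return answer  # no remaining objection
        return answer  # best effort after max_rounds
    ```

    With the toy critic above, one round of falsification forces one revision, after which the critic has no further objection – the same draft/falsify/revise cycle the post argues should fix the ‘bad hypothesis’ problem.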