Theme: AI

  • AI MADNESS. 😉 When I was a child and imagined that if I had the three proverbia

    AI MADNESS. 😉
When I was a child, I imagined that if I had the three proverbial wishes, the first thing I’d wish for would be to know everything in every book in the library. (I’m a nerd. What can I say?)

Now, oddly enough, it turns out that we should run out of text to use to train AIs this year. In other words, they’ll know everything in the library. Not just the library – the entire planet’s libraries, databases, and collections. (Or at least the planet’s entire store of curated information. God knows we don’t need to teach it the content of Hollywood gossip rags.)

And we should shortly see AIs that can search and contextualize video, letting us use text to find just the moment or clip we want.

I remember reading Adler’s work illustrating that there are really only about 1,500 concepts, and that we can communicate effectively with as few as 300 words.

Any student of enough disciplines learns over his lifetime that not only are plots regurgitated every generation, but the same ideas are regurgitated every generation, with precious few added in any generation. In this sense, the theory of gravity has been increasing in precision rather than being falsified. And this same principle applies to nearly everything in human experience over time.

In the course of my work, I came to understand that there is just one concept upon which the entire universe is constructed (evolved), and that there are about twenty principles derived from it that explain all of human existence. … That knowledge was somehow humbling. We really are that simple.

One thing I know we are going to learn, which will both frustrate and please Nassim Taleb, is the amount of information necessary to make a connection, and how it increases with ‘distance’. And that will create a new unit of measurement that will tell us things we’ve wanted to know – but don’t yet realize that this is what we need to know in order to know them.

But will an AI discover those twenty principles as I have? Or what else might it discover if, instead of question and answer, it sought to ponder what questions HAD NOT BEEN, AND WERE NOT BEING, ASKED?

    Now that’s interesting.

    Curt Doolittle
    -Fin-


    Source date (UTC): 2023-02-24 04:35:15 UTC

    Original post: https://twitter.com/i/web/status/1628976827078217733

  • No. As for science and tech, I’m pretty much narrowed down to interests in AI (e

    No. As for science and tech, I’m pretty much narrowed down to interests in AI (esp hardware), Energy, artificial muscles, any possible revolution in materials science, and the problem of mathiness in mathematical physics.


    Source date (UTC): 2023-02-24 03:55:17 UTC

    Original post: https://twitter.com/i/web/status/1628966772110917632

    Reply addressees: @miner49er236

    Replying to: https://twitter.com/i/web/status/1628965094640984064

  • Idea: Adversarial AI self-training model between AI pitch decks randomly seeded

    Idea: Adversarial AI self-training model between AI pitch decks randomly seeded from the top research databases combined with Reddit, and AI VCs.

    If you haven’t raised money you won’t appreciate just how choice that vision is. 😉
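As a toy illustration of that adversarial loop (every name, seed list, and scoring rule below is hypothetical – a founder-AI proposes pitches from seeded fragments, a VC-AI scores them, and the best candidate survives each round):

```python
import random

# Hypothetical stand-ins for the seed corpora named above.
TOPICS = ["protein folding", "graph transformers", "battery chemistry"]  # "research database" seeds
HYPE = ["disrupt", "10x", "platform", "moat", "flywheel"]                # "Reddit" seeds

def generate_pitch(rng):
    """Founder-AI: assemble a pitch from seeded fragments."""
    return f"We {rng.choice(HYPE)} {rng.choice(TOPICS)} with a {rng.choice(HYPE)}."

def vc_score(pitch):
    """VC-AI: a toy critic that rewards distinct hype words."""
    return len({w.strip(".") for w in pitch.split() if w.strip(".") in HYPE})

def adversarial_round(rng, n_candidates=8):
    """One self-play round: founder proposes, VC scores, best pitch survives."""
    candidates = [generate_pitch(rng) for _ in range(n_candidates)]
    return max(candidates, key=vc_score)

rng = random.Random(0)
best = adversarial_round(rng)
print(best, vc_score(best))
```

In a real version both sides would be trained models updating against each other’s feedback; the hill-climbing selection here just stands in for that loop.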


    Source date (UTC): 2023-02-24 03:41:21 UTC

    Original post: https://twitter.com/i/web/status/1628963264494280704

  • RT @ylecun: @percyliang Yes. I think RLHF is hopeless because the space of wrong

    RT @ylecun: @percyliang Yes. I think RLHF is hopeless because the space of wrong answers is very large, and the space of tricky questions h…


    Source date (UTC): 2023-02-24 02:57:16 UTC

    Original post: https://twitter.com/i/web/status/1628952169771679744

  • THE AI HARD PROBLEMS AREN’T HARD PROBLEMS 1) Consciousness is possible and it’s

    THE AI HARD PROBLEMS AREN’T HARD PROBLEMS
1) Consciousness is possible, and it’s not even hard. It’s a natural consequence of the recursive expansion of auto-association of episodic memory into increasing abstraction from the body form (embodiment).
The problem is the hardware, and the correct hardware is under development. Today we must pre-calculate (train) networks because we’re using virtual connections (memory addresses) and a small number of processors, instead of a large number of very tiny processors with limited local memory and physical connections – connections that can form networks which do NOT need to be pre-calculated but can constantly adapt in real time.

2) An AI needs a means of decidability. Humans decide by self-interest. AIs don’t have to use self-interest; they could use reciprocity instead.

    3) An AI doesn’t need a mission. The human mission is the continuous acquisition of ‘more’ of everything vs physical cost of it – even if ‘more’ is just stimulation. An AI doesn’t need a baseline mission.

4) An AI can be ethical and moral. Ethical and moral behavior is programmatic, in that everything is ‘owned’, so to speak, to some degree, by one or more people. So an AI only needs to ‘see’ (have access to) what is permitted.

5) An AI can’t be incredibly creative, for the simple reason that human creativity is limited by the cost of experiments, not by the limits of reasoning. So without the ability, direction, and means of deciding or moralizing, it’s pretty hard to imagine a machine being terribly innovative.

6) What AIs can do is organize production in the service of demand faster than we can. And that’s going to be ‘interesting’ and disruptive.
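Point 4’s ownership model amounts to a default-deny permission check: every resource has an owner, and the AI acts only on what is explicitly permitted. A minimal sketch (the resource names and actions are hypothetical):

```python
# Every resource is 'owned' to some degree; the AI 'sees' only what is permitted.
OWNERSHIP = {
    "medical_records": {"owner": "patient", "permitted": set()},
    "public_papers": {"owner": "everyone", "permitted": {"read", "summarize"}},
}

def ai_may(resource, action):
    """Default-deny: allow only actions the owner has explicitly permitted."""
    entry = OWNERSHIP.get(resource)
    return entry is not None and action in entry["permitted"]

print(ai_may("public_papers", "read"))    # permitted by the owner
print(ai_may("medical_records", "read"))  # owned, nothing permitted
```

The key design choice is that anything unlisted or unpermitted is denied by default, which is what makes the behavior ‘programmatic’.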

    SUMMARY
The danger of AIs comes from people weaponizing them, and from weaponized AIs exceeding the rate at which humans can identify and detect threats. So AIs will protect against malevolent AIs, and we will regulate AIs the way we regulate other dangerous machines, mechanisms, and ‘chemistry’ (explosives).


    Source date (UTC): 2023-02-24 00:26:00 UTC

    Original post: https://twitter.com/i/web/status/1628914104109760512

  • 😉 Now we have just the problem of finishing the work of encoding it, and the wor

    😉 Now we have just the problem of finishing the work of encoding it, and the work of implementing it. 😉


    Source date (UTC): 2023-02-23 22:58:33 UTC

    Original post: https://twitter.com/i/web/status/1628892097066549250

    Reply addressees: @SaitouHajime00 @TheAutistocrat

    Replying to: https://twitter.com/i/web/status/1628889423759937537

  • (worth the read) CORRECT ANALYSIS OF THE POSITIVE CONSEQUENCES OF SEARCH (CHAT)

    (worth the read)
    CORRECT ANALYSIS OF THE POSITIVE CONSEQUENCES OF SEARCH (CHAT) AI
    https://www.jonstokes.com/p/i-say-this-unironically-our-society
    Agree with everything.
    From @jonst0kes


    Source date (UTC): 2023-02-22 20:04:02 UTC

    Original post: https://twitter.com/i/web/status/1628485790257799171

  • SHORT 6 MIN. VIDEO EXPLAINING GPT’S TRAINING “Teaching HAL to Lie” (Lex Friedman

    SHORT 6 MIN. VIDEO EXPLAINING GPT’S TRAINING
    “Teaching HAL to Lie”
    (Lex Fridman on Rogan)
    https://www.youtube.com/watch?v=-KZOyTcV_Vg

    LET ME “SCIENCE” IT FOR YOU:
    1. CONTENT & GRAMMAR: Index the data sets (a lot of the internet) (Episodic Memory)
    2. LOGIC: Teach it programming (Wayfinding)
    3. RHETORIC: Teach it how humans prefer sentences organized into paragraphs and explanations. (Narrating The Way Found)
    4. STYLE: Teach it how different authors color and organize speech. (Loading and Framing)
    5. LYING: Humans use loading, framing, obscuring, fiction, fictionalizing, denial, and undermining. Note that Obscuring = Lying. So they had to program ChatGPT to lie by loading, framing, and obscuring. (Lying)
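    The five stages above map loosely onto a staged training pipeline, where each pass fine-tunes the model produced by the one before it. A toy sketch of that staged structure (stage names follow the list; the data descriptions and code are illustrative, not OpenAI’s actual pipeline):

    ```python
    # Toy sketch of staged training: each stage builds on the previous model.
    def train_stage(model, stage, data):
        """Stand-in for a fine-tuning pass: record what shaped the model."""
        return model + [(stage, data)]

    STAGES = [
        ("content_and_grammar", "web-scale text corpus"),     # 1. episodic memory
        ("logic", "source code"),                             # 2. wayfinding
        ("rhetoric", "instruction/answer pairs"),             # 3. narrating the way found
        ("style", "author-labeled text"),                     # 4. loading and framing
        ("alignment", "human preference rankings"),           # 5. the 'lying' layer
    ]

    model = []  # untrained model
    for stage, data in STAGES:
        model = train_stage(model, stage, data)

    print([stage for stage, _ in model])
    ```

    The point the sketch makes is that order matters: each pass assumes the capabilities laid down by the earlier ones.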

    WHY DOES IT MATTER?
    1. They are teaching HAL (2001: A Space Odyssey) to “Lie”.
    2. This is pretty much what your brain does, and in exactly that order. 😉
    3. For those who follow my work, note the vast improvement created by programming. As I said, programming was a monumental increase in human thought, as well as in what humans could think about and experiment with.

    – Cheers


    Source date (UTC): 2023-02-22 18:42:43 UTC

    Original post: https://twitter.com/i/web/status/1628465323857920007

  • RT @KanekoaTheGreat: #1 David Sacks breaks down how the safety layer of ‘ChatGPT

    RT @KanekoaTheGreat: #1 David Sacks breaks down how the safety layer of ‘ChatGPT is a Democrat’:

    “There is mounting evidence OpenAI’s safe…


    Source date (UTC): 2023-02-21 20:20:05 UTC

    Original post: https://twitter.com/i/web/status/1628127438403211308