RT @curtdoolittle: @RealLouisPrince @NektariosAI @GaryMarcus Hmm… If I do a few searches, and find that no one else is discussing the sub…
Source date (UTC): 2023-03-30 09:25:43 UTC
Original post: https://twitter.com/i/web/status/1641371115246190599
Hmm… If I do a few searches and find that no one else is discussing the subject with clarity, then I can do that. There is a lot of ‘noise’ being produced and not much signal. I suspect that’s because few people understand the mechanics of both the brain and the software, and fewer can reduce the difference to an explanation digestible by the curious.
In simple terms, the language model is getting very close to the same dependency chain that our brains use – though artificial ‘neuron’ is now a bit misleading; it’s more like a neural minicolumn or column. Our brains disambiguate everything in relation to the current body position, so our body and actions are the measurements the brain can use to test limits. To some degree the language model produces a bodyless brain in a vat, lacking that embodiment as a system of measure, and it must learn about ‘bodies’ by inference.
Now, we learn about abstract concepts the same way. So it shouldn’t be that impossible to ‘get there’ with enough working memory for wayfinding (recursive searching). And I see that this is happening … a bit. Shallowly. But it’s there.
Reply addressees: @RealLouisPrince @NektariosAI @GaryMarcus
Source date (UTC): 2023-03-30 09:25:40 UTC
Original post: https://twitter.com/i/web/status/1641371099979030534
Replying to: https://twitter.com/i/web/status/1641367638721851392
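To make the embodiment point above concrete, here is a toy sketch (mine, not from the tweet) of disambiguation anchored to body state: a word sense with a bodily precondition is preferred when the current body state matches it, and an abstract sense is reached only by fallback – the way the tweet suggests a bodyless model must reach everything by inference. The vocabulary, senses, and body-state encoding are all invented for illustration.

```python
# Toy illustration: word-sense choice anchored to current body state,
# with an abstract (non-bodily) sense as the fallback. All names and
# values here are invented for the example.

BODY_STATE = {"hands": "on_handlebars", "posture": "seated", "speed_mps": 6.0}

# Each sense carries the bodily preconditions under which it applies.
SENSES = {
    "brake": [
        ("slow_the_bicycle", {"hands": "on_handlebars"}),
        ("halt_a_process", {}),  # abstract sense: no bodily anchor
    ],
}

def disambiguate(word, body_state):
    """Prefer the sense whose bodily preconditions match the current body state."""
    for sense, preconditions in SENSES[word]:
        if preconditions and all(body_state.get(k) == v for k, v in preconditions.items()):
            return sense
    return SENSES[word][-1][0]  # no embodied match: fall back to the abstract sense

print(disambiguate("brake", BODY_STATE))           # -> slow_the_bicycle
print(disambiguate("brake", {"hands": "typing"}))  # -> halt_a_process
```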
Holes in LLMs:
1. “Lack of planning in arithmetic/reasoning problems”
2. “Lack of planning in text generation”
The human mind predicts a future (destination, goal), then wayfinds a way to get there (recursion).
LLMs follow a trail of breadcrumbs.
Source date (UTC): 2023-03-30 05:09:38 UTC
Original post: https://twitter.com/i/web/status/1641306668540493825
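A toy contrast (not from the tweet) of the two modes described above: ‘breadcrumb’ generation greedily follows the locally most likely next step, while ‘wayfinding’ fixes a destination first and searches recursively, backtracking until it reaches it. The graph and edge weights are invented for the example.

```python
# Toy contrast: greedy breadcrumb-following vs. goal-directed recursive search.
# Graph and weights are invented for illustration.

GRAPH = {
    "start": [("detour", 0.6), ("bridge", 0.4)],
    "detour": [("dead_end", 1.0)],
    "bridge": [("goal", 1.0)],
    "dead_end": [],
    "goal": [],
}

def breadcrumbs(node):
    """Greedy: always follow the highest-weight edge, like next-step sampling."""
    path = [node]
    while GRAPH[node]:
        node = max(GRAPH[node], key=lambda e: e[1])[0]
        path.append(node)
    return path

def wayfind(node, goal, path=None):
    """Recursive search: commit to a destination, backtrack until it is reached."""
    path = (path or []) + [node]
    if node == goal:
        return path
    for nxt, _ in GRAPH[node]:
        found = wayfind(nxt, goal, path)
        if found:
            return found
    return None

print(breadcrumbs("start"))      # ['start', 'detour', 'dead_end']
print(wayfind("start", "goal"))  # ['start', 'bridge', 'goal']
```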
I’m pretty impressed that it can write in the ‘style of curt doolittle’, and it’s fairly accurate. Now, the logic I put into the text isn’t there. But the summary of my work that it produces IS logical.
So that’s the interesting generalization that has promise.
Source date (UTC): 2023-03-30 02:51:46 UTC
Original post: https://twitter.com/i/web/status/1641271974948225024
Reply addressees: @NektariosAI @OrenElbaum @j_q_balter @GaryMarcus
Replying to: https://twitter.com/i/web/status/1641270639267856384
I’m not ‘fooled’. I understand the technology (I dropped out of the field in the ’80s because of the hardware limitations of the time), and I understand the human brain rather deeply. I’m asking a technical question: whether a world model that allows logical testing of sequences of operations, and predictions from them, can emerge from enough self-reinforcement. You wouldn’t think so. I wouldn’t think so. But I can see how it might be possible.
We are in the first stages of considering how to put our ethical model into it, and I can see how that might work … eventually.
So I’m just reserving judgment.
Reply addressees: @j_q_balter @NektariosAI @GaryMarcus
Source date (UTC): 2023-03-30 02:32:08 UTC
Original post: https://twitter.com/i/web/status/1641267033236029442
Replying to: https://twitter.com/i/web/status/1641264297702719489
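One way to read ‘logical testing of sequences of operations’ concretely – a hedged sketch, not the author’s system: a minimal world model simulates a proposed plan and checks whether its predicted outcome actually holds. The two-counter world and its transition rules are invented for illustration.

```python
# Toy world model: simulate an operation sequence, then test a prediction
# against the resulting state. Rules are invented for the example.

def world_model(state, op):
    """Invented transition rules for a two-counter world."""
    a, b = state
    if op == "inc_a":
        return (a + 1, b)
    if op == "move":
        return (a - 1, b + 1) if a > 0 else (a, b)
    raise ValueError(op)

def test_plan(state, plan, prediction):
    """Run the operation sequence and test the prediction against the result."""
    for op in plan:
        state = world_model(state, op)
    return state == prediction

print(test_plan((0, 0), ["inc_a", "inc_a", "move"], (1, 1)))  # True
print(test_plan((0, 0), ["move"], (0, 1)))                    # False: nothing to move
```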
Correct. But it sure speeds up the process.
If you can feed it a list of definitions and rules (Microsoft is about to release it), then it’s over.
GPT can do that. Just not deeply enough.
Source date (UTC): 2023-03-30 02:07:21 UTC
Original post: https://twitter.com/i/web/status/1641260794393362433
Reply addressees: @tysonmaly
Replying to: https://twitter.com/i/web/status/1641259696676909058
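A hedged sketch of what ‘feed it a list of definitions and rules’ could look like in practice: pin the terms down in the prompt before the question is asked. The definitions, rules, and build_prompt() helper are placeholders I invented; this is not the Microsoft product alluded to above.

```python
# Placeholder sketch: assemble a prompt that fixes definitions and rules
# before the question, so the model reasons inside them. All content here
# is invented for illustration.

DEFINITIONS = {
    "agent": "a system that selects actions in pursuit of a goal",
    "plan": "an ordered sequence of operations predicted to reach a goal",
}
RULES = [
    "Use only the definitions supplied above.",
    "Make every inference step explicit.",
]

def build_prompt(question: str) -> str:
    """Prepend definitions and rules to the question."""
    defs = "\n".join(f"Definition - {term}: {text}" for term, text in DEFINITIONS.items())
    rules = "\n".join(f"Rule: {rule}" for rule in RULES)
    return f"{defs}\n{rules}\n\nQuestion: {question}"

print(build_prompt("Can an agent form a plan without a world model?"))
```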
Yeah. Imagine daily life when these things can be anywhere at any time.
Source date (UTC): 2023-03-30 01:32:22 UTC
Original post: https://twitter.com/i/web/status/1641251991602044929
Reply addressees: @SovereigntyKing
Replying to: https://twitter.com/i/web/status/1641250261086633991
IS THERE A NEAR LIMIT TO THE LANGUAGE MODEL?
(There should be, but is there?) https://twitter.com/curtdoolittle/status/1641251315278835713
Source date (UTC): 2023-03-30 01:31:22 UTC
Original post: https://twitter.com/i/web/status/1641251741126602752
I’ve been working on these issues since the ’80s, and until I saw the emergent behavior in GPT-4, I would have agreed with you. I’d thought we’d need a world model, a prediction (autoassociation) manifold, an evaluative model (preference, ethics), and a language model. But I’m a little concerned that I might have been wrong, and that it’s possible to construct all of them from the language model – at least incrementally.
I dislike this evolutionary pathway because the systems aren’t discrete and controllable (auditable). So we aren’t overcoming the problem of the human mind (and correcting it).
And so I’m not disagreeing. And while I’m as skeptical as you are, I’m back to reserving judgment.
Reply addressees: @NektariosAI @GaryMarcus
Source date (UTC): 2023-03-30 01:29:41 UTC
Original post: https://twitter.com/i/web/status/1641251315174047745
Replying to: https://twitter.com/i/web/status/1641248987092058115
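As a toy illustration of the discrete, controllable (auditable) decomposition described above, assuming nothing about the author’s actual designs: four separate modules – world model, prediction, evaluation, language – wired through a wrapper that logs every call, so each step can be inspected after the fact. That inspectability is exactly what a single entangled network does not give you. All module names and behaviors are invented for the sketch.

```python
# Toy sketch: discrete modules behind an auditing wrapper. Every call is
# recorded in AUDIT_LOG so the pipeline's steps can be inspected.

AUDIT_LOG = []

def audited(name, fn):
    """Wrap a module so every call is recorded and can be inspected later."""
    def wrapper(*args):
        result = fn(*args)
        AUDIT_LOG.append((name, args, result))
        return result
    return wrapper

world_model = audited("world_model", lambda state, op: state + [op])
predict     = audited("predict",     lambda state: state[-1] if state else None)
evaluate    = audited("evaluate",    lambda outcome: outcome != "harm")
verbalize   = audited("verbalize",   lambda outcome, ok: f"{outcome}: {'ok' if ok else 'vetoed'}")

state = world_model([], "open_door")
outcome = predict(state)
print(verbalize(outcome, evaluate(outcome)))  # open_door: ok
for entry in AUDIT_LOG:                       # every step is inspectable
    print(entry)
```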