I’m not ‘fooled’. I understand the technology (and dropped out of it in the 80s because of the then hardware limitations), and I understand the human brain rather deeply. I’m asking a technical question: whether a world model that allows logical testing of sequences of operations etc., and predictions from them, can emerge from enough self-reinforcement. You wouldn’t think so. I wouldn’t think so. But I can see how it might be possible.

We are in the first stages of considering how to put our ethical model into it, and I can see how that might work … eventually.

So I’m just reserving judgement.

Reply addressees: @j_q_balter @NektariosAI @GaryMarcus


Source date (UTC): 2023-03-30 02:32:08 UTC

Original post: https://twitter.com/i/web/status/1641267033236029442

Replying to: https://twitter.com/i/web/status/1641264297702719489
