THE DISPROPORTIONATE SUCCESS OF LANGUAGE MODELS
(reverse engineering turns out to work)

It's kind of humorous that, in training GPT, we are running out of training data. 😉

We have voice and image as well as text now. We are working on video.

So I’ve always thought we’d have to start with embodiment and build up to language, but OpenAI uses language models to start with language and work backward to embodiment (the set of possible physical, cognitive, and emotional changes in state).

But it turns out the number of parameters (like dimensions of language) needed is smaller than we thought.

Similarly, developers are discussing whether the supply of tokens (input data) is limited.

So, I wonder if we can simply reduce embodiment to a small number of parameters that describe all possible human changes in state, whether physical, emotional, or possibly logical.
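To make that idea a little more concrete, here is a minimal sketch, assuming embodiment really could be compressed into a short parameter vector. Everything in it (the field names, the split into physical, emotional, and cognitive components, and the dimension counts) is a hypothetical illustration, not anything from the original post.

```python
# Toy sketch: represent an agent's embodied state as a few small
# vectors, and a "change in state" as a delta applied to them.
# All names and dimension counts below are illustrative assumptions.
from dataclasses import dataclass

import numpy as np


@dataclass
class EmbodiedState:
    physical: np.ndarray   # e.g. position, posture, effort (hypothetical)
    emotional: np.ndarray  # e.g. valence, arousal (hypothetical)
    cognitive: np.ndarray  # e.g. attention, goal activation (hypothetical)

    def apply_delta(self, delta: "EmbodiedState") -> "EmbodiedState":
        # In this toy model a change in state is just vector addition.
        return EmbodiedState(
            physical=self.physical + delta.physical,
            emotional=self.emotional + delta.emotional,
            cognitive=self.cognitive + delta.cognitive,
        )


# If embodiment really is low-dimensional, the whole state is tiny:
state = EmbodiedState(
    physical=np.zeros(6),   # assumed 6 physical dimensions
    emotional=np.zeros(2),  # assumed 2 emotional dimensions
    cognitive=np.zeros(4),  # assumed 4 cognitive dimensions
)
```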

If that’s true, then ‘working backward from language’ might get us to AGI. And that’s … well, at least unexpected by me.

Cheers


Source date (UTC): 2023-03-31 14:36:10 UTC

Original post: https://twitter.com/i/web/status/1641811631205130243
