CONVERGENCE IN AI?
“Voyager continuously improves itself by writing, refining, committing, and retrieving *code* from a skill library. So, with GPT-4 it unlocks a new paradigm: ‘training’ is code execution rather than gradient descent. ‘Trained model’ is a codebase of skills that Voyager iteratively composes, rather than matrices of floats. We are pushing no-gradient architecture to its limit.” (@DrJimFan)
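A minimal sketch of that loop, assuming nothing about Voyager's internals: skills are committed as source code, retrieved by description, and composed by execution. Voyager itself writes JavaScript skills and retrieves them by embedding similarity; every name below is illustrative, not Voyager's actual API.

```python
# Hypothetical reduction of the "skill library" paradigm: the "trained model"
# is a dictionary of source code, and "training" is committing new code.
class SkillLibrary:
    def __init__(self):
        self.skills = {}  # description -> source code

    def commit(self, description, source):
        """'Training' step: store verified code instead of updating weights."""
        self.skills[description] = source

    def retrieve(self, task):
        """Crude keyword match standing in for Voyager's embedding retrieval."""
        words = set(task.lower().split())
        return [src for desc, src in self.skills.items()
                if words <= set(desc.lower().split())]

    def compose(self, task, env):
        """'Inference' step: execute retrieved skills against an environment."""
        for src in self.retrieve(task):
            exec(src, env)  # each skill mutates the shared environment
        return env


lib = SkillLibrary()
lib.commit("mine wood",
           "inventory['wood'] = inventory.get('wood', 0) + 1")
lib.commit("craft plank from wood",
           "inventory['plank'] = inventory.pop('wood', 0) * 4")

state = {"inventory": {}}
lib.compose("mine wood", state)
lib.compose("craft plank from wood", state)
print(state["inventory"])  # {'plank': 4}: capability accrues as code, not floats
```

The point of the sketch is the inversion Fan describes: improving the agent means growing and refining this library, so the artifact you ship is a codebase, not a weight matrix.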
This strategy will work. I used this technique in the mid-'80s by storing streams of state-engine transitions, but the hardware just wasn't available to do more than run the experiments.
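The post doesn't describe that '80s implementation, but the general shape of "storing streams of state-engine transitions" might look like the guess below: record the (state, event, next state) stream of a successful run, then replay a stored stream as a reusable skill. All details are assumptions for illustration.

```python
# Hypothetical transition store: successful runs are kept as replayable
# streams of (state, event) pairs, analogous to committed skills.
from collections import defaultdict

class TransitionStore:
    def __init__(self):
        self.table = defaultdict(dict)   # state -> {event: next_state}
        self.streams = {}                # goal -> [(state, event), ...]

    def record(self, goal, run):
        """Store a successful run, given as (state, event, next_state) triples."""
        for state, event, nxt in run:
            self.table[state][event] = nxt
        self.streams[goal] = [(s, e) for s, e, _ in run]

    def replay(self, goal, state):
        """Re-execute a stored stream from a matching start state."""
        for expected, event in self.streams[goal]:
            if state != expected:
                raise ValueError(f"stream does not apply in state {state!r}")
            state = self.table[state][event]
        return state


store = TransitionStore()
store.record("open door", [("locked", "unlock", "closed"),
                           ("closed", "push", "open")])
print(store.replay("open door", "locked"))  # -> 'open'
```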
I'd assumed that the bottom-up technique (think Tesla) would evolve into language, but LLMs proved that brute-forcing language constructs the logic top-down, and what sits in the middle is a consistently corresponding world model, with locations, spaces, objects, and possible operations.
So we are seeing convergence happen much more quickly, because LLMs solve the input-model problem just as language does for humans.
Language (grammar (rules), nouns, verbs (operations)),
… physical and logical operations by humans,
… … programming transformations (operations),
… … … state transitions (operations), and
… … … … mathematical operations.
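One hedged illustration of that cascade, with an invented mapping: a single verb phrase traced through each layer, from language down to arithmetic. The claim it illustrates is only that the same operation has a representation at every level.

```python
# Language layer: a verb ("add") plus nouns ("3", "the counter").
command = "add 3 to the counter"

# Programming layer: the verb becomes a transformation on a world model.
def add_to_counter(state, n):
    state["counter"] += n   # mathematical layer: plain arithmetic
    return state

state = {"counter": 0}              # world model: an object with state
state = add_to_counter(state, 3)    # state-transition layer: old state -> new state
print(state)                        # {'counter': 3}
```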
Source date (UTC): 2023-08-17 18:51:14 UTC
Original post: https://twitter.com/i/web/status/1692247732654620672