The current LLMs that will run on a phone are relatively simple. But then, so is the hardware on our phones. Even so, it’s clear we’ll be able to use locally running AIs in androids, robots, vehicles, military vehicles, missiles, planes, and satellites sooner than we’d assumed.
Our LLMs are dumb as rocks, but we haven’t come anywhere near the lower limits of the technology yet. As someone who has worked on these questions to one degree or another for decades, I’d say we still have three hard layers of problems to solve. While we’re seeing some simple evolution of wayfinding and recursion, we still don’t have episodic memory, prediction, or judgment (morality) – and those are pretty hard problems.
Basically, we’d thought we’d need to build AI from the bottom up; instead, we’re building it from the top down, the way a child learns: by observing, listening, and identifying patterns. And it turns out that while we have less control over it, it’s working far better than any of us ever imagined.
Source date (UTC): 2023-06-19 19:30:23 UTC
Original post: https://twitter.com/i/web/status/1670876702606491650