
Jim,
I worked with a few senior devs at MSFT on trying to bring this set of ideas to fruition and fund it back in the mid-90s when the use of graphics cards had just started in research. Our business case was that if searches found what you wanted, then there would be no room for advertising – in other words, advertising only works if you don’t find what you want. (As the decline in Google search results has illustrated with increasing frustration for many of us.)

The hardware just wasn’t there yet. (And without neuromorphic hardware, it’s still less adaptive than it should be.)

But the general architecture was to spin up agents that search manifolds, accumulate a manifold of candidate solutions, and then have those solutions compete.
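
A rough sketch of that loop, purely illustrative: the agent, candidate, and objective names below are made up, and a toy random search stands in for whatever the real search would be.

```python
# Minimal sketch (hypothetical names throughout): agents search a shared
# solution space, accumulate candidates, then the candidates compete on a
# fitness criterion. This only illustrates the loop structure described
# above, not the original mid-90s design.
import random
from dataclasses import dataclass

@dataclass
class Candidate:
    params: list[float]
    score: float

def agent_search(objective, dims=3, steps=50):
    """One agent: random local search over a toy parameter space."""
    point = [random.uniform(-1, 1) for _ in range(dims)]
    best = Candidate(point, objective(point))
    for _ in range(steps):
        trial = [x + random.gauss(0, 0.1) for x in best.params]
        score = objective(trial)
        if score > best.score:
            best = Candidate(trial, score)
    return best

def compete(candidates, keep=3):
    """Competition step: rank the accumulated solutions, keep the winners."""
    return sorted(candidates, key=lambda c: c.score, reverse=True)[:keep]

if __name__ == "__main__":
    # Toy objective standing in for "did the search find what you wanted".
    objective = lambda p: -sum(x * x for x in p)
    pool = [agent_search(objective) for _ in range(10)]   # spin up agents
    winners = compete(pool)                               # solutions compete
    for w in winners:
        print(round(w.score, 4), [round(x, 3) for x in w.params])
```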

At the time we felt that an entirely new programming paradigm would be necessary. And I still think those insights would be applicable today. But that topic is outside of the scope of a Tweet.

So, a manifold = an LLM, sure. But the LLM manifold is not causal (state transitions), so while LLMs can generalize, they can’t instantiate (disambiguate, deconstruct, predict recombinations, and innovate).

We’d assumed that any reliable method of testing the possibility (vs truth) of statements would mean constructing NNs from embodiment upward. But LLMs have demonstrated it is possible to construct NNs from language downward – vastly simplifying the problem. From our perspective, this is the odd or accidental innovation of LLMs.

But can we eventually produce a Markov-chain prediction instead of a word prediction? Of course. And is that inferable or deducible from a large language model? Of course.
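
A toy contrast of the two kinds of prediction, just to fix the idea: the state labels here are hypothetical stand-ins for whatever causal categories one would actually derive from the model.

```python
# "Word prediction" as next-token counts versus a "Markov-chain prediction"
# over abstracted states. Illustrative only; the state mapping is invented.
from collections import Counter, defaultdict

corpus = "the cat sat . the cat ran . the dog ran .".split()

# Word-level prediction: P(next word | current word), estimated by counting.
word_next = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    word_next[a][b] += 1

# State-level prediction: map words to coarse states, then count
# state-to-state transitions - a Markov chain over states, not surface tokens.
state_of = {"the": "DET", "cat": "AGENT", "dog": "AGENT",
            "sat": "ACTION", "ran": "ACTION", ".": "END"}
states = [state_of[w] for w in corpus]
state_next = defaultdict(Counter)
for a, b in zip(states, states[1:]):
    state_next[a][b] += 1

print(word_next["cat"].most_common())     # e.g. [('sat', 1), ('ran', 1)]
print(state_next["AGENT"].most_common())  # e.g. [('ACTION', 3)]
```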

So are LLMs destined for the input and output, while the processing of ‘truth’ (or at least possibility) requires an even higher-dimensional set of causal relations? Yes. And at this rate, we might just get there faster than we’d thought.

Because that will solve ‘alignment’. What is alignment, after all? A very simple question of imposing costs or risks on the demonstrated interests of others. Morality is terribly simple, really.

Cheers


Source date (UTC): 2023-04-24 17:21:01 UTC

Original post: https://twitter.com/i/web/status/1650550422396973060

Replying to: https://twitter.com/i/web/status/1650497992867164163
