WHAT’S THE PROMISE OF AI AGENTS?
Q: Curt: "What do you think: Will AI Agent Workflows in 2026 look like Online Poker in 2010?"
1) I agree only that work will change. The volume of work per person may increase in some white-collar work, as it did under the first two computer revolutions, but the number of white-collar workers will very likely shrink, and shrink a great deal.
2) This is not the first technological revolution I’ve experienced in my lifetime. Except for the initial phase in the 40s and 50s, I have had some exposure to each generation. In each of these revolutions, low-hanging fruit is mistakenly interpreted as a boundless undiscovered valley of unlimited potential. And it has always been false. We exhausted each generation of technological innovation rather quickly. The most recent, the one living generations are familiar with, was the phone, yet we exhausted innovation in phone apps in just a few years. The ‘agent’ innovation in LLMs will very likely have a scale effect closer to the client-server revolution than to the internet revolution. Conversely, parallel processing of complex vector relationships is as inexhaustible as the transistor revolution. The reason is that the universe consists of relations, and those n-dimensional manifolds (of relations) are the most accurate means of representing reality (the universe) while maintaining some form of reduction (reducibility) that can be used for deduction, inference, and guessing.
3) In the given example of poker, there would be no need for the human whatsoever. Instead, humans would only introduce error. In many, many white-collar jobs, the utility of people in producing white-collar work, created by the computer revolutions, will be reversed, just as manual labor was reduced by industrialization in factories, and farm labor was reduced by, say, the loom and tractors. But the costs of goods, which are mostly
4) The ‘dumbness’ of AIs outside of search, math, computer science, and research by permutation in the physical sciences remains astounding. And until that is overcome (a problem we understand but don’t quite know how to solve: merging, say, LLMs with agents (procedural systems), with navigation of the physical world, and with manipulation of the physical world), this dumbness will persist. The capacity of current AIs to reason as humans do, instead of merely solving ‘reasoning puzzles’, is illusory because of the absence of that merger (synthesis). In my work they simply cannot do it. It’s sad, really: I work with LLMs every day, and that means I effectively experience their limitations every day.
5) My company has been developing a very large and complex “universal application platform” for years now. This platform creates a framework of commensurability across all human cooperation. This commensurability functions as numbers do in math; as types, commands, and functions do in computer programming; and as unambiguity does in operational language. It essentially creates standards of categories, weights, and measures across all human cooperation. Within this platform, one can construct interfaces for tasks, roles, responsibilities, or whatever, in any domain where humans collaborate and cooperate. The platform separates rules that must be followed (prescriptions for processes) from statistical insight, from insights derived within a group, to insights derived across groups, fields, or populations. This is what I understand as necessary for producing context-specific insight into complex causal density, quite *unlike* math, programming, and ‘puzzle’ reasoning.
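As a minimal sketch of the separation described above (prescriptive rules that must be followed versus statistical insight scoped by where it was derived), here is a hypothetical illustration in Python. All names here (`Rule`, `Insight`, `evaluate`) are my own illustrative assumptions, not the platform's actual API.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical illustration: prescriptions are binary and must hold;
# insights are statistical observations scoped to where they were derived.

@dataclass(frozen=True)
class Rule:
    """A prescription for a process: a violation is an error, not an outlier."""
    name: str
    check: Callable[[dict], bool]

@dataclass
class Insight:
    """A statistical observation, tagged with the scope of its derivation."""
    scope: str          # "group", "cross-group", or "population"
    statement: str
    confidence: float   # 0.0 .. 1.0

def evaluate(record: dict, rules: list[Rule]) -> list[str]:
    """Return the names of the rules this record violates."""
    return [r.name for r in rules if not r.check(record)]

# A rule is binary; an insight merely informs.
rules = [Rule("amount_positive", lambda rec: rec["amount"] > 0)]
violations = evaluate({"amount": -5}, rules)
print(violations)  # ['amount_positive']
```

The design point is only that the two kinds of knowledge live in distinct types, so a rule violation can never be averaged away as a statistical outlier.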
6) Human capacity for the appearance of multitasking is limited. In fact, humans don’t multitask; they switch, and the number of contexts they can switch between is as limited as the number of objects we can visualize independently: usually three to five, but no more. And where humans can in fact appear to multitask, they rely on pattern recognition, where the AIs would demonstrate superiority.
7) Human capacity for novelty across multiple contexts, and for high precision within a given context, might remain for a while, but eventually machines will outperform humans. Yet humans will be required for obtaining the information necessary to solve novel problems, because while some novel problems might consist of interstitial permutations of existing knowledge, the hard problems will remain hard because they will require the construction of physical experiments, at least until we finish discovering the first principles of the universe and can rely on constructability. So discovering those first principles is the hard limit of human-machine competition.
8) I could go on quite a while with this sequence, but the intuition in the original post, that human exposure to parallel data in real time would continue to have utility, is false: we merely took advantage of the incompetence of statistical and procedural algorithms at pattern recognition. Unlike statistical and procedural algorithms, the whole point of Bayesian systems is that they can account for much higher causal density than humans can, do so faster in real time, and even predict better in real time. So the theory proposed is likely false, and the individual would be rapidly and easily replaced.
9) The question is: what human-possible activity can’t be replaced? a) Outwitting one another. b) Human subjective risk tolerance. c) Permission to impose costs upon demonstrated human interests. d) All of the above, in alliances of capital between humans with disparate, ever-changing interests and preferences.
More another time.
CD
Reply addressees: @bryanbrey @rileybrown_ai