(Casper: Correct)
This is also the weakness in ambitions for AI: most hard problems are not mathematically reducible and, worse, are also insufficiently computable and computationally reducible. As such, only real-world experimentation can produce the information necessary for further induction and deduction.
While mathematical and computational trial and error is relatively cheap (compute), designing experiments, hiring and organizing researchers, and interpreting results are expensive.
The same problem extends to applied AIs outside the sciences: almost everything is 'owned' in whole or in part by someone, and we exist by not violating those rules of permissibility. And we must pay the often high costs of making use of whatever is 'owned in whole or in part'.
And lastly, we must bear the external consequences, both gains and costs, that evolve from our actions.
So the four worlds (domains): 1) the mathematical and computable world and its costs, 2) the world of physical action and costs, 3) the world of assets, permissions, and costs, and 4) the world of evolutionary outcomes affecting us indirectly with benefits and costs.
Economists know this intrinsically because we are overwhelmingly aware of the limits of mathematics and computation, and so we prefer simulation. And given a kaleidic universe, the result of causal density, even simulations are not granular enough to produce anything other than experiments in potential variations. And we can never predict black swan events.
It's clear that AI will improve all of these categories, but the evolution of that process grows increasingly difficult because predictability decreases as we ascend that hierarchy.
It's when AIs can solve all four domains that they become interesting. Until then, it's our mistaken use of them that's a threat. 😉
Human frailty. 😉
Reply addressees: @cwilstrup @DrJimFan @AbzuAI
Source date (UTC): 2023-06-29 02:31:15 UTC
Original post: https://twitter.com/i/web/status/1674244106762047489
Replying to: https://twitter.com/i/web/status/1674104937632890892