(From elsewhere, to Alex Tabarrok, Marginal Revolution)

I think the more operational answer is that all AIs will be able to do is drastically reduce informational asymmetries, and predict reactions to them out to the first or second order, just as money, accounting, and now digital accounting have drastically reduced asymmetries of information. However, people will also develop AIs to outwit such AIs for competitive advantage, and human beings will seek virtue signals (status signals) to outwit those predictions for social advantage. The principal example is fashion, which, while cyclical, is driven by technological innovation with a surprisingly small number of variables.

We keep discussing AI in the context of a monopoly, like the government, without considering that AIs will seek to outwit other AIs, just as traders and digital trading systems seek to outwit each other today. I don’t think AIs are as much of a problem as finding a way to organize society when all the multiples of any meaning require vast capital expenditures limited to very few. So just as the stock market provides a credit advantage that often defeats more meritocratic (and higher-quality) advances, so will artificial intelligence. Conversely, it is far easier to starve machines of information than it is to starve people. And while human organizations of all scales can degrade somewhat gracefully except in rare circumstances, mechanical networks degrade quickly.

Just as we are one war away from ending the era of navies, we are one major conflict away from ending our overconfidence in the instantaneous delivery of energy, and the unwise luxury of such velocity that we have only three hours of power, three days of water, and one week of food in ‘storage’.