Nassim Taleb and I are working on the same problem, which we identified by similar means: designing models. He was inspired when he designed financial risk models, and I was inspired when I designed artificial intelligences for games in anticipation of the kind of warfare we are seeing emerge today.

I work bottom-up (operationally), and Taleb works top-down (statistically). But this is the same problem approached from two ends of the spectrum. (He publishes books for the mass market to make money; I build software and companies for a limited number of partners and customers.) I want to find the mechanism, and he wants to quantify the effect. But we are looking for the same thing. What is it?

Computers are useful in extending our perception. The Game of Life is an interesting software experiment: if you vary the rate (time), you see different patterns emerge; if you vary the scale, you see different patterns emerge. But in the end these patterns, while they appear relatively random at slow (operationally observable) rates, turn out to be highly deterministic at faster (consequentially observable) rates.
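As a concrete illustration, here is a minimal sketch of the Game of Life rule in Python. The particular pattern (a "blinker") and the sampling rates are my own illustrative choices, not from the original experiment: the rule itself is fully deterministic, yet what you see depends on the rate at which you observe it.

```python
from collections import Counter

def step(cells):
    """Advance one generation. `cells` is a set of live (x, y) coordinates."""
    # Count live neighbors for every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Conway's rule: birth on exactly 3 neighbors; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

# A "blinker" oscillates with period 2. Sampled every generation it
# flips between horizontal and vertical; sampled every second
# generation it appears static. Same deterministic system, different
# apparent pattern at different observation rates.
blinker = {(0, 1), (1, 1), (2, 1)}   # horizontal bar
once = step(blinker)                 # vertical bar
twice = step(once)                   # back to the horizontal start
```

The same kind of divergence appears when you vary scale: zoomed in on a few cells the activity looks like noise, while at a larger scale stable structures and periodic patterns become visible.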

And this single experimental game tells us a lot about the human mind’s limits of perception. We see what we can, and the longer we observe, the more consequential the patterns that emerge, and the more deterministic any system we observe appears.

We have all heard how few behaviors ants have, and yet what complexity emerges from them. During a vacation in southern Oregon one year, I observed ducks for a few days as a way of distracting myself from business stress. Ducks are not smart like crows. They have just a few behaviors (intuitions is perhaps a better word), and their apparent complexities emerge from just those few behaviors. But if you watch them long enough, you see machines that do about four or five things. And that’s all.

So, beneath the limits of our perception, there is something ascertainable underlying man’s behavior: the metrics of human thought.

And I would suggest without reservation that this research program is at least as profoundly important as – if not more profoundly important than – the research program into the physical structure of the universe.

This mathematics is achievable, but we don’t yet know how to go about it. And I am fairly certain that it’s a data collection problem: until we have vastly more data about ourselves, we probably cannot determine it (emphasis on probably).

We may solve it by analogy with artificial intelligence. Or we may not. I suspect that we will. We will develop a unit of cognition wherein some quantity x of information is required for every IQ point in order to create a bridge between one substantive network of relations and another.

But Taleb and I issue the same warning – although I think I have an institutional solution that can be implemented as formal policy, while he has an informative narrative but, as yet, no solution. His paper last year does show, though, just how extraordinarily large our information must be once we start getting into outliers.

We both use some version of ‘skin in the game’ as a guard against wishful thinking and cognitive bias. I use the legal term ‘warranty’ and he uses the financial street name ‘skin in the game’, but the idea is the same.

In Taleb’s case, I think he is more concerned with stupidity and hubris, as we have seen in the statistical (non-operational) financialization of our economy; whereas I am more concerned with deception, as we have seen in the conversion of the social sciences into statistical pseudosciences in every field: psychology, sociology, economics, politics, and (as I have extended the scope of political theory) group evolutionary strategy.

But whether top down or bottom up, statistical or pseudoscientific, skin in the game or warranty, hubris or deceit, the problem remains the same:

It is too easy for people in modernity to rely on pseudoscience in order to execute deceptions that cause us to consume every form of capital – from the genetic, to the normative, to the ethical and moral, to the informational (knowledge itself), to the institutional, to built capital, to portable capital, to money, to accounts, to the territorial – destroying civilization, and in particular the uniqueness of Western civilization, in the process.

So to assert our argument (Taleb’s and mine) more directly: given that these people have put no skin in the game and provided no warranty, but that we can impose the warranty upon them against their will for their malfeasance, what form of restitution shall we extract from them?

Territorial, physical, institutional, traditional, informational, normative, and genetic?

How do we demand restitution for what they have done?

How would you balance the accounts plus provide such incentive under rule of law that this would never happen again?

As for the Great Wars – all debts are paid.

Curt Doolittle

The Philosophy of Aristocracy

The Propertarian Institute

Kiev, Ukraine