Theme: Causality
-
A Little Deeper Understanding of The Ludic Fallacy and Why I Rarely Use Any Variation on “probable”.
The Ludic Fallacy consists in the error of assuming that probability can be calculated on unclosed systems, whereas outliers are of greater influence on consequences that change state than are regularities that maintain state. In other words, there are very few conditions under which dice are a model for probability, and the ratio of influence (change) is a log of the tail. Dice are closed systems. There are no outliers. Whereas in all other categories (the real world) we are almost always measuring variations in a norm, not possible outliers – which, although rare, are far more influential than the regularities we measure. In other words, we get what we measure, but what we measure is largely unimportant, because it’s obvious and not influential. What we don’t measure is that which is not obvious and is rare, but influential. When we predict the future we depend upon regularities. But if regularities exist then there is no profit to be made. It is from outliers that profits are made. This is a via negativa strategy, just as is falsification. Or stated otherwise, the unimaginable and improbable are more influential than the imaginable and probable. This is – in reductio version – the whole point of Taleb’s work. And Taleb is, even if he doesn’t succeed, the counter to Keynesian Probabilism, the same way I am the counter to Marxist pseudoscience.
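The contrast between a closed system (dice) and a fat-tailed open system can be illustrated with a small simulation – a sketch, not from the original text, with the Pareto tail index chosen purely for illustration. The question asked is: what fraction of the total does the single largest observation contribute?

```python
import random

random.seed(0)

N = 100_000

# Closed system: a fair die. Outcomes are bounded, so no single roll
# can ever dominate the total -- there are no outliers.
dice = [random.randint(1, 6) for _ in range(N)]

# Open-system proxy: a fat-tailed (Pareto) process with tail index 1.1.
# Rare, huge draws carry a material share of the whole.
tailed = [random.paretovariate(1.1) for _ in range(N)]

def max_share(xs):
    """Fraction of the total contributed by the single largest observation."""
    return max(xs) / sum(xs)

print(f"fair die:   {max_share(dice):.6%}")
print(f"fat-tailed: {max_share(tailed):.6%}")
```

With bounded outcomes the largest roll is a vanishing fraction of the sum; in the fat-tailed case one rare draw accounts for orders of magnitude more of the total – the outlier, not the regularity, carries the consequence.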
-
Bill Joslin
Given the vast array of possibility and immense causal density of the universe, incrementally eliminating the arbitrary from the relevant recursively refines the process of knowledge production and the choice of effective action. The spectrum from Analogy to Theory to Axiomatic Proof to Operational Description outlines the process of continuous disambiguation, which iteratively removes doubt in preference, good, and truth – and therefore future action. The degree of doubt which survives the disambiguation process dictates the degree of knowledge obtained. The degree of knowledge obtained dictates the effectiveness of action – the degree of agency obtained.
Source date (UTC): 2018-06-10 10:56:00 UTC
-
Why? Because on a large enough scale over large enough time, anything that can happen will happen.
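The claim can be made precise with the standard independence calculation – a sketch, with the per-trial probability an illustrative value, not from the text: for any event with per-trial probability p > 0, the chance of at least one occurrence in n trials is 1 − (1 − p)^n, which tends to 1 as n grows.

```python
# Chance that an event with per-trial probability p happens at least once
# in n independent trials. For any p > 0 this approaches 1 as n grows:
# at sufficient scale, anything that can happen will happen.

def at_least_once(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

p = 1e-6  # illustrative: a one-in-a-million event per trial
for n in (10**3, 10**6, 10**8):
    print(f"n = {n:>11,}: P(at least once) = {at_least_once(p, n):.6f}")
```

At a million trials the one-in-a-million event is already more likely than not to have occurred; at a hundred million it is a near certainty.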
Source date (UTC): 2018-06-07 19:50:08 UTC
Original post: https://twitter.com/i/web/status/1004812775766577152
-
Married men live longer because men who are not married are so for reasons that shorten their lives. Marriage does not extend your life. It’s one of the things that people with good life-extending potential do.
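This selection effect can be sketched in a toy simulation (all numbers hypothetical): a latent trait raises both the odds of marriage and lifespan, marriage itself contributes nothing, and yet married men still average longer lives.

```python
import math
import random

random.seed(1)

N = 50_000
records = []
for _ in range(N):
    trait = random.gauss(0.0, 1.0)                      # latent life-extending potential
    p_married = 1.0 / (1.0 + math.exp(-2.0 * trait))    # trait raises marriage odds
    married = random.random() < p_married
    # Lifespan depends on the trait plus noise -- note: NO marriage term.
    lifespan = 75.0 + 5.0 * trait + random.gauss(0.0, 3.0)
    records.append((married, lifespan))

def mean_lifespan(flag):
    xs = [life for m, life in records if m == flag]
    return sum(xs) / len(xs)

print(f"married:   {mean_lifespan(True):.1f}")
print(f"unmarried: {mean_lifespan(False):.1f}")
```

Married men come out years ahead even though marriage never enters the lifespan equation: the correlation is produced entirely by who selects into marriage.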
Source date (UTC): 2018-06-04 17:00:07 UTC
Original post: https://twitter.com/i/web/status/1003682823906177030
-
Reality consists of the following actionable and conceivable dimensions:
1 – point, (identity, or correspondence)
2 – line (unit, quantity, set, or scale defined by relation between points)
3 – area (defined by constant relations)
4 – geometry (existence, defined by existentially possible spatial relations)
5 – change (time (memory), defined by state relations)
6 – pure, constant, relations. (forces (ideas))
7 – externality (lie groups etc) (external consequences of constant relations)
8 – reality (or totality) (full causal density)
We can speak in descriptions including (at least):
1 – operational (true) names
2 – mathematics (ratios)
3 – logic (sets)
4 – physics (operations)
5 – Law (reciprocity)
6 – History (memory)
7 – Literature (allegory (possible))
8 – Literature of pure relations ( impossible )
8a – Mythology (supernormal allegory)
8b – Moral Literature (philosophy – super rational allegory)
8c – Pseudoscientific Literature (super-scientific / pseudoscience literature)
8d – Religious Literature (conflationary supernatural allegory)
8e – Occult Literature (post-rational experiential allegory)
We can testify to the truth of our speech only when we have performed due diligence to remove:
1 – ignorance,
2 – error,
3 – bias,
4 – wishful thinking,
5 – suggestion,
6 – obscurantism,
7 – fictionalism, and
8 – deceit.
So of the tests:
1 – categorical consistency (equivalent of point)
2 – internal consistency (equivalent of line)
3 – external correspondence (equivalent shape/object)
4 – operational possibility (what you just described) (equivalent of change [operations])
6 – limits, parsimony, and full accounting. (equivalent of proof)
Those operations existed or can exist.
You can imagine a something with the properties of a unicorn, you can speak of the same, draw the same, sculpt the same … but until you can breed one (and even then we must question), and we can test it, the unicorn does not exist ***in any condition that we can test in all dimensions necessary for you to testify it exists***
This is just one of the differences between TRUTH (dimensional consistency (constant relations)), and some subset of the properties of reality (DIMENSIONAL CONSISTENCY).
Mathematics allows us to describe constant relations between constant categories (correspondence) by means of self-reference we call ‘ratios’ to some constant unit (one). The more deterministic (constant) the relations, the more descriptive mathematics is; the higher the causal density that influences changes in state, the more information and calculation are necessary for the description of candidate consequences – until eventually we must move from the description of end states to the description of intermediary states that, because of causal density, place limits on the ranges of possible end states.
In other words, in order to construct theories (descriptions) of general rules of constant relations, we SUBTRACT properties of reality from our descriptions until we include nothing but identity (category), quantity, and ratio, and constrain ourselves to operations that maintain the ratios between the subjects (identities).
Mathematics has evolved but has retained (since the Greeks at least) the ‘magical’ (fictional, supernormal fiction, what we call Platonism) as a means of obscuring a mathematician’s lack of understanding of just why ‘this magic works’. When in reality, mathematics is trivially simple, because it rests on nothing more than correspondence (identity), quantity, ratio, and operations that maintain those ratios – incrementally adding or removing dimensions to describe relations across the spectrum between points (identities, objects, categories) and pure relations, at scales for which we do not yet possess the instrumentation, memory, or ability to calculate – except through intermediary phenomena.
As such, operationally speaking, the discipline of mathematics consists (truthfully) of the science (theories) of general rules of constant relations at scale independence, in arbitrarily selected dimensions. In other words, mathematics consists of the study of measurement.
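The reduction described above – a measurement as a ratio to a chosen unit, under operations that preserve ratios – can be sketched concretely (the quantities are hypothetical):

```python
from fractions import Fraction

# A measurement is a ratio of a quantity to a chosen unit (one).
# Changing the unit rescales every measurement, but every ratio
# BETWEEN measurements survives -- the scale independence claimed
# for mathematics above.

lengths_m = [Fraction(3), Fraction(12), Fraction(48)]   # measured in metres
to_cm = Fraction(100)                                    # change of unit
lengths_cm = [x * to_cm for x in lengths_m]

ratios_m  = [b / a for a, b in zip(lengths_m,  lengths_m[1:])]
ratios_cm = [b / a for a, b in zip(lengths_cm, lengths_cm[1:])]

print(ratios_m == ratios_cm)  # True: ratios survive the change of unit
```

Exact rationals are used so that the ratio comparison is literal equality, not floating-point coincidence.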
It is understandable why we do not grasp the first principles of the universe – they are unobservable directly except at great cost. It is not understandable why we do not grasp the first principles of mathematics, because measurement is a very simple thing, and dimensions are very simple things.
Yet mathematicians still speak in fictional language, just as do theists, and just as do the majority of philosophers (pseudoscience, pseudo-rationalism, pseudo-mythology).
Ergo, infinities are a fictionalism. Multiple infinities are a fictionalism. Both fictionalisms describe conditions where time and actions (operations) have been removed, as is common in the discipline of measurement (mathematics). Operationally, numbers (operationally constructed positional names) must be existentially produced, as are changes in gears. And as such, certain sets of numbers (outputs) are produced faster (like seconds or minutes vs hours) than other sets of numbers (outputs).
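The operational claim – that numbers are outputs which must be produced, and that coarser units are produced more slowly – can be sketched with a generator (a sketch of the gear analogy in the text, not a standard formalism):

```python
# Numbers as operationally produced positional names: each exists only
# after the increment operation that constructs it, like a gear turning.
# After a fixed amount of work, finer units (seconds) vastly outnumber
# the coarser units (hours) produced so far.

def produce(step=1):
    """Yield successive numbers by repeatedly performing one operation."""
    n = 0
    while True:
        n += step
        yield n

seconds = produce(1)
ticks = [next(seconds) for _ in range(7200)]     # 7,200 increment operations
hours = [t for t in ticks if t % 3600 == 0]      # hour-marks produced so far

print(len(ticks), len(hours))  # 7200 'seconds' exist, but only 2 'hours' do
```

At no finite point has an infinity of outputs been produced; removing the operations (and their time cost) from the description is what the passage labels the fictionalism.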
Source date (UTC): 2018-06-04 07:38:00 UTC
-
NOTES:
- Constant vs contingent vs inconsistent vs non-relations.
- Recursive Continuous Disambiguation vs Scale of Set of Constant Relations (density)
- Cumulation of association vs falsification of associations
- Computational efficiency.
- State Persistence vs breadth search, vs depth search
- We cannot know the intelligence of distant ancestors.
- Planning a series of steps in sequence must emerge – which requires recursion.
- Consciousness must emerge, meaning, the ability to compare states.
- Cooperation must emerge, meaning, the ability to empathize with intent.
- At some point we must develop sufficient computational ability to manipulate our bodies in some way that allows for unambiguous communication, or a means of continuous disambiguation, that is fast enough for one another to make use of in real time, and easy enough for one another to retain.
- And at some point, given sufficient computational ability, memory, and state persistence independent of recursion, language must emerge.
- At some point the value of such communication must be such that the cost of it is offset by the rewards of it.
- And we should see a cliff in history where there is a dramatic change when we did develop those abilities. And we do see it – rather recently.
But language requires a system of measurement. The system of measurement is limited by our senses. And as such, meaning refers to a set of measurements, eventually reducible to analogies to human experience. So while semantic content (measurements) must vary from species to species, grammar (continuous recursive disambiguation) should be universal in the sense that it varies predictably with computational abilities. We can understand a child, a person with a 60 IQ, a 70 IQ, and so on, up to 200+ IQ. But as far as I can tell the set of measurements (the basis of semantics) remains the same, and all that changes is the scope of the state persisted, the depth of recursion, the density and distance of relations, and the ability to model (forecast). In other words, simple people are in fact simply ‘more simple’ in the density of the content of their semantics, their use of grammar, and the models (stories) that they can construct with them. So universal grammar, as a set of computational minimums and efficiencies, should always exist; and human universal grammar, as universal grammar limited to human measurements (semantics), does exist. And any organism with sufficient computational (neural) capacity should develop some means of communication using some variation of universal grammar, and some sense-perception- and action-dependent semantics. May 29, 2018 4:28pm