Category: Epistemology and Method

  • UNDERSTANDING P-LAW EPISTEMOLOGY (excerpt from elsewhere) You should not expect

    UNDERSTANDING P-LAW EPISTEMOLOGY
    (excerpt from elsewhere)

    You should not expect to understand P-Epistemology without work. If you could, then the many great minds before us would have figured out epistemology. I’m simply amazed that the new LLMs have proven it. And by the same means. 😉

    So as I said, you need the whole package of:
    1) ternary logic,
    2) evolutionary computation,
    3) by continuous recursive disambiguation,
    4) irreducible first principles that result from that disambiguation,
    5) the demonstrated interests that result from those first principles,
    6) reciprocity, which results from those first principles and those demonstrated interests,
    7) the criteria for decidability in satisfying the demand for infallibility,
    8) the criteria for testimony that results,
    9) the grammars, and the method of producing languages as measurement, in the language in which testimony is expressed,
    10) the means (logic) of identifying error, bias, and deceit,
    11) producing the capacity to identify ignorance, error, bias, deceit, denial, projection, undermining, sedition, or treason,
    12) thus identifying whether the individual’s truth claim (or false claim) is the product of a failure of due diligence due to ignorance or error, or conversely of an incentive to deceive by bias and deceit.
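
    Item 1, ternary logic, can be made concrete: a claim is not merely true or false but true, false, or undecidable, and undecidability propagates through combination unless falsity settles the question first. A minimal illustrative sketch (the names `T`, `t_and`, and `t_not` are my own, not terms from the text; this follows Kleene-style strong three-valued logic as one plausible reading):

```python
from enum import Enum

class T(Enum):
    TRUE = 1
    FALSE = 0
    UNDECIDABLE = -1

def t_and(a: T, b: T) -> T:
    """Conjunction in a Kleene-style strong three-valued logic:
    FALSE dominates; otherwise UNDECIDABLE propagates."""
    if a is T.FALSE or b is T.FALSE:
        return T.FALSE
    if a is T.UNDECIDABLE or b is T.UNDECIDABLE:
        return T.UNDECIDABLE
    return T.TRUE

def t_not(a: T) -> T:
    """Negation: the undecidable stays undecidable."""
    if a is T.UNDECIDABLE:
        return T.UNDECIDABLE
    return T.TRUE if a is T.FALSE else T.FALSE
```

    Note that a conjunction with a known falsehood is decidable even when the other conjunct is not, which is what distinguishes this from simply treating "unknown" as a third kind of error.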

    The relatively common inability to know these criteria, and to work through them, is understandable, given the rather shallow nature of human cognition. But this is not a matter of cognition but of calculation. Or rather, algorithmic testing.

    So while it takes only a few hundred pages to describe all of the above, the capacity to master it is no less difficult than the mastery of economics and law together.

    And, THERE IS NO SHORTCUT. You have to think of it all in order to think of any of it.

    A SUBSET: THE CONCEPT OF TRUTH (PERFORMATIVE, TESTIFIABLE)
    We aren’t trying to determine if something is ideally or analytically true, but whether it is testifiable (possible to claim as true).

    So if one of the criteria isn’t satisfied, then you drill down on it, until you determine it’s constructed from first principles.

    In the context of testifiable truth, those criteria are:
    Realism,
    Naturalism,
    Identity (unambiguity),
    Internal consistency (logical),
    External correspondence (empirical),
    Operational possibility (demonstrable),
    Rational choice (rational),
    Reciprocity (moral; reciprocally rational),
    AND
    Full accounting, within stated limits,
    Warrantability,
    And within the limits of restitutability.
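
    Read operationally, the criteria above form a conjunction: a claim is testifiable only if every criterion is satisfied, and the first criterion that fails is the one you drill down on. A minimal sketch of that procedure (the `CRITERIA` keys and the `first_failure` function are hypothetical names introduced for illustration):

```python
from typing import Optional

# Criterion names taken from the list above; ordering is the drill-down order.
CRITERIA = [
    "realism", "naturalism", "identity", "internal_consistency",
    "external_correspondence", "operational_possibility",
    "rational_choice", "reciprocity",
    "full_accounting", "warrantability", "restitutability",
]

def first_failure(checks: dict) -> Optional[str]:
    """Testifiability is a conjunction: return the first criterion that
    fails (the one to drill down on), or None if every criterion passes."""
    for criterion in CRITERIA:
        if not checks.get(criterion, False):
            return criterion
    return None
```

    An unexamined criterion counts as a failure here, which matches the via-negativa reading: a claim is not testifiable until every criterion has survived the check.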

    Most people skip over realism and naturalism.

    For example, [“Everything between these brackets is a lie”] is ambiguous, and intentionally so: it fails the first rule of grammar, which is “continuous recursive disambiguation”, and instead does the opposite, intentionally using recursion without disambiguation. As such it’s a lie of intention.
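
    The bracketed sentence can be modeled as an evaluation that recurses without ever disambiguating: the claim’s truth value depends on its own truth value, so no fixed point is reached. An illustrative sketch (the function, its depth cutoff, and its return convention are invented for this example):

```python
def evaluate_liar(depth: int = 0, limit: int = 50):
    """Model of ["Everything between these brackets is a lie"]:
    the claim's truth value is the negation of its own truth value,
    so each evaluation recurses into another evaluation of the same
    claim -- recursion without disambiguation, and no fixed point."""
    if depth >= limit:
        # However deep we go, the ambiguity has not been reduced.
        return "undecidable"
    inner = evaluate_liar(depth + 1, limit)
    return "undecidable" if inner == "undecidable" else (not inner)
```

    Contrast this with an ordinary sentence, where each recursive clause narrows the referent; here each recursion reproduces the original ambiguity unchanged.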

    And I know this is difficult, because thinking in the via negativa (Darwinian survival) rather than in justificationary truth is a very difficult habit to acquire, and justification a very difficult habit to overcome.

    It’s because you’re using justification, not falsification. Does an amoral question, absent rationality and reciprocity, survive falsification? Yes, it does. Because there is nothing there to falsify.


    We never know anything is true. We only know:
    1) This is testifiable by the criteria of testifiability.
    2) Whether the demand for infallibility in the context in question is met by the remaining testifiable testimony.
    3) In some cases it may be sufficiently decidable for you, in your mind, to act: sufficient given its impact on others, their retaliation if you err, and your ability to provide restitution if you do err, regardless of how correct you thought you were.

    It’s promising that what you said is true and that you can morally claim it is true, because you have done due diligence (checks), and given those checks, if you test them and find that I err, it is only because I err, not because I deceive.

    Because when we are discussing not only whether a thing is testifiable or not, or true or false, but whether it is immoral or not, and because if it’s either false or immoral, or, perhaps more importantly, both false and immoral, we want to punish the claimant by both restitution and punishment. So one of the concepts I’m trying to teach y’all is not to fall into the sophistry of philosophy, and instead to take it through science and into law. We want to know if it’s testifiable, true, rational, moral, and so on, so we can punish you for claiming otherwise.

    The result we’re seeking to produce is both law, and the alteration of human behavior so that they are more conscious of the means by which we must perform due diligence against our tendency to lie, or, worse, to distribute the lies of others because we don’t really understand them – they just feel good.

    So while I can repeat until I’m blue in the face that you’re still trying to justify rather than falsify (survival), almost all of you will keep doing it, because until you’ve answered hundreds of questions using falsification by these criteria, you aren’t even aware that you’re using justification, because everything in every walk of your life has taught you justificationism.

    Which is exactly the problem we’re trying to overcome.

    Why? Because wayfinding through a maze by following instructions is cheap, whereas verifying that the entirety of the maze is blocked, other than the one path that survives, is not.
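
    The maze analogy can be made concrete: confirming a claimed route inspects only the cells on that route, while falsifying all alternatives requires surveying the entire maze. A toy sketch (the grid and helper names are invented for illustration):

```python
# 0 = open cell, 1 = wall; a tiny maze with exactly one open corridor.
MAZE = [
    [0, 1, 1],
    [0, 0, 1],
    [1, 0, 0],
]

def follow_route(route):
    """Justification: cheap -- check only the cells on the claimed route."""
    return all(MAZE[r][c] == 0 for r, c in route)

def count_open_cells():
    """Falsification: expensive -- survey every cell to show that
    everything off the surviving path is blocked."""
    return sum(cell == 0 for row in MAZE for cell in row)
```

    Following the single open corridor checks five cells; showing it is the only corridor requires examining all nine. The cost gap grows with the size of the maze, which is why justification feels cheaper and is the default habit.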

    The point is to teach the method so that we understand operational testifiability: not a pragmatism, but a requirement for the precision necessary to satisfy the demand for unambiguity, and as a consequence to satisfy decidability sufficiently to satisfy the demand for infallibility.

    So when I tell you to start with decidability as the demand for infallibility, then the spectrum of truth, then testimony, then reciprocity, then demonstrated interests, and then the capital within the group evolutionary strategy, that is the order in which to work.

    I’m not kidding: THERE IS NO SHORTCUT. You have to think of it all in order to think of any of it.

    Cheers
    CD


    Source date (UTC): 2024-05-25 22:31:31 UTC

    Original post: https://twitter.com/i/web/status/1794496548056780800

  • WHY IS THE P-METHOD SO DIFFICULT TO INTUIT? Because people think it’s philosophy

    WHY IS THE P-METHOD SO DIFFICULT TO INTUIT?
    Because people think it’s philosophy – it’s not.

    We aren’t trying to determine if something is ideally or analytically, true, but whether it is testifiable (possible to claim as true).

    So if one of the criteria isn’t satisfied, then you drill down on it, until you determine it’s constructed from first principles.

    So as I said, you need the whole package of: 1) ternary logic, 2) evolutionary computation, 3) by continuous recursive disambiguation, 4) irreducible first principles that result from that disambiguation, 5) the demonstrated interests that result from those first principles, 6) reciprocity, which results from those first principles and those demonstrated interests, 7) the criteria for decidability in satisfying the demand for infallibility, 8) the criteria for testimony that results, 9) the grammars, and the method of producing languages as measurement, in the language in which testimony is expressed, 10) the means (logic) of identifying error, bias, and deceit, 11) producing the capacity to identify ignorance, error, bias, deceit, denial, projection, undermining, sedition, or treason, 12) thus identifying whether the individual’s truth claim (or false claim) is the product of a failure of due diligence due to ignorance or error, or conversely of an incentive to deceive by bias and deceit.

    The relatively common inability to know these criteria, and to work through them, is understandable, given the rather shallow nature of human cognition. But this is not a matter of cognition but of calculation. Or rather, algorithmic testing. So while it takes only a few hundred pages to describe all of the above, the capacity to master it is no less difficult than the mastery of economics and law together.

    Cheers


    Source date (UTC): 2024-05-25 16:48:00 UTC

    Original post: https://twitter.com/i/web/status/1794410097482903552

  • @Plinz, I’d like a little more clarity since I assume you mean by ‘sound’ a ‘for

    @Plinz,

    I’d like a little more clarity, since I assume you mean by ‘sound’ a ‘formal’ or ‘constructable’ explanation of ‘the physical, neurological, and associative construction of representation’?

    So Why Did It Take So Long?
    Early 20th Century: Gestalt Psychology – how complex representations are produced in the mind.
    1950s-1960s: Early cognitive models and generative grammar. (Age of the cognitive revolution)
    1970s-1980s: Connectionism, formal semantics, and cognitive linguistics. (Distributed representation)
    1990s-2000s: Embodied cognition, neuroimaging, corpus linguistics, and distributional semantics.
    Recent Years: Computational models, deep learning, and cross-disciplinary research integrating multidimensional data analysis.

    And;
    1. Neurologically we do know.
    2. The emergence of LLMs has popularized what we have known.
    3. Depending upon your meaning, yes: until Turing we didn’t have the theory we have now. (Because Babbage failed to produce a theory, costing us a century, and the divergence of mathematics, logic, philosophy, and physics that ensued.)
    4. And linguistics has known, at least since Chomsky adapted Turing for grammar, and in particular over the past three decades (and I employed seventy-something library science people for years working on it);
    5. Philosophers have discussed ‘what’s it like to think like a bat’, meaning embodiment, scale, and time differences.
    6. Artists, whether visual, poetic, or musical, or literary have known and made use of it, despite not grasping its constitution.
    7. Mythology and theology certainly have understood.
    And while the ancients thought in atoms (objects), they did grasp that ‘there can’t be nothing, because we can’t observe anything without something to compare it to’; so it’s just “persistent relations in time, all the way down”.

    It Took A Profound Reversal in Our Thought
    And we thought, until at least Popper (and it’s certainly still the main framework of human thought), that justification produced non-falsehood. Conversely, the universe consists of persistent relations, and all logic is falsificationary; and even full knowledge of all first principles in the universe, given the limits on computational and mathematical reducibility, cannot cover the scope of operational possibility, rendering prediction of possibilities limited to some general regularity in the emergence of new patterns.

    So there is a pattern in the history of human cognition: an arc from embodiment through to the operational logic of first principles, evolving from human use of objects, space, background, place, and location in wayfinding, the parsimony of memory needed to remember routes, and the parsimony of language necessary to explain both routes and their increasingly abstract applications of wayfinding to all manner of thoughts. The brain is working in the opposite direction from distributed information: eliminating all information that does not consist of relations, and then combining those relations into perceptions we can then use to wayfind.

    So it’s natural, given that all of our introspectively accessible cognition results from such objects and justifications, that we would fail to observe the unintrospectable construction of those things from nothing but relations in time between vibrations of neurons.

    Cheers.
    CD

    Reply addressees: @Plinz


    Source date (UTC): 2024-05-19 22:44:03 UTC

    Original post: https://twitter.com/i/web/status/1792325373306269696

    Replying to: https://twitter.com/i/web/status/1792140352410791980

  • Anti-Whiteism when introduced, and Antiwhiteism as it is adopted into the vernac

    Anti-Whiteism when introduced, and Antiwhiteism as it is adopted into the vernacular. At least, that’s the pattern of such things.

    ( Not that this is a terribly serious question in the first place. πŸ˜‰ )


    Source date (UTC): 2024-05-19 03:28:01 UTC

    Original post: https://twitter.com/i/web/status/1792034450093527259

    Reply addressees: @Steve_Sailer @sapinker

    Replying to: https://twitter.com/i/web/status/1792033634414907749

  • THE TRUTH If the truth is racist, religion-ist, culture-ist, classist, and sexis

    THE TRUTH
    If the truth is racist, religion-ist, culture-ist, classist, and sexist then that is just the truth.

    To chastise the truth is merely to lie. It is a natural but odd feminine genetic intuition and social disposition, indoctrinated in all by Christian repetition, to attempt to equate disapproval with truth, and argument with the spectrum of denial, evasion, redirection, projection, undermining, lies, and the false promise of freedom from the laws of the universe, whether physical (scarcity, prosperity), behavioral (ability, reciprocity), evolutionary (natural selection), and of course death, by social construction by an elaborate self-reinforcing institution and community of liars.

    This is why you desperately try to fabricate accusations to achieve a social construction by mass production of lies.

    Which is, of course, one of the reasons I studied antisocial behavior, lying, and sex and group differences in lying.

    It was fascinating really, even if a bit depressing.

    (Please save your breath for other peasants.)

    Affections
    CD

    Reply addressees: @mark_my_words @B1TCHEVAPORATE @HolywoodHatesUS


    Source date (UTC): 2024-05-14 00:53:33 UTC

    Original post: https://twitter.com/i/web/status/1790183634047053824

    Replying to: https://twitter.com/i/web/status/1790178079979257984

  • THE THREE GRAMMARS OF EXPERIENCE REQUIRE THREE DIFFERENT CRITERIA FOR EXISTENCE.

    THE THREE GRAMMARS OF EXPERIENCE REQUIRE THREE DIFFERENT CRITERIA FOR EXISTENCE.

    –“Q: Curt: How is it you have this certainty that the spiritual terms are nonsense”–

    There are three possible forms of existence: material persistence, independent of us; verbal description of experience, dependent upon us but sharable; and intuitionistic experience, dependent upon the individual, both impersistent and unsharable.

    So, do you mean scientifically, as in testifiable (material); in the sense of literary and philosophical phenomenalism (verbal); or in the sense of theological intuition of supernatural observation (imaginary) of alternate dimensions, or universes?

    If there is some analogy across all three of those frames (demonstrable, descriptive, and imaginary) then we can say we are referring to the same shared experience.

    If, instead, you mean that the phenomenal, or the imaginary, exists other than as experience or imagination, then that is neither demonstrable, testifiable, nor sharable, and it is false.

    At this point we know enough about the structure of the universe that any system of information transfer other than those we are aware of is impossible. And we cannot find one single example of the supernatural, despite legions of people seeking to discover one, and legions of professionals determining they’re false every single time.

    I can address the spiritual: just as a movie or novel or scripture can convey a set of imagined and felt qualia to you, it can be explained. This does not mean anything other than that these are three levels of the mind, corresponding to the hierarchy of mental processing. And that mental processing is biased toward the internal sensory (feelings), the external and internal empathic (others), or the external systemic (action).

    So, you cannot testify to the spiritual, but that does not mean the experience is irrelevant or not meaningful to you, as long as you do not engage in self-harm by addiction to a falsehood.

    Cheers
    CD

    Reply addressees: @HakeemDemi


    Source date (UTC): 2024-05-14 00:22:03 UTC

    Original post: https://twitter.com/i/web/status/1790175710356688896

    Replying to: https://twitter.com/i/web/status/1790164454241698051

  • A math formula, a statement in formal logic, a physics equation, a chemistry ske

    A math formula, a statement in formal logic, a physics equation, a chemistry skeletal formula, an electronic circuit, an assembly language program, most economics, a set of blueprints, a balance sheet, a good portion of legislation, regulation, and law, and certainly my work are ‘word salad’ to those ignorant of the skills necessary to understand them. If you interpret something as word salad, you are simply identifying that which you’re ignorant of. 😉

    Reply addressees: @TOEwithCurt


    Source date (UTC): 2024-05-12 19:39:15 UTC

    Original post: https://twitter.com/i/web/status/1789742154002083840

    Replying to: https://twitter.com/i/web/status/1789738461789831598


    IN REPLY TO:

    Unknown author

    Chris Langan is on Curt Jaimungal’s TOE (@TOEwithCurt) today. Speaking nonsense again, and Curt isn’t capable of handling him. It’s funny that I can find some truth in what Chris says but he’s a bit of a phenomenalist and says ‘start with perception’.
    But that’s rather silly since the universe is constructed from trivial rules, everything in it is emergent from those trivial rules, including the neurons that emerge from those same principles.
    The universe consists of the defeat of entropy by the production of density that survives in persistent relations – and neurons identify sets of persistent relations.
    The only theory we need is evolutionary computation by discovery of stable relations, and the hierarchy of emergent possibilities for recombination and the possible operations they can perform, that emerge from these assemblies – what we call disciplines.
    So of course he doesn’t understand Wolfram as simply running evolutionary simulations to identify emergences.
    Consequences of combinations are computationally (operationally) reducible, but they are not computationally predictable; nor are they mathematically reducible, and so they cannot be mathematically predictable.
    It’s not that complicated.
    CD

    Original post: https://x.com/i/web/status/1789738461789831598

  • Well of course, but it’s not like we need a logical criteria for judging somethi

    Well of course, but it’s not like we need a logical criterion for judging something that exists pervasively regardless of what we do. 😉 That’s why I say it’s the first principle of cooperation.
    I avoid philosophical framing except when necessary, but categorically the tradition…


    Source date (UTC): 2024-05-07 15:41:37 UTC

    Original post: https://twitter.com/i/web/status/1787870409611755775

    Replying to: https://twitter.com/i/web/status/1787850212666462242

  • The Universal Grammar of Language: Measuring Existence (~750 Words) Language is

    The Universal Grammar of Language: Measuring Existence

    (~750 Words) Language is a system of measurement, made commensurable using marginal indifference in body, sense, and perception, describing all of existence that’s reducible to analogy of human experience. It consists of a sequential stream of sounds or symbols, producing increasing precision (disambiguation) by the process of continuous recursive disambiguation (sentences) of an identity (concept, experience, scene), upon which we consent to (agree to) some degree of shared meaning (shared experience), using the universal grammar of language, of evolution, of physics, of the quantum background, of existence: evolutionary computation by continuous recursive disambiguation of entropy (energy, disorder) into negative entropy (mass, order), thus creating complexity by the defeat of entropy. We can describe the universe because language relies on the same logic as the universe.
    Ok so that’s high level how language works, and why it’s a sharable experience, and why we can gradually describe more of the universe with it – because it’s following the same rules as the evolution of all else in existence.
    But what ‘measurements’ does language consist of? Words. All words are names. Names of things that don’t change (nouns, pronouns, adjectives), names of things that are changing some state or other (verbs, adverbs,), names of their relations.
    How does arithmetic differ from language? Ordinary language consists of names of states, or of changing states. So we can use verbs for actions (run), nouns to generalize them (movement), and adjectives that generalize temporary states (motionless).
    Vocabularies consist of words that serve the need for the totality of expression in a population in human life.
    Paradigms consist of subsets of vocabulary defining or limiting the dimensions permissible in the use of vocabulary, logic, grammar and syntax.
    Human macro-paradigms are: |Paradigmatic Evolution|: Embodiment > Anthropomorphism(counting) > Mythology(Arithmetic) > Religion(Math) > Philosophy(Geometry) > Empiricism(Algebra) > Science(Calculus) > Operationalism(Construction).
    The paradigm of Arithmetic is extremely simple. 1. All names consist of ratios to whatever identity we choose to reference. 2. All operators are +, -, *, /, =. 3. All results of operations are equal, unequal, and unequal by less than or more than.
    And the Consequences of the Vocabulary, Logic, Grammar and Syntax of Arithmetic are Very Simple
    1. Arithmetic is an extremely minimal language that consists of names (digits, glyphs of position (positional vocabulary)), phrases (positional names), verbs (operators), and agreements (unequal, equal, and the modifiers less than and more than).
    2. The names are however context independent: they can refer to anything we choose.
    3. Positional names are unique: so they are memory, conflation, inflation, and ambiguity independent.
    4. Operations on positional names are also deterministic, operationally closed, logically closed, and ambiguity invariant, and as such arithmetic operations are interpretation independent.
    5. Positional names are unlimited in construction. So by combining unlimited construction and context independence we achieve scale independence.
    6. We perform mathematics in our minds even if we record it with tools. As such arithmetic operations are also time and cost independent.
    7. And given that it can be written, arithmetic is memory, and visualization independent.
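
    Points 2 through 5 above can be demonstrated in a few lines: the same positional construction names any quantity, in any base, and the quantity is recovered identically regardless of what the digits refer to. A small sketch (the function names are illustrative, not from the text):

```python
def to_positional(n: int, base: int = 10) -> list:
    """Construct the positional name of n: digits listed from least
    to most significant. Unlimited in construction (point 5)."""
    digits = []
    while True:
        digits.append(n % base)
        n //= base
        if n == 0:
            return digits

def from_positional(digits: list, base: int = 10) -> int:
    """Recover the quantity from its positional name. The result is the
    same whatever the digits refer to: context independence (point 2)."""
    return sum(d * base**i for i, d in enumerate(digits))
```

    For example, the round trip is base independent: `from_positional(to_positional(1794, 2), 2)` returns `1794`, so the interpretation of the name never affects the quantity it measures.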
    CLOSING
    So, while ordinary language that describes the existential world is vulnerable to variation in context, ambiguity, interpretation, scale, time, and cost, arithmetic REMOVES THOSE DIMENSIONS from the paradigm, with its simple paradigm, vocabulary, logic, and grammar. As such we have no choice but to follow simple rules of addition, subtraction, multiplication, and division in order to sense, perceive, and judge that which is otherwise beyond our perception, comprehension, memory, and reason.
    This is why arithmetic works.
    It’s an innovation in language and writing that extends our capacity beyond our native memory perception and reason.
    And when combined with the balance scale of double-entry accounting, it lets us weigh and measure complex human cooperation at extraordinary scale and complexity over extraordinary time.
    Now, this is the basis of understanding all paradigms. What dimensions, terms, and agreements are necessary and which are prohibited in order to prevent human vulnerability to variations in context, ambiguity, interpretation, scale, time, and cost – and lying.
    The unification of the sciences, whether formal (language and logic), physical, behavioral, or evolutionary, can be achieved through this same analysis and the disambiguation of terms, such that they are universal across the sciences instead of unique to them, and the uniqueness necessary in the sciences is derived from, and explained by, the universal definitions constructed from the first principles: evolutionary computation of the defeat of entropy by the discovery of persistency in the form of ever-increasing organizations of complex mass.
    Cheers,
    Curt Doolittle
    The Natural Law Institute

    Source date (UTC): 2024-05-07 04:32:39 UTC

    Original post: https://x.com/i/articles/1787702060579750122