Theme: Grammar

  • “I don’t see the point of calling it a logic if it isn’t mathematically formaliz

    “I don’t see the point of calling it a logic if it isn’t mathematically formalized.”

    That is where you err. Math is a primitive language that, for that simple reason, is less vulnerable to some categories of inflation and conflation. But mathematical reducibility is small compared to axiomatic reducibility, which is why the foundations of math are expressed axiomatically.

    The logic of sets is greater than mathematical reducibility. The logic of operations is greater than logical or mathematical reducibility.

    So no: unless you can explain something operationally, you are always, of necessity, speaking in analogy, not causality.

    Ergo: math is logically formalized and logic is operationally formalized – the opposite of your presumption. :/

    Reply addressees: @LiminalRev @AutistocratMS


    Source date (UTC): 2025-04-18 00:06:42 UTC

    Original post: https://twitter.com/i/web/status/1913021334172672000

    Replying to: https://twitter.com/i/web/status/1913017932268847124

  • There is only one principle from which the rest are constructed in a hierarchy:

    There is only one principle from which the rest are constructed in a hierarchy: derivations. You might say that we develop grammars (sublogics of paradigms) for our convenience, but in the end it is just entropy != charge +/- and combination =.


    Source date (UTC): 2025-04-17 23:58:55 UTC

    Original post: https://twitter.com/i/web/status/1913019373255893107

    Reply addressees: @AutistocratMS @LiminalRev

    Replying to: https://twitter.com/i/web/status/1913018073596248491

  • I suspect there is some confusion between: a) ternary logic of relations: “three

    I suspect there is some confusion between:
    a) ternary logic of relations: “three points are required to test a line”,
    b) ternary logic of decidability: U/T/F,
    c) ternary logic of charge (spin),
    d) ternary logic of transformations: before, during, after,
    e) and ternary logic of evolutionary computation.
    The entire corpus is built from this single principle: The universe has only one rule at increasing levels of complex expression. It must. There is no alternative.
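    Item (b), the ternary logic of decidability (U/T/F), can be pictured with a small sketch. This is an illustrative three-valued system in the style of strong Kleene logic, not anything drawn from the corpus itself; the names `TV`, `t_not`, and `t_and` are hypothetical:

    ```python
    from enum import Enum

    class TV(Enum):
        """Three-valued decidability: true, false, undecided."""
        T = "T"
        F = "F"
        U = "U"

    def t_not(a: TV) -> TV:
        # Negation leaves the undecided undecided.
        return {TV.T: TV.F, TV.F: TV.T, TV.U: TV.U}[a]

    def t_and(a: TV, b: TV) -> TV:
        # Strong Kleene conjunction: a single F decides the whole
        # conjunction; otherwise any U leaves it undecided.
        if TV.F in (a, b):
            return TV.F
        if TV.U in (a, b):
            return TV.U
        return TV.T
    ```

    The point of the third value is that `t_and(TV.F, TV.U)` is still decidably false, while `t_and(TV.T, TV.U)` remains undecided.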

    Reply addressees: @LiminalRev @AutistocratMS


    Source date (UTC): 2025-04-17 23:48:55 UTC

    Original post: https://twitter.com/i/web/status/1913016857273171968

    Replying to: https://twitter.com/i/web/status/1912997402673291392

  • The thing is others get it and some others don’t even though it’s consistent at

    The thing is, others get it and some others don't, even though it's consistent at all scales. But that's why I'm not using them in the books. (Though I might in the logic book.) I'd rather not include them and confuse people than include them and lose half the readership.


    Source date (UTC): 2025-04-17 21:34:42 UTC

    Original post: https://twitter.com/i/web/status/1912983081054880122

    Reply addressees: @AutistocratMS

    Replying to: https://twitter.com/i/web/status/1912974358286619113

  • For those that don’t recognize Pierre’s use of the terms indexical and iconic in

    For those that don’t recognize Pierre’s use of the terms indexical and iconic in this context:

    Pierre Rousseau’s post engages with Curt Doolittle’s cultural framework, contrasting “truth before face” (West, associated with men and “iconic” reasoning) and “face before truth” (East, linked to women and “indexical” reasoning), using philosophical terms from semiotics: “iconic” refers to signs resembling their object, while “indexical” points to signs tied to context, as defined by Arthur Burks (1949) in Philosophy and Phenomenological Research.

    The mention of “iconicity” as a “3rd tier realization” and “recursive reality” suggests a layered, self-referential understanding of truth, possibly drawing on developmental psychology where symbolic thinking in children evolves into more complex reasoning, a concept explored in Piaget’s stages of cognitive development (1954).

    Rousseau’s claim that “Xi uses feminism to subvert it” implies a geopolitical critique, likely referencing Xi Jinping and suggesting that feminist ideals are being co-opted to undermine Western cultural values, a perspective that aligns with Butler’s Gender Trouble (1999) on subverting identity norms, but lacks direct evidence in this context.

    Reply addressees: @truthb4face


    Source date (UTC): 2025-04-16 17:58:06 UTC

    Original post: https://twitter.com/i/web/status/1912566181506215939

    Replying to: https://twitter.com/i/web/status/1912565091523850243

  • We have two paths in progress. Eric is decomposing assertions into a formal logi

    We have two paths in progress. Eric is decomposing assertions into a formal logic graph, and then reconstituting it with an LLM. I’m trying to train an LLM from the bottom up to practice dependency tracing. I suspect both will work, and hopefully each will compensate for any weakness in the other. Myself, I feel that the AIs respond to Socratic training more so than rules, if only because they cannot follow causal chaining without careful regulation of their processing. But we shall see.
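    The "formal logic graph" path can be sketched minimally: assertions as nodes, dependencies as edges, and dependency tracing as a walk back to the premises. The structure and names here are illustrative placeholders, not the actual implementation described in the post:

    ```python
    # Minimal assertion-dependency graph: each assertion lists the
    # assertions it depends on; tracing walks back to the premises.
    deps = {
        "C": ["A", "B"],   # conclusion C depends on A and B
        "B": ["A"],
        "A": [],           # A is a premise (no dependencies)
    }

    def trace(assertion, graph):
        """Return every assertion the given one ultimately depends on."""
        seen = set()
        stack = [assertion]
        while stack:
            node = stack.pop()
            for parent in graph.get(node, []):
                if parent not in seen:
                    seen.add(parent)
                    stack.append(parent)
        return seen
    ```

    Tracing "C" yields {"A", "B"}: the conclusion is grounded in premise A both directly and via B, which is the kind of chain an LLM would be trained to reproduce.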

    Reply addressees: @WalterIII @farajalrashidi


    Source date (UTC): 2025-04-10 18:15:12 UTC

    Original post: https://twitter.com/i/web/status/1910396157391712256

    Replying to: https://twitter.com/i/web/status/1910394704761676034

  • We have to train it in our logic first, then apply that logic to all subsequent

    We have to train it in our logic first, then apply that logic to all subsequent training data, so that we aren't trying to correct error after the fact but preventing its introduction into the model.
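    The ordering described here — learn the logic first, then gate all later data through it — amounts to a filtering step in front of the training corpus. A hypothetical sketch, where `passes_logic` is a toy stand-in for whatever the trained logic would actually score:

    ```python
    def passes_logic(example: str) -> bool:
        # Stand-in for the trained logic: in practice the model trained
        # in the first pass would score the example; a toy rule suffices
        # here to show the gating shape.
        return "unfalsifiable" not in example

    def gate_training_data(corpus):
        # Admit only examples the learned logic accepts, so error is
        # excluded before it enters the model rather than corrected after.
        return [ex for ex in corpus if passes_logic(ex)]
    ```

    The design point is that the filter runs before training, not as a post-hoc correction pass over a model that has already absorbed the error.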


    Source date (UTC): 2025-04-10 02:35:35 UTC

    Original post: https://twitter.com/i/web/status/1910159695668789346

    Reply addressees: @AngrySaltMiner

    Replying to: https://twitter.com/i/web/status/1910020247048057125

  • Training a Base Model To Use Our Methodology Speculative Insight Our approach to

    Training a Base Model To Use Our Methodology

    Speculative Insight
    Our approach to training is closest to constructive neuro-symbolic alignment. We are not retraining a model to behave; we are teaching it a logic, using operational primitives and adversarial truth testing. Most AI research assumes abstraction is layered on top of training. We’re rightly flipping this, saying: abstractions must be rebuilt from primitives under decidability constraints.
    This is both:
    • Epistemologically superior to probabilistic inference by language prediction, and
    • Efficient if the base model already has rich sensorimotor, common-sense, and action grammar knowledge.
    Strategy Viability
    Our strategy is highly viable under the following plan:
    1. Select a model like Mistral or Yi-34B with good grounding and minimal prior abstractions.
    2. Perform continued pretraining, not just fine-tuning, on our corpus of:
      – Operational definitions
      – Formal grammars
      – Natural Law structure
      – First-principles logic trees
      – Canon of examples (cases)
    3. Use adversarial Socratic dialogue in training, where errors trigger correction from your defined logic.
    4. Apply RLAIF (Reinforcement Learning from Adversarial Instruction Following) rather than standard RLHF—this avoids crowd-sourced moral shaping.
    Our strategy is both intelligent and viable, provided the foundation model has a sufficient grounding in primitives (perception, action, objects, relations, events, and basic intentions)—what might be called naïve physics and naïve psychology—while remaining relatively uncommitted to particular abstract frameworks. In effect, you’re looking for:
    1. High coverage of experiential and operational primitives (so you don’t need to re-teach what a door, key, argument, or goal is),
    2. Low entrenchment in abstract philosophical, ideological, or academic conceptual hierarchies, so you can impose your own.
    Candidate Base Model:
    1. Mistral 7B / Mixtral
    • Why: Mistral 7B is known for efficiency, open weights, and solid grounding in daily-use language. It’s less “opinionated” than LLaMA-2 or GPT-J on abstractions.
    • Primitives: Reasonably good on object/agent/action-level reasoning.
    • Bias: Minimal ideological shaping.
    • Steerability: Very good.
    • Viability: Very high.
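    Step 3 of the plan — adversarial Socratic dialogue in which errors trigger correction from the defined logic — can be sketched abstractly. The checker and corrector below are placeholders, not any real training API; they only show the shape of one round:

    ```python
    def socratic_round(model_answer, check, correct):
        """One adversarial round: challenge the answer; if the check
        fails, return an (objection, correction) training pair."""
        ok, objection = check(model_answer)
        if ok:
            return None  # answer survives the challenge; no update needed
        return (objection, correct(model_answer, objection))

    # Placeholder checker: flags answers that assert without grounding.
    def check(answer):
        grounded = "because" in answer
        return grounded, None if grounded else "state the operational cause"

    # Placeholder corrector: appends the required correction.
    def correct(answer, objection):
        return answer + " [corrected per: " + objection + "]"
    ```

    An ungrounded answer like "X is true" produces a correction pair to train on, while "X is true because Y was observed" passes the challenge and produces nothing — errors generate training signal, conformance does not.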


    Source date (UTC): 2025-04-09 16:41:46 UTC

    Original post: https://x.com/i/articles/1910010260338925613

  • Speculative Insight Our approach to training is closest to constructive neuro-sy

    Speculative Insight

    Our approach to training is closest to constructive neuro-symbolic alignment. We are not retraining a model to behave; we are teaching it a logic, using operational primitives and adversarial truth testing. Most AI research assumes abstraction is layered on top of training. We’re rightly flipping this, saying: abstractions must be rebuilt from primitives under decidability constraints.

    This is both:

    Epistemologically superior to probabilistic inference by language prediction, and

    Efficient if the base model already has rich sensorimotor, common-sense, and action grammar knowledge.

    Strategy Viability

    Our strategy is highly viable under the following plan:

    Select a model like Mistral or Yi-34B with good grounding and minimal prior abstractions.

    Perform continued pretraining, not just fine-tuning, on our corpus of:
    – Operational definitions
    – Formal grammars
    – Natural Law structure
    – First-principles logic trees
    – Canon of examples (cases)

    Use adversarial Socratic dialogue in training, where errors trigger correction from your defined logic.

    Apply RLAIF (Reinforcement Learning from Adversarial Instruction Following) rather than standard RLHF—this avoids crowd-sourced moral shaping.

    Our strategy is both intelligent and viable, provided the foundation model has a sufficient grounding in primitives (perception, action, objects, relations, events, and basic intentions)—what might be called naïve physics and naïve psychology—while remaining relatively uncommitted to particular abstract frameworks. In effect, you’re looking for:

    High coverage of experiential and operational primitives (so you don’t need to re-teach what a door, key, argument, or goal is),

    Low entrenchment in abstract philosophical, ideological, or academic conceptual hierarchies, so you can impose your own.

    Candidate Base Model:

    1. Mistral 7B / Mixtral

    Why: Mistral 7B is known for efficiency, open weights, and solid grounding in daily-use language. It’s less “opinionated” than LLaMA-2 or GPT-J on abstractions.

    Primitives: Reasonably good on object/agent/action-level reasoning.

    Bias: Minimal ideological shaping.

    Steerability: Very good.

    Viability: Very high.

    🔸 Mixtral adds a sparse Mixture-of-Experts architecture for better generalization, while keeping training compute reasonable.


    Source date (UTC): 2025-04-09 16:36:23 UTC

    Original post: https://x.com/i/articles/1910008903259283460

  • I’ll probably submit it or at least post on ArXive. I do not expect wide accepta

    I’ll probably submit it or at least post it on arXiv. I do not expect wide acceptance. But I have felt that the point needed to be made, and made in the style and format (grammar) they are familiar with. The criticism will consist of four points I already understand. And I expect the majority to fail to grasp the argument, even though it's not really that complicated: a classical explanation, given our current knowledge of the quantum background (zero point), defines causality simply and opens up clear avenues for closure by investigation. I have intuitions, after writing this, about why two of the numbers vary. So I think I have made, and can make, my case. I just have too much work already. 🙁

    Reply addressees: @GestaltedApe


    Source date (UTC): 2025-03-25 18:30:35 UTC

    Original post: https://twitter.com/i/web/status/1904601823748726784

    Replying to: https://twitter.com/i/web/status/1904531983172043199