Category: AI, Computation, and Technology

  • Go to my profile -> media tab and scroll thu it and you’ll find a lot to work wi

    Go to my profile -> media tab and scroll through it and you’ll find a lot to work with.

    This is the original starting position.


    Source date (UTC): 2025-07-06 18:00:14 UTC

    Original post: https://twitter.com/i/web/status/1941920137243930873

  • Yes this is one of nearly 100 charts produced by expanding on our original work

    Yes, this is one of nearly 100 charts produced by expanding on our original work, by someone who asked to remain anonymous. Additionally, Shane (PhD) has produced hundreds more that illustrate these ideas down to neurochemistry.
    This particular diagram required a bit of subjectivity in the placement of examples, but my interpretation is that it’s still correct.


    Source date (UTC): 2025-07-06 17:46:55 UTC

    Original post: https://twitter.com/i/web/status/1941916784606777504

  • “AI thrives when humans stop lying.”— Dr Brad

    “AI thrives when humans stop lying.” — Dr Brad


    Source date (UTC): 2025-07-06 01:48:27 UTC

    Original post: https://twitter.com/i/web/status/1941675581369926073

  • Something like 9000 people were let go from microsoft yesterday. I’m in a restau

    Something like 9,000 people were let go from Microsoft yesterday. I’m in a restaurant in Redmond overhearing the conversational laments of those let go, solaced by beer. The tech ride is over. 😉 It’s a matter of embracing the reality. 🙁


    Source date (UTC): 2025-07-03 22:37:00 UTC

    Original post: https://twitter.com/i/web/status/1940902624406262072

  • Explaining How Our Work at NLI Enables LLMs to Reason (really). 😉 Current LLMs

    Explaining How Our Work at NLI Enables LLMs to Reason (really). 😉

    Current LLMs do not “reason” in the classical or computational sense. They approximate reasoning through pattern replication from language corpora. But true reasoning requires:
    1. Commensurable inputs: A way to measure and compare propositions.
    2. Decidability: A method to resolve propositions without discretionary judgment.
    3. Constraint: A boundary condition to prevent nonsense, contradiction, or parasitism.
    4. Goal alignment: A purpose function—what reasoning is optimizing for.
    LLMs today are unbounded. They simulate reasoning by traversing linguistic space, but:
    • They cannot distinguish valid from invalid inference.
    • They cannot decide between contradictory inputs.
    • They cannot distinguish plausible from reciprocal.
    • They lack context-dependent goal orientation.
    By embedding universal commensurability and decidability, we give LLMs the grammar of reasoning they are currently missing.
    1. Universal Commensurability: Enabling Comparability Across Domains
    We structure knowledge in terms of dimensions, operations, demonstrated interests, and costs/benefits. This:
    • Reduces the problem space to comparable units.
    • Maps propositions from different paradigms onto the same coordinate system.
    • Allows analogies, contradictions, or trade-offs to be measured rather than guessed.
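    The reduction to comparable units described above can be sketched in code. This is a minimal illustration, not any published NLI implementation: the `Proposition` type, its fields, and the cost/benefit dimensions are all my own assumptions about what "mapping propositions onto the same coordinate system" might look like.

```python
# Hypothetical sketch of universal commensurability: reduce claims to shared
# measurable dimensions so trade-offs can be computed rather than guessed.
# All names and fields here are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Proposition:
    text: str
    # Shared coordinate system: named dimensions with measured values.
    dimensions: dict[str, float] = field(default_factory=dict)


def tradeoff(p: Proposition) -> float:
    """Net benefit, computable once claims share the same dimensions."""
    return p.dimensions.get("benefit", 0.0) - p.dimensions.get("cost", 0.0)


def comparable(a: Proposition, b: Proposition) -> bool:
    """Two claims are commensurable here iff they share the same dimensions."""
    return set(a.dimensions) == set(b.dimensions)


a = Proposition("build the bridge", {"cost": 3.0, "benefit": 5.0})
b = Proposition("build the tunnel", {"cost": 4.0, "benefit": 5.5})
assert comparable(a, b)
print(max((a, b), key=tradeoff).text)  # → "build the bridge" (net 2.0 vs 1.5)
```

    Once two claims share dimensions, choosing between them is an arithmetic comparison instead of a rhetorical one, which is the point of the "measured rather than guessed" claim above.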
    2. Decidability: Enforcing Inference Validity and Goal Relevance
    We structure judgment by hierarchies of decidability:
    • Tautological → Logical → Empirical → Operational → Reciprocal → Intuitive → Discretionary.
    Each layer has formal tests. LLMs can:
    • Detect where reasoning is valid and where it fails.
    • Defer or qualify statements when infallibility cannot be satisfied.
    • Prioritize higher decidability classes in goal pursuit.
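    The hierarchy of decidability above lends itself to an ordered type. A minimal sketch, assuming only what the post states: the layer names come from the text, while the numeric ordering and the prioritization function are my own illustrative choices.

```python
# Hypothetical sketch: the decidability hierarchy as an ordered enum, so a
# model can "prioritize higher decidability classes in goal pursuit".
from enum import IntEnum


class Decidability(IntEnum):
    # Higher value = more decidable (less human discretion needed to resolve).
    DISCRETIONARY = 1
    INTUITIVE = 2
    RECIPROCAL = 3
    OPERATIONAL = 4
    EMPIRICAL = 5
    LOGICAL = 6
    TAUTOLOGICAL = 7


def prioritize(claims: list[tuple[str, Decidability]]) -> list[str]:
    """Order claims by decidability class, most decidable first."""
    return [text for text, _ in sorted(claims, key=lambda kv: kv[1], reverse=True)]


claims = [
    ("I prefer this policy", Decidability.DISCRETIONARY),
    ("2 + 2 = 4", Decidability.TAUTOLOGICAL),
    ("the bridge held 10 tonnes in testing", Decidability.EMPIRICAL),
]
print(prioritize(claims)[0])  # → "2 + 2 = 4"
```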
    3. Constraint = Inductive Bias That Prevents Drift, Error, or Parasitism
    Reasoning requires selection among infinite paths. Our framework provides:
    • Operational constraints (can this be done?)
    • Reciprocal constraints (can this be done without parasitism?)
    • Testimonial constraints (can this be stated without deception?)
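    The three constraint questions above behave like predicate filters over candidate outputs. A sketch under stated assumptions: the predicates and the claim fields (`feasible`, `parasitic`, `deceptive`) are placeholders I introduced, not part of the framework's actual tests.

```python
# Hypothetical sketch: operational, reciprocal, and testimonial constraints
# as a conjunction of predicates over a candidate claim.
from typing import Callable

Claim = dict  # placeholder shape: {"text": ..., "feasible": ..., "parasitic": ..., "deceptive": ...}

CONSTRAINTS: list[Callable[[Claim], bool]] = [
    lambda c: c.get("feasible", False),      # operational: can this be done?
    lambda c: not c.get("parasitic", True),  # reciprocal: without parasitism?
    lambda c: not c.get("deceptive", True),  # testimonial: without deception?
]


def admissible(claim: Claim) -> bool:
    """A candidate survives only if every constraint holds."""
    return all(test(claim) for test in CONSTRAINTS)


ok = {"text": "trade at the posted price", "feasible": True, "parasitic": False, "deceptive": False}
bad = {"text": "bait-and-switch pricing", "feasible": True, "parasitic": True, "deceptive": True}
assert admissible(ok) and not admissible(bad)
```

    The design choice here is that constraints are vetoes, not weights: a claim that fails any one of the three is excluded, which matches the framing of constraint as a boundary condition rather than a preference.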
    4. Purpose Alignment = Constraining Reasoning to Civilizational Survival
    All reasoning ultimately answers to goals. Our framework identifies the universal purpose of cooperation as the maximization of cooperation via reciprocal self-determination.
    This gives the model a teleological anchor—a direction of reasoning optimized toward survival, not comfort or aesthetic preference.
    Our framework gives LLMs the means, method, and motive to reason:
    1. Means: A shared operational language that maps all propositions to commensurable units.
    2. Method: A decidable logic of inference constrained by testability and reciprocity.
    3. Motive: A civilizational telos—maximize cooperation via reciprocal self-determination.


    Source date (UTC): 2025-07-03 16:16:30 UTC

    Original post: https://x.com/i/articles/1940806866852032763

  • Enabling Reasoning: How Our Work on Universal Commensurability and Decidability

    Enabling Reasoning: How Our Work on Universal Commensurability and Decidability Can Affect LLMs

    I. The Problem: LLMs Are Pattern-Matchers Without Grounded Commensurability or Decidability
    Large Language Models (LLMs), as currently trained, are high-dimensional statistical parrot machines—extraordinary at approximating human linguistic behavior but indifferent to truth, reciprocity, coherence, or consequences. They operate under:
    • Incommensurable Inputs: No shared system of measurement for evaluating competing claims, paradigms, or moral judgments.
    • Undecidable Outputs: No constraint ensuring that generated responses are testable, warrantable, or reciprocally consistent.
    • Goal Agnosticism: No embedded model of what should be preserved, optimized, or constrained in human cooperation.
    This leads to:
    • Surface-level fluency without epistemic coherence.
    • Moral judgments without operational warrant.
    • Responses that are persuasive, but unaccountable.
    II. The Solution: Our Work Introduces Computable Constraint via Commensurability and Decidability
    1. Universal Commensurability = A Shared Metric for Meaning, Action, and Value
    Our framework defines commensurability as the capacity to reduce all claims, across all domains, to a shared system of measurement:
    • Claims are decomposed into demonstrated interests, operational sequences, dimensions of cost/benefit, and domains of causality.
    • This allows the LLM to map incommensurable worldviews (e.g. theological, scientific, legal, moral) to common operational primitives.
    2. Decidability = Enforcing Constraint on Output Validity
    We define decidability as satisfying the demand for infallibility appropriate to the context, without requiring human discretion. It’s not just whether a statement is true, but whether it is:
    • Computable (can the model resolve it given current data?),
    • Warrantable (can it justify the statement under adversarial testing?),
    • Non-discretionary (does it avoid requiring ideological judgment, intuition, or preference?).
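    The three tests above (computable, warrantable, non-discretionary) can be sketched as a conjunction over a statement's properties. This is an illustrative assumption about how such tests might be structured, not the framework's actual method; the `Statement` fields are placeholders I introduced.

```python
# Hypothetical sketch: decidability as three conjoined tests on a statement.
from dataclasses import dataclass


@dataclass
class Statement:
    text: str
    resolvable_with_data: bool   # computable: can the model resolve it given current data?
    survives_adversarial: bool   # warrantable: does it hold up under adversarial testing?
    needs_preference: bool       # discretionary: does it require ideological judgment or taste?


def decidable(s: Statement) -> bool:
    """A statement is decidable iff it is computable, warrantable, and non-discretionary."""
    return s.resolvable_with_data and s.survives_adversarial and not s.needs_preference


s1 = Statement("water boils at 100 C at sea level", True, True, False)
s2 = Statement("this painting is beautiful", False, False, True)
assert decidable(s1) and not decidable(s2)
```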
    III. Implications for LLM Development
    IV. Strategic Impact
    1. Model Alignment: Current alignment strategies rely on reinforcement learning from human feedback (RLHF), which is arbitrary, value-laden, and prone to inconsistency. Our method replaces that with computable moral and epistemic alignment based on universal constraints.
    2. Training Efficiency: Rather than training LLMs on vast, ambiguous, and contradictory corpora, models can be trained on a formal grammar of cooperation and a hierarchy of decidability, reducing the need for brute-force statistical learning.
    3. Trustworthiness and Auditability: Because all outputs can be decomposed into operations, dimensions, and reciprocity assessments, LLMs trained under our method become explainable, warrantable, and correctable—a key requirement for institutional deployment.
    V. Summary
    By embedding our system of universal commensurability and decidability into LLM training:
    • We replace statistical mimicry with causal reasoning.
    • We constrain output by truth, reciprocity, and demonstrated interests.
    • We give LLMs a moral and epistemic conscience—not imposed by culture, but computed from first principles.


    Source date (UTC): 2025-07-03 16:03:51 UTC

    Original post: https://x.com/i/articles/1940803684163780917

  • I dont promote it. Its just an early test. so only insiders use it. … like you

    I don’t promote it. It’s just an early test, so only insiders use it. … like you. 😉


    Source date (UTC): 2025-06-25 04:27:42 UTC

    Original post: https://twitter.com/i/web/status/1937729390529642504

  • I want to point out that the Hoe_Math system is possible to convert to computati

    I want to point out that the Hoe_Math system can be converted to computational logic (our model), which can then be expressed in the @ItIsHoeMath narrative system, which is superior for public consumption. (Think Aristotle vs. Plato.)


    Source date (UTC): 2025-06-24 13:38:17 UTC

    Original post: https://twitter.com/i/web/status/1937505559680630804

  • We have a live version that works from the documents (think specifications), but

    We have a live version that works from the documents (think specifications), but we are producing almost thirty training modules in addition. So right now you can use it and get a sense of it, but it gives 80% answers. We want to finish all the training before public release.

    Consciousness is something else. It requires a lot more memory and is much more expensive than current LLM models. The industry will get there eventually, as the price of compute continues to decline.


    Source date (UTC): 2025-06-24 13:35:40 UTC

    Original post: https://twitter.com/i/web/status/1937504901103596009

  • “We give AI referents: categories as constant and causal as numbers and operatio

    “We give AI referents: categories as constant and causal as numbers and operations in mathematics, and commands, functions, and operations in programming. That means we give ‘Reasoning’ what it lacks: closure.”


    Source date (UTC): 2025-06-23 21:48:04 UTC

    Original post: https://twitter.com/i/web/status/1937266429801431403