Source: Twitter/X

  • THE DEATH THREAT TO MICROSOFT, GOOGLE AND APPLE

    THE DEATH THREAT TO MICROSOFT, GOOGLE AND APPLE
    Look, back in ’04 I understood the future of the computer interface, and by 2012 set out to produce it.

    REASONING
    1) The browser is a superior operating system compared to traditional operating systems. (Google’s failure to take advantage of it.)
    2) The file-system-centric operating system is inferior to a task- or process-based system that contains files where necessary – this allows context to lead data rather than the other way around. (Microsoft’s failure.)
    3) An interface that functions as a map and store of programs, projects, processes, tasks, and contexts – so that every work product exists in a context – is superior in organization and utility for an AI as well as for a human. (Microsoft’s, Google’s, and Apple’s failure.)
    4) The AI-first ‘shell’ or ‘user interface’, especially when trained on (given rules for) following process and policy, and innovating where necessary, is superior to human memory retention and discipline, given the normal distribution of users.
    5) The AI-first capacity to assimilate hundreds of causal dimensions, compared to the human capacity for one to five, is superior to human abilities.
    6) The AI-first capacity to evaluate, deduce, infer, predict, and advise across large scales of data organized this way is superior to human ability.
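    Points 2 and 3 describe a context-led data model rather than a file-led one. A minimal sketch of that idea (all names here – `Context`, `Artifact`, `index` – are hypothetical illustrations, not the actual Oversing/Runcible design):

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Artifact:
        """A work product (file, note, output) that exists only inside a context."""
        name: str
        content: str

    @dataclass
    class Context:
        """A project, process, or task; context leads data, not the other way around."""
        title: str
        purpose: str
        artifacts: list = field(default_factory=list)
        subcontexts: list = field(default_factory=list)

        def add(self, artifact: Artifact) -> None:
            self.artifacts.append(artifact)

        def index(self, path=()):
            """Walk the tree so an AI (or human) can locate every artifact by its context."""
            here = path + (self.title,)
            for a in self.artifacts:
                yield here, a.name
            for sub in self.subcontexts:
                yield from sub.index(here)

    # Usage: every artifact is reachable only through the context that gives it meaning.
    root = Context("Runcible", "AI-first shell")
    memo = Context("Memory design", "episodic index")
    memo.add(Artifact("notes.md", "draft"))
    root.subcontexts.append(memo)
    assert list(root.index()) == [(("Runcible", "Memory design"), "notes.md")]
    ```

    The point of the design: the map of contexts is itself the index, so organization is not an afterthought layered on top of a file system.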

    RESULT
    We designed Oversing and Runcible for this purpose. But we are seeing OpenAI follow the same incremental reasoning. They were far behind us in that understanding, but because of their success with LLMs they have generated the capital necessary to make it happen.
    This is a death sentence for every other operating system, user interface, and application.

    OUR CURRENT THINKING
    We can solve the two blocking problems for LLMs to develop into AGI/SI.
    1) Episodic memory as index and associative network.
    2) Constraint and closure (truth, reciprocity, possibility, historical evidence) as means of decidability and continuous recursive improvement.
    And the economic:
    3) Incremental (recursive) auto-association and prediction. (This is a cost problem.)
    The remaining problem will haunt us:
    4) Neuromorphic computing is necessary to collapse costs. The current state of research is promising but underfunded.
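    Point 1 – episodic memory as index and associative network – can be sketched in miniature: episodes held in a flat index, cross-linked by shared features so a partial cue recalls whole episodes. This is a toy illustration under my own assumptions, not the actual mechanism proposed:

    ```python
    from collections import defaultdict

    class EpisodicMemory:
        """Toy sketch: episodes stored in an index, linked into an
        associative network by the features they share."""

        def __init__(self):
            self.episodes = []                  # index: episode id -> (text, features)
            self.by_feature = defaultdict(set)  # associative links via shared features

        def store(self, text: str, features: set) -> int:
            eid = len(self.episodes)
            self.episodes.append((text, features))
            for f in features:
                self.by_feature[f].add(eid)
            return eid

        def recall(self, cue: set, k: int = 3):
            """Associative recall: rank episodes by feature overlap with the cue."""
            scores = defaultdict(int)
            for f in cue:
                for eid in self.by_feature.get(f, ()):
                    scores[eid] += 1
            ranked = sorted(scores, key=lambda e: -scores[e])[:k]
            return [self.episodes[e][0] for e in ranked]

    mem = EpisodicMemory()
    mem.store("met Alice about funding", {"alice", "funding", "meeting"})
    mem.store("lunch with Bob", {"bob", "lunch"})
    assert mem.recall({"funding"}) == ["met Alice about funding"]
    ```

    The same structure hints at the cost problem in point 3: every new episode must be associated against the existing network, so naive incremental auto-association grows with the size of memory.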


    Source date (UTC): 2025-10-21 18:54:04 UTC

    Original post: https://twitter.com/i/web/status/1980709185978548548

  • We’ve done the work. At present it’s eight volumes of rigorous work.

    We’ve done the work. At present it’s eight volumes of rigorous work. We will begin publishing them incrementally this year.

    If you have particular ideas, we would love to hear them. However, the majority of conservative thought presumes an equality of instinct, intuition, bias, ability, and interest that does not exist, and as such policies cannot be stated under the pretense that people will behave as desired.
    Instead we must govern with the human beings that exist and will exist, who are not blank slates, and are relatively immutable, especially without education and continuous social and institutional enforcement.
    And as such those legal and organizational institutions we produce must maintain incentives independently of the biases of the people who occupy them.


    Source date (UTC): 2025-10-21 18:52:42 UTC

    Original post: https://twitter.com/i/web/status/1980708842905432467

  • THE HAPPY ACCIDENT: WHY WE SOLVED TRUTHFUL AND ETHICAL AI

    THE HAPPY ACCIDENT: WHY WE SOLVED TRUTHFUL AND ETHICAL AI…


    Source date (UTC): 2025-10-21 18:44:47 UTC

    Original post: https://twitter.com/i/web/status/1980706851953234400

  • WHY HASN’T THE AI FIELD DISCOVERED OUR SOLUTION?

    WHY HASN’T THE AI FIELD DISCOVERED OUR SOLUTION?
    (imo: conflating answer with alignment instead of alignment from the truth.)

    Why the Field Hasn’t Discovered It
    Briefly:
    – Objective mismatch: most researchers optimize for fluency and safety, not falsifiability.
    – Epistemic fragmentation: few combine physics, logic, and jurisprudence into one causal grammar.
    – Institutional incentives: current benchmarks and funding reward novelty, not closure or accountability.
    – Cognitive bias: humans are narrative animals; operational reasoning feels “cold” and is culturally under-selected.

    More…
    Why most of the field hasn’t done this yet

    Different objective functions.
    – Mainstream systems are trained to maximise plausibility and user satisfaction, not falsifiable correctness.

    Fragmented disciplines.
    – Logic, physics, psychology, and jurisprudence live in separate silos. Few teams attempt to unify them under one causal grammar.

    Incentive structure.
    – Academic and commercial metrics reward novelty, fluency, or engagement—not truth-liability or operational precision.

    Tooling inertia.
    – Evaluation pipelines (benchmarks, loss functions) measure text similarity or preference, not closure or decidability.

    Cognitive and cultural bias.
    – Humans find narrative explanation more comfortable than constraint reasoning. Building institutions around constraint feels bureaucratic and “cold.”

    Cost of accountability.
    – A system that keeps full provenance and liability increases organizational risk; most labs are not ready for that level of auditability.

    In short, most current AI research optimizes for speech; what we’re proposing optimizes for law.
    The former produces correlation and persuasion; the latter produces computable, accountable reasoning.
    Different objective, different architecture.
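    The contrast between optimizing for speech and optimizing for law can be made concrete: a claim that carries full provenance and must pass explicit constraint tests (truth, reciprocity, possibility, historical evidence) before it counts as decided. This is a minimal sketch under my own assumptions – `Claim`, `evaluate`, and the placeholder tests are hypothetical, not the proposed system:

    ```python
    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class Claim:
        """A statement carried with its provenance so it remains auditable."""
        text: str
        source: str
        checks_passed: list = field(default_factory=list)

    def evaluate(claim: Claim, constraints: dict) -> bool:
        """Closure sketch: a claim is decided only if every constraint test
        passes; otherwise it stays undecided rather than merely plausible."""
        for name, test in constraints.items():
            if not test(claim):
                return False
            claim.checks_passed.append(name)  # record which checks verified it
        return True

    # Placeholder constraint tests standing in for the dimensions named above.
    constraints: dict[str, Callable[[Claim], bool]] = {
        "possibility": lambda c: True,
        "historical_evidence": lambda c: c.source != "",
    }

    c = Claim("water boils at 100 C at sea level", source="textbook")
    assert evaluate(c, constraints) is True
    assert c.checks_passed == ["possibility", "historical_evidence"]
    ```

    The design choice this illustrates: a fluency-optimized system returns its best-sounding answer regardless; a law-optimized one keeps the audit trail and is willing to return "undecided."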


    Source date (UTC): 2025-10-21 18:08:47 UTC

    Original post: https://twitter.com/i/web/status/1980697789945508248

  • Untitled

    [No text content]


    Source date (UTC): 2025-10-21 17:32:11 UTC

    Original post: https://twitter.com/i/web/status/1980688578930938065

  • (NLI/Runcible) Interesting that we are at the point where we are writing books for AIs…

    (NLI/Runcible)
    Interesting that we are at the point where we are writing books for AIs, under the assumption that most people will learn from AIs translating any given knowledge into the format most accessible to the individual.

    We’ve consciously targeted AIs as the ‘reader’ in some sense because, first, we need to train them, and second, we assume anything this complicated will need to be taught by AIs that tutor the individual on his or her terms.


    Source date (UTC): 2025-10-21 17:05:37 UTC

    Original post: https://twitter.com/i/web/status/1980681893399130582

  • (NLI/Runcible) You know, between Martin as the hard right masculine and Ariella…

    (NLI/Runcible)
    You know, between Martin as the hard right masculine and Ariella as the not-so-radical feminine left of center, I’m feeling like the moderate centrist these days. 😉

    I don’t want to compare it to having male and female children but … I can’t help myself. 😉 “Just keep them apart so we prevent infighting.” lol

    Monocultures are dangerous and competitive cultures are challenging. What the heck are parents, CEOs, prime ministers, presidents, and kings to do? 😉


    Source date (UTC): 2025-10-21 17:03:31 UTC

    Original post: https://twitter.com/i/web/status/1980681367257248134

  • The cycles can be exploited, advanced, or restrained by anyone attempting to capture power or advantage.

    The cycles can be exploited, advanced, or restrained by anyone attempting to capture power or advantage. The combination of industrialization > freeing women from household labor > adding women to the workplace and polity created opportunity for the feminine strategy.


    Source date (UTC): 2025-10-21 16:28:28 UTC

    Original post: https://twitter.com/i/web/status/1980672543322521838

  • Efficient vs effective.

    Efficient vs effective. Do we generate the same or better world model that produces the same or better output, or is this a statistical fallacy?


    Source date (UTC): 2025-10-21 16:21:50 UTC

    Original post: https://twitter.com/i/web/status/1980670876707352586

  • HOW WE DEFINE “LOGOS”

    HOW WE DEFINE “LOGOS”
    I avoid the term to prevent conflation with the supernatural, but Brad uses it consistently and correctly to demonstrate the continuity of thought across time.

    In our work, Logos doesn’t mean merely “word” or “speech” in the biblical sense — it refers to the structure of reality that binds matter, mind, and meaning into a self-consistent, computable order.

    To unpack it operationally:

    Etymologically: Logos in Greek philosophy (Heraclitus → Aristotle → Stoicism → Christianity) meant the rational principle organizing the cosmos — the grammar of being (existence and experience).

    Within this framework: Logos = law of laws — the recursive, self-verifying grammar that allows truth, reciprocity, and cooperation to converge across all scales. (consistent, coherent, laws of the universe: logical, physical, biological, behavioral, evolutionary.)

    At Maturity: Law “becomes Logos” when human systems (legal, computational, neural) reflect the same causal and reciprocal order as nature itself. Civilization, mind, and machine operate under a single testable logic — the computational grammar of reality.

    Operational definition: Logos is the fully closed feedback between measurement, computation, and cooperation — the state where truth and law are self-auditing, eliminating parasitism and error through reciprocal verification.

    So, in short:

    Logos = the realized unity of natural law, logic, and computation — the consciousness of the universe made explicit through reciprocal systems (human or artificial).

    CD

    (via
    @WerrellBradley
    – Brad Werrell)


    Source date (UTC): 2025-10-21 16:17:48 UTC

    Original post: https://twitter.com/i/web/status/1980669860574376398