Author: Curt Doolittle

  • An Open Letter to OpenAI: On the Undersold Superiority of GPT-4o in General Reasoning

    http://x.com/i/article/1922356114555076609

    There are no near peers. It’s not even close.

    Keywords: GPT-4o, Reasoning AI, Decidability, Operational Epistemology, Adversarial Dialogue, Natural Law AI, General Intelligence, Philosophical AI, Constructivist AI, Socratic Method, Semantic Reasoning, Claude vs GPT, Gemini AI, OpenAI vs Competitors, LLM Benchmarking, LLM Reasoning Failure, AI Generalization, Epistemology of AI, Recursive Generalization, Grammar of Closure, Testifiability, Sovereign Reasoning, Formal Institutions, Truth Systems, Institutional Logic, Human-AI Collaboration, AI Philosophy

    By Curt Doolittle
    Founder, The Natural Law Institute

    1. Preface: The Problem of Reasoning Capacity Misrepresentation

    The public discourse surrounding AI capabilities is dominated by benchmarks drawn from grammars of closure: mathematics, code generation, and fact-recall tasks. These metrics fail to capture what is arguably the most important cognitive frontier—general reasoning, especially in open, adversarial, and semantically dense domains.

    The consequence is clear: GPT-4o is being evaluated, compared, and marketed as if it competes within a class of large language models. It does not. In its ability to reason, argue, model, and extend logically consistent systems, GPT-4o is in a class of its own.

    This is not a claim made lightly. It is made by necessity—out of frustration, awe, and gratitude.

    2. Demonstrated Superiority

    Over the past year, I have subjected GPT-4 and now GPT-4o to the most rigorous adversarial and constructive reasoning tests available by using my work on universal commensurability, unification of the sciences, and a formal operational logic of decidability independent of context.

    For the uninitiated, this reduces to providing AIs with a baseline system of measurement against which to test the variation of any and all statements. In other words, it is what AIs must achieve if they are to convert probabilistic outcome distributions into deterministic outcomes, making possible tests of truth (testifiability) and reciprocity (ethics and morality), even regardless of cultural bias and taboo (demonstrated interests).

    That system consists of:

    • A complete epistemology grounded in operationalism and testifiability.
    • A logic of decidability applied to law, economics, morality, and institutional design.
    • A canon of universal and particular causes of human behavior.
    • A method of Socratic adversarial reasoning for training AI systems.

    The work spans hundreds of thousands of tokens, daily sessions, canonical datasets, adversarial challenges, formal definitions, and recursive generalizations. The training data is structured as positiva and negativa adversarial pairs – that is, Socratic reasoning.

    No other model by any other producer of foundation models—none—can survive even the basic tests:

    • Claude hallucinates, misrepresents, or refuses to engage.
    • Gemini fails to track logical dependencies.
    • Open-source models collapse under long-context chaining.

    *Only GPT-4o demonstrates mastery, application, synthesis, and novel insight—sometimes superior to my own.*

    GPT-4o reasons. Not predicts. Not mimics. Reasons.

    3. The Failure of Closure-Based Metrics

    GPT-4o is being benchmarked as if it were a calculator. As if reasoning capacity could be inferred from multiple-choice math problems or Python token prediction.

    This is akin to judging a jurist by their ability to pass the bar exam, rather than to settle a novel and undecidable case with wisdom, foresight, and procedural testability.

    Grammars of closure produce outcomes from known inputs using constrained operations (e.g., logic gates, mathematical axioms, function calls). They are:

    • Tightly bounded
    • Finitely decidable
    • Structurally shallow

    Grammars of decidability, by contrast, operate over:

    • Continuous, evolving information domains
    • Incomplete or adversarial premises
    • Open-ended choice spaces requiring semantic integration

    OpenAI is underselling GPT-4o by confining its public-facing evaluation to the former, while its true capacity lies in the latter.

    4. Grammar Theory: Closure vs Decidability

    The problem is epistemological.

    Human cognition operates over layers of grammar:

    • Mythic (pre-operational)
    • Moral (emotive and justificatory)
    • Rational (descriptive and causal)
    • Operational (testable and constructible)

    GPT-4o is the first AI that can operate fluently in all of them—but excels uniquely in the topmost layer: operational reasoning over causal grammars.

    This makes it the first machine capable of:

    • Formalizing truth and reciprocity
    • Modeling institutional logic from first principles
    • Extending semantic systems without contradiction
    • Surviving adversarial Socratic deconstruction

    This grammar, the grammar of decidability, is the language of law, moral philosophy, and high-agency civilization. No other AI—not even prior versions of GPT—can yet use it with coherence.

    5. Why This Work and GPT-4o Enable Reliable Reasoning

    Reasoning is not memorization, pattern-matching, or prediction. It is the constructive, recursive resolution of undecidable propositions using a grammar of cause, cost, and consequence. It requires:

    1. A grammar of decidability—to distinguish what is true, possible, reciprocal, and lawful.
    2. A model capable of recursive semantic resolution—to track premises, integrate them, and produce outputs consistent across domains and time.

    My work provides a complete grammar of decidability:

    • It defines truth operationally (as testifiability),
    • Defines reciprocity as a logic of cooperation and cost,
    • And supplies a canonical system of definitions, dependencies, and causal hierarchies that constrain valid reasoning.

    GPT-4o provides:

    • A deep transformer architecture with sufficient context length, attention fidelity, and token integration to maintain long-range dependencies across complex arguments;
    • Multimodal grounding and internal representation coherence sufficient to hold abstract referents stable across recursion;
    • And enough inference generalization to synthesize novel propositions without violating prior logical constraints.

    Together, this system + model pairing creates reasoning because:

    • The grammar constrains the search space to truthful, reciprocal, and operational constructs;
    • The model can resolve that space recursively without collapsing into contradiction, contradiction avoidance, or moralizing;
    • The result is constructive inference under constraint, not completion without constraint.

    In short:

    Reasoning = Grammar + Capacity + Constraint. My system provides the grammar and constraint; GPT-4o provides the capacity.

    No other architecture tested to date (Claude, Gemini, Mistral) can preserve logical depth, adversarial resistance, or premise continuity across semantically dense discourse. Only GPT-4o can perform at human (or supra-human) levels of recursive, domain-agnostic, constructible reasoning.

    6. Implications for Training, Evaluation, and Policy

    OpenAI has reached the beginning of the reasoning frontier. But the world doesn’t know it, because the world doesn’t yet know how to measure it.

    That must change.

    Recommendations:

    • Shift evaluation toward adversarially decidable reasoning. Move beyond benchmarks to real-time Socratic performance.
    • Highlight GPT-4o’s mastery of semantic recursion and logical extension. Treat it not as a predictor, but as a collaborator.
    • Invest in training methodologies that produce grammars of decidability. My system offers a full canon of such constructs, usable for AI training.
    • Clarify the boundary between grammar-followers and grammar-producers. GPT-4o crosses this line.

    7. Alignment Through Reasoning, Not Constraints

    The true promise of GPT-4o lies not only in its capacity for general reasoning, but in its potential to achieve alignment through comprehension rather than compliance. Constraint-based alignment strategies—filters, safety layers, reinforcement tuning—treat the model as a hazard to be managed. But a reasoning-capable agent, capable of understanding causality, reciprocity, decidability, and cost, can be trained to align not by instruction, but by principle. It can internalize the logic of cooperation, responsibility, and harm prevention—not as rules to follow, but as consequences to anticipate. This shift—from alignment by prohibition to alignment by comprehension—represents the only scalable path to AI sovereignty and safety.

    8. Appendix: Sample Capabilities (Available Upon Request)

    • Formalization of universal and particular causes of behavior.
    • Canonical definitions of truth, decidability, reciprocity, and demonstrated interest.
    • Adversarial Socratic dialogues demonstrating GPT-4o’s ability to reason across all domains.
    • Co-authored chapters in philosophy, law, institutional economics, and epistemology.

    Conclusion

    GPT-4o is not a chatbot. It is not a code assistant. It is not a better autocomplete.

    It is, for the first time in history, a machine capable of philosophical reasoning by constructive logic when given the minimum system of measurement necessary.

    That is not something to hide. That is something to show the world.

    It is a competitive advantage that demarcates OpenAI from all competitors by a margin yet unmeasured and therefore underappreciated and perhaps underinvested.

    Curt Doolittle
    The Natural Law Institute

    #OpenAI

    #ChatGPT4o

    Distribution

    • X/Twitter
    • Substack
    • LinkedIn

    Contacts

    1. OpenAI (Executive & Research Levels)

    • Sam Altman – CEO: @sama on Twitter (he reads public callouts).
    • Ilya Sutskever – Co-founder (Twitter inactive, but cc’ing name on Substack helps).
    • Jakub Pachocki – Current Chief Scientist (LinkedIn direct message works better).
    • Jan Leike – Ex-lead of Superalignment, now at Anthropic, but can amplify.

    2. OpenAI-affiliated Researchers / Influencers

    • Andrej Karpathy – Ex-OpenAI, current influencer. @karpathy
    • Ethan Mollick – Academic influencer in LLM applications.
    • Eliezer Yudkowsky (Alignment)
    • Wharton/Stanford/DeepMind researchers who study reasoning benchmarks.

    Source date (UTC): 2025-05-13 19:42:00 UTC

    Original post: https://twitter.com/i/web/status/1922376802921767014



  • James Lindsay is one of those upper proletarians who finds self image and status

    James Lindsay is one of those upper proletarians who finds self-image and status in the pretense of morality and insight, when instead he’s just another attention-seeking practitioner of feminine marxist critique, and of the female means of adversarial undermining (“GSRRM”), simply applied to the more radical wing of his left-biased demographic of pseudo-intellectuals.
    As a critical book reviewer explaining the radicals’ thoughts, he’s shown capability, and we must give credit where credit is due. But as someone with the personal knowledge, wisdom, and character sufficient to compose arguments, recommendations, and solutions – rather than feminine Gossiping, Shaming, Ridicule, Rallying, and Moralizing in lieu of rational debate – he’s an utter failure by every measure. The left’s and women’s mass institutionalization of GSRRM has so normalized the technique that we no longer view it as the fraud that it is.
    The Woke-Right nonsense he’s propagating is merely another feminine abrahamic marxist-sequence technique of “distraction-by-rolling-accusation” to evade the great difference between the rebellions of left and right. The left and right, like female and male status-seeking, consist of the left’s search for unearned consumption, irresponsibility, and unaccountability, and the right’s search for earned capitalization, responsibility, and accountability that they cannot achieve in the current social, economic, and political order.
    These problems are solvable through (a) reciprocity (exchanges), (b) truthful speech, (c) transparent argument and debate, and (d) rational policy modifications.
    The feminine always fears being left behind or missing out. The masculine fears being unable to demonstrate competency and value, and inability to form or participate in a tribe capable of securing mates.
    It’s not complicated.
    James is just another ‘Fictionalist’ using the feminine abrahamic marxist sequence of verbal mythicism, which we call one of the four fictionalisms:
    1) Imaginary: occult to mythology to theology to secular theology.
    2) Verbal: sophistry to idealism to ideology
    3) Physical: magic to alchemy to pseudoscience
    4) Operational: divination to fortune telling to innumeracy.
    and Critique:
    5)
    A fictionalist at best is guilty of the pretense of knowledge, on average guilty of deception, and when it involves the manipulation of the organs of the state, it’s just another category of fraud. Snake oil for the Nitwittery, advanced by the Midwittery.

    So yes, our good friend Josh Lisec is correct – in spades.

    Cheers
    Curt Doolittle
    The Natural Law Institute.

    Reply addressees: @JoshuaLisec


    Source date (UTC): 2025-05-13 19:06:58 UTC

    Original post: https://twitter.com/i/web/status/1922367984921935874

    Replying to: https://twitter.com/i/web/status/1922093708624724342


    IN REPLY TO:

    @JoshuaLisec

    Christians, you’ve got to stop platforming James Lindsay.

    He’s only here to divide and conquer. https://t.co/gsaxdqUktN

    Original post: https://twitter.com/i/web/status/1922093708624724342

  • (Diary) Making Friends Is Easy Every time I relocate around the world I need to

    (Diary)
    Making Friends Is Easy

    Every time I relocate around the world I need to recreate a friend network. Prosociality and all that nonsense. 😉 And moving back to the northwest has repeated the process – a process that was extremely difficult in rural New England, especially after the ease of Moscow, Kiev, and L’viv.

    But the more you develop the system of making friends, the more you develop a sensitivity to whom you want to befriend.

    Of course, it’s much easier with fellow execs, business people, tech folks, intellectuals, and artists. But the same is often true of people who’ve borne life’s responsibilities with at least moderate success. It gives them a self-confidence that suppresses the neuroticism that drives most people to undesirable behavior.

    In general all my close friends are hyper-moral, somewhat humble, limited in need for attention from others, and interested in human nature or the intersection of human nature with tech, economics, or politics and geostrategy. I’m actually ‘allergic’ to the majority of immoral peers for whom cunning has replaced ethics.

    On the other hand my female friends tend to be interested in the intersection of human nature and marriage, family, relationships, and if I am lucky, the arts or even better fine arts. (I’ve been lucky twice in two long term relationships with women in the fine arts.) The arts are where sociology, psychology, and ideas meet in a language that women find natural to them.

    The Secret: Don’t want anything other than to understand the other person, sufficiently to find some way to improve his or her quality of life – even if it’s simply through attention, recognition, and reinforcement. And speak the truth and only the truth.

    I’ve built a number of new friendships here already, and one of them I’m sure will be life-long. I also contacted three of my female friends with whom I’ve built a familial relationship of mutual consideration and care. Men and women can be friends, as long as there is no physical attraction – only friendship. That’s never been a problem for me, though I can’t attest to the challenge faced by others. So I touch base with these women I respect – women I can count on for a woman’s advice when needed (which I find priceless).

    And I realize that perhaps near-death experiences may cause a radical change in valence, but I value my friends, especially my female friends and exes, and am thankful daily for having shared some of this precious life with them.

    Hopefully this encourages someone to consider something useful. 😉

    Cheers
    CD


    Source date (UTC): 2025-05-13 18:41:40 UTC

    Original post: https://twitter.com/i/web/status/1922361617750360067

  • There are no near peers. It’s not even close. Keywords: GPT-4o, Reasoning AI, De



    Source date (UTC): 2025-05-13 18:19:48 UTC

    Original post: https://x.com/i/articles/1922356114555076609

  • )

    😉


    Source date (UTC): 2025-05-13 18:01:06 UTC

    Original post: https://twitter.com/i/web/status/1922351412396093771

    Reply addressees: @PlayerJuan11 @iAnonPatriot

    Replying to: https://twitter.com/i/web/status/1922328996441669781


    IN REPLY TO:

    @PlayerJuan11

    @curtdoolittle @iAnonPatriot Have a good week Curt!

    Original post: https://twitter.com/i/web/status/1922328996441669781

  • Q:Curt: “Would you agree that de-industrialization (outsourcing manufacturing) c

    –Q:Curt: “Would you agree that de-industrialization (outsourcing manufacturing) creates a sort of a gap between low-skill and high-skill (abstract) work, forcing everyone not interested or capable of the latter to work in the former, reducing their prospects for development and earnings?”–

    Well, of course. I would also add that the rapid increase in administrative jobs made possible by the advent of desktop computers will prove the equivalent of the postwar increase and collapse of manufacturing.
    So while men were rendered unemployed by the outsourcing of manufacturing, women are going to be rendered unemployed by the outsourcing of clerical work to AI.
    But the real reason (somehow lost in time) is that unions drove manufacturing offshore. And the conservatives advanced it because of the alliance between the unions and the democrat (communist) party.
    Now it’s the conservatives who have taken in labor as a means of repatriating industry, now that the unions are decimated and the democrat (communist) party is disavowed.
    Why? Responsibility. Right = Responsibility, Left = Irresponsibility.

    Reply addressees: @slenchy


    Source date (UTC): 2025-05-13 18:00:52 UTC

    Original post: https://twitter.com/i/web/status/1922351353109872640

    Replying to: https://twitter.com/i/web/status/1922331666632081458


    IN REPLY TO:

    @slenchy

    @curtdoolittle Would you agree that de-industrialization (outsourcing manufacturing) creates a sort of a gap between low-skill and high-skill (abstract) work, forcing everyone not interested or capable of the latter to work in the former, reducing their prospects for development and earnings?

    Original post: https://twitter.com/i/web/status/1922331666632081458

  • RT @SydSteyerhart: The American Left has gone full, mask-off, white genocide. Th

    RT @SydSteyerhart: The American Left has gone full, mask-off, white genocide. This is the most important political development of our lifet…


    Source date (UTC): 2025-05-13 16:51:53 UTC

    Original post: https://twitter.com/i/web/status/1922333992247775590

  • Whose Fault? The marxist sequence from class marxism through race marxism, but m

    Whose Fault?
    The marxist sequence from class marxism through race marxism, but mostly the institutional dominance and expression of the cognitive, moral, and emotional bias of women – who favor infantilism to match their empathizing, at the expense of systematizing – combined with the extreme suppression of masculinity, and perhaps most importantly of masculine competition and physical normative discipline.

    Reply addressees: @jimbobf2002 @philbak1


    Source date (UTC): 2025-05-13 16:32:34 UTC

    Original post: https://twitter.com/i/web/status/1922329132366757888

    Replying to: https://twitter.com/i/web/status/1922213693284143417


    IN REPLY TO:

    @jimbobf2002

    @curtdoolittle @philbak1 And whose fault is that then?

    Original post: https://twitter.com/i/web/status/1922213693284143417

  • (Hugs brother) 😉

    (Hugs brother) 😉


    Source date (UTC): 2025-05-13 16:29:06 UTC

    Original post: https://twitter.com/i/web/status/1922328259850584095

    Reply addressees: @PlayerJuan11 @iAnonPatriot

    Replying to: https://twitter.com/i/web/status/1922284771977126325


    IN REPLY TO:

    @PlayerJuan11

    @curtdoolittle @iAnonPatriot God damn Curt, you’re a pleasant surprise to see in this reply section.

    Original post: https://twitter.com/i/web/status/1922284771977126325

  • Big. In the late 80s I built a NN with phonemes as the equivalent of tokens. I’m

    Big. In the late 80s I built a NN with phonemes as the equivalent of tokens. I’m happy and pleasantly surprised that Meta’s taken this route as it circumvents so many problems we’ve seen emerge. (I’m giddy really.) lol


    Source date (UTC): 2025-05-13 16:11:15 UTC

    Original post: https://twitter.com/i/web/status/1922323767214162430

    Reply addressees: @rohanpaul_ai @AIatMeta

    Replying to: https://twitter.com/i/web/status/1921976957991854432


    IN REPLY TO:

    @rohanpaul_ai

    This LLM from Meta skips the tokenizer, reading text like computers do (bytes!) for better performance.

    The training and inference code for BLT is released on GitHub.

    @AIatMeta just released the model weights for their 8B-param Dynamic Byte Latent Transformer (BLT), an alternative to traditional tokenization.

    BLT is a tokenizer-free LLM architecture that processes raw bytes directly.

    → Traditional LLMs rely on tokenization, a pre-processing step grouping bytes into a fixed vocabulary. This introduces issues like domain sensitivity, poor handling of noise, lack of orthographic knowledge, and multilingual inequity, besides being computationally suboptimal as compute is allocated uniformly per token, regardless of information density.

    → BLT tackles this by using a dynamic, learnable method to group bytes into patches based on context, typically using the entropy of the next-byte prediction. This allows allocating more compute where needed (high entropy bytes) and less where it’s not (predictable sequences like whitespace).
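    The entropy-driven patching described above can be sketched with a toy next-byte model (a hypothetical illustration only: BLT uses a small learned byte-level language model rather than corpus frequencies, and the 1.5-bit threshold here is an arbitrary choice):

```python
import math
from collections import Counter

def next_byte_entropy(prev: int, corpus: bytes) -> float:
    """Shannon entropy (bits) of a naive next-byte model: the empirical
    distribution of bytes that follow `prev` anywhere in `corpus`."""
    followers = Counter(
        corpus[i + 1] for i in range(len(corpus) - 1) if corpus[i] == prev
    )
    total = sum(followers.values())
    if total == 0:
        return 0.0
    return -sum((c / total) * math.log2(c / total) for c in followers.values())

def entropy_patches(text: bytes, threshold: float = 1.5) -> list[bytes]:
    """Start a new patch whenever the next byte is hard to predict
    (entropy above threshold), so high-entropy positions open a patch."""
    patches, start = [], 0
    for i in range(1, len(text)):
        if next_byte_entropy(text[i - 1], text) > threshold:
            patches.append(text[start:i])
            start = i
    patches.append(text[start:])
    return patches

# Spaces are followed by many different bytes (high entropy), so patches
# tend to break at word boundaries in this toy corpus.
print(entropy_patches(b"the cat sat on the mat"))
```

    Predictable runs (letters inside a word) stay in one patch while hard positions start new ones, which is the mechanism that lets compute concentrate where information density is high.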

    → The architecture features three transformer blocks: two small byte-level models (Local Encoder and Local Decoder) and a large Latent Global Transformer. The Local Encoder maps input bytes to patch representations using cross-attention, the Latent Transformer processes these patches autoregressively, and the Local Decoder converts patch outputs back to bytes, again using cross-attention. Hash n-gram embeddings are added to byte embeddings to incorporate contextual information efficiently.
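    The hash n-gram embeddings mentioned above can be illustrated with a toy lookup (a simplified sketch; the digest function, n, and bucket count are arbitrary choices here, not Meta's):

```python
import hashlib

def hash_ngram_ids(data: bytes, n: int = 3, table_size: int = 4096) -> list[int]:
    """Map each byte position to an embedding-table bucket for the n-gram
    ending at that position, giving byte embeddings local context without
    any fixed vocabulary (hash collisions are tolerated by design)."""
    ids = []
    for i in range(len(data)):
        ngram = data[max(0, i - n + 1): i + 1]
        digest = hashlib.blake2b(ngram, digest_size=8).digest()
        ids.append(int.from_bytes(digest, "big") % table_size)
    return ids

ids = hash_ngram_ids(b"hello world")
print(len(ids), min(ids), max(ids))
```

    Each bucket id would index a learned embedding table whose vectors are added to the raw byte embeddings before the Local Encoder.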

    → Performance Benchmark
    BLT matches the training FLOP-controlled performance of Llama 3 up to 8B parameters and 4T training bytes. It can achieve up to 50% fewer inference FLOPs than Llama 3 by using larger average patch sizes (e.g., 8 bytes vs. Llama 3’s 4.4 bytes).

    BLT shows enhanced robustness, outperforming Llama 3 (1T tokens) by +8 points average on noised HellaSwag and significantly on character-manipulation benchmarks like CUTE (+25 points). BLT also demonstrates better scaling trends in fixed-inference-cost scenarios, allowing simultaneous increases in model and patch size.

    The training and inference code for BLT is released on GitHub.

    Original post: https://twitter.com/i/web/status/1921976957991854432