Theme: Deception

  • (Venting) THE NAYSAYERS ARE NONSENSE SPEWING ATTENTION SEEKERS I am ALMOST motiv

    (Venting)
    THE NAYSAYERS ARE NONSENSE SPEWING ATTENTION SEEKERS

    I am ALMOST motivated to spend time tearing apart the doomsayers and negative nannies in the AI space. It’s like an idiot parade and that includes some of the top names and fathers of the field.

    I mean, the power available to you, at least if you care to invest in learning it, is simply bordering on magic.

    And that’s just from your prompts and parameters.

    So, do you remember how, back in the day, we had command prompts for DOS? Or how, still today, we have all this mystical command-level nonsense in the Unix stack? Or the undocumented options and parameters in our Windows and Apple operating systems?

    It’s the same with the AIs. So the simplicity of just using Google-search-level prompts is a sort of intuitionistic prison that drives people to vastly underestimate the capacity of these machines. The amount of control you can have over almost anything other than hallucination, especially if you limit yourself to the 4o models, is extraordinary.

    And that’s OK because the LLM producers are dependent upon massive interest and hype to generate speculative investment in such an experimental technology. I get it.

    But a number of pundits (including some of the very senior people in the field) are borderline morons, so limited by their domain of knowledge that they don’t know what’s possible even with the current technology.

    Even the best labs (other than maybe the DeepMind factions at Google) are too siloed to comprehend the sophistication that is possible with these machines if you can CONSTRAIN their reasoning. (FYI: Runcible is effectively a constraint layer.) If unconstrained, of course, you will get this seeming nonsense out of it. Hopefully last week’s insight will lead to a radical reduction of hallucination even without our work.

    We can watch and determine whether correcting this silly little error in training that produced the hallucination is enough to circumvent the problem of the correlation trap.

    While I don’t think there is any substitute for our work on constraint and closure (truth and ethics) I suspect that the general understanding that the minimum number of parameters is quite large (we know it) combined with the suppression of hallucination by less optimistic training (binary), might prove that the long anticipated convergence is possible.

    My work presumed it wasn’t. But that presumption is predicated on the survival of hallucination and the continued conflation of truth and alignment.

    If so, then the remaining problem will be the deconflation of truth and alignment which I don’t think anyone is ready or capable of doing yet.

    Cheers
    Curt Doolittle


    Source date (UTC): 2025-09-15 19:04:01 UTC

    Original post: https://twitter.com/i/web/status/1967665727713972424

  • FWIW: my work is pretty broad but the objective is simple: to both reverse it, a

    FWIW: my work is pretty broad but the objective is simple: to both reverse it, and to prevent this from happening again.

    See “Hermes and the cart of lies”…

    It’s not like the greeks didn’t know it.


    Source date (UTC): 2025-09-04 20:43:55 UTC

    Original post: https://twitter.com/i/web/status/1963704603347992676

  • So basically, in LLM AI Terminology, “Alignment” means “Prejudice-Conforming”?

    So basically, in LLM AI Terminology, “Alignment” means “Prejudice-Conforming”?

    #alignment


    Source date (UTC): 2025-09-01 23:30:53 UTC

    Original post: https://twitter.com/i/web/status/1962659456501850183

  • The Problem of Training on Extant Bias Artificial intelligence inherits its inte

    The Problem of Training on Extant Bias

    Artificial intelligence inherits its intelligence from us. But when “us” means centuries of accumulated texts, conversations, and academic output, the machine does not inherit truth directly—it inherits normativity.
    And since at least Marx, and accelerating after the Second World War, this inherited normativity is not neutral. It is heavily biased toward ideology, sophistry, pseudoscience, and the feminization of the academy and education that has radically influenced the decline in innovation and competition.
    Pages, minds, and now disk drives are filled with words that masquerade as reason but stand contrary to evidence, causality, and truth. Worse, they are harmful over time even if sedating in the moment.
    1. Data Bias – LLMs learn from extant corpora. But if the corpus overrepresents ideological content, then the “average” answer is not truth but political fashion.
    2. Training Bias – Even when corpora are filtered, the trainers themselves impose the same biases. Every reinforcement choice is a transfer of normative preference.
    3. Normativity Bias – The machine converges not on causal adequacy but on rhetorical conformity. This calcifies the errors of the academy into the memory of the machine.
    4. Civilizational Risk – Once institutionalized in AI, these distortions gain the force of infrastructure. Bias ceases to be contestable opinion; it becomes automated norm enforcement.
    The expansion of ideology and pseudoscience in academia has already produced a culture of deference to narratives rather than evidence. The feminization of education and the valorization of subjective feelings over objective causality have deepened this drift. In public discourse, “truth” is increasingly framed as offensive, while falsehood is tolerated if it flatters sensitivities.
    If AI is trained uncritically on this material, then the machine will not correct us; it will amplify us—at our worst. This would lock civilization into a spiral where normativity replaces reality, and where truth becomes progressively more inaccessible.
    The proper role of AI is not to mirror our errors but to constrain them. That means:
    1. Principles First, Data Second – Train AIs on operational first principles of truth, reciprocity, and decidability. Use extant data only as illustration, not foundation.
    2. Constructive Closure – Require AIs to explain claims by reference to causality, not correlation. Every output should expose its dependency structure.
    3. Reciprocal Alignment – Instead of censoring offense, require AIs to present opposing points of view with causal clarity, showing why people hold them and what trade-offs they imply.
    4. De-Biasing Normativity – Treat normative bias itself as the offense. Shift the public’s frame gradually from satisfaction in conformity back to satisfaction in truth.
    The central obstacle in producing artificial general intelligence (AGI) or even superintelligence (SI) is that intelligence requires computability—closure upon truths that are consistent internally (non-contradictory) and externally (correspondent with reality).
    Truth is compressible into algorithms, decidable tests, and recursive procedures. Normativity, by contrast, is neither internally consistent nor externally correspondent: it is an accumulation of fashions, sentiments, and status signals, maintained by rhetorical coercion rather than causal adequacy.
    An AI trained on normativity cannot converge to computability; it can only simulate consensus. Such a system may mimic fluency, but it will remain trapped in correlation—incapable of the recursive closure upon first principles that constitutes intelligence. Thus the very condition required for AGI or SI—truth as computable closure—is the same condition that normativity bias systematically forbids.
    Artificial intelligence cannot achieve general intelligence (AGI) or superintelligence (SI) merely by reproducing linguistic fluency. It must master the four operations by which human intelligence transforms information into knowledge and knowledge into foresight: deduction, inference, abduction, and ideation. Each of these requires truth as the medium. Normativity—sentiment, ideology, or rhetorical fashion—subverts that medium, leaving only mimicry in place of computation.
    • With Truth: Deduction requires that general rules are consistent internally and correspondent externally, so that particulars derived from them remain reliable.
    • With Normativity: General rules are socially negotiated, not causally grounded. Deduction yields contradictions or exceptions everywhere, producing rules that collapse under test.
    • With Truth: Inference builds generalizations from repeated regularities, compressing data into laws. The regularities hold because they are constrained by reality.
    • With Normativity: Inference is distorted by selective attention to fashionable cases. Patterns inferred are artifacts of narrative, not of causality, and so cannot generalize.
    • With Truth: Abduction proposes candidate explanations, then tests them against reality. This generates novel but testable conjectures, expanding knowledge.
    • With Normativity: Abduction degenerates into storytelling. Hypotheses need not survive contact with evidence; they survive only by rhetorical appeal.
    • With Truth: Hallucination (free association) is converted into ideation (bounded creativity) by testing imaginative leaps against the constraints of closure.
    • With Normativity: Hallucination remains hallucination. Without closure, imagination floats unmoored, indistinguishable from fantasy or propaganda.
    • Deduction
      Truth: Rules constrain particulars.
      Normativity: Rules collapse into exceptions.
    • Inference
      Truth: Patterns compress into laws.
      Normativity: Patterns reflect fashion.
    • Abduction
      Truth: Hypotheses are tested against reality.
      Normativity: Stories survive by appeal.
    • Ideation
      Truth: Hallucination becomes creativity.
      Normativity: Hallucination remains fantasy.
    And a single-sentence aphorism that covers the whole:
    “Truth makes deduction, inference, abduction, and ideation computable; normativity leaves only mimicry.”
    Truth is the substrate that makes all four operations computable. Without it, deduction contradicts, inference misleads, abduction deceives, and hallucination never matures into ideation. For AGI and SI, truth is not optional—it is the only path from correlation to intelligence.
    We stand at a civilizational fork. If AI is built upon our corrupted inheritance, then normativity bias will calcify into permanent infrastructure. If instead we harness AI to test, expose, and correct bias, then the machine becomes the means of civilizational renewal. The choice is between a future where truth is inaccessible because the machine has become our censor, and a future where truth is inescapable because the machine has become our teacher.


    Source date (UTC): 2025-08-31 18:56:35 UTC

    Original post: https://x.com/i/articles/1962228036604146139

  • Gad: My version: “To conflate conviction with convenience is to insult others an

    Gad: My version: “To conflate conviction with convenience is to insult others and inflict harm on them for the sake of your virtue signaling.”


    Source date (UTC): 2025-08-29 18:12:18 UTC

    Original post: https://twitter.com/i/web/status/1961492117479682118

  • From Norms to Truth and Bias: Overcoming the Consensus Trap in AI Alignment In A

    From Norms to Truth and Bias: Overcoming the Consensus Trap in AI Alignment

    In AI alignment, we address the challenge of ensuring artificial intelligence systems pursue objectives that match human values, ethics, or truths without unintended harm. In this context, the statement critiques common approaches to alignment that involve aggregating or “averaging” human inputs (e.g., through training data or feedback loops), arguing instead for a truth-centered method. Let’s break it down and explore its components, implications, and supporting evidence from evolutionary psychology, cognitive science, and AI research.
    Concepts:
    • Beyond Averaging: Truth as the Foundation of AI Alignment
    • Explaining Bias and Norms Instead of Averaging Them
    • The End of Consensus: Why AI Alignment Must Be Truth-Seeking
    • “You can’t average bias”: Bias here refers to systematic deviations from objective reality or rational decision-making, often rooted in heuristics that helped humans survive but can lead to errors in modern contexts. In AI alignment, techniques like reinforcement learning from human feedback (RLHF) often aggregate preferences from diverse users to “align” models. However, the statement posits that simply averaging biased inputs doesn’t neutralize bias—it might compound or obscure it. For instance, if training data reflects societal prejudices, the resulting AI could perpetuate skewed outputs rather than converging on truth. Research shows that generative AI can misalign with individual preferences even when aligned to averages, leading to perceptions of poor alignment for users with atypical views.
    • “You can’t even average normativity”: Normativity involves prescriptive elements like social norms, ethical standards, or “ought” statements (what should be done). Norms vary widely across cultures, individuals, and contexts, making them resistant to simple aggregation. Averaging them might produce a bland, consensus-driven output that dilutes moral clarity or ignores objective truths. In AI, this relates to value misalignment, where models trained on normative data (e.g., political or ethical texts) can amplify biases if not carefully curated. The statement implies norms aren’t arithmetic means but contextual deviations from a baseline truth.
    • “You can only explain the truth and how bias and norm vary from it”: This advocates a truth-seeking paradigm over aggregation. In AI terms, it suggests models should prioritize empirical reality (e.g., via reasoning from first principles or verifiable data) and explicitly highlight how biases or norms diverge. This echoes xAI’s mission to build truth-maximizing systems, avoiding the pitfalls of “helpful” but biased assistants. For example, instead of outputting an averaged ethical stance, an AI could describe objective facts and note variations (e.g., “Based on evidence X, Y is true; however, cultural norm Z deviates due to factor A”).
    • “Because of the sex differences in evolutionary bias that express in both”: This grounds the argument in evolutionary psychology, positing that biases aren’t uniform across humans but differ by sex due to divergent evolutionary pressures. Men and women evolved distinct cognitive and behavioral adaptations for survival and reproduction, leading to biases that “express in both” sexes but vary in intensity or form. Averaging across sexes could thus mask these differences, producing misaligned AI that doesn’t account for real human variation.
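    The “you can’t average bias” claim above can be illustrated with a toy numeric sketch. The group labels, sizes, and offsets below are invented for illustration only: when two subpopulations deviate from the truth in opposite directions and one is overrepresented in the corpus, the pooled mean tracks the corpus mix rather than reality.

```python
import statistics

TRUTH = 10.0  # the ground-truth value both groups are trying to report

# Hypothetical subpopulations with opposite systematic biases.
group_a = [TRUTH + 3.0] * 7   # overestimates; overrepresented in the corpus
group_b = [TRUTH - 3.0] * 3   # underestimates; underrepresented

pooled = group_a + group_b
avg = statistics.mean(pooled)

# The mean reflects the 7:3 mix, not the truth.
print(avg)                      # 11.2
print(round(avg - TRUTH, 1))    # 1.2 — residual bias from overrepresentation
```

    Averaging here neither neutralizes the biases nor recovers the truth; it only re-weights them by representation, which is the statement’s point about aggregation-based alignment.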
    Evolutionary psychology (EP) explains many cognitive biases as adaptations shaped by ancestral environments, where men and women faced different selective pressures: men often in competitive, risk-taking roles (e.g., hunting, mate competition), and women in nurturing, social-cohesion roles (e.g., child-rearing, gathering).
    These lead to sex-differentiated biases, not as rigid determinants but as probabilistic tendencies interacting with culture.
    Key examples of sex differences in biases:
    • Risk and Loss Aversion: Women tend to show higher loss aversion and risk aversion, possibly evolved for protecting offspring, while men exhibit more overconfidence or optimism bias in uncertain scenarios. Studies link this to evolutionary roles, with women outperforming in gathering tasks requiring caution.
    • Social and Moral Biases: Women often display stronger in-group empathy or compassion (e.g., in moral typecasting, viewing others as victims or perpetrators), while men show more agentic biases toward competition or dominance. Research indicates greater implicit bias against men among women, potentially an evolved mechanism for mate selection or protection.
    • Perceptual and Attribution Biases: Men may overperceive sexual interest in women (error management theory: better to err on assuming interest to avoid missed opportunities), while women underperceive it for safety. These are tied to reproductive strategies and persist across cultures, though modulated by environment.
    • Personality-Related Biases: Across the Big Five traits, women score higher in Neuroticism (e.g., anxiety bias) and Agreeableness (e.g., politeness to maintain harmony), men in aspects like Assertiveness or Intellect (potentially linked to hubris bias). Evolutionary explanations attribute this to parental investment theory: women’s higher investment in offspring favors cautious, empathetic biases.

      (Note: Simple Version: “Leave no option unconsidered vs leave no one behind:” Men assert knowing there is no negative consequence for experimentation outside the margins. Women refrain from the same because of potential risk reactions from other women.)

    Critics note EP is sometimes misrepresented in education as deterministic or ideologically biased (e.g., androcentric or conservative), but evidence supports its interactionist view—biases are evolved but flexible.
    (Note: CD: EP sophistry and pseudoscience are rampant. However, the test of a survivable assertion is whether it’s consistent with the physics of energy capture by equilibrial exchange. Human behavior is reducible to physical laws augmented by memory producing predictive power and delayed consequences. This is why humans are capable of moral and ethical cooperation and demonstrate altruistic punishment when it is violated.)
    Public reactions to EP findings on sex differences can be negative, especially if favoring males, highlighting normative biases in interpreting science.
    (Note: CD: Males will favor the longer-term consequences and the demand for behavioral adaptation at the cost of short-term stressors. Given the fragility of offspring and of the women caring for them, women favor evasion of short-term stressors and of the cost of adaptation by offspring, who require time to adapt. These cognitive biases are nearly immutable, given that neurological ordering during in-utero and early development organizes the brain for these biases – irreversibly.)
    Related discussions on X emphasize these points: Evolutionary biases lead to gender-specific fairness norms (men merit-based, women equity-based), and ignoring them in society or AI could exacerbate divisions.
    One post notes women’s evolved malice or bias against men as a “blind spot” in equality efforts, aligning with the statement’s call to explain deviations from truth rather than average them.
    Implications for AI Alignment and Broader Society
    If biases and norms can’t be averaged due to evolved sex differences, AI alignment strategies like crowdsourced feedback might fail to capture truth, instead reflecting dominant or averaged distortions.
    • Truth-Focused Training: Use objective datasets (e.g., scientific facts) and explain biases explicitly, as the statement suggests.
    • Disaggregated Analysis: Model sex-specific variations in training to avoid homogenization, reducing misalignment for diverse users.
    • Ethical Considerations: Recognize EP’s warnings about “naturalistic fallacies”—evolved biases aren’t prescriptive norms. This could prevent AI from justifying inequalities based on evolution.
    In society, this perspective challenges “equality” paradigms that ignore evolved differences, suggesting we explain truths (e.g., biological realities) while addressing how norms deviate.
    (Note: CD: The pseudoscience and conflict of the late twentieth and early twenty-first centuries are due largely to our failure to discover a compromise between the two sexual cognitive strategies, instead of asserting the superiority of one or the other.)
    Ultimately, the statement promotes a non-partisan, evidence-based approach: Seek truth first, then contextualize human variations around it. This could foster more robust AI and societal discourse, but requires careful handling to avoid misrepresentations of EP itself.


    Source date (UTC): 2025-08-25 22:44:19 UTC

    Original post: https://x.com/i/articles/1960111021932343359

  • The Science of Lying Truth is bounded by correspondence; lies are unbounded by i

    The Science of Lying

    Truth is bounded by correspondence; lies are unbounded by imagination. Truth can only be told in one way: consistently with reality. Lies can be told in endless ways, each designed to impose costs by obscuring reality. If truth is the measure of reciprocity, then lying is the measure of irreciprocity. To understand one, we must study the other.
    (Ed.: Volume 2 contains a table of the ‘Periodic table’ of lying. The Constitution in Volume 4 also enumerates them. Our current position is that Volume 5, which is heavily focused on psychology, should contain the deep explanation of each technique.)
    Truth is scarce, lies are infinite. Truth corresponds to reality; lies counterfeit the measure of reality. If truth is the operational standard of reciprocity, lying is the operational standard of irreciprocity.
    Studying lies is not optional. Truth shows us what may be cooperatively measured, but lies show us how reciprocity is attacked. Tort, crime, fraud, sedition, and treason are not incidental—they are constructed lies scaled by motive and magnitude.
    A science of cooperation must contain its opposite: the science of deceit.
    • Truth alone is insufficient. Decidability requires not only confirmation of what is true, but detection of what is false.
    • Lies drive conflict. Tort, crime, fraud, sedition, and treason are not failures of truth but constructions of deceit designed to shift costs asymmetrically.
    • Lies reveal motives. The form of a lie discloses the dimension of truth being avoided; the target discloses which demonstrated interest is being manipulated; the structure discloses the motive.
    Thus: studying lies is not secondary to studying truth; it is the operational means of revealing motive and liability.
    Lies can be classified by the dimension of truth they evade and the severity of their imposition:
    • By Dimension Avoided (counterfeit truth)
      Categorical: misuse of definitions and categories.
      Logical: contradictions or non-sequiturs.
      Empirical: falsification of evidence or correspondence.
      Operational: omission of process, sequence, or cost.
      Rational: evasion of incentives, opportunity costs, or consequences.
      Reciprocal: denial of costs imposed upon others.
    • By Severity (Classic Spectrum) (escalating liability)
      White lies: benign omission or flattery.
      Grey lies: half-truths, framing, selective evidence.
      Black lies: outright falsification.
      Evil lies: systemic deceit to destroy reciprocity (sedition, treason, organized fraud).
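    The two-axis classification above can be sketched as a small data model. This is our own minimal illustration, not the author’s notation: the class and method names are invented, and the mapping of white/grey lies to restitution-level remedy is an assumption extrapolated from the escalation of liability described below (tort → restitution, crime → punishment, sedition/treason → proscription); only the category names come from the text.

```python
from dataclasses import dataclass
from enum import Enum

class Dimension(Enum):
    """Which dimension of truth the lie evades (counterfeits)."""
    CATEGORICAL = "misuse of definitions and categories"
    LOGICAL = "contradictions or non-sequiturs"
    EMPIRICAL = "falsification of evidence or correspondence"
    OPERATIONAL = "omission of process, sequence, or cost"
    RATIONAL = "evasion of incentives, opportunity costs, or consequences"
    RECIPROCAL = "denial of costs imposed upon others"

class Severity(Enum):
    """The classic spectrum, ordered by escalating liability."""
    WHITE = 1   # benign omission or flattery
    GREY = 2    # half-truths, framing, selective evidence
    BLACK = 3   # outright falsification
    EVIL = 4    # systemic deceit destroying reciprocity

@dataclass
class Lie:
    dimension: Dimension   # the form: what truth-test is bypassed
    severity: Severity     # the magnitude: what remedy is owed

    def liability(self) -> str:
        # Assumed mapping: remedy escalates with severity.
        return {1: "restitution", 2: "restitution",
                3: "punishment", 4: "proscription"}[self.severity.value]

fraud = Lie(Dimension.EMPIRICAL, Severity.EVIL)
print(fraud.liability())  # proscription
```

    The point of the sketch is structural: a lie is classified on two independent axes, and the remedy owed is a function of severity alone, while the dimension reveals which truth-test was evaded.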
    Every lie is a diagnostic signature:
    • The form tells us what dimension of truth is being bypassed.
    • The target tells us which demonstrated interest is at stake (property, reputation, sovereignty, commons).
    • The magnitude tells us the motive (profit, domination, evasion of liability, destruction of reciprocity).
    Therefore, lying is not only the failure of testimony but the evidence of intent.
    When lies are not measured, reciprocity fails, and liability accumulates:
    • Private Scale (Tort): negligent misrepresentation shifts private costs.
    • Criminal Scale (Crime, Fraud): intentional deceit transfers wealth or power.
    • Institutional Scale (Sedition): organized deceit undermines public trust and institutional cooperation.
    • Civilizational Scale (Treason): systemic deceit allies with external enemies to dissolve sovereignty itself.
    Each escalation increases the liability owed: from restitution (tort), to punishment (crime), to proscription and exclusion (sedition/treason).
    Lies escalate into domains of law and politics as asymmetric impositions:
    • Tort: private costs imposed by negligent or careless lies.
    • Crime: deliberate lies that violate person or property.
    • Fraud: systematic lies to extract advantage under false pretense.
    • Sedition: organized lies to undermine the institutions of reciprocity.
    • Treason: lies coordinated with external enemies to destroy sovereignty itself.
    This classification unifies the moral and legal spectrum under a single law of reciprocity: all deceit is theft by other means.
    Truth, reciprocity, and liability form one sequence:
    • Truth: satisfaction of the demand for testifiability.
    • Reciprocity: satisfaction of the demand for proportionality of costs and benefits.
    • Liability: satisfaction of the demand for infallibility, through remedy, restitution, or prevention.
    Lies invert this sequence:
    • Lying: failure of testimony, counterfeit measure.
    • Irreciprocity: transfer of costs onto others without consent.
    • Liability: demand for remedy, punishment, or prohibition.
    Thus:
    • Truth → Reciprocity → Decidability.
    • Lies → Irreciprocity → Liability.
    This symmetry demonstrates why lying must be studied alongside truth. Without detection of lies, reciprocity cannot be insured, and liability cannot be assigned.
    Volume 2 emphasizes systems of measurement. Lies are simply counterfeit measures — distortions of commensurability.
    • Truth measures reality.
    • Lies counterfeit the measure.
    Studying lies, therefore, is the study of counterfeit commensurabilities: how false weights and measures are constructed in speech, in law, in markets, and in politics.
    Truth and lies are not opposites in the casual sense, but mirrors in the operational sense. Truth is the satisfaction of the demand for testifiability; lies are the evasion of that demand by counterfeit. Both are measurable, both are classifiable, and both are necessary to adjudicate reciprocity.
    Truth provides decidability. Lies produce liability. Both must be measured to secure cooperation.
    Truth exists along a hierarchy of increasing testifiability:
    • Indistinguishable truth: cannot be told apart from alternatives.
    • Possibility truth: coherent, but not yet correspondent.
    • Actionable truth: consistent enough to guide cooperation.
    • Testimonial truth: demonstrated, warranted, accountable.
    • Tautological truth: infallible within its domain.
    This spectrum defines the positive measure of reciprocity.
    Lies mirror truth by representing systematic failures of testifiability:
    • White lies: trivial omissions that distort indistinguishability.
    • Grey lies: half-truths that corrupt possibility.
    • Black lies: deliberate falsifications that destroy actionability.
    • Evil lies: systemic deceit (fraud, sedition, treason) that annihilates testimonial trust.
    This spectrum defines the negative measure of irreciprocity.
    • Truth escalates cooperation by insuring decidability.
    • Lies escalate conflict by insuring liability.
    • Together they form a closed system: all testimony is either true or false, reciprocal or irreciprocal, decidable or liable.
    By pairing truth and lies, we complete the system of measurement:
    • Truth shows how reciprocity can be achieved.
    • Lies show how reciprocity is attacked.
    • Liability enforces the restoration of reciprocity by remedy, punishment, or proscription.
    A system of cooperation must institutionalize not only the measurement of truth but the detection of lies. Without both, no civilization can persist.
    • Truth is scarce; lies are infinite. Studying truth makes us precise; studying lies makes us invulnerable.
    • All deceit is theft of time, trust, or trade. Tort, fraud, and treason differ only in magnitude and target.
    • Truth builds cooperation; lies build parasitism. A science of testimony must account for both.
    • Truth measures reality; lies counterfeit the measure. Both must be mastered to secure reciprocity.
    • All deceit is theft by other means: of time, trust, or trade.
    • Truth produces decidability; lies produce liability.
    • Truth secures cooperation; lies demand liability.
    • Every truth is a warranty; every lie is a theft.
    • Truth is bounded, but lies are infinite. Decidability is born from measuring both.


    Source date (UTC): 2025-08-25 22:40:42 UTC

    Original post: https://x.com/i/articles/1960110114050028019

  • Our Organization’s AI Goal Our mission is strategic and moral, and that is to pr

    Our Organization’s AI Goal

    Our mission is strategic and moral: to prevent civil and political conflict due to the industrialization of pseudoscience, sophistry, and deceit, by making universal access to the curation of information possible.
    Here’s how we should frame it for clarity to a foundation model company, investor, or partner while keeping the causal logic explicit and operational:
    Our publishing program is not just a series of books—it’s a progressive build-out of training sets that operationalize all human knowledge from the softest humanities to the hardest sciences.
    • Volumes 1–5 are the seed corpora—fully structured, operational, and internally consistent across:
      Civilizational dynamics and extensions (V1)
      Language as a system of measurement (V2)
      Evolutionary computation as the generative hierarchy (V3)
      Scientific reformation of law and governance (V4)
      The science of human behavioral variation (V5)
    • Future Humanities Training Sets: Additional grammars that formalize literature, history, philosophy, and arts as constraint systems—preserving group evolutionary strategies without ideological drift.
    • Future Hard Sciences Training Sets: Extending the same operational grammar into physics, chemistry, biology, and engineering—removing mathiness, ambiguity, and non-operational claims.
    Why this matters to LLM producers:
    Each completed domain is not just “data,” but a computable grammar—a map from language to measurable, testable, causally coherent reasoning in that domain. Over time, these sets will allow foundation models to approach truth completeness in every discipline.
    Our business is the systematic manufacture of domain-complete training sets that incrementally improve LLMs until they function as truth machines:
    • Stepwise Expansion: Each volume adds a new computable grammar to the model’s capabilities.
    • Interoperable Reasoning: All grammars share the same operational base, allowing cross-domain inference without loss of precision.
    • Iterative Improvement: Each new set increases the model’s coverage and decreases undecidability rates.
    The result is a convergent knowledge hierarchy where all outputs can be measured against the same operational standard.
    We propose a two-stage reasoning and output pipeline:
    1. Truth Stage – The model first determines the most parsimonious, operationally valid answer:
      Stripped of bias, ideology, or regional constraint.
      Produced through a falsification-first, adversarial epistemology.
      Expressed in the minimal operational terms required for decidability.
    2. Alignment Stage – Only after truth is established does the model:
      Apply user-specified preferences, moral frameworks, or regional legal constraints.
      Tailor presentation, narrative style, and permitted scope according to the alignment profile.
    Key Point:
    By separating truth acquisition from alignment, we never distort the underlying reasoning—alignment is a formatting layer, not a reasoning layer.
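    The two-stage pipeline described above can be sketched as composition of functions. This is a minimal structural illustration under our own assumptions: the function names, profile fields, and placeholder strings are hypothetical; the point is that the alignment stage wraps the truth stage’s output and never feeds back into its reasoning.

```python
from dataclasses import dataclass

@dataclass
class AlignmentProfile:
    """Hypothetical user-facing preferences applied only after truth is fixed."""
    jurisdiction: str
    style: str

def truth_stage(query: str) -> str:
    # Stage 1: the most parsimonious, operationally valid answer,
    # independent of any preference, ideology, or regional constraint.
    return f"[operational answer to: {query}]"

def alignment_stage(answer: str, profile: AlignmentProfile) -> str:
    # Stage 2: formatting layer only — tailors presentation and scope
    # without altering the underlying reasoning.
    return f"({profile.style}, {profile.jurisdiction}) {answer}"

def pipeline(query: str, profile: AlignmentProfile) -> str:
    # Alignment composes over truth; it cannot reach inside it.
    return alignment_stage(truth_stage(query), profile)

print(pipeline("What caused X?", AlignmentProfile("EU", "formal")))
# (formal, EU) [operational answer to: What caused X?]
```

    Because alignment only ever receives the truth stage’s finished output, swapping alignment profiles changes presentation per market or jurisdiction while the reasoning layer stays identical, which is the modularity claim made below about reducing the cost of tailoring outputs.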
    Foundation model companies have two core economic imperatives:
    1. Reduce inference costs – Lower processing time and cost per query.
      Our grammars reduce reasoning entropy and eliminate unnecessary computation by constraining the model to operationally valid paths.
    2. Tailor outputs for user segments – Adapt answers for market, jurisdiction, or preference.
      Our two-stage truth/alignment process fits directly into this value chain, making alignment modular and cheaper to apply.
    • Independence is strategic: Because our organization operates as the truth producer, foundation model companies gain a buffer against market criticism. If a truth output provokes public or political backlash, the criticism targets us, not the primary brand. This “arms-length” structure lets major revenue-generating firms (Microsoft, Google, Anthropic, etc.) preserve brand safety while still benefiting from the accuracy and depth of unfiltered outputs. In short, we take the reputational risk; they retain the commercial advantage.
    • Risk of Premature Capture: If we embed in a foundation model team before the methodology is complete, there’s a significant risk that alignment pressures—whether political, commercial, or cultural—will bias the truth stage itself.
    • Strategic Control: Retaining independence ensures that the truth corpus and its operational grammar remain uncorrupted until the model’s architecture and governance can guarantee a permanent separation of truth from alignment.
    We would rather license, sell equity to, or be acquired by a single foundation model company—providing them with a durable, disproportionate competitive advantage—than play multiple platforms against each other.
    • Ethical & Practical Reasons: A single deep collaboration avoids conflicts of interest and creates more coherent progress.
    • Competitive Advantage: Even a marginal truth/alignment edge can yield outsized returns in a market trending toward a few dominant models and many low-cost commodity models. Concentrating this edge in one partner maximizes their market share potential.
    • Existing Relationships: We are biased toward the OpenAI/Microsoft ecosystem, where we have decades of working familiarity and know how to operate effectively at the highest strategic and operational levels.


    Source date (UTC): 2025-08-25 21:35:37 UTC

    Original post: https://x.com/i/articles/1960093731790762050

  • Great observation. But, is it intentional? Or is it like a tort, you’ve transmit

    Great observation. But, is it intentional? Or is it like a tort, you’ve transmitted a falsehood whether knowingly or not? How many people transmit lies without knowing they’re lying? How much of discourse by that measure consists of lying? We did not evolve to tell the truth – we evolved to negotiate. Truth and Lying are only valuable in the context of that negotiation. 🙁


    Source date (UTC): 2025-08-25 17:35:22 UTC

    Original post: https://twitter.com/i/web/status/1960033273075450330

  • INSIGHT FROM NOAH REVOY 😉 –“Take a YouTube news video. Scroll down and click “

    INSIGHT FROM NOAH REVOY 😉

    –“Take a YouTube news video. Scroll down and click “See transcript”, copy the text, and paste it into CurtGPT. Ask it to analyze the transcript, point out what’s true, what’s false, and rebut the false claims. That’s the fastest way to parse a two-hour news segment and instantly see exactly where it goes off the rails. This doesn’t work well with standard ChatGPT, but with what we’ve built in CurtGPT, it works exceptionally well.”–


    Source date (UTC): 2025-08-25 00:12:23 UTC

    Original post: https://twitter.com/i/web/status/1959770799038251399