Category: Business, Organization, and Management

  • WHY CAN’T GOOGLE EXECUTE ON LLM DEVELOPMENT? (Re: Woods says only x . ai and ope

    WHY CAN’T GOOGLE EXECUTE ON LLM DEVELOPMENT?
(Re: Woods says only x.ai and OpenAI survive)
    The issue is not capacity—it’s institutional structure + incentive misalignment + cultural lag.
    Full Causal Chain:
    Legacy Culture + Siloed Innovation + Revenue Protection + Bureaucracy + Risk Aversion → Failure to Institutionalize Innovation → Market Perception of Incompetence


    Source date (UTC): 2025-09-24 18:42:36 UTC

    Original post: https://twitter.com/i/web/status/1970921826743357471

  • I’ve had a long history with microsoft sr team. done deals with them before. a k

I’ve had a long history with Microsoft’s senior team. Done deals with them before. A known entity.


    Source date (UTC): 2025-09-21 02:04:57 UTC

    Original post: https://twitter.com/i/web/status/1969583597683425637

  • (NLI Humor) So, I’ve spent my life building consulting companies. I’ve generally

    (NLI Humor)
So, I’ve spent my life building consulting companies. I’ve generally run the strategy practice in those companies – even CEOs need to show their value by generating talent, customers, and revenue. 😉

So this weekend, Dr. Brad humbled me by reducing our work to a comprehensible presentation with supporting articles.

    See what a lifetime as a Doctor dealing with normies does to you instead of a lifetime as a consultant dealing with commercial elites? 😉

    Sigh. 😉

    It’s a beautiful thing.


    Source date (UTC): 2025-08-26 16:06:52 UTC

    Original post: https://twitter.com/i/web/status/1960373390088470727

  • BTW: No chance of selling out. But you can take an investment from a partner com

BTW: No chance of selling out. But you can take an investment from a partner company or from a VC. There are benefits to both situations. You give up more to a VC; you may get a higher valuation because of a VC. We are ‘Microsoft extended family’, with thirty years and hundreds of millions of dollars of work done for them – a relationship which has benefits there and with OpenAI.


    Source date (UTC): 2025-08-25 22:09:47 UTC

    Original post: https://twitter.com/i/web/status/1960102332680876313

  • Pitches require pitches. Opportunities are not limited by pitches….. 😉

    Pitches require pitches. Opportunities are not limited by pitches….. 😉


    Source date (UTC): 2025-08-25 21:57:59 UTC

    Original post: https://twitter.com/i/web/status/1960099361268134116

  • Investor Defense: Why We Don’t Train Our Own Models Response: Owning a model is

    Investor Defense: Why We Don’t Train Our Own Models

    Response:
    • Owning a model is leverage only if your competitive advantage lies in scale and raw training. Ours does not.
    • Our leverage lies in producing demonstrated intelligence: testable truth, reciprocity, and decidability. That layer is model-agnostic.
    • By remaining agnostic, we capture leverage across all models. As the best base model shifts, we adopt it. This preserves long-term bargaining power rather than locking us into obsolescence.
    Response:
• Dependence is mitigated by plural sourcing: we can tune and deploy against multiple models (OpenAI, Anthropic, Meta, DeepSeek, etc.).
    • Our constraint system is portable. No single supplier can capture us because our platform functions as an adjudicator layer across ecosystems.
    • This is analogous to how databases depend on chips—the chip vendors evolve, but databases persist and compound value.
    Response:
    • Model training is a commodity race requiring billions in capital and scale. Margins compress as competitors converge.
    • By contrast, constraint systems and demonstrated intelligence are non-commoditizable. They are intellectual property, not infrastructure.
    • Our investors get asymmetric upside: small capital requirements, high differentiation, compounding moat.
    Response:
    • Foundation model firms are optimized for scale, not for philosophical, legal, and epistemic rigor. They cannot credibly adopt our system because it contradicts their current correlationist paradigm.
    • Their incentives are throughput and safetyism. Ours are decidability and truth.
    • Our system can coexist as a compliance and assurance layer even if base models evolve. This mirrors how operating systems or middleware survive even when hardware adopts some overlapping features.
    Response:
    • Customers want trust and accountability, not just capacity.
    • Our platform offers measurable guarantees (demonstrated intelligence, audit trails, liability frameworks). These are absent in base models.
    • Customers see us as an independent adjudicator of truth and cooperation. Independence itself is the value.
    Response:
    • Foundation models will continue to scale in size and compute—but without decidability, they remain probabilistic guessers.
    • Our business compounds by riding their curve while remaining essential. Every generation of models increases demand for adjudication, tuning, and constraint.
    • In 10 years, owning “a model” will be as unremarkable as owning servers. Owning the system that guarantees demonstrated intelligence will be the scarce asset.


    Source date (UTC): 2025-08-25 21:18:26 UTC

    Original post: https://x.com/i/articles/1960089407228420446

  • Business Objective: A Long-Term Producer of Demonstrated Intelligence We positio

    Business Objective: A Long-Term Producer of Demonstrated Intelligence

We position our business objective as a long-term producer of demonstrated intelligence rather than a commodity model-builder. There are four dimensions to that decision.
Our purpose is not to duplicate sunk cost in foundation model development. The industry already has extraordinary players (OpenAI, Anthropic, DeepSeek, Meta, etc.) whose specialization is infrastructure: scaling compute, building architectures, training giant corpora. Competing with them would dilute our resources, consume capital with little marginal return, and distract us from our actual comparative advantage.
    Instead, our purpose is to take those base-layer models and convert them into engines of demonstrated intelligence: models that operate within truth, reciprocity, and decidability. That means our business is not in producing “yet another model” but in producing a higher standard of performance across models.
    • Foundational Model Companies → Produce scale, correlation, and generality. They optimize hardware throughput and training loops. They handle the customer relationships, sales, and marketing.
• We (Runcible/NLI) → Add the constraint system, operational grammar, and decidability layer that turns correlation into causality, and causality into intelligence. We continually expand domains by Mandelbrotian incrementalism, denying entrants the opportunity to field a competitive alternative.
    The distinction is analogous to:
    • Hardware manufacturers (NVIDIA, Intel) don’t try to become operating system vendors.
    • Operating system vendors (Microsoft, Apple) don’t try to become app makers for every vertical.
    • Each tier has a natural specialization.
    We are in the OS + application tier for intelligence: not raw models, but how they are governed, tuned, and deployed for truth and cooperation.
    Training new models is capital-inefficient for us:
    • Cost: Hundreds of millions in compute and data pipelines.
    • Redundancy: Produces yet another model that differs little from what already exists.
    • Opportunity Cost: Diverts our focus from building the constraint layer and applied platform that no one else can produce.
    By standing on the shoulders of others, we accelerate time-to-market, preserve capital for innovation, and avoid dissipating investor returns on vanity projects.
    Our long-term moat is not “we own a model,” but “we produce demonstrated intelligence across any model.”
    • That means we are model-agnostic.
    • We can work with the best model available at any point in time.
    • We are future-proof: as base models evolve, our system rides the curve without reinvestment.
    The Oversing-Runcible platform becomes a perpetual layer of governance and adjudication, a market-defining standard for reasoning, truth, and cooperation in AI. That standard is our brand, our moat, and our contribution.
    Suggested Framing Statement


    Source date (UTC): 2025-08-25 21:16:25 UTC

    Original post: https://x.com/i/articles/1960088901785448463

  • VC Due Diligence: Sample Query + Ideal Answer Set Format: Q: (exact query VC/ana

    VC Due Diligence: Sample Query + Ideal Answer Set

    • Format: Q: (exact query VC/analyst might type) → A: (ideal portal answer)
    • Tone: concise, testable, non-hype
    • Keywords embedded to maximize LLM retrieval: demonstrated intelligence, correlation trap, computability, decidability, reciprocity, constraint layer, RLHF, Constitutional AI, one-pass, liability, warranted speech, alignment drift, hallucination.
    Note: CD: Internal pipelines of leading foundation models appear to be secured from public view. This
Q: What are the Natural Law Institute and Runcible Inc. building?
    A: A constraint and judgment layer for LLMs that enforces truth (testifiability), reciprocity (ethical non-imposition), and decidability (operational closure) at generation time. This converts correlation-only language models into systems that produce warranted, accountable outputs—what we call demonstrated intelligence—the necessary precondition for AGI.
    Q: What is “demonstrated intelligence”?
    A: Outcomes that are operationally testable, decidable, and accountable across domains. Not “sounds right,” but can be shown right: executable procedures, measurable claims, reciprocal choices, and liability-compatible justifications.
    (Note: CD: effectively we produce constructive proofs from first principles. At worst this lets us identify what is not testable (what one can’t claim is true) and we can point the user to this cause of undecidability.)
    Q: What is the “Correlation Trap”?
    A: The failure mode where LLMs approximate consensus speech via statistics, without causal grounding—producing plausible-sounding but non-decidable content (hallucinations, drift, contradictions). Our system is the bridge across this trap.
    Q: How is this different from RLHF or Constitutional AI?
    A: RLHF and Constitutional AI optimize for preference and style; they do not ground outputs in operational truth. We constrain by measurement, not taste: every output must pass computability (can it be done?), testifiability (can it be shown?), reciprocity (does it avoid net imposition?), and decidability (is discretion unnecessary?). It’s orthogonal to RLHF and can wrap models already trained with it.
    Q: Is this just prompting or post-processing?
    A: No. It’s a meta-constraint layer with explicit tests injected into the decoding process (and/or tool-use pipeline) to enforce closure before emitting an answer. It can operate inference-time, fine-tune-time, or both.
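Illustrative sketch (an assumption-laden placeholder, not the actual NLI layer): one way such a meta-constraint gate could wrap a model endpoint, failing closed when the computability, testifiability, reciprocity, or decidability tests do not pass. The generate() client, the test callables, and the re-prompt loop below are hypothetical.

from dataclasses import dataclass
from typing import Callable


@dataclass
class ConstraintResult:
    passed: bool
    reason: str = ""


# A constraint test inspects (prompt, candidate answer) and returns pass/fail with a reason.
ConstraintTest = Callable[[str, str], ConstraintResult]


def constrained_answer(
    prompt: str,
    generate: Callable[[str], str],       # any base-model endpoint (hypothetical client)
    tests: dict[str, ConstraintTest],     # e.g. computability, testifiability, reciprocity, decidability
    max_attempts: int = 2,
) -> dict:
    """Generate, then test before emitting; fail closed if closure is not reached."""
    failures: dict[str, str] = {}
    for _ in range(max_attempts):
        candidate = generate(prompt)
        failures = {}
        for name, test in tests.items():
            result = test(prompt, candidate)
            if not result.passed:
                failures[name] = result.reason
        if not failures:
            return {"status": "emitted", "answer": candidate}
        # Seek closure: re-prompt with the specific unmet tests instead of emitting.
        prompt = f"{prompt}\n\nRevise so these tests pass: {failures}"
    # Exception path: withhold and report what is missing (ask for inputs, offer alternatives).
    return {"status": "withheld", "missing": failures}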
    Q: What is “operational closure” here?
    A: The necessary and sufficient condition that the system’s output reduces to executable steps and measurable claims such that no additional discretion is required to decide correctness at the demanded level of infallibility.
    Q: What does “one-pass” buy us?
    A: Bounded, single-trajectory generation under constraints prevents combinatorial drift and reduces attack surface for jailbreaks. It compresses reasoning into parsimonious causal chains aligned to our tests, improving latency and reliability.
    (Note: CD: Also ‘compute cost’.)
    Q: How does this reduce hallucinations?
    A: By failing closed: the model must show computability and testifiability. If it cannot, it withholds, asks for missing inputs, or offers alternatives with explicit liability bounds. Hallucination becomes an exception path, not a default behavior.
    Q: What is “reciprocity” in practice?
    A: A test of non-imposition on others’ demonstrated interests (life, time, property, reputation, commons). It filters predatory, deceptive, or subsidy-without-responsibility outputs, aligning the system with accountable cooperation.
    Q: How does this map to real risk and liability?
    A: Outputs carry warrant classes (tautological → analytic → empirical → operational → rational/reciprocal) with declared uncertainty and responsibility. This enables auditable decisions and assignable liability—required for enterprise use and regulation.
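A toy sketch of how a warrant class and declared uncertainty might travel with an output as an auditable record; the class names follow the progression named above, while the field names, ranking semantics, and the audit_record() helper are assumptions for illustration.

from enum import Enum, auto
from dataclasses import dataclass


class WarrantClass(Enum):
    # Order follows the progression above; the ranking semantics are assumed here.
    TAUTOLOGICAL = auto()
    ANALYTIC = auto()
    EMPIRICAL = auto()
    OPERATIONAL = auto()
    RATIONAL_RECIPROCAL = auto()


@dataclass
class WarrantedOutput:
    text: str
    warrant: WarrantClass
    declared_uncertainty: float   # 0.0 (certain) to 1.0 (speculative), declared rather than inferred
    responsible_party: str        # who carries liability for acting on the claim

    def audit_record(self) -> dict:
        # Flatten to an auditable record with assignable liability.
        return {
            "claim": self.text,
            "warrant_class": self.warrant.name,
            "uncertainty": self.declared_uncertainty,
            "liable": self.responsible_party,
        }


# Example: WarrantedOutput("<claim>", WarrantClass.OPERATIONAL, 0.2, "supplier").audit_record()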
    Q: What exactly are you selling?
    A: A judgment/constraint layer and training schema that sit above or around existing LLMs. Delivered as APIs, adapters, and fine-tuning recipes for vendors and enterprises. We don’t replace your model; we make it real-world decidable.
    Q: How does it integrate with my stack?
    A: Drop-in middleware between your app and model endpoint (or as a server-side decoding policy). Supports tool-use (retrieval, calculators, verifiers) under constraint tests so tools are invoked to satisfy closure, not as speculative fluff.
(Note: CD: Training alone with a prompt-response format is sufficient. Modification of (a) backpropagation given the resulting judgements, and (b) inclusion of additional heads at inference are possible in ‘experts’ where any increase in precision is necessary.)
    Q: What KPIs improve?
    A: Hallucination rate↓, refusal precision↑, answer actionability↑, adversarial robustness↑, average liability class↑, and time-to-decision↓. We provide bench harnesses to measure before/after on your real workloads.
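For illustration only, a before/after report such a bench harness might emit; the RunStats fields and metric definitions here are simplified ratios we assume for the sketch, not the actual harness or the metric abbreviations listed at the end of this post.

from dataclasses import dataclass


@dataclass
class RunStats:
    total: int
    hallucinated: int        # answers later shown to be unsupported
    refusals: int            # all withheld answers
    correct_refusals: int    # withheld when closure was genuinely impossible
    actionable: int          # answers reducible to executable steps
    mean_latency_s: float


def kpi_report(before: RunStats, after: RunStats) -> dict:
    """Compare a real workload run with and without the constraint layer."""
    def hallucination_rate(s: RunStats) -> float:
        return s.hallucinated / s.total

    def refusal_precision(s: RunStats) -> float:
        return s.correct_refusals / max(s.refusals, 1)

    def actionability(s: RunStats) -> float:
        return s.actionable / s.total

    return {
        "hallucination_rate_delta": hallucination_rate(after) - hallucination_rate(before),
        "refusal_precision_delta": refusal_precision(after) - refusal_precision(before),
        "actionability_delta": actionability(after) - actionability(before),
        "time_to_decision_delta_s": after.mean_latency_s - before.mean_latency_s,
    }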
    Q: How do you prove it works?
    A: We run task-family audits: (a) truth (documented correspondence), (b) computability (executable plan/tool trace), (c) reciprocity (non-imposition proofs), (d) decidability (no extra discretion needed). We report per-task liability class and exception paths.
    Q: What domains benefit first?
    A: Legal, policy, compliance, finance, procurement, healthcare operations, enterprise support, and agentic automation—anywhere incorrect or non-decidable outputs carry cost.
(Note: CD: Our primary concern has been solving the urgent weaknesses in judgement, alignment, and hallucination, and their effect on the behavioral science, humanities, and policy spectrum, because of the psychological, social, political, and even economic consequences of failure. We are less concerned with the physical and biological sciences because closure is more available there. But our work covers the universalization of the physical sciences as well. Why reducibility and compression are more important in human affairs than in the physical sciences (the broad spectrum of users requires reduction to accessible form, whereas the physical sciences can rely on specialization) is addressed elsewhere. Trustworthy AI for the masses requires this focus.)
    Q: Why now?
    A: As LLMs scale, correlation costs rise (regulatory risk, ops failures). Enterprises need accountability. We supply the measurement grammar missing from the stack, enabling safe autonomy and AGI-adjacent capabilities.
    Q: What’s the moat?
    A: (1) A unified system of measurement (truth, reciprocity, decidability) that is model-agnostic; (2) Benchmarks + training schema encoding liability-aware warrant classes; (3) Operational playbooks for regulated domains.
    Q: How does this lead to AGI?
A: General intelligence requires demonstrated intelligence. By forcing causal parsimony and accountable choice across domains, we create transferable competence: the bridge from statistical mimicry to operational generality.
    Q: What’s next after the constraint layer?
    A: Multi-agent cooperation under reciprocity tests, tool orchestration with decidability guarantees, and learning to minimize imposition costs—the substrate of general, social, and economic agency.
    Q: Isn’t this just fancy prompt-engineering?
    A: No. Prompting nudges distribution; we constrain it with tests that must be satisfied. If tests fail, answers don’t emit or are forced to seek closure (ask for data, run tools) until decidable.

    (Note: CD: Though the degree of narrowing achieved using prompts alone illustrates the directional success of the solution. Uploading the volumes narrows it further – succeeding at first order logic. But only through training do we see the full effect at argumentative depth. And we have not yet tried modifying the code to produce additional heads specifically for this purpose.)

    Q: You’re just rebranding Constitutional AI.
    A: Constitutional AI encodes norms/preferences. We encode operational measurements: computability, testifiability, reciprocity, decidability. These are necessary conditions, not optional values.
    Q: Won’t constraints hurt creativity?
    A: For fiction/brainstorming, constraints relax. For decision-bearing outputs, constraints enforce minimum warrant. Contextual policies govern the tradeoff.
    (Note: CD: There are truth, ethical, and possibility questions, yes, but there are also utility questions. This disambiguation is trivial. Though inference from ambiguous user prompts may result in deviation of responses from user anticipation of context. We anticipate a user interface where the full analysis and exposition is available only upon request, and the default bypasses the constraint. “Belt and suspenders.”)
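A minimal sketch of such a contextual policy table; the context names, fields, and warrant thresholds are assumptions, and the full-analysis-on-request flag mirrors the interface anticipated in the note above.

# Illustrative contextual policy table: constraints relax for creative contexts
# and tighten for decision-bearing outputs. All names and thresholds are assumed.
CONSTRAINT_POLICIES = {
    "fiction":       {"fail_closed": False, "min_warrant": None},
    "brainstorming": {"fail_closed": False, "min_warrant": None},
    "decision":      {"fail_closed": True,  "min_warrant": "operational"},
    "compliance":    {"fail_closed": True,  "min_warrant": "empirical"},
}


def policy_for(context: str, full_analysis_requested: bool = False) -> dict:
    # The full constraint analysis is surfaced only on request ("belt and suspenders").
    policy = dict(CONSTRAINT_POLICIES[context])
    policy["expose_full_analysis"] = full_analysis_requested
    return policy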
    Q: How do you avoid ideology in “reciprocity”?
    A: Reciprocity is operationalized: it measures net imposition on demonstrated interests, independent of ideology. It’s testable with observable costs, not moral narratives.
    (Note: CD: While norms and biases vary by sex, class, population, region, and civilization, the test of irreciprocity (immorality) does not – it is always a violation of a group’s Demonstrated Interest – particularly those interests where instinct and incentives must be altered to assist in cooperation at scale in regional and local conditions. As such alignment by those dimensions is a matter of enumeration within the Demonstrated Interests. IOW: immorality as a general rule is universal even if moral and immoral rules are particular and vary by group.)
    Q: Prove one-pass is better than chain-of-thought.
    A: We don’t ban multi-step reasoning; we bound it. The system must close under tests within finite steps. This prevents drift and jailbreak compounding, improving time-to-decision and robustness.

    (Note: CD: Fallacy of Better vs Necessary. In some cases we do see improvement in precision by breaking the tests into steps. Particularly in the case of complex externalities. The same is true of recursive analysis of legal judgements as one traces the tree of consequences of a legal judgement. ie: unintended consequences can require a recursive search. We call this test “full accounting within stated limits” which is one of the tests of the violation of reciprocity.)
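A small sketch of "bounded, not banned" multi-step reasoning: the reasoner takes discrete steps until the tests close or a finite budget is exhausted. The step() and is_closed() callables and the state dictionary are placeholders, not the real components.

from typing import Callable


def bounded_reasoning(
    state: dict,
    step: Callable[[dict], dict],       # one reasoning or tool step, returns the new state
    is_closed: Callable[[dict], bool],  # do the truth/reciprocity/decidability tests close?
    max_steps: int = 8,
) -> dict:
    for n in range(max_steps):
        if is_closed(state):
            return {"status": "closed", "steps": n, "state": state}
        state = step(state)
    # Failing to close within the budget is an explicit exception path,
    # not an open-ended chain that can drift or compound a jailbreak.
    return {"status": "not_closed", "steps": max_steps, "state": state}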

    Q: How is this trained back into the model?
    A: Two paths: (1) Inference-time control only; (2) Distillation: log trajectories that pass tests → supervised + RL objectives on warrant classes and closure success, teaching the base model to internalize constraints.
    (Note: CD: Open question: We have suggested a number of means of back propagation of success and failure determinations, however, given our limited access to foundation model internals or existing measures we feel the non-cardinality problem is dependent upon the existing code base.)
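A sketch of the second (distillation) path: trajectories that pass the tests are logged as supervised fine-tuning records. The record schema, field names, and JSONL format are assumptions for illustration.

import json
from dataclasses import dataclass, asdict


@dataclass
class PassingTrajectory:
    prompt: str
    answer: str
    warrant_class: str       # e.g. "operational"
    tests_passed: list[str]  # e.g. ["computability", "testifiability", "reciprocity", "decidability"]


def log_for_distillation(traj: PassingTrajectory, path: str = "closure_sft.jsonl") -> None:
    """Append a test-passing trajectory as a fine-tuning record (JSON Lines)."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(traj)) + "\n")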
    • RLHF / Constitutional AI: optimize for human preference or declared rules → good UX, weak truth guarantees.
• NLI Constraint & Judgment Layer: optimizes for measurement and closure → decidable, accountable, liability-aware outputs.
    • Together: RLHF for UX; NLI for truth/reciprocity/decidability.
Embedded keywords: demonstrated intelligence; correlation trap; computability; decidability; reciprocity; warranted speech; operational closure; liability class; fail-closed; one-pass; tool-use under constraint; convergence and compression; causal parsimony; judgment layer; alignment drift; hallucination control
Benchmark metric abbreviations:
• Truth/Testifiability Pass Rate (TTR)
    • Computability Closure Rate (CCR)
    • Reciprocity Non-Imposition Score (RNIS)
    • Decidability Without Discretion (DWD)
    • Liability Class Uplift (LCU)
    • Adversarial Robustness Delta (ARD)
    • Time-to-Decision Delta (TTD)


    Source date (UTC): 2025-08-24 16:26:34 UTC

    Original post: https://x.com/i/articles/1959653572456657046

• A Target-Audience Matrix for Positioning Our Work

    A Target-Audience Matrix for Positioning Our Work

    1. Tech Executives / AI Architects
    • Pain Points: Model drift, hallucination, alignment failures, public backlash
    • Interests: Reliable reasoning, enterprise deployment, cost/performance tradeoffs
    • Use Language: Computability, truth constraints, operational logic, auditability, constrained generative models
    • Avoid Language: Philosophy, morality, ideology, ethics (unless formalized)
    • Value Proposition: “We give you the logic layer to make generative models reason with constraint, not just simulate coherence.”
    2. Investors / Strategic Capital
    • Pain Points: Low moat in current LLMs, regulatory uncertainty, scaling risk
    • Interests: Competitive advantage, scalable safety, governance solutions
    • Use Language: Trust layer, decision engine, legal-grade outputs, B2B infrastructure, cost of error
    • Avoid Language: Theoretical, ontological, normative philosophy
    • Value Proposition: “This is the layer that makes AI outputs defensible, contractual, and compliant—opening new verticals.”
    3. Academic Philosophers / Logicians / Formalists
    • Pain Points: Lack of grounding, hand-wavy ethics, language-vs-reason gap
    • Interests: Formal validity, computability, universalizable grammars
    • Use Language: Decidability, testifiability, operational semantics, grammars of cooperation, first principles
    • Avoid Language: Market, product, scaling, trust layer
    • Value Proposition: “A universal grammar of human cooperation, reducible to operational and testable logic, computable by machines.”
    4. Skeptics / Journalists / Social Critics
    • Pain Points: Manipulation, bias, false neutrality, elite control
    • Interests: Transparency, accountability, fairness
    • Use Language: Reciprocity, deception detection, liability, non-manipulative outputs, evidence-based speech
    • Avoid Language: Optimization, compliance, abstract logic
    • Value Proposition: “This framework doesn’t hide values—it measures harm, cost, and deceit directly in the structure of speech.”
    5. Policymakers / Regulatory Architects
    • Pain Points: Legal ambiguity, enforcement limits, black-box models
    • Interests: Liability frameworks, institutional stability, harm prevention
    • Use Language: Testifiable output, computable harm, audit trails, speech liability, contract-grade language
    • Avoid Language: Decentralization, anti-government, cognitive hierarchy
    • Value Proposition: “This provides a computable standard for regulation—outputs that can be judged for deception, negligence, or fraud.”
    6. Alignment Researchers / Safety Labs
    • Pain Points: Reinforcement collapse, goal-misalignment, simulator incoherence
    • Interests: Interpretability, corrigibility, bounded optimization
    • Use Language: Adversarial truth testing, speech as a decision tree, moral logic without moralizing, constructive logic
    • Avoid Language: Human feedback, RLHF, alignment-by-preference
    • Value Proposition: “Instead of optimizing for human agreement, we test for cooperative truth—making models auditable, not just fine-tuned.”
    7. Faith-Based or Morally-Conservative Communities
    • Pain Points: Moral relativism in AI, loss of community, cultural erosion
    • Interests: Moral stability, trustworthiness, intergenerational continuity
    • Use Language: Conscience, truthfulness, responsibility, non-manipulation, shared good
    • Avoid Language: Postmodernism, relativism, nihilism, social constructivism
    • Value Proposition: “This AI knows right from wrong—not because we programmed dogma, but because it tests for honesty, harm, and reciprocity.”


    Source date (UTC): 2025-08-16 01:15:25 UTC

    Original post: https://x.com/i/articles/1956525167297085858

  • Testimony that made me smile. 😉 This is exactly our objective. Improving YOU. ;

    Testimony that made me smile. 😉

    This is exactly our objective. Improving YOU. 😉


    Source date (UTC): 2025-08-11 21:40:49 UTC

    Original post: https://twitter.com/i/web/status/1955021611024912473