Theme: AI

  • Q: “Is it true that progressive rock was sort of ‘highbrow’ rock?”

    Q: “Is it true that progressive rock was sort of ‘highbrow’ rock?”
    (Note how to ask an LLM a leading question.)

    Yes, that’s a fair and widely recognized characterization — progressive rock was, in both intent and reception, highbrow rock.

    Let’s unpack that causally:

    1. Intent:
    Progressive rock (late-’60s → mid-’70s) arose when musicians trained in classical or jazz traditions applied those grammars to rock’s instrumentation. They pursued complexity — irregular time signatures, extended harmonic progressions, long-form suites, and conceptual albums — because they wanted to elevate rock from dance music to an art form capable of narrative and compositional sophistication. Bands like Yes, King Crimson, Genesis, Gentle Giant, and Emerson, Lake & Palmer explicitly framed their work as “serious music” comparable to concert music or modernist jazz.

    2. Aesthetic Means:
    They borrowed from high-culture forms — symphonic structure, leitmotif, counterpoint, and album-length conceptual unity — and imported literary, philosophical, and mythological themes (e.g., T. S. Eliot, Tolkien, dystopian or spiritual allegory). Production values emphasized virtuosity and studio experimentation over raw feel.

    3. Audience & Cultural Position:
    Its primary audience was university-educated youth during the post-Beatles expansion of rock into album-oriented listening. That audience rewarded intellectualism and technical mastery. Critics at the time split sharply: some celebrated its ambition; others derided it as pretentious — an accusation that only makes sense if the genre was aiming upward.

    4. Contrastive Context:
    In the same period:
    – Folk rock and protest music emphasized moral earnestness (ethical high ground).
    – Hard rock and blues rock emphasized physical power (somatic authenticity).
    – Progressive rock emphasized cognitive and aesthetic sophistication (intellectual legitimacy).

    Hence it occupied the “highbrow” niche of rock’s cultural ecology. So yes: progressive rock was highbrow rock — deliberately so in composition, thematics, and audience alignment.


    Source date (UTC): 2025-11-02 04:06:20 UTC

    Original post: https://twitter.com/i/web/status/1984834436970791188

  • We can’t test it at scale…

    We can’t test it at scale, but (a) we narrow the path through the world model significantly, and (b) LLMs don’t process like traditional programs; they operate in what is effectively semantic parallel, because they can handle so many more relations than humans can. The existing LLMs estimate a 40% reduction. But again, we can’t test it at scale, so it’s simply impossible to testify to.


    Source date (UTC): 2025-11-02 02:05:28 UTC

    Original post: https://twitter.com/i/web/status/1984804020054343933

  • HOW RUNCIBLE PREVENTS HALLUCINATION

    HOW RUNCIBLE PREVENTS HALLUCINATION
    Our method prevents hallucination because it forbids its necessary preconditions:
    – Ambiguous language
    – Unfalsifiable claims
    – Discretionary interpretation
    – Untestable metaphysical or moral propositions
    – Unaccountable speech
    Hallucination is not just discouraged—it is incompatible with a system that requires decidability by construction. Hallucination cannot occur without violating Natural Law’s invariant requirements. Therefore, our work does not merely reduce hallucination; it renders it epistemically and grammatically impossible.
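    As a hypothetical illustration only (the names and attributes below are assumptions, not Runcible’s actual implementation), the precondition checks listed above can be sketched as a validator that admits a claim only when every precondition of hallucination is absent:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Claim:
        """A candidate statement plus the metadata the method requires."""
        text: str
        is_ambiguous: bool         # does the wording admit multiple readings?
        falsifiable: bool          # is there a test that could refute it?
        requires_discretion: bool  # does evaluation depend on interpreter judgment?
        accountable_source: str    # who warrants the claim? ("" = unaccountable)

    def admissible(claim: Claim) -> bool:
        """Admit a claim only if every precondition of hallucination is absent."""
        return (not claim.is_ambiguous
                and claim.falsifiable
                and not claim.requires_discretion
                and bool(claim.accountable_source))

    # A testable, warranted claim passes; an ambiguous, unfalsifiable one is rejected.
    ok = admissible(Claim("Water boils at 100 C at 1 atm", False, True, False, "lab"))
    bad = admissible(Claim("This is the best product ever", True, False, True, ""))
    ```

    The point of the sketch is structural: rejection happens before generation is ever trusted, rather than by discouraging bad outputs after the fact.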


    Source date (UTC): 2025-11-02 01:10:29 UTC

    Original post: https://twitter.com/i/web/status/1984790182022037888

  • Runcible is a company.

    Runcible is a company.
    – We produce the Runcible Platform – an AI-first everything-application for individuals, businesses, and governments.
    – We produce the Runcible Intelligence Layer – a Governance, Constraint, and Closure layer for any AI.
    – We produce the Runcible Certification service that will test and certify the truth, ethics, morality, legality, and possibility of any claim.
    – We seek to produce a Runcible AI – a personal mentor that strives to make you the best you can be from childhood to old age.
    – We hope that by saturating the world with truth, the industrialization and institutionalization of false promise can be heavily suppressed and functionally eliminated from our public discourse, so that we can cooperate, compete, and if necessary conflict while free from ignorance, error, bias, and deceit – deceit of all kinds.
    Curt Doolittle
    November 2025


    Source date (UTC): 2025-11-02 00:17:36 UTC

    Original post: https://twitter.com/i/web/status/1984776873025683728

  • RUNCIBLE PROMISE

    RUNCIBLE PROMISE
    Runcible provides the missing governance layer for AI — a universal constraint and closure system that makes outputs decidable, lawful, and insurable.
    It ensures every AI action is testable, truthful, reciprocal, and warrantable, allowing AI to operate responsibly in high-liability markets.
    In short, we made AI governable — and therefore profitable.


    Source date (UTC): 2025-11-02 00:07:43 UTC

    Original post: https://twitter.com/i/web/status/1984774387439223150

  • Why Philosophy and Science Failed AI – and How We Solved the Crisis.

    Why Philosophy and Science Failed AI – and How We Solved the Crisis.

    The twentieth century left philosophy and science divided by incompatible logics. Each discipline specialized into its own language, methods, and measures — closing internally while losing external commensurability. Physics fractured at quantum–relativistic boundaries; mathematics fragmented after Gödel; logic split between intuitionist, formalist, and constructivist camps; computation inherited those contradictions without resolving them. The same crisis that left the foundations of physics undecidable left the foundations of reasoning itself undecidable.
    Epistemology never recovered from this “failure of philosophy”:
    • Idealism vs. operationalism—truth by correspondence gave way to truth by convention.
    • Logic without measurement—symbolic manipulation divorced from constructability.
    • Science without decidability—empiricism treated as description rather than operational test.
    • Computation without causality—machines that simulate inference without grounding in reality.
    The twentieth century produced a fragmentation in the foundations of knowledge. Each discipline secured local precision at the cost of universal coherence.
    1. Philosophy retreated from realism into linguistics and phenomenology—substituting interpretation for operation.
    2. Mathematics lost its claim to completeness under Gödel’s proofs, leaving logic detached from constructability.
    3. Physics divided its causal model into relativistic and quantum domains—coherence replaced by probabilistic description.
    4. Epistemology ceased to test truth by performance, relying instead on consensus and convention.
    5. Computation, born from these same incomplete logics, replicated their error: syntax without semantics, reasoning without grounding, prediction without decidability.
    The result was what we call the century of unanchored formalism. Each field closed internally, but none could close externally. The sciences became silos of incompatible grammars—mathematical, logical, linguistic, statistical—without a shared measure of truth. This created a vacuum in which computation could simulate intelligence without ever possessing understanding.
    While each field escaped falsification by narrowing its domain, none rebuilt the universal grammar needed for cross-domain coherence. Artificial intelligence merely inherits this unfinished project. Current correlation-based architectures represent the culmination of that philosophical retreat: statistically fluent yet epistemically blind. They substitute correlation for causation, probability for truth, and approximation for decidability. Scaling parameters improves fluency, not reliability. The result is an intelligence that can describe but cannot testify – it speaks without knowing, and appears to reason without being able to warrant what it says.
    The consequence of that century-long fracture is the modern research environment itself: siloed, specialized, and self-referential. Each field perfected its own internal grammar while abandoning external coherence. The result is an academy fluent in the language of correlation but incapable of grounding it in operational reality. This is why mathematics became “mathiness,” logic became wordplay, and programming became simulation without semantics. These are not minor academic quirks—they are inherited pathologies that now define artificial intelligence. The same philosophical errors that left physics incomplete have left computation undecidable.
    Our work begins where philosophy, epistemology, and the scientific method stopped:
    • Restoring operationalism as the universal test of meaning.
    • Establishing commensurability across disciplines through shared units of measurement.
    • Re-embedding logic, mathematics, and computation within the physical constraints of reality.
    • Producing decidable intelligence — systems that can warrant truth, not merely simulate it.
    In short, where the twentieth century produced precision without coherence, Runcible restores coherence without sacrificing precision — completing the unification of reasoning, science, and computation that modern philosophy abandoned.
    That’s why our work is difficult — because it requires completing the project that philosophy, epistemology, and science abandoned: restoring the operational foundations of decidability, truth, and reciprocity across all domains, from physics to computation.


    Source date (UTC): 2025-11-02 00:00:42 UTC

    Original post: https://x.com/i/articles/1984772619732992138

  • (NLI/Runcible) Explaining our AI a bit

    (NLI/Runcible)
    Explaining our AI a bit:
    Courts don’t use numbers (cardinal measures) for decidability – only restitution and punishment. They use pass-fail criteria or a hierarchy of pass-fail criteria. While we, in Runcible, may output something numerical, everything is indexed with words: by natural indexing. And any numbers we use are nothing but representations of the approximate delta between them, for ease of understanding.
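    A toy sketch of the court-style decision procedure described above, under stated assumptions (the criterion names and claim fields are invented for illustration): a claim passes through an ordered hierarchy of pass-fail tests, and the verdict is a word naturally indexing the first failed criterion, not a score.

    ```python
    # Hypothetical hierarchy of pass-fail criteria, evaluated in order of priority.
    # The verdict is the name of the first failed criterion (a word, not a number).
    criteria = [
        ("possibility", lambda c: c["physically_possible"]),
        ("consistency", lambda c: c["internally_consistent"]),
        ("evidence",    lambda c: c["supported_by_evidence"]),
    ]

    def adjudicate(claim: dict) -> str:
        """Return 'pass', or 'fail: <criterion>' naming the first failed test."""
        for name, test in criteria:
            if not test(claim):
                return f"fail: {name}"
        return "pass"

    verdict = adjudicate({"physically_possible": True,
                          "internally_consistent": True,
                          "supported_by_evidence": False})
    ```

    Any number attached to such a verdict would only express the approximate distance between adjacent criteria in the hierarchy, as the post notes, for ease of understanding.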


    Source date (UTC): 2025-10-30 21:23:45 UTC

    Original post: https://twitter.com/i/web/status/1984008345704133018

  • I run a think tank. This is our job.

    I run a think tank. This is our job. So no. And the sample you posted from grok is a confirmation not a contradiction of our findings.

    FWIW: the origin of my OP was due to the recognition of the damage women have done to the university system.


    Source date (UTC): 2025-10-22 17:26:52 UTC

    Original post: https://twitter.com/i/web/status/1981049628088050058

  • ALIGNMENT WITHOUT A PRIOR DEFINITION OF TRUTH ONLY PRODUCES AGREEMENT, NOT CORRECTNESS.

    ALIGNMENT WITHOUT A PRIOR DEFINITION OF TRUTH ONLY PRODUCES AGREEMENT, NOT CORRECTNESS.

    1. Why current practice conflates truth and alignment

    Training signal: Most models learn from human preference data. The model is rewarded when humans like the answer, not when the answer corresponds to reality.

    Objective function: Reinforcement-learning fine-tuning minimizes disagreement with raters. That measures social alignment (politeness, tone, consensus) rather than epistemic alignment (accurate mapping to the world).

    Evaluation: Benchmarks such as multiple-choice accuracy or human-evaluation surveys treat “close enough” as success. There is no ground-truth audit trail or falsification step.

    Cultural bias: Most institutions currently regard “safe and pleasant output” as a higher-value product than “provably true output that may be uncomfortable.”

    So alignment, in practice, has come to mean “avoid conflict and offense while sounding credible.”

    2. What it means to optimise for truth first

    If you separate the goals:

    – Truth is a world-to-model mapping.
    – Alignment is a model-to-human mapping.
    – You can only align safely after you know the model’s map is true.

    3. How to do it operationally
    Truth layer first
    – Define testable protocols for each domain (physics, biology, economics, law).
    – Evaluate outputs against these external references automatically.
    Alignment layer second
    – Take only verified-true outputs as training material for alignment.
    – Optimise style, tone, or prioritisation without touching the truth constraint.
    Audit trail
    – Every claim carries metadata: sources, falsification status, revision history.
    – Alignment never overrides a falsified item; it only moderates its presentation.

    Governance
    – Separate “truth review boards” (scientific verification) from “alignment boards” (ethical and cultural oversight).
    The latter cannot alter the former’s records, only decide how they’re displayed or used.

    4. Practical effect
    Doing this converts alignment from ideological tuning into policy wrapping around a verified epistemic core.

    The system becomes “truth-first, alignment-second”:
    – If the truth layer says a statement is false → it cannot be used for alignment.
    – If it’s undecidable → flag it, don’t optimise on it.
    – If it’s true → alignment may adapt its delivery for audience safety.
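    The three routing rules above can be sketched as a minimal filter, assuming a hypothetical truth layer that has already labeled each claim “true”, “undecidable”, or “false” (the function and labels are illustrative, not an actual API):

    ```python
    # Truth-first, alignment-second routing: alignment tuning may only see
    # verified-true claims; undecidable claims are flagged, never optimised on;
    # falsified claims are excluded outright.
    def route(labeled_claims):
        verified, flagged, excluded = [], [], []
        for claim, status in labeled_claims:   # status comes from the truth layer
            if status == "true":
                verified.append(claim)         # eligible for alignment tuning
            elif status == "undecidable":
                flagged.append(claim)          # flag it, don't optimise on it
            else:                              # "false"
                excluded.append(claim)         # can never be used for alignment
        return verified, flagged, excluded

    v, f, x = route([("A", "true"), ("B", "undecidable"), ("C", "false")])
    ```

    The design choice is that the truth layer writes the labels and the alignment layer only reads them, which enforces the subset relation the post argues for.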

    5. In summary
    Current AI development often treats truth as a subset of alignment (“true enough for people to accept”).

    Our approach reverses that: alignment must be a subset of truth (“acceptable ways to deliver what is true”).

    That inversion is what allows reasoning to stay trustworthy.


    Source date (UTC): 2025-10-21 19:03:19 UTC

    Original post: https://twitter.com/i/web/status/1980711515369177177

  • THE DEATH THREAT TO MICROSOFT, GOOGLE AND APPLE

    THE DEATH THREAT TO MICROSOFT, GOOGLE AND APPLE
    Look, back in ’04 I understood the future of the computer interface, and by 2012 set out to produce it.

    REASONING
    1) The browser is a superior operating system compared with native operating systems. (Google’s failure to take advantage of it.)
    2) The file-system-centric operating system is inferior to a task- or process-based system that contains files where necessary – this allows context to lead data, not the other way around. (Microsoft’s failure.)
    3) An interface that functions as a map and store of programs, projects, processes, tasks, and contexts – so that every work product exists in a context – is superior in organization and utility for an AI as well as for a human. (Microsoft’s, Google’s, and Apple’s failure.)
    4) The AI-first ‘shell’ or ‘user interface’, especially when trained on (given rules for) following process and policy, and innovating where necessary, is superior to human memory, discipline, and innovation, given the normal distribution of users.
    5) The AI-first capacity to assimilate hundreds of causal dimensions far exceeds the human capacity for one to five.
    6) The AI-first capacity to evaluate, deduce, infer, predict, and advise across large scales of data organized this way is superior to human ability.

    RESULT
    We designed Oversing and Runcible for this purpose. But we are seeing OpenAI follow the same incremental reasoning. They were far behind us in that understanding, but because of their success with LLMs they have generated the capital necessary to make it happen.
    This is a death sentence for every other operating system, user interface, and application.

    OUR CURRENT THINKING
    We can solve the two blocking problems for LLMs to develop into AGI/SI.
    1) Episodic memory as index and associative network.
    2) Constraint and closure (truth, reciprocity, possibility, historical evidence) as means of decidability and continuous recursive improvement.
    And the economic:
    3) Incremental (recursive) auto-association and prediction. (which is a cost problem)
    The remaining problem will haunt us:
    4) Neuromorphic computing is necessary to collapse costs. The current state of research is promising but underfunded.
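    Point 1 above (“episodic memory as index and associative network”) can be illustrated with a toy sketch; the class and its behavior are assumptions for illustration, not the Oversing or Runcible design:

    ```python
    from collections import defaultdict

    # Toy episodic memory used as an index plus associative network: each
    # episode is stored once, and every term in it links back to the episodes
    # that mention it, so recall works by association rather than linear search.
    class EpisodicIndex:
        def __init__(self):
            self.episodes = []             # ordered episodic store
            self.assoc = defaultdict(set)  # term -> ids of episodes mentioning it

        def record(self, text: str) -> int:
            eid = len(self.episodes)
            self.episodes.append(text)
            for term in text.lower().split():
                self.assoc[term].add(eid)
            return eid

        def recall(self, term: str) -> list[str]:
            return [self.episodes[i] for i in sorted(self.assoc[term.lower()])]

    mem = EpisodicIndex()
    mem.record("user asked about invoices")
    mem.record("invoices were exported to PDF")
    hits = mem.recall("invoices")
    ```

    A real associative network would weight and decay the links rather than store flat sets, but the index-plus-association shape is the point of the sketch.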


    Source date (UTC): 2025-10-21 18:54:04 UTC

    Original post: https://twitter.com/i/web/status/1980709185978548548