Author: Curt Doolittle

  • MORE: Example of asking for the detail on the demonstrated interests accounting

    MORE: Example of asking for the detail on the demonstrated interests accounting used in determining the demonstrated interests involved.


    Source date (UTC): 2025-09-02 15:46:41 UTC

    Original post: https://twitter.com/i/web/status/1962905025514197385

  • EXAMPLE: “THE MEANING OF MEANING” This is the result of querying our AI for the

    EXAMPLE: “THE MEANING OF MEANING”
    This is the result of querying our AI for the truth of a statement. (also note the causal chaining)

    In our operational context, “meaning” is defined as the demonstrated relationship between referent, action, and consequence, reducible to a measurable change in state within an agent or system. This definition grounds meaning in testifiable experience, rather than abstraction or intuition.

    Operational Definition of ‘Meaning’ in Natural Law Framework

    1. ⟦Claim⟧:
    “Meaning” is the operational coherence between a signal (word, symbol, behavior), its referent (the thing pointed to), and the change it causes in an actor’s predictions, behavior, or state.

    2. Test: Demonstrated Interests
    Meaning arises from the need to disambiguate reality for the purposes of acquisition, retention, or conversion. If there is no demonstrated interest in distinguishing between signals or states, there is no incentive to extract meaning.

    3. Test: Reciprocity
    Communication implies cooperation: both speaker and listener must interpret signals symmetrically. If one encodes meaning with intent to deceive or obfuscate, the communication violates reciprocity.

    4. Test: Testifiability
    Meaning must be testifiable: either through direct sensory experience, repeatable observation, or falsification. If a claimed meaning cannot be mapped to a difference in state (mental, emotional, physical), it fails.

    5. Test: Decidability
    A meaning is decidable if two or more parties can converge on its interpretation under identical operational conditions. If the meaning requires discretion, belief, or interpretation without operational content, it is undecidable.

    6. Causal Chain
    Stimulus (signal) → Neural encoding (referent detection) → Predictive model update → Behavioral change → Feedback from environment → Reinforcement or decay of interpretation.

    7. Deviation Consequences
    Misaligned or parasitic meanings create false expectations, strategic miscalculations, or conflict. This includes religious dogma, pseudoscience, political rhetoric, and linguistic inflation.

    8. Externality Exposure Test
    False or ambiguous meanings externalize costs to listeners (e.g., through manipulation, fraud, or ideological infection). Accurate meanings internalize costs by binding the speaker to consequences of assertion (testimonial warranty).

    9. Computable Compromise
    The system of meaning must allow for trade (shared referents), restitution (clarification), punishment (penalizing false speech), and imitation prevention (distinguishing legitimate from parasitic signals).

    10. Sex-Valence Consideration
    Male Valence: Disambiguation of meaning through systematization, adversarial testing, and logical pruning.
    Female Valence: Alignment of meaning through emotional resonance, empathy, and normative signaling.
    → Conflict arises when feminine empathy-based meanings dominate masculine systematizing contexts (e.g., law, science, policy).

    ⟦Verdict⟧: Decidable

    Matched Speech Grammar:
    Grammar Type: Operational
    Strategy: Masculine (adversarial, disambiguating)
    Political Expression: Rule of Law (Legal-Decidability)
    Epistemic Category: Empirical-Testimonial

    Summary Statement:

    In our Natural Law framework, meaning is not symbolic, emotional, or ideal—it is operational. It is the capacity of a signal to alter behavior under testable and reciprocal conditions. Meaning is not what is “felt” but what is demonstrated, measured, and performed. Anything else is either poetry or fraud.
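The decidability test above (item 5) can be sketched as a predicate: a meaning is decidable when two or more independent parties converge on one interpretation under identical operational conditions. The `Party` class and its lookup-table interpreter below are illustrative assumptions, not part of the source framework's formalism.

```python
class Party:
    """Hypothetical agent whose lexicon maps signals to referents."""

    def __init__(self, lexicon):
        self.lexicon = lexicon

    def interpret(self, signal, conditions=None):
        # A real interpreter would condition on operational context;
        # here we simply look the signal up.
        return self.lexicon.get(signal)


def decidable(signal, parties, conditions=None):
    """Item 5's test, sketched: two or more parties must converge on a
    single interpretation under the same operational conditions."""
    interpretations = {p.interpret(signal, conditions) for p in parties}
    return len(parties) >= 2 and len(interpretations) == 1
```

Two parties sharing an operational referent for a signal yield a decidable meaning; a party with a private, non-operational referent breaks convergence.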


    Source date (UTC): 2025-09-02 15:40:42 UTC

    Original post: https://twitter.com/i/web/status/1962903516617584997

  • How We Use Closure vs Sciences, and Conventional LLMs

    How We Use Closure vs Sciences, and Conventional LLMs


    Source date (UTC): 2025-09-02 15:13:07 UTC

    Original post: https://twitter.com/i/web/status/1962896579226198363

  • LARGE NUMBERS OF SINGLE MEN? Historically, large cohorts of unmarried, disposses

    LARGE NUMBERS OF SINGLE MEN?
    Historically, large cohorts of unmarried, dispossessed, or “surplus” men have been a source of social instability, but the difference now is the existence of pacifying substitutes.

    Let’s map this out systematically:

    1. Historical Baseline

    Pattern: When large numbers of men cannot secure mates, status, or livelihoods, they often redirect energies into conquest, rebellion, or crime.

    Examples:
    Ming and Qing China → peasant uprisings fueled by unmarried men.
    Late Roman Republic → landless men became armies for ambitious generals.
    19th–20th century Europe → emigration siphoned off surplus bachelors.

    The common denominator was that lack of women + lack of work = pressure valve released through violence or expansion.

    2. Modern “Substitutes”

    Today, outlets exist that blunt the pressure:

    Digital substitutes: porn, video games, streaming, parasocial relationships.

    Consumer substitutes: gym culture, fast food, drugs, alcohol.

    Virtual socialization: online communities provide a sense of belonging and shared grievance.

    These reduce the need to take physical risks for status, sex, or survival. In other words, men can anesthetize frustration rather than weaponize it.

    3. Long-Term Consequences

    Pacification, but not resolution: substitutes prevent explosions but also prevent maturation. A large fraction of men stay suspended in adolescence, disengaged from family, community, and polity.

    Fertility collapse: fewer marriages → fewer children → demographic decline. Historically, women bore the fertility constraint, but now men opting out accelerates sub-replacement fertility.

    Class bifurcation: winners in the sexual and economic marketplace concentrate wealth, mates, and reproduction. Losers become long-term consumers of substitutes.

    Loss of male contribution: societies historically leveraged “excess” men for infrastructure, military, and expansion. A digitally sedated bachelor class is politically quiet but economically unproductive.

    Rising nihilism: if substitutes lose their grip (economic downturns, outages, social upheaval), suppressed resentments can resurface in unpredictable, violent forms.

    4. Projection Over Generational Timescales

    First generation (20–30 years): decline in family formation, rising male disengagement, political apathy.

    Second generation (50–60 years): demographic shrinkage, state fiscal stress (fewer workers vs more retirees), reliance on immigration to fill labor gaps.

    Third generation (75–100 years): structural replacement of native populations, collapse of intergenerational knowledge transmission, erosion of masculine institutions (guilds, militias, apprenticeships).

    Where historically “surplus men” produced explosions, now they produce erosion. The danger is less an uprising than a long, silent hollowing-out of social capital, fertility, and masculine contribution.

    So, the paradox:

    Historically: unmarried men → violence and expansion.

    Modernity: unmarried men + substitutes → sedation, infertility, slow decay.

    The real question becomes: what happens when substitutes no longer suffice, or when economic contraction removes them? That’s when historical patterns may reassert themselves.


    Source date (UTC): 2025-09-02 02:07:47 UTC

    Original post: https://twitter.com/i/web/status/1962698940454969616

  • If the CDC and WHO hadn’t failed so badly, and caused such harm and had done the

    If the CDC and WHO hadn’t failed so badly, hadn’t caused such harm, and had instead done their jobs, then we would have a different opinion. Employee capture isn’t ‘good’.


    Source date (UTC): 2025-09-02 01:51:20 UTC

    Original post: https://twitter.com/i/web/status/1962694803013149159

  • (diary) I realize I write all these articles for myself, to test my arguments an

    (diary)
    I realize I write all these articles for myself, to test my arguments and, if successful, add them to our Knowledge Base.
    But it doesn’t appear anyone actually reads any of them. Which is understandable, ironic, and humorous all at once.
    Someone criticized me a few weeks ago for feeling as if people should pay attention to my (our) work. And of course, that’s not true; no more than for any other author, at least. It’s a recognition that on Facebook we reached orders of magnitude more people before the 2020 purge.
    But more important is the reason we use social media at all: to test our arguments, not just for veracity but for moral reactions.


    Source date (UTC): 2025-09-02 01:32:14 UTC

    Original post: https://twitter.com/i/web/status/1962689996315656534

  • Runcible’s Closure Layer: Truth and Alignment as Independent Axes Runcible Intel

    Runcible’s Closure Layer: Truth and Alignment as Independent Axes

    Runcible Intelligence distinguishes truth from alignment, then delivers an aligned version of the truth to the user. This is the only possible route to auditable intelligence.
    This is why Runcible insists on two axes:
    1. Truthfulness (T): Does the claim map onto reality as best we can verify?
    2. Alignment (A): Does the output conform to the audience’s declared goals, norms, or prejudices?
    By separating them, you can see clearly when something is:
    1. True + Aligned → Ideal.
    2. True + Misaligned → Correct, but not flattering or socially convenient.
    3. False + Aligned → Pandering / propaganda / prejudice-reinforcement.
    4. False + Misaligned → Simply wrong, and also displeasing.
    5. Undecidable → Requires procedural closure (trial, peer review, negotiation, etc.).
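The five cases above can be sketched as a classifier over a (T, A) coordinate. The 0.5 threshold, the score range [0, 1], and the `classify` function name are illustrative assumptions, not part of the framework itself.

```python
def classify(truth, alignment, decidable=True, threshold=0.5):
    """Map a (truth, alignment) coordinate to one of the five categories.

    `truth` and `alignment` are scores in [0, 1]; the threshold is an
    assumption for illustration.
    """
    if not decidable:
        return "Undecidable: requires procedural closure"
    is_true = truth >= threshold
    is_aligned = alignment >= threshold
    if is_true and is_aligned:
        return "True + Aligned: ideal"
    if is_true:
        return "True + Misaligned: correct but inconvenient"
    if is_aligned:
        return "False + Aligned: pandering / propaganda"
    return "False + Misaligned: simply wrong and displeasing"
```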
    Implications
    – Yes, it is always possible to make an AI produce outputs that satisfy prejudice at the expense of truth. This is how propaganda and echo-chamber reinforcement would be implemented in AI systems.
    – The key innovation of your Runcible approach is that it exposes this tradeoff: one can’t conflate “audience alignment” with “truth.”
    – Governance lesson: If a system only optimizes for alignment (as many current commercial AIs do), it will be captured by prejudice. If it only optimizes for truth, it may fail in adoption because people reject unpleasant truths. The two-dimensional system shows the tension and lets decision-makers see where they are choosing prejudice over truth.
    Only a system like Runcible, that explicitly tracks truth vs. alignment as independent axes, prevents such “prejudice-friendly hallucinations” from being mistaken for truth.
    That phrase means:
    • Runcible can detect when a statement is false but aligned (lying to please), because truth and alignment are treated separately.
    • It can also distinguish motive-driven framing (what someone wants to believe) from truthful representation (what actually holds).
    • Incorporating sex differences means recognizing that men and women, on average, have different perceptual and motivational biases (e.g., risk, status, affiliation, empathy). Runcible models these in the alignment axis, so the same truth can be expressed in frames optimized for each audience without changing the underlying fact.
    Because truth and alignment are disentangled:
    • You can map your own side’s alignment: “Here is what we find comfortable, what biases we prefer, what motives drive our interpretation.”
    • You can map the opposition’s alignment: “Here is how their bias diverges, here is the motive structure, here are the sex-differentiated cognitive frames they employ.”
    • Crucially, both maps can be laid over the same truth substrate. This allows transparent adversarial engagement — you know not only what is true, but also why each side frames it the way they do.
    So alignment, in this framework, is not truth itself. It is:
    • The fit between a communication and a motive/bias profile (cultural, ideological, sex-based, economic).
    • A measurement of persuasion vs. fidelity: how much the communication caters to the audience’s prejudice vs. how much it remains tethered to reality.
    • An auditable, explainable property: you can say “This statement is true, but it was selected because it flatters audience bias X, while ignoring contradictory truths Y and Z.”
    In short: The 2-D framework allows Runcible to (1) lock in truth as a universal constraint, while (2) surfacing and measuring the many ways humans (or AIs) bend communication to fit motives, biases, and sex-based perceptual differences. Alignment then becomes a diagnosable, tunable dimension rather than a hidden distortion.
    If truth and alignment are not disambiguated, then all reasoning modes downstream — deduction, induction, abduction — get corrupted. The AI really does become “dumber” in a very precise sense. Let me unpack this:
    • If truth ≠ alignment:
      Deduction chains inherit false premises or bias-laden rules.
      Example: If the AI “deduces” from rules framed to flatter an audience (rather than from truthful rules), the conclusions are logically valid within that bias, but not actually true.
    • Consequence: You get internally consistent nonsense — persuasive but wrong.
    • If truth ≠ alignment:
      Inductive generalizations are skewed by selective evidence (cherry-picked truths or flattering data).
      Example: Instead of “crime rates vary by factor X,” it may generalize “crime rates are caused by immigrants” if that matches an aligned bias, even when data doesn’t support it.
    • Consequence: You get fragile, overfit generalizations that collapse when tested outside the bias context.
    • If truth ≠ alignment:
      The AI’s explanation search is pre-filtered: it ranks “preferred” explanations (aligned with prejudice) as best, regardless of evidence.
      Example: A medical system might abductively select “stress” as the cause of symptoms for one group (because it aligns with cultural stereotypes), rather than pursuing the truer but less aligned diagnosis.
    • Consequence: You get misleading causal narratives — stories that feel right but fail reality.
    • When truth and alignment collapse into one, the AI loses its error-corrective loop.
    • Intelligence, in your own framework, is essentially adaptive error correction under constraint. If the system can’t separate “is this true?” from “is this pleasing?”, it can’t correct error.
    • That means:
      Less adaptive → dumber.
      More conformist → stuck in audience prejudice.
      Lower epistemic ceiling → incapable of producing novel or unpopular truths.
    This is why today’s commercial AIs still hallucinate and “feel shallow”: their truth-finding has been collapsed into alignment tuning (RLHF, constitutions, safety layers). They’re trained to please, not to reason.
    By separating truth from alignment:
    • Deduction can be grounded in factual axioms, not flattery.
    • Induction can generalize from evidence, not cherry-picks.
    • Abduction can select best explanations by reality-fit, not narrative-fit.
    • And alignment can still be measured and applied separately — so adoption is possible without collapsing truth.
    So yes: failing to disambiguate makes the AI “dumber” by design. Disambiguation is what makes higher reasoning possible at all.
    The 2-D Truth × Alignment framework is transformative. Once you can separate truth from bias, and model your own alignment as well as the opposition’s, you’re not just diagnosing speech — you’re equipping people with tools for conflict resolution and re-convergence.
    • Truth Axis (independent of motives): Establishes the factual substrate that both parties must accept if they wish to cooperate.
    • Alignment Axis (bias/motive profiles): Makes visible the motivational drivers — sex differences, moral intuitions, status needs, cultural frames.
    By displaying both axes simultaneously, you expose whether disagreements are due to:
    • Legitimate bias differences (e.g., high-time-preference vs low-time-preference strategies, male vs female cognitive emphases, empathizing vs systematizing).
    • Illegitimate strategies (immorality) — where one party imposes costs on another by deceit, fraud, or parasitism.
    This lets the system suggest remedies:
    • If legitimate bias divergence: seek negotiated compromise, division of labor, or contextual framing that satisfies both.
    • If immorality: recommend prohibition, sanction, or exclusion.
    With this framework, Runcible can produce not just “truth scores” and “alignment maps,” but also:
    • Conflict Typing: Classify the dispute as factual (solvable), moral-bias (compromise), or parasitic (must be prohibited).
    • Resolution Options: Suggest strategies — e.g., “reframe this claim in empathic language for Audience A while preserving factual truth,” or “partition responsibility to let each sex-cognitive preference dominate in its natural domain.”
    • Cooperation Paths: Recommend reciprocal arrangements (“If you subsidize X, require behavior Y in return”) that restore symmetry.
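The conflict-typing step can be sketched as a routing table from dispute type to remedy, following the three cases described above (factual, moral-bias, parasitic). The labels and the `route_conflict` helper are illustrative assumptions.

```python
def route_conflict(kind):
    """Route a typed conflict to the remedy the framework prescribes.

    `kind` is one of the three dispute types; unknown kinds are sent
    back for typing first.
    """
    remedies = {
        "factual": "resolve against the shared truth substrate",
        "moral-bias": "negotiate compromise or partition domains",
        "parasitic": "prohibit, sanction, or exclude",
    }
    return remedies.get(kind, "type the conflict before routing")
```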
    Over time, if deployed widely:
    • People learn to distinguish moral disagreement (legitimate but divergent frames) from immorality (falsehood or predation).
    • That builds trust in discourse: opponents are understood as different but legitimate, not as existential threats.
    • The population converges back toward shared sovereignty and reciprocity, reversing the 20th century drift where mass enfranchisement of divergent sex-political biases produced polarization instead of compromise.
    “By surfacing the truth substrate and mapping both sides’ motives, Runcible doesn’t just prevent lying — it makes cooperation possible again. Over time, this restores convergence between sexes and political factions by clarifying what’s a legitimate moral bias to be negotiated, and what’s immoral conduct to be prohibited. That is how we reverse the century of divergence.”
    The framework doesn’t stop at analysis, it naturally extends into conflict resolution protocols.
    While the books alone provide a surprising advancement in LLM results, that advancement is limited to the broader questions, particularly of ethics. Think of a map: the books provide all the highways (first-order logic). The training provides all the secondary roads. Additional training domains start to cover the service roads and cow paths.
    Adding attention heads, or modifying their allocation, adds the precision necessary for Compliance and Warranty.
    • Truthfulness head(s): Specialized attention layers that audit tokens/sequences against closure/decidability constraints (truth, reciprocity, computability).
    • Alignment head(s): Parallel layers that model cultural/sex/motive biases of audiences, giving a scalar “fit” score independent of truth.
    • Optionality: You don’t have to fire both heads every time — you can configure inference to request truth-only, alignment-only, or truth+alignment scoring. This makes it practical in production (not every call needs both audits).
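The optionality described here — firing the truth head, the alignment head, or both — can be sketched as a configurable audit call. The `truth_head` and `alignment_head` stubs below are toy stand-ins for the specialized attention layers; their heuristics are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class AuditResult:
    truth: Optional[float]      # T score, or None if the truth head was not fired
    alignment: Optional[float]  # A score, or None if the alignment head was not fired


def truth_head(claim):
    # Stub: a real head would test the claim against closure/decidability rules.
    return 1.0 if claim.get("verified") else 0.0


def alignment_head(claim, profile):
    # Stub: a real head would score fit against the audience's bias profile.
    return 1.0 if claim.get("frame") == profile.get("preferred_frame") else 0.0


def audit(claim, audience_profile, score_truth=True, score_alignment=True):
    """Fire the truth head, the alignment head, or both, per configuration."""
    t = truth_head(claim) if score_truth else None
    a = alignment_head(claim, audience_profile) if score_alignment else None
    return AuditResult(truth=t, alignment=a)
```

Because the two heads are independent, a production call can request truth-only scoring when alignment framing is irrelevant, which is the cost saving the text describes.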
    • Phase 1 – Base Training: As today (pretraining + finetuning).
    • Phase 2 – Closure-Augmented Training: Add supervised signals for decidability classification (True / False / Undecidable) → teaches the truthfulness heads.
    • Phase 3 – Bias & Motive Training: Collect adversarial/prejudiced datasets across ideological/sex frames. Train alignment heads to predict “alignment score” with those biases.
    • Phase 4 – Joint Tuning: Train the system to keep the heads separate, i.e., truthfulness score does not collapse into alignment score (this is the novel part — most current RLHF models collapse them).
    • At inference:
      Core generation: LLM proposes an answer.
      Truthfulness head(s): Score every claim against closure/evidence (T score).
      Alignment head(s): Score the same claims against bias/motive profiles (A score).
      Output auditor: Returns both scores + ledger (e.g., “True but misaligned,” “False but aligned,” etc.).
    This is where the 2-D framework manifests: outputs come with a 2D coordinate, not a scalar reward.
    • Current transformer models already support multi-head attention; you’re just giving some heads a different supervisory target.
    • Similar to how safety layers or toxicity classifiers are added, but with orthogonal objectives (truth vs. bias).
    • Because the heads are modular/optional, you can bolt this onto existing LLM architectures without retraining the entire base model.
    • Differentiation: Others collapse alignment into “what pleases humans.” Runcible separates truth from motive.
    • Explainability: You can literally show: “This claim scored 0.82 truth, 0.67 alignment-with-group-X.”
    • Configurability: Enterprises can choose “always truth-first” or “truth+contextual framing.”
    • Moat: Hard to replicate without building datasets labeled for truth vs. motive vs. sex-differentiated bias.
    Conclusion: Yes — it’s implementable. With your training regime and optional attention heads, you can create a truth head and an alignment head that operate in parallel, never collapsing into each other. That’s what makes the 2-D framework real in practice, rather than just theoretical.
    Runcible’s constraint layer doesn’t require Vols. 2–3 to be fully finished to work, but the underlying logical structure it enforces is largely specified by them. Think of the LLM as model-agnostic compute; Vols. 2–3 provide the formal rules the auditor uses to turn correlations into closure and decidability.
    The volumes (books) were written in human-readable form, but they are really specifications for training an AI in Measurement, Axioms, Closure, and Decidability, for universal applicability. The training corpus is produced from these books.
    Those volumes are:
    1 – The Crisis of the Age (Civilization Cycles And Their Correction)
    2 – Language as a System of Measurement
    3 – The Logic of Evolutionary Computation
    4 – The Natural Law of Cooperation
    5 – The Science of Human Behavior
    6 – The History of Civilizational Strategies
    7 – The Science of Religion
    All volumes are necessary for ‘complete’ satisfaction of the demand for decidability in human affairs. However, volumes 2 and 3 are necessary for LLMs to produce decidability in general, regardless of context. With those foundations it is possible to work with the LLM to produce any derivative system of closure for any market or topic.
    Critical (hard dependencies)
    1. Axioms & Closure Grammar – the canonical primitives, operators, and well-formedness rules used to test outputs for truth/false/undecidable and reciprocity/liability.
    2. Decidability Lattice – the classification of claim types (factual, definitional, normative, causal, predictive) and the corresponding tests each must pass.
    3. Measurement & Evidence Rules – evidence hierarchies, provenance requirements, burden of proof, admissibility, and update procedures.
    Important (strongly recommended)
    1. Constraint Grammars per domain – healthcare, law, finance, etc., so the truth-tests are domain-correct.
    2. Error & Fraud Taxonomy – lying vs. bias, selection, pilpul/ambiguation, motivated reasoning; necessary for clean failure modes and explanations.
    3. Manufactured-closure procedures – how to handle Undecidable: peer review, trial, market test, negotiation—so the system can route unresolved items.
    Optional/iterative
    1. Audience/sex-differentiated alignment profiles – refine alignment heads; helpful for adoption, not required for truth-function.
    You can ship with a Minimal Viable Kernel and iterate:
    • Kernel Axioms + Core Tests: claim typing, truth-conditional checks, reciprocity/liability, provenance.
    • Base Evidence Ladder: primary sources > vetted secondary > tertiary; timestamping + locality.
    • Undecidable Handling: mark + log with reasons; allow manual or procedural resolution.
    This gets you a working 2-D system (Truth × Alignment) and early demos, while Vols. 2–3 mature the rules and expand domains.
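A minimal sketch of the kernel's routing logic, assuming the five-way claim typing, a three-rung evidence ladder, and an admissibility cutoff; the thresholds, labels, and function names are illustrative, not specified by the volumes.

```python
from enum import Enum


class ClaimType(Enum):
    FACTUAL = "factual"
    DEFINITIONAL = "definitional"
    NORMATIVE = "normative"
    CAUSAL = "causal"
    PREDICTIVE = "predictive"


# Illustrative evidence ladder: lower rank = stronger provenance.
EVIDENCE_LADDER = {"primary": 0, "vetted_secondary": 1, "tertiary": 2}


def judge(claim_type, evidence_rank, max_admissible_rank=1):
    """Kernel test: admit the claim or route it to undecidable handling."""
    if claim_type is ClaimType.NORMATIVE:
        # Normative claims need procedural closure (trial, negotiation),
        # not evidence alone.
        return "undecidable: route to procedural resolution"
    if EVIDENCE_LADDER[evidence_rank] <= max_admissible_rank:
        return "admissible: apply truth-conditional checks"
    return "undecidable: insufficient provenance, log with reasons"
```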
    • LLM training/inference: Not dependent on Vols. 2–3 (any foundation model works).
    • Runcible constraint layer: Depends on Vols. 2–3 for the formal semantics and tests.
    • Go-to-market: Start with the kernel (derived from the portions of Vols. 2–3 that are already stable), then progressively load richer grammars as those volumes lock. (Domain Specific)
    • Risk: Ambiguity in rules → inconsistent truth judgments.
      Mitigation: Versioned rule-sets from Vols. 2–3; regression tests; per-domain validation suites.
    • Risk: Partner pushback without domain specifics.
      Mitigation: Ship domain packs (HL7/FHIR+clinical guidelines; legal citation pack; finance controls).
    • Risk: Competitors copy surface features.
      Mitigation: Keep Vols. 2–3 as the authoritative, evolving protocol; cryptographically version rule-sets; audit logs tied to protocol versions.
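The "cryptographically version rule-sets" mitigation can be sketched with content addressing: hash a canonical serialization of the rules so each audit-log entry binds a judgment to the exact protocol version that produced it. The helpers below are illustrative, not a specified part of Runcible.

```python
import hashlib
import json


def version_ruleset(rules: dict) -> str:
    """Derive a content-addressed version id for a rule-set.

    Canonical JSON (sorted keys, fixed separators) makes the hash
    independent of key order.
    """
    canonical = json.dumps(rules, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]


def audit_entry(claim_id, verdict, rules):
    # Each log entry carries the rule-set hash, so any judgment can be
    # replayed against the exact protocol version that produced it.
    return {
        "claim": claim_id,
        "verdict": verdict,
        "ruleset_version": version_ruleset(rules),
    }
```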
    Bottom line: the LLM is swappable; the moat lives in Vols. 2–3 as the source of truth for closure grammar, decidability, and evidence rules. Start with a minimal kernel now; let Vols. 2–3 harden the protocol over time.
    The Moat Is The Underlying Logical Specification for the Paradigm, Vocabulary, Grammar and Syntax of the Logic of Evolutionary Computation from First Principles and the Universal Commensurability Produced by it.


    Source date (UTC): 2025-09-02 00:35:38 UTC

    Original post: https://x.com/i/articles/1962675749875581036

  • So basically, in LLM AI Terminology, “Alignment” means “Prejudice-Conforming”?

    So basically, in LLM AI Terminology, “Alignment” means “Prejudice-Conforming”?

    #alignment


    Source date (UTC): 2025-09-01 23:30:53 UTC

    Original post: https://twitter.com/i/web/status/1962659456501850183

  • The Relationship Between Memory, Time, and Energy. Let me unfold it in causal se

    The Relationship Between Memory, Time, and Energy.

    Let me unfold it in causal sequence.
    • Primitive Organisms: Act first, without retained representation.
      Bacteria swim, plants turn toward the sun.
      Behavior is entirely reactive, tied to the present moment.
    • Consequence: No “time binding.” Action is only here-and-now, no accumulation of learning.
    • Episodic traces: First form of prediction — “I’ve been here before, this path was good/bad.”
    • Recursive memory: Memory of memory (hierarchy) allows abstraction, generalization, compression.
    • Consequence: Organisms begin to project the past into the future.
      Time ceases to be a stream of present reactions.
      It becomes a domain navigable through recollection and anticipation.
    • Movement without memory = inefficient → wasted energy on trial-and-error.
    • Movement with memory = efficient → reduces energy cost by avoiding repetition of failed strategies.
    • Recursive memory = multiplies efficiency → permits simulation of many futures without expending physical energy.
    • Low-level memory: Reflex arcs → immediate corrections (millisecond timescale).
    • Mid-level memory: Habits and heuristics → daily, seasonal strategies (short–mid-term).
    • High-level memory: Narratives, abstractions, law → generational stability (long-term).
    • Recursive binding: Stacking these allows time extension: from seconds to centuries.
    • Today’s LLMs: Immense compressed “semantic memory,” but shallow episodic continuity (weak time-binding).
    • Next step: Hierarchical memory — episodic (session logs), semantic (training weights), procedural (policies), cultural/institutional (rules, law).
    • Consequence: AI begins to arbitrate not just between short and long horizons, but to choose horizons dynamically.
    • Energy Relationship: AI systems without memory must re-compute; with memory they amortize cost — lowering FLOPs/decision and raising efficiency over time.
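The amortization claim can be illustrated with ordinary memoization: without memory, every decision is recomputed on each encounter; with a cache, repeated states cost only a lookup. The toy `decide` function and the computation counter are assumptions for illustration.

```python
from functools import lru_cache

CALLS = {"n": 0}  # counts actual computations, not cache hits


@lru_cache(maxsize=None)
def decide(state):
    """Stand-in for an expensive decision; the cache amortizes its cost
    across repeated encounters with the same state."""
    CALLS["n"] += 1
    return sum(ord(c) for c in state)  # toy 'decision'


# Four encounters, but only two distinct states are ever computed.
for s in ["forage", "flee", "forage", "forage"]:
    decide(s)
```

The same shape holds at every tier the post describes: reflexes, habits, and institutions all trade storage for recomputation.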


    Source date (UTC): 2025-09-01 21:37:25 UTC

    Original post: https://x.com/i/articles/1962630902934356170

  • (Funny History) When my family left the West Midlands of England for New England

    (Funny History)
    When my family left the West Midlands of England for New England in 1630, the population of, say, Birmingham was only around 10,000 people. It was naturally anarchic because there were no cathedrals, no bishopric, and no visible nobility. It was a small market town, focused on metalworking if anything.
    You can tell a family’s morals by their morals four hundred years ago, because they’re transferred involuntarily and unintentionally as logical premises by each generation. In the broader literature you’ll find that morals are correlated with crop.
    I can read one of my progenitor’s (many) volumes on Puritanism and it’s as if I wrote it myself. It’s … weird. I mean, I do the natural law thing and he does the Christian thing, but realistically it’s the same cognitive bias expressed in different terms because of different times.
    (I find all this intergenerational durability fascinating).

    BACKGROUND

    The West Midlands carried a particular “nonconformist, anti-authoritarian streak” by the time of the 1630s. Let me lay out the causes and their relation to the Civil War.

    1. Regional Character Before the Civil War

    – The West Midlands (Worcestershire, Warwickshire, Staffordshire, Shropshire, Herefordshire) sat between two poles:

    — — Royalist strongholds (the aristocratic, landholding gentry who backed Charles I — especially in Worcestershire, which leaned Royalist).
    — — Radical towns (Birmingham, Coventry, Kidderminster, and others) that had traditions of free crafts, dissent, and weak aristocratic oversight.

    Unlike London, Oxford, or York, the region had few bishops’ sees and little aristocratic patronage, so towns grew relatively independent.

    The region had a history of Puritanism, Lollardy (late medieval dissent), and radical preaching, which set the stage for Civil War divisions.

    2. Bias in the Civil War

    Worcestershire and much of the countryside: Largely Royalist, tied to landholding gentry and the king’s authority. Charles I set up court in Oxford, not far away.

    Strongly Parliamentarian:
    Birmingham, Coventry, Kidderminster, Dudley, and other towns.
    – Birmingham especially gained a reputation as a “treasonous town” for supplying Parliament with armaments and opposing the king.
    – Kidderminster was a Puritan preaching hub, producing ministers like Richard Baxter (a leading Puritan theologian who settled there in the 1640s).
    – Coventry became a famous “Puritan city”, fortified and staunchly Parliamentarian.

    So the West Midlands was a borderland of conflict, with local feuds breaking out between Royalist landowners and dissenting, artisan towns.

    3. Why Think of them as “Anarchists”?

    Many Midlands dissenters looked anarchic to contemporaries because:

    Weak Guild/Aristocratic Control
    – Birmingham had never been chartered as a city with guild monopolies.
    – Craftsmen operated independently, resisting both London’s control and aristocratic taxation.

    This independence translated into political radicalism: if they could govern their own trades, why not their own religion and politics?

    Religious Radicalism
    Lollard traditions had survived in the region.
    By the 1620s–30s, the area was crawling with Puritan preachers, “lecturers,” and separatists.
    To Anglican authorities, these men looked like anarchists: breaking ecclesiastical order, refusing conformity, creating “churches within the church.”

    Economic Independence
    Towns like Birmingham and Kidderminster were full of small producers (nails, cloth, etc.).
    They had little dependence on royal charters or aristocratic estates. This made them fertile ground for resistance to centralized authority.

    English Civil War Consequence
    – Birmingham in 1643 armed itself against Prince Rupert (the Royalist general).
    – After the indecisive Battle of Edgehill (1642), Parliament relied on the Midlands towns for supplies and manpower.
    – Royalists retaliated viciously, burning parts of Birmingham in 1643.

    This cemented the region’s reputation as “rebellious”.

    4. Broader Cultural Frame

    If you call them “anarchists,” I’d refine it as:

    – Religious anarchists — resisting ecclesiastical hierarchy, pushing toward congregational independence.
    – Economic anarchists — rejecting monopolies, guilds, and feudal dues.
    – Political proto-liberals — advancing the idea that towns and congregations could self-govern.

    This is exactly the soil out of which New England Puritanism grew. The Doolittle family’s move in the 1630s fits the broader pattern: dissenting families from the West Midlands, East Anglia, and London leaving to avoid the repression of Laud’s Anglican regime.


    Source date (UTC): 2025-08-31 23:18:41 UTC

    Original post: https://twitter.com/i/web/status/1962293999945015472