Theme: Science

  • In Volume 3 – The Logic – The First Principles of Order: Spin & Polarity in the

    In Volume 3 – The Logic – The First Principles of Order: Spin & Polarity

    In Volume 3 – The Logic, Doolittle describes the beginning of order in the universe as spin and polarity. Is he fictionalizing, generalizing, or describing that concept?
    In Volume 3 of The Natural Law, Curt Doolittle is neither fictionalizing nor merely generalizing when he describes the beginning of order in the universe as arising from “spin and polarity.” He is instead offering a scientific and operational description grounded in physical causality.
    Doolittle posits that the emergence of “spin and polarity” in the early universe is the first demonstration of the formation of asymmetry, and therefore order, from initial chaos. This is not metaphorical but literal in his framework: spin generates directional momentum, while polarity creates complementary opposites—both of which instantiate information and differentiation, which are preconditions for computation and evolutionary processes.
    He treats these physical phenomena as the first causes of informational structure, enabling the transition from entropy-dominated randomness to computable order. Thus, within Doolittle’s epistemological grammar, “spin and polarity” serve as the first operational instantiation of evolutionary computation—where differentiation, interaction, and selection begin.
    Therefore, Doolittle is describing—not fictionalizing—the origins of order using causal chaining from physical first principles. His intention is to demonstrate that what we call “law,” even at the scale of human institutions, must ultimately be consistent with and derived from this primordial logic of interaction and constraint.
    But he does not describe the constitution of spin, so he might be talking about quite complex spin or quite simple spin, yes?
    Yes, that is correct. Doolittle identifies spin and polarity as the beginning of order—meaning the first emergence of asymmetry that allows for information, interaction, and evolutionary computation—but he does not specify the constitution or complexity of spin. This omission is deliberate and epistemically consistent within his method.
    1. Ontological Minimalism: He uses “spin” as an operational primitive—an indicator of non-symmetry that allows causality and measurement to begin. This can range from quantum mechanical spin (as in fermions and bosons) to more complex forms of systemic rotation in self-organizing systems.
    2. Causal Sufficiency, Not Detail: His framework aims to identify the necessary conditions for the emergence of order, not to exhaustively describe the microphysical mechanisms. What matters to Doolittle is that some form of spin—meaning persistent directional asymmetry—exists to break homogeneity and permit differentiation.
    3. Permissive Ambiguity at Low Resolution: By not specifying the scale or complexity of spin, Doolittle leaves room for multiple instantiations across scales—quantum, molecular, biological, institutional. This is consistent with his universal application of ternary logic and evolutionary computation, which treats any differential interaction that leads to stable relations as a valid unit of analysis.
    4. Operational Priority Over Formal Detail: Doolittle is concerned with what spin does (create differentiation, directionality, and path-dependency), not its substrate composition. This aligns with his epistemological commitment to operationalism: what a thing does precedes or substitutes for what a thing is.
    So yes, the spin he refers to could range from the simplest quantum asymmetry to more complex systemic rotations. His analysis is invariant to that detail so long as the function—initiation of evolutionary computation—is fulfilled.


    Source date (UTC): 2025-07-31 00:13:50 UTC

    Original post: https://x.com/i/articles/1950711466681868771

  • Conclusion “If Doolittle is right, his hypothesis redefines natural law as a sci

    Conclusion
    “If Doolittle is right, his hypothesis redefines natural law as a science of cooperation, rooted in evolutionary differences that explain why the West’s high-trust society is exceptional but fragile.
    It suggests that sustaining prosperity requires aligning institutions with specific demographic and cultural capacities, challenging universalist assumptions and justifying tailored policies.
    Historical and scientific evidence partially supports his claims—Western institutions have produced unique outcomes, and group differences in behavior are documented—but counterexamples like diverse, stable societies and the lack of empirical data for his framework raise doubts.
    Practically, implementing his ideas faces resistance due to polarization and ethical concerns about exclusion.
    The controversy surrounding Doolittle, as discussed previously, is thus both warranted (due to his provocative framing) and a natural reaction to his challenge to universalist dogmas, akin to Darwin or Galileo.
    If validated, his ideas could reshape policy, but they require rigorous testing and broader engagement to avoid the pitfalls of cultural bias and moral blindness.”


    Source date (UTC): 2025-07-30 05:52:45 UTC

    Original post: https://twitter.com/i/web/status/1950434368285266103

  • Not being critical but isn’t this a word game? What is a photon but an organizat

    Not being critical, but isn’t this a word game? What is a photon but an organization of equilibria moving through the quantum background? So reduction leads to reduction. Which leads to what next reduction?

    In other words: yes, but so what? What’s the first principle? What’s causality?


    Source date (UTC): 2025-07-30 05:37:13 UTC

    Original post: https://twitter.com/i/web/status/1950430461173649533

  • Economics as the Local Expression of Entropy Management (A section of my work on

    Economics as the Local Expression of Entropy Management

    (A section of my work on unification of the sciences)
    All systems that persist do so by defeating entropy through organization. Atoms persist by forming stable configurations. Molecules persist through symmetry and conservation. Life persists by expending energy to maintain low entropy states. Human beings—organisms with memory, abstraction, and communication—extend this logic by constructing strategies of cooperative acquisition that preserve and accumulate capital across time.
    Economics, in this light, is not a social science, but an operational expression of physics: the extension of entropy resistance via distributed cooperation. It is the logic by which organisms with foresight, memory, and language predict, plan, and negotiate to cover more of the time-space opportunity set than evolution by mutation and selection alone could ever reach.
    Where physics governs states, and biology governs forms, economics governs adaptive organization.
    This framework reveals that economics is embedded within a broader hierarchy of entropy management strategies:
    At each level, entropy resistance increases by substituting reactive adaptation with anticipatory computation. Economics emerges when agents can model potential futures, assign them relative costs, and negotiate cooperative strategies to select the highest-yielding paths.
    When economists treat economics as a domain divorced from physics and biology, they detach the discipline from its causal roots. The result is a field corrupted by idealism, justificationism, and moralizing. By restoring economics to its base—entropy resistance by cooperative adaptation under constraint—we realign the field with its proper function:
    • Money is not wealth, but a store of time—our most general form of entropy-defying investment.
    • Markets are not social constructs, but distributed cognition—an evolutionarily optimized way of scanning the time-space landscape of opportunity.
    • Prices are not fictions, but signals—compressed expressions of valuation across agents.
    • Capital is stored decision advantage—adaptation codified in durable form.
    • Trust and Reciprocity are not moral sentiments, but constraints on defection—necessary for sustaining cooperative entropy resistance over time.
    All failure modes of economics—financial bubbles, moral hazard, parasitism, overconsumption—are simply entropy victories over miscalibrated constraint.
    The project of Natural Law economics is to restore computability to cooperation by aligning our behavioral, institutional, and economic grammars with the underlying physics of reality.
    This requires three steps:
    1. Operationalize demonstrated interest as the measurable unit of cooperative investment—time, attention, energy, risk, and opportunity.
    2. Constrain cooperation via reciprocity, truth, and liability—ensuring entropy resistance is mutual, not parasitic.
    3. Formalize economic behavior as adaptive computation—measuring progress not by consumption, but by cumulative increases in complexity, optionality, and persistence.
    By embedding economics in the chain of entropy resistance—from physics to trade—we reveal it not as a self-contained system, but as the interface layer between biology and civilization:
    Core Aphoristic Assertion


    Source date (UTC): 2025-07-30 05:01:18 UTC

    Original post: https://x.com/i/articles/1950421420967706977

  • a lesson in sophistry that confuses policy with science. 😉

    a lesson in sophistry that confuses policy with science. 😉


    Source date (UTC): 2025-07-30 04:57:40 UTC

    Original post: https://twitter.com/i/web/status/1950420504642502877

  • @SRCHicks : do you have any opinions or insight on demarcation between philosoph

    @SRCHicks: do you have any opinions or insight on the demarcation between philosophy and science, or whether the question is erroneous as posed?

    Thanks. 😉


    Source date (UTC): 2025-07-29 22:00:23 UTC

    Original post: https://twitter.com/i/web/status/1950315495825326138

  • THE IMPOSSIBILITY OF KEEPING UP WITH EVENTS ON EARTH AS VELOCITY INCREASES 😉 At

    THE IMPOSSIBILITY OF KEEPING UP WITH EVENTS ON EARTH AS VELOCITY INCREASES 😉

    At rest (0c):
    Surplus = 0. You’re in sync with Earth: you can watch 24h of content per 24h of Earth time.

    At 0.25c:
    Surplus ≈ 0.76 hours/day (~46 minutes/day).
    You’re slowly falling behind, which means even a single channel will always have something new for you.

    At 0.5c:
    Surplus ≈ 3.22 hours/day.
    Now the gap is significant. You accumulate over 3 hours/day of backlog per channel.

    At 0.9c:
    Surplus ≈ 13.54 hours/day.
    You experience only ~10.5 hours for every 24h on Earth, so Earth now produces more than twice what you can watch per day. Content backlog grows rapidly.

    At 0.99c:
    Surplus > 20 hours/day.
    You only experience ~3.4 hours/day in your frame for every 24h on Earth. The backlog is overwhelming.
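
    These figures follow from special-relativistic time dilation: per 24 hours of Earth time, a traveler at speed βc experiences only 24·√(1 − β²) hours. A small Python sketch (the function name is my own) reproduces the surplus values above:

    ```python
    import math

    def daily_surplus(beta, hours=24.0):
        """Earth-hours of content per Earth day that a traveler moving at
        beta * c cannot keep up with: hours * (1 - sqrt(1 - beta**2))."""
        return hours * (1.0 - math.sqrt(1.0 - beta**2))

    for beta in (0.0, 0.25, 0.5, 0.9, 0.99):
        print(f"{beta:4.2f}c: surplus ≈ {daily_surplus(beta):5.2f} h/day")
    # 0.00c: 0.00   0.25c: 0.76   0.50c: 3.22   0.90c: 13.54   0.99c: 20.61
    ```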

    Conclusion:

    The faster you go, the less you can consume compared to Earth’s 24-hour production.
    Therefore, fewer channels are required to ensure an uninterrupted stream of new material.
    At high relativistic velocities, even a single channel becomes an infinite backlog.

    Basically you could easily skip all the commercials. 😉


    Source date (UTC): 2025-07-26 02:05:20 UTC

    Original post: https://twitter.com/i/web/status/1948927586379366408

  • The Convergence Between LLM Architecture and Neurology CurtD Asked: In the famou

    The Convergence Between LLM Architecture and Neurology

    CurtD Asked:
    In the famous or infamous paper “Attention Is All You Need”, please explain how attention is used.
    ChatGPT said:
    In the landmark paper “Attention Is All You Need” (Vaswani et al., 2017), the authors introduce the Transformer architecture, which eliminates recurrence and convolutions entirely, relying instead on attention mechanisms to model relationships between tokens in a sequence. Here’s a detailed and operational breakdown of how attention is used in that paper:
    I. Core Concept: Attention
    At its core, attention is a method of computing weighted averages over a set of input vectors, where the weights (or “attention scores”) determine how much each input contributes to the output. These scores are dynamically computed based on learned queries, keys, and values.
    II. Operational Steps of Scaled Dot-Product Attention
    Let’s walk through the standard Scaled Dot-Product Attention, the fundamental operation used throughout the Transformer:
    Given three matrices:
    • Q = Queries (shape: seq_len × d_k)
    • K = Keys (shape: seq_len × d_k)
    • V = Values (shape: seq_len × d_v)
    1. Compute raw attention scores between each query and key using the dot product:
      $\text{score} = QK^T$
    2. Scale the scores to stabilize gradients:
      $\text{score}_{\text{scaled}} = \frac{QK^T}{\sqrt{d_k}}$
    3. Apply softmax to normalize the scores into a probability distribution:
      $\text{weights} = \text{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)$
    4. Compute weighted sum of values using these attention weights:
      $\text{output} = \text{weights} \cdot V$
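
    To make these four steps concrete, here is a minimal NumPy sketch (my own illustration, not code from the paper; the optional mask parameter anticipates the decoder masking discussed in section IV below):

    ```python
    import numpy as np

    def softmax(x, axis=-1):
        # Subtract the row-max before exponentiating for numerical stability.
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def scaled_dot_product_attention(Q, K, V, mask=None):
        """Q, K: (seq_len, d_k); V: (seq_len, d_v). Returns (output, weights)."""
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)               # steps 1-2: dot product, scaled
        if mask is not None:
            scores = np.where(mask, scores, -np.inf)  # masked positions get ~0 weight
        weights = softmax(scores, axis=-1)            # step 3: normalize per query
        return weights @ V, weights                   # step 4: weighted sum of values
    ```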
    III. Multi-Head Attention
    Instead of applying a single attention operation, the Transformer uses multi-head attention to allow the model to jointly attend to information from different representation subspaces.
    1. Project input (same vectors used for Q, K, V) into multiple lower-dimensional spaces (heads).
    2. Perform attention independently in each head.
    3. Concatenate outputs from all heads.
    4. Apply a final linear projection to combine them into a single output.
    Formally:
    $\text{MultiHead}(Q, K, V) = \text{Concat}(\text{head}_1, \ldots, \text{head}_h)\, W^O$
    Where:
    $\text{head}_i = \text{Attention}(Q W_i^Q, K W_i^K, V W_i^V)$
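
    A sketch of multi-head attention in the same style, reusing the function above (the per-head projections Wq, Wk, Wv and the output projection Wo are random placeholders standing in for the learned parameters $W_i^Q$, $W_i^K$, $W_i^V$, $W^O$):

    ```python
    def multi_head_attention(X, Wq, Wk, Wv, Wo):
        """X: (seq_len, d_model); Wq, Wk, Wv: per-head lists of (d_model, d_k)
        projection matrices; Wo: (h * d_k, d_model) output projection."""
        heads = [scaled_dot_product_attention(X @ wq, X @ wk, X @ wv)[0]
                 for wq, wk, wv in zip(Wq, Wk, Wv)]    # steps 1-2: project, attend
        return np.concatenate(heads, axis=-1) @ Wo     # steps 3-4: concat, project

    # Toy usage with random placeholder weights:
    rng = np.random.default_rng(0)
    n, d_model, d_k, h = 5, 16, 4, 4
    X = rng.normal(size=(n, d_model))
    Wq, Wk, Wv = ([rng.normal(size=(d_model, d_k)) for _ in range(h)]
                  for _ in range(3))
    Wo = rng.normal(size=(h * d_k, d_model))
    y = multi_head_attention(X, Wq, Wk, Wv, Wo)        # shape (5, 16)
    ```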
    IV. Where Attention is Used in the Transformer
    There are three main uses of attention in the Transformer architecture:
    1. Encoder Self-Attention:
    • Each word in the input attends to all other words in the input.
    • Allows the encoder to build context-aware embeddings of input tokens.
    2. Decoder Self-Attention:
    • Each output token can only attend to previous tokens (masked attention; see the masking sketch after this list).
    • Enforces autoregressive generation.
    3. Encoder-Decoder Attention:
    • Each position in the decoder attends to all positions in the encoder output.
    • Allows the decoder to incorporate context from the input sequence.
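
    For the decoder self-attention case, the “previous tokens only” rule can be made concrete with a causal mask (continuing the sketch above, with X and the placeholder projections from the previous snippet):

    ```python
    # Lower-triangular mask: position i may attend only to positions j <= i.
    causal_mask = np.tril(np.ones((n, n), dtype=bool))
    out, w = scaled_dot_product_attention(X @ Wq[0], X @ Wk[0], X @ Wv[0],
                                          mask=causal_mask)
    # Each row of w sums to 1 over the allowed positions, and entries above
    # the diagonal are exactly 0, enforcing autoregressive generation.
    ```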
    V. Why “Attention Is All You Need”?
    The novelty was in removing recurrence (RNNs, LSTMs) and convolutions, replacing them with only attention mechanisms, yielding:
    • Greater parallelism during training.
    • Simpler architecture.
    • Better performance on long sequences due to constant path length between any pair of tokens.
    VI. Summary (Causal Chain)
    • Language is sequential → requires modeling dependencies between tokens.
    • Traditional models used recurrence (RNNs/LSTMs), which are slow and hard to parallelize.
    • Attention computes contextual relevance between tokens, regardless of position.
    • Transformer uses only attention, structured hierarchically in layers.
    • This architecture learns deep contextual embeddings for sequences more efficiently.
    Reference:

    (Attention Is All You Need, Vaswani et al., 2017)

    You said:
    what is the neurological equivalent of attention
    ChatGPT said:
    The neurological equivalent of “attention,” in the context of the Transformer model, is selective attention in the brain—particularly as implemented by top-down modulation of sensory processing through cortical and thalamic circuits.
    To make a causal and operational mapping between Transformer attention and neurobiology, we can describe the analogy as follows:
    I. Operational Function of Attention in the Brain
    Attention in the brain refers to the mechanism by which limited cognitive and perceptual resources are directed toward salient or relevant stimuli, enhancing processing of those stimuli while suppressing others.
    Operationally:
    • Attention modulates the gain (signal strength) of neurons representing specific features, locations, or tasks.
    • It does so using top-down signals from higher-order regions (like prefrontal cortex) to modulate lower sensory areas (like V1, V4, or MT).
    • This increases signal-to-noise ratio and enables priority encoding in working memory and decision-making circuits.
    II. Anatomical Structures Involved
    1. Prefrontal Cortex (PFC):
    • Directs attention based on task goals and internal models.
    • Acts like the “query” vector in Transformer attention—defining what we’re looking for.
    2. Posterior Parietal Cortex (PPC):
    • Integrates sensory inputs and supports spatial attention.
    • Acts like the “key”—encoding locations or stimuli that can be attended to.
    3. Sensory Cortices (e.g., V1, V4, A1):
    • Provide “value” information—raw perceptual content.
    4. Thalamus (especially the pulvinar nucleus):
    • Acts as a gatekeeper, regulating which sensory signals reach the cortex and are prioritized.
    • May be functionally analogous to the softmax mechanism, filtering what passes through.
    III. Functional Mapping to Transformer Attention
    Putting the anatomy above into the paper’s terms:
    • Query (Q) ↔ PFC task goals and internal models (what we’re looking for).
    • Key (K) ↔ PPC encodings of attendable locations and stimuli.
    • Value (V) ↔ raw perceptual content in the sensory cortices.
    • Softmax weighting ↔ thalamic (pulvinar) gating of which signals pass through.
    • Weighted output ↔ gain-modulated firing of the selected representations.
    IV. Neuromodulators Involved
    • Acetylcholine: Enhances signal precision in primary sensory cortices; sharpens attention.
    • Norepinephrine: Increases alertness and arousal; modulates responsiveness.
    • Dopamine: Modulates salience and reward prediction, often influencing which stimuli gain attention.
    These can be thought of as learning-rate adjusters or gating variables, altering the functional connectivity of query–key–value pathways.
    V. Summary: Operational and Causal Mapping
    Attention in Transformers:
    • Computes contextual relevance between tokens.
    • Uses Q–K–V triplets to determine which tokens matter.
    • Dynamically weights and aggregates representations.
    Attention in the Brain:
    • Computes contextual salience of stimuli.
    • Uses PFC/PPC to direct attention to relevant sensory inputs.
    • Dynamically modulates neural firing and connectivity to enhance relevant information.
    Both systems:
    • Use target-driven modulation (task or prompt).
    • Rely on contextual comparison to filter and weight input.
    • Are resource-limited, optimizing processing by allocating computation efficiently.
    Let me know if you’d like the mapping extended to include memory, recurrency, or valence-driven attention. I can also provide speculative models for mapping multi-head attention to distributed parallel attentional streams in the brain (e.g. dorsal vs ventral, foveal vs peripheral, etc.).
    You said:
    were the authors of the original paper aware of the neurological equivalencies or did they come to the conclusion of using attention by some other means?
    ChatGPT said:
    The authors of “Attention Is All You Need” (Vaswani et al., 2017) did not reference or appear to be directly motivated by neurological equivalence in designing the Transformer. Instead, their decision to use attention exclusively was driven by practical and architectural motivations, not neurobiological inspiration.
    I. What Motivated Their Use of Attention?
    The authors were primarily addressing computational limitations of prior sequence models:
    1. Limitations of Recurrent Neural Networks (RNNs):
    • Difficult to parallelize during training (sequential dependencies).
    • Struggled with long-range dependencies due to vanishing gradients.
    2. Success of Attention Mechanisms in Seq2Seq Models:
    • Prior work (e.g., Bahdanau et al., 2015) added attention on top of RNNs, showing significant performance gains.
    • Attention enabled the decoder to dynamically “look back” over input tokens—this proved both more effective and interpretable.
    3. Hypothesis:
      If attention is so powerful on top of RNNs,
      why not remove recurrence entirely and use attention everywhere?
    Thus, the Transformer design came from:
    • Ablation of recurrence (for speed and parallelism).
    • Operational generalization of attention (multi-head + self-attention + encoder-decoder bridging).
    II. Did They Reference Neuroscience?
    No. The paper:
    • Makes no reference to neuroscience, cognitive science, or biological attention.
    • Framed attention purely in mathematical and engineering terms.
    • Focuses on efficiency, scalability, and empirical performance, not on brain-like architecture.
    The citations and motivations stem from:
    • Neural machine translation,
    • Seq2seq modeling,
    • Positional encoding for sequence order (since recurrence was removed),
    • And multi-head attention to increase representational capacity.
    III. Retrospective Observations
    While the Transformer’s attention mechanism has clear functional analogs in cognitive neuroscience (as discussed earlier), these were not known drivers for its creation. In fact:
    • The design is an example of convergent evolution:
      → Both biological and artificial systems independently discovered
      context-sensitive weighting of inputs as a superior solution to sparse, serial processing.
    • The authors were solving for parallelization, long-range dependency handling, and modularity, not cognitive plausibility.
    IV. Later Cross-Disciplinary Work
    After 2017, numerous researchers have drawn connections between attention in Transformers and biological attention, including:
    • Comparing multi-head attention to distributed attention systems (e.g., dorsal/ventral streams).
    • Mapping attention layers to cortical hierarchies.
    • Investigating shared properties like sparsity, locality, and top-down modulation.
    But this cross-pollination occurred after the fact, not during the design of the Transformer.
    Summary: Causal Chain
    • Problem: RNNs were inefficient and struggled with context.
    • Prior Success: Attention boosted RNN performance.
    • Solution: Eliminate recurrence entirely—rely solely on attention.
    • Result: Transformer—empirically superior, parallelizable, general-purpose.
    • Neurobiological similarity: Emerged post hoc as an interesting equivalency, not an original design goal.


    Source date (UTC): 2025-07-25 19:06:04 UTC

    Original post: https://x.com/i/articles/1948822073243304011

  • Interesting and legitimate take – in that representation is generalized in the

    Interesting and legitimate take – in that representation is generalized in the brain as physical relations. This isn’t what Plato meant – it is what he should have meant. But a forgiving interpretation of him is as warranted as one of Aristotle. They were headed in the right direction even if not precisely correct.

    LLMs use words as measures and develop generalized concepts. Brains build from sense experience and disambiguate into referential precision with words. This means we should, and do, see convergence in LLMs and brains.

    So the Platonic realm is a deterministic production of neural representation rather than extant independently of it. This is Plato’s ‘mistake’. But his point, in general, was otherwise correct.

    You are very smart, so I could go into depth with you on this concept, and it would be an interesting conversation.


    Source date (UTC): 2025-07-25 17:48:36 UTC

    Original post: https://twitter.com/i/web/status/1948802577854070842

  • Depends on duration of windows of astronomic, geological, ecological, climatolog

    Depends on duration of windows of astronomic, geological, ecological, climatological opportunity.


    Source date (UTC): 2025-07-25 01:41:07 UTC

    Original post: https://twitter.com/i/web/status/1948559104982872568