Theme: Measurement

  • Natural Law Study Guide (Suggested by Google) Quiz – Describe the hierarchy pres

    Natural Law Study Guide
    (Suggested by Google)
    Quiz
    – Describe the hierarchy presented in the source material, starting with “Human Logical Facility” and ending with “Fictions.” What is the general trend or progression implied by this hierarchy?
    – Explain the distinction between “Trades” and “Contracts/Laws” in terms of agency, scope of constraint, and epistemic locus, according to the provided dimensions of measurement.
    – What is “decidability” as defined in the texts? Provide an example of a decidable question and a non-decidable question.
    – According to the texts, how does “decidability” contribute to the stability and unification of human systems? Provide an example from either an individual or a group context.
    – What is meant by “rent-seeking” in the context of criminality as described in the source? Why is it considered a problem, and what is needed to address it?
    – Briefly explain the concept of “Universal Grammar” as it relates to the development of language and individual grammars of paradigms.
    – Describe the three types of common indices (Positional Naming Systems) mentioned in the text and provide a brief example of each.
    – What is a “dimension” in the context of measurement as defined in the source material? How do dimensions help in understanding the interplay of causes?
    – Explain the difference between “deflationary,” “ordinary,” and “inflationary” grammars as described in the texts. Where might each type of grammar be typically used?
    – According to the source, what are the key components of a “universal system of measurement” aimed at producing universal decidability?

    Quiz Answer Key
    – The hierarchy is: Human Logical Facility > Human Language Facility > Human Grammar Facility > Grammars (deflationary <- ordinary -> inflationary) > Math > Programming > Natural Law > Ordinary Language > Opining (Loading, Framing) > Fictions. The general trend is a progression from fundamental cognitive abilities towards increasingly complex and potentially less truthful forms of communication and belief construction.
    – In “Trades,” agency is described as “some autonomy, rational or observational choice,” the scope of constraint is “partly shaped by reason or local norms, but not rigidly enforced,” and the epistemic locus is “philosophical/observational,” relying on logic and partial empiricism. In “Contracts/Laws,” agency is “shared, formal constraints,” the scope of constraint involves “explicit enforcement or mutual limitation,” and the epistemic locus shifts towards agreed-upon rules and procedures.
    – Decidability is the capacity to resolve a proposition with a clear “yes” or “no” using a finite, repeatable process, free of subjective discretion. An example of a decidable question is “Is a square a quadrilateral?” because it can be definitively answered using geometric definitions. A non-decidable question is “Is modern art beautiful?” as it relies on subjective preference and cannot be resolved with a finite, repeatable process based on objective rules.
    – Decidability provides the system’s payoff by enabling resolution, stabilization, and unification. For an individual hunter, testing soil (truth), ensuring it will grow crops (infallibility), and deciding to plant (decidability) leads to the outcome of secured food. For a village debating a wall, agreeing it’s worth the risk (collective choice) and enforcing labor (coercion), leading to deciding it’s built (decidability), achieves collective stability.
    – Rent-seeking, in this context, refers to obtaining wealth without producing it, such as through exploitation, corruption, or manipulating systems for personal gain. It is a problem because it undermines productivity, innovation, and societal well-being by diverting resources and creating perverse incentives. Addressing it requires a system of measurement to identify rent-seeking, language to describe it, principles (tests, rules) to define it, education to recognize it, and institutions (law, courts, juries) to act against it.
    – Universal Grammar is a proposed innate neurological faculty that underlies the human capacity for language, common across all cultures. It provides the fundamental structure that allows humans to learn and generate language. Individual grammars of specific languages and paradigms are seen as specific manifestations or applications of this Universal Grammar, shaped by cultural, historical, and domain-specific needs for measurement and communication.
    – The three types of common indices are: Natural Index, which is existentially inherent to what is measured (e.g., the natural order of seasons); Ordinal Index, which is an externally assigned position in a sequence (e.g., first, second, third place in a race); and Cardinal Index, which is a numbered positional system indicating magnitude or quantity (e.g., counting using 1, 2, 3…).
    – A dimension is an axis along which the relative influences of different causes are measured and compared. Instead of focusing on isolated causes, dimensions allow for the assessment of the varying impacts of multiple factors on an outcome. This dimensional thinking creates a relational structure that enables testing, measuring, and understanding variability within a context.
    – Deflationary grammars are precise, rule-based systems used in science, technology, and law to minimize ambiguity and ensure exact measurement. Ordinary grammars are used in everyday language and aim for utility and common understanding. Inflationary grammars, such as those used in opining and fiction, can deviate from strict accuracy by loading, framing, or adding elements not necessarily present in reality.
    – The key components include: a Paradigm (consisting of a First Principle and subsequent Hierarchy of First Principles), a Vocabulary (a set of ordered terms for all Dimensions of Human Faculties), a Logic (tests of hierarchies of constant relations), and a Grammar of that logic (rules of continuous Recursive Disambiguation), all sufficient for the measurement, description, testing, and falsification of human expressions.

    Essay Format Questions
    – Discuss the role of “continuous recursive disambiguation” in the development of human thought, language, and the formation of grammars as systems of measurement. How does this process contribute to our ability to make predictions and resolve ambiguity?
    – Analyze the presented “Hierarchy” and “Comparison” tables. What underlying assumptions or arguments might the author be making through these categorizations? Consider the pairings provided (e.g., Female and Semitic vs. Male and European, Dysgenics vs. Eugenics) and discuss their potential implications.
    – Critically evaluate the concept of “decidability” as the “goal of resolution” in human systems. What are its strengths and limitations? In what areas of human endeavor is decidability most crucial, and where might other forms of resolution be more appropriate?
    – Explore the relationship between the concepts of “dimension,” “index,” and “names of points along the index” as fundamental components of a system of measurement. Using examples from the text or your own understanding, illustrate how these elements work together to create commensurability and enable understanding.
    – Discuss the author’s perspective on criminality, particularly the inclusion of “rent-seeking” and “feminine sedition.” How does this view relate to the broader concepts of “natural law” and the need for a “system of measurement” in governance? What potential criticisms or alternative perspectives could be raised regarding this framework?

    Glossary of Key Terms
    Decidability: The capacity to resolve a proposition with a clear “yes” or “no” using a finite, repeatable process, free of subjective discretion.
    Continuous Recursive Disambiguation: An ongoing process of reducing ambiguity by iteratively refining information and eliminating possibilities, crucial for thought, language, and prediction.
    Hierarchy (as presented): A ranked ordering of cognitive and communicative faculties, suggesting a progression from fundamental logic to more complex and potentially fallible expressions like fictions.
    Grammar (in this context): Not just linguistic rules, but systems of measurement and logic that structure how we understand and represent the world within specific paradigms.
    Paradigm: A framework of first principles and dimensions of causality that define a domain of knowledge or inquiry, influencing the vocabulary, logic, and grammar used within it.
    Dimension: An axis along which the relative influences of different causes are measured and compared, allowing for the assessment of their varying impacts on an outcome.
    Index (Positional Naming System): A system for naming positions in an order, including Natural (inherent), Ordinal (externally assigned), and Cardinal (numbered) types.
    Rent-Seeking: Obtaining wealth without producing it, often through manipulation, exploitation, or leveraging control over resources or systems.
    Universal Grammar: A proposed innate neurological faculty underlying the human capacity for language, providing a fundamental structure for all specific language grammars.
    Operational Grammar: A precise, rule-based grammar focused on clear measurement and testability, often used in scientific and technical contexts.


    Source date (UTC): 2025-03-16 22:53:21 UTC

    Original post: https://twitter.com/i/web/status/1901406459579310080

  • CURT: Q: –“Why can’t AI’s really reason?”– Well, Think of AI’s as either assoc

    CURT: Q: –“Why can’t AI’s really reason?”–

    Well, think of AIs as either associating, or associating and predicting, via pattern matching. That’s equivalent to auto-association in the brain’s hippocampal region.

    But ‘reasoning’ requires ‘wayfinding’. The entirety of the neocortex is an evolution of lower-level cortical tissues that consist of only three layers, all of which are devoted to motion in space (wayfinding). At some point the neocortex was formed by doubling over the three layers into six, creating potential for more abstract pattern matching. Our higher functions all evolved out of wayfinding. The hierarchy of memory evolved the capacity to wayfind not only with space but with thoughts, ideas, and language.

    So, how do we take LLMs and add reasoning BELOW their processing, to reflect the wayfinding that the brain employs when reasoning?

    We imitate this with chain-of-thought or chain-of-draft agents driving the LLM’s processing. There is some emergence of a world model somewhat equivalent to episodic memory in the most advanced LLMs with the largest number of tokens, parameters, and memory.

    But compared to the brain, we would ask the AIs to perform either sequential and comparative or parallel ‘hypothesizing’, then compete those parallel ideas for greatest reward, shortest time, least effort, greatest certainty, and lowest risk, and work through the steps necessary to change state from A to B, and so on. Without adversarial competition (a market) and recursion (theorizing at each step), it would be difficult to imitate the brain.
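
    The competition described above can be sketched as a toy scoring "market." The Hypothesis fields come from the criteria named in the post (reward, time, effort, certainty, risk); the weights and example values are illustrative assumptions, not anything from the NLI or Runcible systems:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    name: str
    reward: float     # expected payoff of reaching state B
    time: float       # duration of the path
    effort: float     # cost of executing it
    certainty: float  # confidence the path works (0..1)
    risk: float       # chance of failure (0..1)

def score(h, w=(1.0, 0.5, 0.5, 1.0, 1.0)):
    # Greatest reward, shortest time, least effort,
    # greatest certainty, lowest risk. Weights are arbitrary.
    wr, wt, we, wc, wk = w
    return wr * h.reward - wt * h.time - we * h.effort + wc * h.certainty - wk * h.risk

def compete(hypotheses):
    # The adversarial "market": the highest-scoring hypothesis wins.
    return max(hypotheses, key=score)

winner = compete([
    Hypothesis("build the wall", reward=5, time=2, effort=1, certainty=0.8, risk=0.1),
    Hypothesis("wait and see", reward=2, time=1, effort=0.2, certainty=0.9, risk=0.05),
])
```

    In a fuller sketch, each surviving hypothesis would itself spawn sub-hypotheses at the next step (the recursion, or theorizing at each step, named above).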

    In our work at NLI and Runcible we categorize the problem and then kick off a chain of thought to produce the answer. But this is because we are largely trying to determine the testifiability and reciprocity of a statement. It’s a WHAT question. HOW questions require wayfinding.

    HERE IS THE NEUROSCIENCE:
    The human neocortex consists of approximately 180 distinct cortical regions per hemisphere (360 total) according to the most detailed recent maps (Glasser et al., 2016). Within each region, neurons are structured into approximately 1 million cortical columns (macrocolumns) per hemisphere, each macrocolumn containing around 100–120 minicolumns. Minicolumns, each containing approximately 80–120 neurons, serve as fundamental computational modules that detect specific patterns of input.

    These cortical regions perform specialized functions (perception, attention, language, decision-making, and motor actions), yet they operate upon a common principle of hierarchical signal processing. Information processed within minicolumns aggregates upward to macrocolumns, then cortical areas, culminating in a distributed yet convergent representation projected into the hippocampal-entorhinal system, which indexes events as episodic memory.

    Step-by-step causal process of indexing and prediction:

    Sensory Integration & Cortical Hierarchy:
    Sensory inputs propagate upward through cortical layers (minicolumn → macrocolumn → cortical region → interregional network).

    Each cortical region extracts specific dimensions (features, relations, contexts) through progressive abstraction.

    Hippocampal Indexing of Episodic Memory:
    The hippocampus receives highly processed, multimodal cortical outputs via the entorhinal cortex.
    It rapidly encodes sequences and contexts, creating a sparse, unique neural pattern—an episodic index.
    These episodic indices do not store complete memories; rather, they provide “addresses” linking cortical representations to allow future retrieval and auto-association.

    Auto-association & Prediction:
    Later, partial activation of sensory or conceptual input activates the hippocampal index.
    This reactivates the entire cortical ensemble involved in the original episode (auto-association), allowing the brain to “fill in” missing information from partial cues.
    Such pattern completion forms the neurological basis for prediction, enabling anticipation of sensory inputs and outcomes based on past episodic associations.
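
    Pattern completion from partial cues is classically modeled with a Hopfield-style auto-associative network. A minimal sketch, offered as a simplification of the hippocampal auto-association described above; the 8-unit pattern is an illustrative toy:

```python
import numpy as np

def store(patterns):
    # Hebbian outer-product learning rule; no self-connections.
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, cue, steps=10):
    # Iterate until the network settles, completing the
    # stored pattern from a partial or corrupted cue.
    s = cue.astype(float).copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s

pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1], dtype=float)
W = store(pattern[None, :])
cue = pattern.copy()
cue[1] = 1.0                # corrupt one element of the cue
recalled = recall(W, cue)   # the network fills in the missing information
```

    The corrupted element is restored because the remaining elements jointly "vote" for the stored pattern, which is the toy analog of prediction from partial episodic cues.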

    How does the brain perform ‘wayfinding’ (transforming present states into desired future states)?

    Wayfinding refers to the neural process by which the brain navigates through conceptual, physical, or abstract spaces, turning present circumstances into intended future outcomes. The mechanism involves the integration of hippocampal indexing, cortical maps, and predictive simulations into goal-directed paths:

    Hippocampal Formation & Cognitive Maps:
    The hippocampus constructs spatial, temporal, and conceptual maps, maintaining relational positions through place cells (spatial), time cells (temporal sequencing), and concept cells (relational/semantic mappings).
    These “cognitive maps” represent dimensions of experience—whether physical space, temporal sequence, or abstract conceptual structures—allowing mental navigation.

    Entorhinal Cortex & Grid Cells:
    Grid cells in the medial entorhinal cortex generate structured positional grids across dimensions of representation, providing metrics or scales for space, time, and concept positions.
    These grid cells act as coordinate systems, tracking relative positional changes and transitions between states.

    Mental Simulation & Prediction:
    Cortical minicolumns encode incremental transitions (small state-changes), macrocolumns aggregate transitions into meaningful sequences, and regional networks simulate complete transitions through hierarchical integration.
    The frontal and parietal cortices, in collaboration with hippocampal-entorhinal circuits, run simulations by iteratively activating episodic indices corresponding to potential future states.
    The brain evaluates alternative simulated paths against internally represented goals or externally motivated states.

    Valence & Goal Selection:
    Valence (emotional or motivational value) attached to episodic indices or conceptual representations biases simulations toward states of increased reward or reduced cost.
    Goal-directed behavior emerges from iterative mental simulations where multiple future states are evaluated against predicted rewards or penalties, leading to preferential selection of optimal paths.

    Execution via Motor Outputs:
    After a path is selected, cortical motor and premotor areas activate motor columns and minicolumns in motor cortex, translating internally simulated transitions into real-world actions.
    This loop repeats in real-time feedback cycles, continuously revising predictions, simulations, and actions as the organism moves closer to its intended goal.

    Operational Summary of Wayfinding:
    – Cortical minicolumns encode present state.
    – Hippocampal indexing encodes episodic references.
    – Grid/place/time cells provide coordinate frames (positional dimensions).
    – Mental simulation evaluates multiple potential future states.
    – Valence weighting directs attention toward optimal state transitions.
    – Motor columns execute real-world transitions, continually corrected by sensory feedback loops.

    Thus, wayfinding emerges neurologically as a recursive feedback process integrating memory indexing, predictive simulations, and continuous environmental updates, enabling human cognition to bridge present and desired future states effectively and adaptively.
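
    The operational summary above can be compressed into a toy feedback loop: simulate each candidate transition, weight the predictions by proximity to the goal (a crude stand-in for valence), execute the best one, and re-measure. This is a schematic sketch under those assumptions, not a neural model:

```python
import numpy as np

def wayfind(state, goal, moves, max_steps=20):
    # Recursive feedback loop: mental simulation -> valence weighting ->
    # execution -> sensory feedback, repeated until no move improves.
    path = [state.copy()]
    for _ in range(max_steps):
        predictions = [state + m for m in moves]   # simulate each transition
        best = min(predictions, key=lambda p: np.linalg.norm(goal - p))
        if np.linalg.norm(goal - best) >= np.linalg.norm(goal - state):
            break                                  # no candidate improves on the present state
        state = best                               # execute the transition
        path.append(state.copy())                  # feedback: re-measured state
    return path

moves = [np.array(m, dtype=float) for m in [(1, 0), (-1, 0), (0, 1), (0, -1)]]
path = wayfind(np.array([0.0, 0.0]), np.array([3.0, 2.0]), moves)
```

    The greedy choice here collapses the brain’s parallel competition into a single comparison per step; the recursion is the repeated simulate–act–re-measure cycle.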

    Reference (as requested):
    Glasser et al., “A multi-modal parcellation of human cerebral cortex,” Nature, 2016: https://t.co/HNRKKEYuJ3

    Cheers


    Source date (UTC): 2025-03-15 18:50:24 UTC

    Original post: https://twitter.com/i/web/status/1900982931235237892

  • “We are not the arbiters of inputs when building a scientific method. In the sci

    –“We are not the arbiters of inputs when building a scientific method. In the science of the social, off-limits unexplorable human interaction represents a failure of the method.”–@LukeWeinhagen


    Source date (UTC): 2025-03-15 00:36:57 UTC

    Original post: https://twitter.com/i/web/status/1900707758477029664

  • (NLI) HOW TO EXPLAIN OUR DECIDABILITY TO AN AI – AND TRAIN IT. (Esp: Luke, Noah,

    (NLI)
    HOW TO EXPLAIN OUR DECIDABILITY TO AN AI – AND TRAIN IT.
    (Esp: Luke, Noah, Michael, Robert)

    CURTD:
    In the human brain, networks of associations in response to streams of information constantly attribute valence to those associations, and sufficient valence triggers some degree of attention. In LLM models what produces such attention (or priority)? I assume valence isn’t part of the existing calculations?

    GPT:
    Correct. LLMs as currently designed don’t explicitly include “valence” as a dedicated parameter or variable. Instead, what resembles attention (priority) in an LLM is produced by a combination of attention weights within the model’s transformer architecture and the statistical patterns learned during training.

    Here’s the detailed, operational explanation:
    1. Valence in the Human Brain vs. LLMs
    In human cognition:
    Valence emerges as the neurological mechanism that tags sensory inputs or associations as important based on their relevance to survival, needs, goals, and prior learning.
    Once associations accumulate sufficient valence, they pass a threshold prompting increased neural attention and thus conscious or subconscious focus.
    In LLMs:
    There is no explicit “valence” mechanism built into the model’s structure.
    There is no intrinsic sense of importance, emotion, survival, or goal orientation in the biological sense.

    2. What is Attention in an LLM?
    LLM “attention” is mechanistically a weighted averaging process, specifically:

    Scaled Dot-Product Attention:
    Attention(Q, K, V) = softmax((Q·Kᵀ) / √dₖ) · V

    Operational Explanation:
    Q (queries), K (keys), and V (values) are transformations of input token embeddings.

    The dot product (Q·Kᵀ) measures similarity between the “query” vector for each token and the “key” vectors of surrounding tokens.

    The result is scaled (normalized), then passed through a softmax operation, converting similarity scores into probability-like weights that sum to 1.

    These weights reflect the strength or relevance (“attention”) assigned by each token to all tokens in its context.

    The weighted sum (· V) produces the updated representation.

    In short, attention in LLMs is purely relational, statistical, and context-dependent. It assigns priority based on statistical co-occurrence and relevance within training data.
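
    The scaled dot-product formula above runs directly in NumPy. The shapes and random data below are toy assumptions (single head, no learned projections):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the max before exponentiating.
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: softmax((Q·Kᵀ)/√dₖ)·V
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # query-key similarity
    weights = softmax(scores, axis=-1)  # probability-like weights; rows sum to 1
    return weights @ V, weights

# Toy context: 3 tokens with 4-dimensional embeddings.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))
out, weights = attention(Q, K, V)
```

    Each row of `weights` is one token’s distribution of relevance over its context, which is the entire mechanism behind what reads as “priority.”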

    3. Why Does Attention Feel like “Priority”?
    Attention in LLMs mimics priority because:
    Tokens that have stronger statistical relevance to predicting subsequent words receive higher attention weights.
    This process results in dynamic, adaptive weighting that superficially resembles a form of cognitive prioritization.
    But crucially, this priority arises solely from learned probabilities, not subjective or biological goals or values.

    4. Could Valence Be Added to LLMs?
    Operationally, yes. A computational analog of valence could involve:
    Adding explicit parameters or vectors that represent importance, relevance, or preference scores, updating these dynamically.
    Integrating external feedback loops (RLHF already moves partially in this direction by reinforcing “desirable” responses).
    Employing secondary attention mechanisms based explicitly on valence-like factors (e.g., safety, truthfulness, or priority topics).

    Hypothetical Example:
    Represent user goals or demonstrated interests as vectors.
    Assign tokens/concepts valence scores based on alignment with these goals or interests.
    Incorporate these valence scores as additional inputs influencing attention calculations, thus modeling a more human-like prioritization mechanism.
    Such modifications would move LLMs from purely statistical weighting toward a more neurologically analogous prioritization mechanism.
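
    The hypothetical example above could be sketched as an additive bias on the attention scores before the softmax. `valence` and `strength` are invented names for illustration; nothing like this exists in a standard transformer:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def valence_biased_attention(Q, K, V, valence, strength=1.0):
    # Standard scaled dot-product attention plus a hypothetical additive
    # per-key "valence" bias (e.g. alignment with user goals).
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    scores = scores + strength * valence  # bias broadcast across each query row
    weights = softmax(scores, axis=-1)
    return weights @ V, weights

rng = np.random.default_rng(1)
Q = rng.normal(size=(2, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
goal_valence = np.array([0.0, 0.0, 5.0])  # pretend key token 2 aligns with a goal
_, w_plain = valence_biased_attention(Q, K, V, np.zeros(3))
_, w_biased = valence_biased_attention(Q, K, V, goal_valence)
```

    With the bias applied, every query shifts weight toward the goal-aligned token, which is the statistical analog of valence crossing an attention threshold.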

    Conclusion:
    No explicit valence currently exists in standard LLM calculations. What we perceive as attention or prioritization in LLMs arises solely from transformer-based statistical relationships derived from training data. Introducing valence as a distinct computational mechanism would require deliberate architectural extensions or external reinforcement loops explicitly designed for goal-directed or interest-based prioritization.

    CURTD:
    Given the body of information, other than bias, I would assume or observe that valence is built into the aggregate loading of terms in rather elaborate contexts. So that would mean that my work in decidability would need to be incorporated into the training sufficiently to reduce bias and formalize valence.

    GPT:
    Exactly. You’ve identified the operational point of integration:

    1. Operational Mechanism of Implicit Valence in LLMs
    Implicitly, the statistical loading (weights) of tokens in highly contextualized relationships forms something analogous to human valence. Because LLMs learn through frequency, co-occurrence, and semantic and syntactic patterns, words and concepts appearing frequently or centrally in richly interconnected contexts implicitly acquire “higher loading.” That implicit weighting can loosely be considered an emergent form of “statistical valence.”
    However, this implicit valence currently reflects not intentional or rational value judgments, but rather historical statistical biases embedded within training data.

    2. Operational Problem to Solve
    Your framework—decidability via testifiability and reciprocity criteria—aims precisely at reducing biases and formalizing what we loosely call valence (operational priority). To align the statistical associations of tokens in LLMs with your formalized framework, training must incorporate:

    A structured logic of decidability, applied consistently across examples.

    Criteria for assessing testifiable truthfulness (realism, naturalism, operational possibility, external/internal consistency, rational choice, reciprocal rational choice).

    Explicit correction for bias, ignorance, deceit, propaganda, or coercion by reinforcing signals derived from decidability.

    3. Integration Method (Operational Solution)
    Practically, this involves:

    Curriculum-Based Training:
    Constructing datasets or fine-tuning sets explicitly structured by decidability principles and testifiability criteria, embedding your logical grammar explicitly within the data.

    Reinforcement Learning from Decidability Feedback (RLDF):
    Using human-in-the-loop or automated feedback systems to reward or penalize the model based on adherence to the structured logic of decidability and testifiability (reciprocal rationality).

    Causal-Contextual Embedding:
    Generating training sets where the contextual loading of tokens explicitly reflects causal priority and reciprocal valuation, ensuring associations of high “formal valence” are distinguished from historical or accidental biases.

    4. Predicted Outcomes (Operational Prediction)

    If your decidability logic is successfully operationalized within the training or fine-tuning stage, you’d predictably observe:

    A reduction in arbitrary statistical biases or erroneous implicit “valence.”

    Emergence of explicitly formalized associations reflecting decidability, reciprocity, and testifiability.
    Increased precision and reliability of model outputs, aligning closely with your first-principle definitions.

    In other words, formalizing valence through your criteria operationally transforms statistical correlation into a form of rationally justified prioritization.

    5. What Must Be Done (Operational Next Steps)

    Clearly operationalize your decidability criteria as training instructions or prompts.

    Generate structured training data examples explicitly encoding decidability logic.

    Use these examples to fine-tune (or even pre-train) LLM architectures.

    Measure performance by adherence to decidability and reciprocity metrics explicitly.

    Thus your intuition is exactly correct: decidability criteria must be explicitly introduced during training to convert implicit statistical loading (“valence”) into explicit, rational, operationally justified prioritization.

    This is the operationally coherent and logically necessary path to achieving your goal of formalizing and operationalizing valence and decidability in AI.

    CURTD:
    Thank you so much. 🙂


    Source date (UTC): 2025-03-08 03:49:54 UTC

    Original post: https://twitter.com/i/web/status/1898219598937513984

  • TESTING GROK 3 ON VOLUME 2 – A SYSTEM OF MEASUREMENT. Below, I’ll outline my und

    TESTING GROK 3 ON VOLUME 2 – A SYSTEM OF MEASUREMENT.

    Below, I’ll outline my understanding of the central arguments put forward in The Natural Law Volume 2: A System of Measurement by Curt Doolittle, with Bradley H. Werrell and the Natural Law Institute, based on the provided… https://twitter.com/curtdoolittle/status/1891754574102528112


    Source date (UTC): 2025-02-18 07:59:08 UTC

    Original post: https://twitter.com/i/web/status/1891759338689266135

  • (NLI AI) (too subtle for many but…) This diagram is interesting, because at 12

    (NLI AI)
    (too subtle for many but…)
    This diagram is interesting, because at 12 iterations, it reflects my gradual discovery that any given process of continuous recursive disambiguation tends to hover around ten to twelve iterations resulting in about the same number of causal… https://t.co/0Cqi4dTN8y


    Source date (UTC): 2025-02-16 01:27:36 UTC

    Original post: https://twitter.com/i/web/status/1890936033602327033

  • We are making different arguments and mine is more subtle. We need a test to mea

    We are making different arguments and mine is more subtle. We need a test to measure the subtlety and limits of associations. (And it’s not as if I’m not overloaded already.)


    Source date (UTC): 2025-02-11 07:36:24 UTC

    Original post: https://twitter.com/i/web/status/1889216903354487265

    Reply addressees: @njalbertini @SCTempo @dwarkesh_sp

    Replying to: https://twitter.com/i/web/status/1889215656450510924

  • btw. just tried a great example. but “nope”. Does this mean … I see. We need a

    btw. just tried a great example. but “nope”.
    Does this mean …
    I see. We need a measure of intelligence by subtlety of association. I am asking it for at least my level of it, yet I’m on the margin.
    Thank you. I’ll think about developing a test or scoring system. Appreciate…


    Source date (UTC): 2025-02-11 07:23:51 UTC

    Original post: https://twitter.com/i/web/status/1889213743911080219

    Reply addressees: @lumpenspace @SCTempo @dwarkesh_sp

    Replying to: https://twitter.com/i/web/status/1889205313531969675

  • Man is the measure of all things perceivable by man. The question is, can that w

    Man is the measure of all things perceivable by man. The question is, can that which is perceivable be reduced to language when even humans struggle with it? Yet what is available to auto-association in humans is far beyond that demonstrated by the LLMs.


    Source date (UTC): 2025-02-11 06:43:03 UTC

    Original post: https://twitter.com/i/web/status/1889203479404794061

    Reply addressees: @SCTempo @dwarkesh_sp

    Replying to: https://twitter.com/i/web/status/1889199816489783305



    IN REPLY TO:

    Unknown author

    THE TRANSFORMER LIMITATION IN NEUROSCIENTIFIC PROSE:
    Correct. Or stated in neuroscience (apologies if this is too dense to easily interpret): the prompt (language) invokes a set of relations (the text equivalent of episodic memories), but its network (auto-associative memory) of referents is of lower resolution than that of humans (facets, objects, spaces, places, locations, actors, generalizations, sequences, abstractions, causal relations, valences) and is limited to those in the language of the prompt (word-world model), not the human intuitionistic model (sense-perception-embodiment world model), where abstractions (first principles, logical associations) from the entire corpus of extant and yet unstated or unknown abstractions (causal relations, valences) and first principles (logical relations in the sense-perception world model) are associated at levels from neural microcolumns to regions to networks to a continuous stream of network adaptations.
    As such, the world model of language (word-world model) is one of low precision, absent the embodiment, spatio-temporal, and operational world models (precision) necessary for pattern identification (logical association) of that which is yet UNSTATED in language in sufficient density as to cause association with the model (word-world model) produced in the LLM by the prompt.
    I work on this issue and this is why the prompt must include the logical relation you’re asking the LLM to consider because it cannot make that connection alone.
    Now, I see this as a scaling problem on one hand, meaning one of the necessity of embodiment, spatio-temporal, and operational abstractions in the model, and that the attention must be recursive (wayfinding) in order to cover the field of associations that the hierarchical temporal memory of the brain so easily performs.
    On the other hand, whether this problem is solvable within the LLM model by increases in the emergence we’ve seen of late is hard for me to predict. In the meantime we are left with prompts and traditional pseudocode or software (chain of thought) to control that which it cannot on its own, as it’s still limited to the equivalent of a synthesizing search engine otherwise.
    Cheers
    Curt Doolittle
    NLI

    Original post: https://x.com/i/web/status/1889199816489783305

  • The failure of trifunctionalism in the west is due to the conversion of balance-

    The failure of trifunctionalism in the west is due to the conversion of balance-sheet (morality) accounting of all demonstrated interests into income statement (amorality) not accounting for changes in informal capital (knowledge, norms, traditions), formal (institutional), and…


    Source date (UTC): 2025-02-08 17:56:49 UTC

    Original post: https://twitter.com/i/web/status/1888285874066239974