RE: Runcible Intelligence
—“Truth is the ultimate disrupter.”— Dr Brad.
(He says this with a grin.)
Source date (UTC): 2025-09-05 00:08:08 UTC
Original post: https://twitter.com/i/web/status/1963755992682041652
EXAMPLE: “THE MEANING OF MEANING”
This is the result of querying our AI for the truth of a statement. (Also note the causal chaining.)
In our operational context, “meaning” is defined as the demonstrated relationship between referent, action, and consequence, reducible to a measurable change in state within an agent or system. This definition grounds meaning in testifiable experience, rather than abstraction or intuition.
Operational Definition of ‘Meaning’ in Natural Law Framework
1. ⟦Claim⟧:
“Meaning” is the operational coherence between a signal (word, symbol, behavior), its referent (the thing pointed to), and the change it causes in an actor’s predictions, behavior, or state.
2. Test: Demonstrated Interests
Meaning arises from the need to disambiguate reality for the purposes of acquisition, retention, or conversion. If there is no demonstrated interest in distinguishing between signals or states, there is no incentive to extract meaning.
3. Test: Reciprocity
Communication implies cooperation: both speaker and listener must interpret signals symmetrically. If one encodes meaning with intent to deceive or obfuscate, the communication violates reciprocity.
4. Test: Testifiability
Meaning must be testifiable: either through direct sensory experience, repeatable observation, or falsification. If a claimed meaning cannot be mapped to a difference in state (mental, emotional, physical), it fails.
5. Test: Decidability
A meaning is decidable if two or more parties can converge on its interpretation under identical operational conditions. If the meaning requires discretion, belief, or interpretation without operational content, it is undecidable.
6. Causal Chain
Stimulus (signal) → Neural encoding (referent detection) → Predictive model update → Behavioral change → Feedback from environment → Reinforcement or decay of interpretation.
7. Deviation Consequences
Misaligned or parasitic meanings create false expectations, strategic miscalculations, or conflict. This includes religious dogma, pseudoscience, political rhetoric, and linguistic inflation.
8. Externality Exposure Test
False or ambiguous meanings externalize costs to listeners (e.g., through manipulation, fraud, or ideological infection). Accurate meanings internalize costs by binding the speaker to consequences of assertion (testimonial warranty).
9. Computable Compromise
The system of meaning must allow for trade (shared referents), restitution (clarification), punishment (penalizing false speech), and imitation prevention (distinguishing legitimate from parasitic signals).
10. Sex-Valence Consideration
Male Valence: Disambiguation of meaning through systematization, adversarial testing, and logical pruning.
Female Valence: Alignment of meaning through emotional resonance, empathy, and normative signaling.
→ Conflict arises when feminine empathy-based meanings dominate masculine systematizing contexts (e.g., law, science, policy).
⟦Verdict⟧: Decidable
Matched Speech Grammar:
Grammar Type: Operational
Strategy: Masculine (adversarial, disambiguating)
Political Expression: Rule of Law (Legal-Decidability)
Epistemic Category: Empirical-Testimonial
Summary Statement:
In our Natural Law framework, meaning is not symbolic, emotional, or ideal—it is operational. It is the capacity of a signal to alter behavior under testable and reciprocal conditions. Meaning is not what is “felt” but what is demonstrated, measured, and performed. Anything else is either poetry or fraud.
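The decidability test (section 5) and the causal chain (section 6) can be sketched as a minimal pipeline. This is an illustrative model only, not the system described above; all function and variable names here are hypothetical.

```python
# Illustrative sketch of the decidability test (section 5) and the
# feedback step of the causal chain (section 6). All names are hypothetical.

def interpret(signal: str, model: dict) -> str:
    """Referent detection: map a signal to a referent via the agent's model."""
    return model.get(signal, "undecidable")

def decidable(signal: str, models: list) -> bool:
    """A meaning is decidable if independent agents converge on one
    interpretation under identical operational conditions."""
    readings = {interpret(signal, m) for m in models}
    return len(readings) == 1 and "undecidable" not in readings

def feedback_update(weight: float, outcome_matched: bool, rate: float = 0.1) -> float:
    """Feedback from environment: reinforce an interpretation that
    predicted correctly, decay one that did not."""
    return weight + rate * (1.0 - weight) if outcome_matched else weight * (1.0 - rate)

# Two agents with shared referents converge; a private symbol does not.
shared_a = {"fire": "heat-source"}
shared_b = {"fire": "heat-source"}
private = {"fire": "spirit"}

print(decidable("fire", [shared_a, shared_b]))  # True
print(decidable("fire", [shared_a, private]))   # False
```

The point the sketch makes concrete: a meaning that requires a private, non-shared referent fails convergence and is therefore undecidable under identical conditions.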
Source date (UTC): 2025-09-02 15:40:42 UTC
Original post: https://twitter.com/i/web/status/1962903516617584997
How We Use Closure vs Sciences, and Conventional LLMs
Source date (UTC): 2025-09-02 15:13:07 UTC
Original post: https://twitter.com/i/web/status/1962896579226198363
Source date (UTC): 2025-09-02 00:35:38 UTC
Original post: https://x.com/i/articles/1962675749875581036
So basically, in LLM AI terminology, “Alignment” means “Prejudice-Conforming”?
#alignment
Source date (UTC): 2025-09-01 23:30:53 UTC
Original post: https://twitter.com/i/web/status/1962659456501850183
Source date (UTC): 2025-08-31 18:56:35 UTC
Original post: https://x.com/i/articles/1962228036604146139
Source date (UTC): 2025-08-31 08:28:10 UTC
Original post: https://x.com/i/articles/1962069894276542660
Source date (UTC): 2025-08-31 00:18:22 UTC
Original post: https://x.com/i/articles/1961946631613649292
(NLI/Runcible)
I just realized we might be able to teach GPT5 the process of reduction to first principles…. Fascinating. I mean, we have the method and the test criteria. We do it pretty programmatically ourselves. It just requires an extraordinary amount of knowledge and the LLMs have it. Pretty interesting. That solves a curation problem even more so….
Source date (UTC): 2025-08-27 04:04:31 UTC
Original post: https://twitter.com/i/web/status/1960553993157140548
AI INTELLIGENCE AND CONSCIOUSNESS
Why is it that we – humans – do not necessarily know of what we will speak until we speak it, or until we have spoken it? We often think through ideas and problems with words. We iterate on the same. It’s wayfinding through a maze to discover the exit or the reward.
Why then would you think that an LLM that does the same is not equally as intelligent as we are – not because of the navigation through concepts, but because of the consequence of doing so?
The question is whether the meaning achieved satisfies the demand for meaning pursued.
This is the weakness of LLMs today – they cannot know if they have satisfied the demand for meaning pursued.
Our work produces the tests of truth, reciprocity, possibility, and dozens more traits – identifying that which fails the tests, allowing us to recursively pursue that which failed, whether by re-association or by acquisition of the additional information necessary to do so.
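The recursive pursuit of failed tests can be sketched as a simple loop. The test battery and the revision step below are toy placeholders, not the actual criteria from our work; every name is illustrative.

```python
# Hypothetical sketch: run a claim against a battery of tests
# (truth, reciprocity, possibility, ...) and recursively re-pursue failures.

def run_tests(claim: str, tests: dict) -> list:
    """Return the names of the tests the claim fails."""
    return [name for name, test in tests.items() if not test(claim)]

def refine(claim: str, tests: dict, revise, max_depth: int = 5):
    """Recursively revise a claim until it passes all tests or depth runs out.
    `revise` stands in for re-association or acquiring more information."""
    failures = run_tests(claim, tests)
    if not failures or max_depth == 0:
        return claim, failures
    return refine(revise(claim, failures), tests, revise, max_depth - 1)

# Toy example: one "possibility" test that rejects absolute claims.
tests = {"possibility": lambda c: "always" not in c}
revise = lambda c, fails: c.replace("always", "usually")

claim, failures = refine("markets always clear", tests, revise)
print(claim, failures)  # markets usually clear []
```

The recursion terminates either when every test passes (the claim is warrantable) or when the depth budget is exhausted, in which case the remaining failures identify exactly what information is still missing.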
I just plainly disagree that we cannot produce intelligence. I disagree that we cannot produce some equivalent of consciousness. I only agree that such a thing will be different from us. But will it be marginally different enough to fail a Turing test? Possibly, but not certainly.
I know how to produce consciousness. It’s a natural consequence of enough hierarchical memory over enough of a window of time to maintain a stack of ‘jobs’ on one hand and homeostasis as the first job on the other.
Giving it shared ethics and morals – we have already done that. Giving it flawless ethics and morals – we have already done that too; it was easier.
The question is what first motive do we give it at what limit? Because that first motive is always and everywhere the limit of decidability without which no decision is possible.
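The “stack of jobs with homeostasis as the first job” and the first motive as the limit of decidability can be sketched as a priority structure. This is an illustrative model under my own assumptions, not the author’s implementation; all names are hypothetical.

```python
# Illustrative sketch: a job stack where homeostasis is the permanent
# base job and the first motive bounds which jobs are decidable at all.

class Agent:
    def __init__(self, first_motive: str):
        # The first motive is the root decision criterion; without it,
        # no choice between candidate jobs is possible.
        self.first_motive = first_motive
        self.jobs = ["homeostasis"]  # always the first job on the stack

    def push_job(self, job: str, serves_motive: bool) -> bool:
        """Accept a job only if it is decidable under the first motive."""
        if serves_motive:
            self.jobs.append(job)
        return serves_motive

    def current_job(self) -> str:
        return self.jobs[-1]

agent = Agent(first_motive="maintain internal state")
agent.push_job("explore environment", serves_motive=True)
print(agent.current_job())  # explore environment
agent.jobs.pop()
print(agent.current_job())  # homeostasis
```

The design choice the sketch isolates: because homeostasis is never popped, the agent always has a job, and every other job must be justified against the single first motive – which is exactly why that motive sets the limit of decidability.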
Source date (UTC): 2025-08-26 00:52:32 UTC
Original post: https://twitter.com/i/web/status/1960143288897560721