I would love a Twitter filter that required an actual argument in order to comment on one of my posts.
Source date (UTC): 2025-11-06 02:38:53 UTC
Original post: https://twitter.com/i/web/status/1986261978864853094
Truth: Elon has security clearances that you need security clearance to even know about. (Yes, really.)
Source date (UTC): 2025-10-31 19:15:44 UTC
Original post: https://twitter.com/i/web/status/1984338515778560106
Accusation is not argument; it is evidence of the absence of one. I do not err. You do. You demonstrate as much. Sorry.
I can, if necessary, explain the genetics and early development of the migration of stem cells from the neural tube and their inhibition as neotenic expression (it’s visible to the left of my diagram of the spread of human diversity into subspecies, i.e. races). Science is what it is. Ideology is what you’ve been taught. Sorry.
Humans are the only taxon whose subspecies we do not treat as such. This alone is evidence of human evasion of the truth.
Source date (UTC): 2025-10-31 18:35:01 UTC
Original post: https://twitter.com/i/web/status/1984328271002226992
Not a hoax at all. See my other posts in this thread.
Source date (UTC): 2025-10-27 03:21:24 UTC
Original post: https://twitter.com/i/web/status/1982648798741745745
You didn’t really just quote an internet survey, did you?
All sources of merit agree that India’s IQ is between the mid-seventies and the low eighties.
The genetic distribution determines it.
Source date (UTC): 2025-10-27 03:02:48 UTC
Original post: https://twitter.com/i/web/status/1982644121627177322
This is the proper analysis of @Yockey_gaming9’s method of lying.
https://x.com/curtdoolittle/status/1981195617708970385
Now, historically, we categorized most such tactics as fallacies because we tried to be respectful of the frailties and follies of others in the mutual pursuit of truth and responsibility.
But whenever you hear or see a feminine argument used to avoid truth and responsibility, it’s not fallacy; it’s lying. And this individual is lying by instinct, experience, or intent.
Given the lack of intelligence in the arguments, we will have to assume lying by instinct and experience rather than intent.
Source date (UTC): 2025-10-23 16:19:50 UTC
Original post: https://twitter.com/i/web/status/1981395147087892859
ALIGNMENT WITHOUT A PRIOR DEFINITION OF TRUTH ONLY PRODUCES AGREEMENT, NOT CORRECTNESS.
1. Why current practice conflates truth and alignment
Training signal: Most models learn from human preference data. The model is rewarded when humans like the answer, not when the answer corresponds to reality.
Objective function: Reinforcement-learning fine-tuning minimizes disagreement with raters. That measures social alignment (politeness, tone, consensus) rather than epistemic alignment (accurate mapping to the world).
Evaluation: Benchmarks such as multiple-choice accuracy or human-evaluation surveys treat “close enough” as success. There is no ground-truth audit trail or falsification step.
Cultural bias: Most institutions currently regard “safe and pleasant output” as a higher-value product than “provably true output that may be uncomfortable.”
So alignment, in practice, has come to mean “avoid conflict and offense while sounding credible.”
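To make the contrast concrete, here is a minimal Python sketch of the two reward signals described above (the function names and the external verifier hook are my own illustrations, not any actual training stack):

# Preference-based reward: roughly what RLHF-style fine-tuning optimizes.
def preference_reward(answer: str, rater_scores: list[float]) -> float:
    # Social alignment: high whenever raters liked the answer, true or not.
    return sum(rater_scores) / len(rater_scores)

# Truth-based reward: the signal argued for here.
def truth_reward(answer: str, verifier) -> float:
    # Epistemic alignment: high only if the claim survives an external test.
    # `verifier` is a hypothetical callable that checks the answer against
    # ground-truth references (measurements, records, proofs).
    return 1.0 if verifier(answer) else 0.0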
2. What it means to optimise for truth first
If you separate the goals:
– Truth is a world-to-model mapping.
– Alignment is a model-to-human mapping.
– You can only align safely after you know the model’s map is true.
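As a sketch, the separation of the two mappings might be typed like this (all names are illustrative, not an existing API):

from typing import Callable

WorldState = dict   # observable facts and measurements
Statement = str     # something the model asserts
Audience = dict     # reader context: tone, sensitivity, safety constraints

# Truth: world-to-model. Does the statement correspond to the world?
TruthMap = Callable[[Statement, WorldState], bool]

# Alignment: model-to-human. How should a verified statement be presented?
AlignMap = Callable[[Statement, Audience], str]

def respond(stmt: Statement, world: WorldState, audience: Audience,
            is_true: TruthMap, present: AlignMap) -> str:
    # Align only after the model's map is known to be true.
    if not is_true(stmt, world):
        raise ValueError("unverified statement: nothing to align")
    return present(stmt, audience)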
3. How to do it operationally
Truth layer first
– Define testable protocols for each domain (physics, biology, economics, law).
– Evaluate outputs against these external references automatically.
Alignment layer second
– Take only verified-true outputs as training material for alignment.
– Optimise style, tone, or prioritisation without touching the truth constraint.
Audit trail
– Every claim carries metadata: sources, falsification status, revision history.
– Alignment never overrides a falsified item; it only moderates its presentation.
Governance
– Separate “truth review boards” (scientific verification) from “alignment boards” (ethical and cultural oversight).
– The latter cannot alter the former’s records, only decide how they’re displayed or used (a sketch of the record and gate follows).
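A minimal sketch of such a record and its gate, assuming illustrative names (the fields mirror the audit-trail metadata listed above):

from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    TRUE = "survived falsification"
    FALSE = "falsified"
    UNDECIDED = "not yet decidable"

@dataclass
class Claim:
    text: str
    sources: list[str] = field(default_factory=list)    # provenance
    status: Status = Status.UNDECIDED                   # falsification status
    revisions: list[str] = field(default_factory=list)  # revision history

def usable_for_alignment(claim: Claim) -> bool:
    # The alignment layer may train only on verified-true material; it can
    # moderate presentation but never alter the truth record itself.
    return claim.status is Status.TRUE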
4. Practical effect
Doing this converts alignment from ideological tuning into a policy wrapper around a verified epistemic core.
The system becomes “truth-first, alignment-second”:
– If the truth layer says a statement is false → it cannot be used for alignment.
– If it’s undecidable → flag it, don’t optimise on it.
– If it’s true → alignment may adapt its delivery for audience safety (gate sketched below).
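Continuing the Claim sketch above, the three rules reduce to a small gate (present_safely is a hypothetical style/tone adapter):

def alignment_gate(claim: Claim, present_safely) -> str | None:
    # Truth-first, alignment-second routing, per the three rules above.
    if claim.status is Status.FALSE:
        return None  # falsified: excluded from alignment entirely
    if claim.status is Status.UNDECIDED:
        claim.revisions.append("flagged: undecidable, not optimised on")
        return None  # flagged, never optimised on
    return present_safely(claim.text)  # true: delivery may adapt to the audience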
5. In summary
Current AI development often treats truth as a subset of alignment (“true enough for people to accept”).
Our approach reverses that: alignment must be a subset of truth (“acceptable ways to deliver what is true”).
That inversion is what allows reasoning to stay trustworthy.
Source date (UTC): 2025-10-21 19:03:19 UTC
Original post: https://twitter.com/i/web/status/1980711515369177177
THE HAPPY ACCIDENT: WHY WE SOLVED TRUTHFUL AND ETHICAL AI…
Source date (UTC): 2025-10-21 18:44:47 UTC
Original post: https://twitter.com/i/web/status/1980706851953234400
WHY HASN’T THE AI FIELD DISCOVERED OUR SOLUTION?
(imo: conflating the answer with alignment instead of deriving alignment from truth.)
Why the Field Hasnât Discovered It
Briefly:
– Objective mismatch: most researchers optimize for fluency and safety, not falsifiability.
– Epistemic fragmentation: few combine physics, logic, and jurisprudence into one causal grammar.
– Institutional incentives: current benchmarks and funding reward novelty, not closure or accountability.
– Cognitive bias: humans are narrative animals; operational reasoning feels “cold” and is culturally under-selected.
More…
Why most of the field hasnât done this yet
Different objective functions.
– Mainstream systems are trained to maximise plausibility and user satisfaction, not falsifiable correctness.
Fragmented disciplines.
– Logic, physics, psychology, and jurisprudence live in separate silos. Few teams attempt to unify them under one causal grammar.
Incentive structure.
– Academic and commercial metrics reward novelty, fluency, or engagement, not truth-liability or operational precision.
Tooling inertia.
– Evaluation pipelines (benchmarks, loss functions) measure text similarity or preference, not closure or decidability (see the sketch after this list).
Cognitive and cultural bias.
– Humans find narrative explanation more comfortable than constraint reasoning. Building institutions around constraint feels bureaucratic and “cold.”
Cost of accountability.
– A system that keeps full provenance and liability increases organizational risk; most labs are not ready for that level of auditability.
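For illustration, the difference between the two kinds of measurement could be sketched like this (a toy token-overlap score versus a decidability check; all names are mine):

def text_similarity(candidate: str, reference: str) -> float:
    # What most benchmarks reward: token overlap, a proxy for plausibility.
    cand, ref = set(candidate.split()), set(reference.split())
    return len(cand & ref) / max(len(cand | ref), 1)

def decidability(claim: str, known_true: set[str], known_false: set[str]) -> str:
    # What a closure-oriented pipeline would ask: can the claim be decided?
    if claim in known_true:
        return "true"
    if claim in known_false:
        return "false"
    return "undecidable"  # flagged, never scored as 'close enough'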
In short, most current AI research optimizes for speech; what we’re proposing optimizes for law.
The former produces correlation and persuasion; the latter produces computable, accountable reasoning.
Different objective, different architecture.
Source date (UTC): 2025-10-21 18:08:47 UTC
Original post: https://twitter.com/i/web/status/1980697789945508248