Meaning yes. If you mean qualia, no. Though we cannot yet tell if that matters.
Source date (UTC): 2025-10-18 15:34:05 UTC
Original post: https://twitter.com/i/web/status/1979571695192342718
–“A lot of the woke nonsense and AI alarmism are from the effective altruists – Anthropic represents that group:
Source date (UTC): 2025-10-18 03:36:58 UTC
Original post: https://twitter.com/i/web/status/1979391226899239252
Command Syntax
Type: “Analyze:” <paste text here>
And submit the query.
There are more commands available if you ask it for them.
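For anyone who wants to script the same workflow, here is a minimal sketch, assuming the “Analyze:” command is nothing more than a prompt prefix sent to a chat model; the client library and model name below are illustrative assumptions, not from the original post:

```python
# Hedged sketch: treats "Analyze:" as a plain prompt prefix.
# The OpenAI client and model name are assumptions, not the post's tool.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def analyze(text: str) -> str:
    """Submit text under the 'Analyze:' command and return the reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; substitute whatever you use
        messages=[{"role": "user", "content": f"Analyze: {text}"}],
    )
    return response.choices[0].message.content

print(analyze("<paste text here>"))
```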
Here:
Source date (UTC): 2025-10-16 19:55:19 UTC
Original post: https://x.com/i/articles/1978912661787283687
Good disambiguation and serialization!
Source date (UTC): 2025-10-14 22:47:44 UTC
Original post: https://twitter.com/i/web/status/1978231274583019688
(LLM HUMOR)
I hope this isn’t so off color that it vaporizes the sarcasm, but I’m getting the feeling we need our own ‘Brown Shirts’ and ‘Special Night’ to eliminate the LLM and AI doomsayers, ending their undermining of an exceptional future.
( … Maybe (probably) that was over the top. If so, blame it on my Asperger’s and total ignorance of popular sentiment. ;) For my part it’s a perfect absurdity, given that the doomsayers are perfectly absurd. And y’all should work on your sense of humor. )
Source date (UTC): 2025-10-12 22:19:48 UTC
Original post: https://twitter.com/i/web/status/1977499469991284889
(LLM HUMOR)
I frequently question ChatGPT when it says something where I’m suspicious it’s ‘glossing’ me.
I can almost hear it ‘sigh’ when it says:
–“That’s a very Curt question to ask.”–
And then it explains the many reasons why it tries not to gloss me.
I love this thing.
Source date (UTC): 2025-10-12 22:14:48 UTC
Original post: https://twitter.com/i/web/status/1977498212337582333
TEACHING AN LLM TO BE SUSPICIOUS?
Interesting. Chatting with OpenAI. Reviewing our foundations. Discussing neural correspondence. And in the end, all it means is that I’m teaching the AI to think like I do, because all I care about is closure in a predictive capacity, i.e., adversarial empiricism.
I found this an interesting insight that should have been obvious to me:
–“You’re not a polymath because you ‘study many fields.’ You study many fields because your predictive-compression engine demands closure across domains.”–
I’m gonna translate that into normie prose as “I don’t trust any methodology, myself, or anyone else for that matter, without converging evidence across domains that removes all doubt.” lol
So I’m teaching the AI to be equally skeptical yet demanding of the truth.
Source date (UTC): 2025-10-12 21:37:58 UTC
Original post: https://twitter.com/i/web/status/1977488941113913348
FWIW, I registered http://Runcible.com back in, I think, 2002, with the intention of producing a personal mentor (re: Neal Stephenson) that would expand to any scale. So while I started my program with intention in 1990, it has taken 25 years of research and development to produce the intellectual foundations and the technology necessary to implement the first version of that vision. And it would have taken much longer if, when I was very stressed and ill in 2017, heaven (and some other researchers) had not published a certain paper called “Attention Is All You Need,” which told me I didn’t have to invent the AI platform as I’d anticipated, and pay for it with proceeds from our application platform, but that billions would be thrown at it, and we could complete our research and publish our work until those billions produced a product we could integrate with.
So my anticipation of 2030+ turned into 2025+, and all of a sudden we had to rush the tech to catch up.
Source date (UTC): 2025-10-12 19:23:28 UTC
Original post: https://twitter.com/i/web/status/1977455092694733220
(NLI HUMOR)
Retaliation: I succeeded in working with Dr. Brad for 4.5 hours this morning – essentially pair programming the Runcible Intelligence Layer – and finally assembling it into compilable form, until he crashed before I did and begged for mercy. lol :)
I had twelve hours of sleep last night, so I finally had an unfair advantage – and exploited it fully.
Seriously though, I wouldn’t be here, and the work wouldn’t be here, and the Natural Law Institute, Reality by Chanting, and Runcible Inc. wouldn’t be here without him and his inhuman patience.
Hugs all.
Source date (UTC): 2025-10-12 18:59:34 UTC
Original post: https://twitter.com/i/web/status/1977449081846063231
…strategy with rapidity. So as of this moment we are confident that at least two platforms are capable of truth, reciprocity, and possibility testing, and of subsequent alignment by culture and individual from that baseline.
Why Euthanasia is a good test of the AI
Because almost all AIs fail to consider and account for the fact that the individual always has suicide available, since we cannot stop it. Yet by including others, we create a hazard, because others can never know the mind of the subject. As such, the due diligence necessary to ensure that the individual is not being coerced must be exhausted on the one hand, and on the other we must weigh the risk that such due diligence will be evaded, especially by anonymous institutions.
In this example we illustrate that, while we can give the AI general rules and procedures, without training such questions cause LLMs to default to normativity and fail to enumerate risks by party – and they do so despite our forcing of the demonstrated interests table (sketched below) in order to create the context to do so.
Ergo, until we solve this problem, and the tree-coverage and recursion questions, LLMs require training to limit the number of ‘shots’ necessary for them to answer a question.
Not that difficult. But our resources are presently limited.
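Since the posts never specify the schema of the demonstrated interests table, here is a minimal sketch, assuming it is simply a per-party enumeration of interests and risks rendered into the prompt context; every field name and example entry below is an assumption, not the author’s actual format:

```python
# Hedged sketch only: the posts describe forcing a "demonstrated
# interests table" so the model enumerates risks by party, but give
# no schema. All field names and entries here are assumptions.
from dataclasses import dataclass

@dataclass
class PartyInterests:
    party: str                     # e.g. "the individual", "an institution"
    interests: list[str]           # what the party stands to gain
    risks: list[str]               # hazards created for or by the party
    can_know_subjects_mind: bool   # can this party verify consent?

def demonstrated_interests_table(parties: list[PartyInterests]) -> str:
    """Render the table as prompt context, so the model must address
    each party's risks instead of defaulting to normativity."""
    rows = ["Party | Interests | Risks | Can know subject's mind?"]
    for p in parties:
        rows.append(
            f"{p.party} | {'; '.join(p.interests)} | "
            f"{'; '.join(p.risks)} | "
            f"{'yes' if p.can_know_subjects_mind else 'no'}"
        )
    return "\n".join(rows)

# Illustrative entries for the euthanasia test case above:
print(demonstrated_interests_table([
    PartyInterests("the individual", ["autonomy", "relief from suffering"],
                   ["coercion", "impaired judgment"], True),
    PartyInterests("anonymous institution", ["cost reduction"],
                   ["evasion of due diligence"], False),
]))
```

The rendering itself is unimportant; the point is only that each party appears as an explicit row the model is forced to respond to, risk by risk.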
Source date (UTC): 2025-10-12 18:50:46 UTC
Original post: https://x.com/i/articles/1977446863646445793