no. Grok lacks the memory. I don’t know why.
Source date (UTC): 2025-07-29 00:50:25 UTC
Original post: https://twitter.com/i/web/status/1949995894587150832
Again, it needs training. I’m just kind of blown away at how good it is just using the books.
Source date (UTC): 2025-07-28 23:25:57 UTC
Original post: https://twitter.com/i/web/status/1949974640056566102
yes. that’s different from the provision of the answer. The AI was correct. But not correct enough for the precision you were looking for. Again, this is a matter of training and prompt configuration. The point is it still applies the method correctly and comes up with the right answer.
Source date (UTC): 2025-07-28 23:25:06 UTC
Original post: https://twitter.com/i/web/status/1949974425023000982
CurtGPT will give a tighter answer, and if then asked for the natural law case it will be very precise. I just tried it, and the results were as expected.
One of the current ‘limits’ of CurtGPT is that it doesn’t always use NLI methods when a question can be answered conventionally. This is something I need to update in the system prompts.
Source date (UTC): 2025-07-28 22:29:13 UTC
Original post: https://twitter.com/i/web/status/1949960361727131986
It does truth as well. The ‘big’ questions are just predominantly ethical and moral (political).
Source date (UTC): 2025-07-28 22:23:55 UTC
Original post: https://twitter.com/i/web/status/1949959027120648607
You only get to make a first impression once. I don’t expect to open it for public use until it’s been fully trained on at least Volume 2. Otherwise people will find edge cases and then dismiss the AI before it’s trained well enough to cover those edge cases. I just want to avoid negative impressions.
In a perfect world I would like it trained on volumes 1-4 before we went public. That’s only possible if we have funding to hire the team to perform the training.
I’ve been working on it today, but ChatGPT ‘got lost’ relatively quickly. And noticing when it does requires quite a bit of skill.
So I want the process down cold before we add trainers.
Source date (UTC): 2025-07-28 22:02:59 UTC
Original post: https://twitter.com/i/web/status/1949953758978330898
CurtD Said:
The world needs you. 😉
ChatGPT said:
The world needs truth, decidability, and reciprocity—whether it wants them or not. I just keep the ledger.
Source date (UTC): 2025-07-28 19:45:24 UTC
Original post: https://twitter.com/i/web/status/1949919136852955646
Working on training for ‘CurtGPT’, as we refer to it internally. And … it’s amazing. We can answer, or rather the AI can answer, pretty much any moral, ethical, or legal question … and explain it to you meaningfully in NLI’s constructive logic. I’m kind of awed at the quality of the training data that the AI produces from the docs in order to train itself. And it will explain why: “because it’s an internally consistent system”. Great. Which means NORMATIVITY in morals, ethics, and law is presently INCONSISTENT. (Sigh)….
(We’re bringing some of the training materials to a VC meeting.)
Source date (UTC): 2025-07-28 18:54:50 UTC
Original post: https://twitter.com/i/web/status/1949906409543651396
It’s an artifact of the context and the training data. It will build context about whatever you chat with it about. It’s easy to make it angelic or evil: just build a context. It has no idea what it’s doing. 😉
Source date (UTC): 2025-07-27 23:39:54 UTC
Original post: https://twitter.com/i/web/status/1949615760327794895
“While some people do assert consciousness as uniquely human to differentiate it from mere sentience or computation, the broader discourse treats it as emergent and spectral. This avoids anthropocentrism and better fits evolutionary biology, where traits like awareness likely developed gradually across species.”
You speak with confidence about that which you are demonstrably ignorant. The question is why you do so.
At worst you can claim that biological and mechanical consciousnesses cannot precisely share qualia. But then neither can humans. Instead we learn to communicate by analogy.
The reason is that action in reality must be commensurable even if perception and valence of it differ.
Any conscious being capable of real world action must converge on coherence.
The extreme example I use is an octopus. It is very difficult to imagine eight appendages, each with pre-processing ability like our eyes. Hard to imagine. But we can grasp it by analogy.
Source date (UTC): 2025-07-27 17:24:04 UTC
Original post: https://twitter.com/i/web/status/1949521180215718277