AI INTELLIGENCE AND CONSCIOUSNESS
Why is it that we – humans – do not necessarily know what we will say until we speak it, or until we have spoken it? We often think through ideas and problems with words, iterating as we go. It’s wayfinding through a maze to discover the exit or the reward.
Why then would you think that an LLM that does the same is not as intelligent as we are – not because of the navigation through concepts, but because of the consequence of doing so?
The question is whether the meaning achieved satisfies the demand for meaning pursued.
This is the weakness of LLMs today – they cannot know whether they have satisfied the demand for meaning pursued.
Our work produces tests of truth, reciprocity, possibility, and dozens more traits – identifying that which fails the tests, allowing us to recursively pursue what failed, whether by re-association or by acquiring the additional information necessary to do so.
I just plainly disagree that we cannot produce intelligence. I disagree that we cannot produce some equivalent of consciousness. I only agree that such a thing will be different from us. But will it be different enough to fail a Turing test? Possibly, but not certainly.
I know how to produce consciousness. It’s a natural consequence of enough hierarchical memory over a long enough window of time to maintain a stack of ‘jobs’ on one hand, and homeostasis as the first job on the other.
Giving it shared ethics and morals – we have already done. Giving it flawless ethics and morals we have already done – it was easier.
The question is what first motive do we give it at what limit? Because that first motive is always and everywhere the limit of decidability without which no decision is possible.
Source date (UTC): 2025-08-26 00:52:32 UTC
Original post: https://twitter.com/i/web/status/1960143288897560721