
For example, your emphasis is on restoring the relation between formal (written) and spoken language.

My emphasis is on writing laws programmatically so that they’re closed to interpretation (abuse, conflation, inflation, deceit). To do so requires an ordinal ‘math’ (logic) consisting of sets of measurements instead of more general and flexible terms.

I.e., the current Supreme Court is, thanks to the late Justice Scalia, trying to restore the law to its transactional (accounting) origins. I’m completing that program. That way there is no means of bypassing the people by the legislature or the courts.


Source date (UTC): 2023-02-22 02:48:41 UTC

Original post: https://twitter.com/i/web/status/1628225234338828290

Replying to: https://twitter.com/i/web/status/1626615439638798337


IN REPLY TO:

Unknown author

Dear Lord, Professor, Saint @elonmusk ;), (All)

Yes, we can build a TruthGPT.
Yes, I know how. I’m a nerd. 😉
You have no reason to believe me.
People who follow my work do.
I had to solve the Truth problem for an AI that could test law, constitution, legislation, regulation, and speech for truthfulness.

I have too much on my plate reforming law for the same reason (Truth, Possibility, Legality, Legitimacy) to start another company to produce an AI – though it’s something I’ve worked on and planned for years.

TSLA could easily produce a TruthAI, and Twitter could use an AI produced by TSLA. The world would benefit from a TruthAI more than any technology… well, other than a safe battery with N-times the energy density of gasoline. 😉

For anyone interested:

1. The embodiment that TSLA uses for cars and robots is necessary for world modeling, and world modeling is necessary for categorization (identification) from context.

2. Route Finding in vehicles and robots is necessary for Recursive Wayfinding (thinking and problem-solving).

3. Novelty Detection and World Modeling combined with Wayfinding are necessary for episodic memory. Memories favor novelties.

4. Object, Space, and Background classification, combined with episodes (contexts), is necessary for sufficient disambiguation to determine ‘ownership’ and predictions.

5. If you study linguistics, you quickly realize that universal morality is embedded in all our languages (particularly English, because it’s a high-precision, low-context language) in the form of permission to act on a person, object, space, class, etc.

6. So, moral AI that respects life and demonstrated interest (property), and even negotiates over control and transfer of interest, is pretty simple.

7. The next higher-order problem then is one of speech (truth). While justificationary truth is impossible (yes really) survival of falsification is possible (yes really).

8. There is one simple logic to the universe at all scales that provides us with the opportunity for a constructive falsificationary logic. (That was the hard part)

9. The hard bit for the next generation to swallow, is that there is a relatively simple set of criteria for *universal falsification of statements* and a *universally commensurable paradigm, grammar, vocabulary, logic, and syntax* – Yes really.

Written or spoken language using this ‘grammar’ looks and sounds like a somewhat tedious form of ordinary language. And this tedious form can be reduced to ordinary language on output.

In other words, we can and have produced a non-cardinal, ordinal, qualitative, geometry of language that can test the possibility of any speech or text’s testifiability (truth). And we can and have produced a rule set (checklist) for Truthful(testifiable), ethical(direct), and moral(indirect) questions.
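The thread doesn’t spell out the criteria themselves, so here is only a toy illustration of the checklist idea: run a statement through a set of named checks and report which it fails. Every check name and rule below is a placeholder assumption, not the actual ruleset:

```python
# Purely illustrative sketch: the real testifiability criteria are not
# given in the thread, so these checks are placeholder assumptions.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Check:
    name: str
    passes: Callable[[str], bool]

# Hypothetical checklist: a statement "survives" if every check passes.
CHECKS: List[Check] = [
    Check("non-empty claim", lambda s: bool(s.strip())),
    Check("no loaded terms",
          lambda s: not any(w in s.lower()
                            for w in ("obviously", "everyone knows"))),
    Check("names a testable quantity",
          lambda s: any(ch.isdigit() for ch in s) or " if " in s.lower()),
]

def survives_falsification(statement: str) -> List[str]:
    """Return names of checks the statement fails (empty list = survives)."""
    return [c.name for c in CHECKS if not c.passes(statement)]
```

A measurable claim like “The battery stores 500 Wh/kg.” passes all three placeholder checks, while rhetoric like “Obviously everyone knows this is true.” fails two.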

THE PLAYERS TODAY AND WHY TSLA MATTERS

TSLA vs Google vs OpenAI use three different models. OpenAI’s is the simplest, Google’s a bit more challenging, and TSLA’s the most difficult.

Now, we require TSLA’s world model to create an AI that can continuously, recursively, and in real time produce truth tests.

And we eventually need neuromorphic hardware (many tiny, simple processors with a bit of local memory) to circumvent the backpropagation cost problem (and its alternatives) and evolve closer to real-time learning. (FWIW, recent innovations in solving the cost problem have been exciting and are gaining popularity – thanks to one of the fathers of the field.)
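Why local memory sidesteps backpropagation can be shown with a generic local learning rule: each weight update uses only the pre- and post-synaptic activity available at that ‘processor,’ with no global error transported backward. The Hebbian sketch below is a textbook rule, not any specific neuromorphic design; all names and constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_step(w, pre, post, lr=0.01, decay=0.001):
    """One purely local update: strengthen weights where pre- and
    post-synaptic activity coincide; weight decay keeps values bounded.
    No global error signal is propagated, so each unit needs only
    its own inputs, output, and weights."""
    return w + lr * np.outer(post, pre) - decay * w

w = rng.normal(scale=0.1, size=(4, 8))   # 4 output units, 8 input units
x = rng.random(8)                        # pre-synaptic activity
y = w @ x                                # post-synaptic activity
w = hebbian_step(w, x, y)                # update uses only local quantities
```

Contrast with backprop, where every update depends on an error signal computed from the whole downstream network.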

The combination of local truth testing of tangible questions and escalation to distant central truth testing for increasingly abstract questions is the holy grail of imitating the human mind and its use of collective minds as a market for knowledge and decisions.
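One way to picture that local/central split is a router that escalates on an abstractness score. The marker list, scoring heuristic, and threshold below are placeholders standing in for a real classifier:

```python
from typing import Callable

# Placeholder heuristic: words suggesting an abstract (non-tangible) question.
ABSTRACT_MARKERS = ("justice", "truth", "ought", "rights")

def abstractness(question: str) -> float:
    """Fraction of words that are abstract markers (toy scoring rule)."""
    words = question.lower().split()
    if not words:
        return 0.0
    return sum(w.strip("?.,") in ABSTRACT_MARKERS for w in words) / len(words)

def answer(question: str,
           local_model: Callable[[str], str],
           central_model: Callable[[str], str],
           threshold: float = 0.1) -> str:
    """Route tangible questions to the local truth tester; escalate
    sufficiently abstract ones to the central one."""
    model = central_model if abstractness(question) >= threshold else local_model
    return model(question)
```

Usage: `answer("Is the door open?", local, central)` stays local, while `answer("What is justice?", local, central)` escalates.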

(BTW: Thanks #TwitterDev for long-form tweets. It’s finally possible to inform with Twitter instead of just virtue signal and generate conflict by promoting vicious cycles of moral outrage for dopamine junkies. 😉 )

