Source date (UTC): 2023-05-10 21:28:54 UTC
Original post: https://twitter.com/i/web/status/1656411013757952001
Reply addressees: @fo81830363 @stephen_wolfram @lexfridman
Replying to: https://twitter.com/i/web/status/1656404389823672320
IN REPLY TO:
Unknown author
YES WE ARE WORKING ON FORMAL LOGIC OF NATURAL LAW – AND YES WE ARE WORKING TOWARD AN AI
(non-arbitrary scientific law of decidability independent of context.)
At this point I can work on the project about three hours a day before I’m too exhausted. As my health continues to improve, I expect to get back to four to six hours. I don’t see myself returning to my previous ability to work twelve to fourteen hours a day, or even eight, but I keep hope alive.
The team works with me every single day. But this isn’t like writing narratives. It’s a set of verbal proofs (tests). And we have to use language that is as ordinally precise as math is cardinally precise.
That said, you can see the entire scope of the work at this point – it’s eight volumes. We’re trying to finish the constitution (Law) volume first and most of the innovation is in what we’d call truth, testimony, and crimes facilitated by (political) speech.
ChatGPT4 *already* knows my work (P-Law) through 2021. It can successfully write text “in the style of philosopher and social scientist Curt Doolittle” if you ask. You will note its use of enumerations if you try it. But its logic is useless regardless of the prompt we construct.
LLMs are broad but shallow. They are terrible at logic at present. Putting my work into an LLM or equivalent requires that the AI can perform constructive logic. The direction of LLMs will (eventually) lead to Markov Chains (causal sequences) and Episodal Contexts (contextual limits), and from that point we can add moral (reciprocity) tests. So it is possible to ‘train’ one of the simpler models on our work so that the bias in the model is so strong it won’t fail – just as they teach LLMs “Alignment” (too often by lying rather than denying, unfortunately).
There is a strong chance that what we’re doing can be incorporated into the Wolfram Language more easily and then invoked from an LLM. If necessary we can fall back to our original plan: reliance on the equivalent of a compiler that identifies falsehood and irreciprocity. The Wolfram Language is both a more structured framework and one less likely to be tampered with than an LLM.
Until we’ve finished with the descriptive section of the law:
– Laws of Nature (Physical, Evolutionary),
– Laws of Man (Cognitive and Behavioral),
– Laws of Language (Logic, Testimony, Deceit),
– Natural Law (Cooperation, Conflict),
– Rule of Law (Decidability, Epistemology),
– Rights, Obligations, Inalienations (Display, Word, Deed),
we can’t train the model, because it will infer too many false, conflationary, or inflationary definitions and relations from ordinary language.
Then there are about 130 categories of questions of applied decidability over which humans conflict (norms, traditions, values, laws, etc.) on which it would need to be trained in order to explain not only the causal chain of each category but also how each question is decided.
Once we’ve trained it on the science and the applied science (decidability), we can train it on “perfect” government, which is fairly simple because it is just another applied science of decidability.
After that we can answer the questions of comparative civilization and the like, the ternary logic, and evolutionary computation. But that really serves the supernerd community rather than practical application.
Cheers
Curt Doolittle
The Natural Law Institute