
EMBEDDING P-LAW IN LLMS: TURNS OUT, YES.
All;
So I’ve been getting down in the weeds on the new LLMs, particularly GPT-4, and the emergent properties are fascinating. It’s not self-aware, and so not self-falsifying or moral-testing, but it *is* producing the equivalent of ‘framing’ (paradigming) a context. The addition of reflection and falsification (recursion) appears to work. And it’s clear the community understands the problem at this point, but I’m not really sure why the solution to GPT’s ‘bad hypothesis’ problem isn’t obviously adversarial falsification and recursion. I mean … what do you think the social function of our brains is? Predicting others’ behavior so that we falsify our own intuitionistic behaviors, which are more selfish.
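The generate–falsify–revise loop described above can be sketched roughly as follows. This is a minimal illustration of the control flow only: `propose`, `falsify`, and `revise` are hypothetical stand-ins for calls to an LLM acting as generator and adversarial critic, replaced here by toy functions so the loop is runnable.

```python
# Sketch: adversarial falsification with recursion (iteration) over hypotheses.
# propose/falsify/revise are toy stand-ins for generator and critic model calls.

def propose(question):
    # Generator: produce an initial (possibly bad) hypothesis.
    return {"claim": question, "qualifiers": []}

def falsify(hypothesis):
    # Adversarial critic: return an objection, or None if none is found.
    if "tested" not in hypothesis["qualifiers"]:
        return "tested"
    return None

def revise(hypothesis, objection):
    # Fold the critic's objection back into the hypothesis.
    hypothesis["qualifiers"].append(objection)
    return hypothesis

def adversarial_loop(question, max_rounds=5):
    # Repeat until the critic can no longer falsify the answer,
    # or a round limit is reached.
    hypothesis = propose(question)
    for _ in range(max_rounds):
        objection = falsify(hypothesis)
        if objection is None:
            return hypothesis
        hypothesis = revise(hypothesis, objection)
    return hypothesis
```

The point of the structure is that the critic plays the social role the post attributes to other minds: an external test the generator cannot skip.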
This turns out to be another instance of CRD (continuous recursive disambiguation): the knowledge needed to ask an unambiguous question may not be present prior to the production of an ambiguous answer. 😉 Which is obvious – and that’s why discourse is necessary (and why we can go a bit ‘mad’ if we don’t have others to test our ideas against).
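CRD as described can also be sketched as a loop: each ambiguous answer supplies the terms needed to sharpen the next question. Again a toy sketch, not an implementation: `answer` and `ambiguities` are hypothetical stand-ins for an LLM call and an ambiguity detector.

```python
# Sketch: continuous recursive disambiguation (CRD).
# answer/ambiguities are toy stand-ins; here a reply counts as
# unambiguous once two context terms have been pinned down.

def answer(question, context):
    # Respond using whatever disambiguating context has accumulated so far.
    return {"text": question, "context": tuple(context)}

def ambiguities(reply):
    # Report unresolved terms in the reply, if any remain.
    return [] if len(reply["context"]) >= 2 else ["sense?"]

def disambiguate(question, max_rounds=10):
    context = []
    for _ in range(max_rounds):
        reply = answer(question, context)
        open_terms = ambiguities(reply)
        if not open_terms:          # question is now unambiguous
            return reply, context
        context.extend(open_terms)  # the answer teaches us what to pin down
    return reply, context
```

Note that the loop cannot be flattened into a single question: the disambiguating context only exists after the first ambiguous answer, which is the post’s point about discourse.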
So what does this mean for NLI and our formal algorithmic natural law of cooperation, economic ethics, morality, politics, et al.?
It means that I SHOULD take time out to work on integrating NLI’s method, definitions, science, first principles, etc. into one of the LLMs.
But to do that means I have to complete the work on the first few articles of the constitution that formalize the rest of the ‘sciences’.
I’m pretty sure that I could make GPTX write law. And even decide law – incrementally, recursively, until it could decide unambiguously.
#AI


Source date (UTC): 2023-04-03 15:18:20 UTC

Original post: https://twitter.com/i/web/status/1642909406642708482
