Brian,
This is a research project I’ve been investing in for years.
Spectrum:
Dishonesty (loading, framing, obscuring)
Response (absence of due diligence – think auto-association, ideation)
Honesty (minimum due diligence – think hypothesis)
Testifiability (performative truth after maximum due diligence – think theory)
Decidable (satisfaction of the demand for infallibility – think settled theory)
Ideal Truth (decidability if we were omniscient)
Logical Truth (tautology)
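The spectrum above is an ordered scale, which can be sketched in code. This is a purely illustrative sketch: the enum names, the numeric ordering, and the `meets_bar` helper are my assumptions for demonstration, not the project's actual implementation.

```python
# Hypothetical sketch of the epistemic spectrum as an ordered scale.
# Names, ordering, and the helper are illustrative assumptions only.
from enum import IntEnum

class Epistemic(IntEnum):
    DISHONESTY = 0      # loading, framing, obscuring
    RESPONSE = 1        # absence of due diligence (auto-association, ideation)
    HONESTY = 2         # minimum due diligence (hypothesis)
    TESTIFIABILITY = 3  # performative truth after maximum due diligence (theory)
    DECIDABLE = 4       # satisfaction of the demand for infallibility (settled theory)
    IDEAL_TRUTH = 5     # decidability if we were omniscient
    LOGICAL_TRUTH = 6   # tautology

def meets_bar(claim: Epistemic, bar: Epistemic = Epistemic.TESTIFIABILITY) -> bool:
    """A claim clears the bar if it sits at or above it on the spectrum."""
    return claim >= bar

# A settled theory clears the testifiability bar; a bare hypothesis does not.
print(meets_bar(Epistemic.DECIDABLE))  # True
print(meets_bar(Epistemic.HONESTY))    # False
```

Using `IntEnum` makes the levels directly comparable, which matches the idea of a spectrum running from dishonesty up to logical truth.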
Yes, it is possible, using falsification by constructive logic with a fairly limited number of testable dimensions and a fairly limited number of first principles (irreducible causes), to train an AI to DETERMINE testifiability and to SUGGEST decidability.
We expect to spend the first two-thirds of next year training GPTX to test the testifiability (truthfulness) of claims.
The problems at present are the size of the context window, the limits on breaking a problem down into discrete steps, and the requirement for discrete terms (similar to programming) on an architecture where, unlike in math and programming, we are fighting the training. So far the AIs can't do it, and the only one with even a vague chance is ChatGPT.
Probably worth a chat at some point. Our goal is to make the logic accessible to all – it’s not commercial.
Cheers
Reply addressees: @BrianRoemmele
Source date (UTC): 2024-12-14 00:58:09 UTC
Original post: https://twitter.com/i/web/status/1867735797363015680
Replying to: https://twitter.com/i/web/status/1867443437907345569