Why Our Runcible Protocols Provide Machine Decidability
The mechanism: we convert open-world language generation into closed-world program execution over claims.
Step-by-step:
- Decompose text → claim graph
  The natural-language request becomes a set of claims and subclaims (a dependency graph). (We create a list of first-principles tests.)
- Attach proof obligations (tests) per claim
  Each claim declares what would satisfy it: evidence types, operations, consistency checks, scope constraints. (We give the LLM a limited path through the latent space.)
- Evaluate using available information
  The model can (a) bind to provided sources, and (b) check internal consistency, cross-source consistency, and operational satisfiability within the available data. If evidence is missing, it must output the missing requirements instead of "completing anyway." (We don't guess; we say decidable or undecidable, explain why in either case, and list what's missing when undecidable.)
- Compute closure
  Produce a verdict per claim, then an aggregate verdict for the output. This is "closure." (We sum the checklist of test outputs.)
- Emit the artifact
  The output includes: verdict(s), a rationale tied to the tests, and a trace/ledger pointer. (And we compose a natural-language explanation from those results.)
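The pipeline above can be sketched in code. This is a minimal illustration, not the protocol's actual implementation: the names (`Claim`, `evaluate`, `closure`) and the three-way verdict encoding are assumptions introduced here to make the decidability semantics concrete.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """One node in the claim graph (hypothetical structure, for illustration)."""
    text: str
    tests: list                               # proof obligations: evidence -> (bool|None, reason)
    deps: list = field(default_factory=list)  # subclaims this claim depends on

def evaluate(claim, evidence):
    """Per-claim verdict: 'pass', 'fail', or 'undecidable', with reasons.

    A test returning (None, reason) signals missing evidence, which makes
    the claim undecidable rather than guessed at.
    """
    sub = [evaluate(d, evidence) for d in claim.deps]
    if any(v == "undecidable" for v, _ in sub):
        return "undecidable", ["subclaim undecidable"]
    reasons = []
    for test in claim.tests:
        ok, why = test(evidence)
        if ok is None:                        # required evidence is missing
            return "undecidable", [f"missing: {why}"]
        if not ok:
            return "fail", [why]
        reasons.append(why)
    if any(v == "fail" for v, _ in sub):
        return "fail", ["subclaim failed"]
    return "pass", reasons

def closure(claims, evidence):
    """Aggregate per-claim verdicts into one verdict for the whole output."""
    verdicts = {c.text: evaluate(c, evidence) for c in claims}
    if any(v == "undecidable" for v, _ in verdicts.values()):
        agg = "undecidable"
    elif all(v == "pass" for v, _ in verdicts.values()):
        agg = "pass"
    else:
        agg = "fail"
    return agg, verdicts
```

With a claim whose test demands a dated source, `closure` returns `"pass"` when the evidence binds and `"undecidable"` (naming the missing requirement) when it does not; the verdicts dictionary is the rationale tied to tests from which a natural-language explanation would be composed.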
This is exactly how we put machine and human “on the same terms”: both must satisfy the same externally inspectable proof obligations, even though the machine’s internal heuristics differ.
Source date (UTC): 2025-12-31 19:11:41 UTC
Original post: https://x.com/i/articles/2006443159048565172