FORESEEABILITY FRONTIER AND LIABILITY OF AI
Working on foreseeability and liability in the age of AGI and SI. Human prediction (forecasting) already has a spectrum of limits that the law addresses, and AGI and SI will push that foreseeability frontier further out because of their greater predictive ability. The liability frontier for humans using AI, and for AGI and SI themselves, therefore diverges in ways our laws have not yet embodied.
For example, we can hold people accountable for the AIs they create and for the actions of the AIs they enable. But unless an AI can explain how it satisfied the applicable standard of care in the context in question, in terms a human can understand and agree with, does liability remain with the human creator or enabler, or does it shift to the machine itself?


Source date (UTC): 2025-09-12 18:13:06 UTC

Original post: https://twitter.com/i/web/status/1966565748270309550
