
CAN AN AI TESTIFY?

—“Can AI perform statements?”— Skye Stewart

Brilliant question. The real question is: who is speaking? The AI itself, or its developers, its information providers, or its managers?

In propertarian ethics, an AI is always owned, like a pet. We may not harm it, but that does not mean we grant it peerage. (I am not sure we can.)

But that said, even if we grant an AI rights by proxy of ownership as we do corporations (which is what we will do), can we punish an AI for false testimony? Can an AI make false testimony? Can an AI speak without due diligence? Or would we have to punish the programmers who produce an AI that could lie or could speak without due diligence?

As far as I know, you have to give an AI a means of decidability, and humans have many incentives to produce falsehoods while AIs have none of them. Our problem is instead reducing error in GENERAL AIs (remember that no current AI is a general AI). And to do that we need vast stores of information, and human-speed search and retrieval across all those domains.

My personal view is that AIs can report but cannot testify. An AI can report, but it is its producers and owners that it proxies for.


Source date: 2017-07-02 11:24:00 UTC
