AI-Generated Voice Evidence Poses Dangers in Court


In the age of AI, listener authentication of voice evidence should be permissive, not mandatory.

For example, the length and quality of a recording (in terms of audio compression rate or background noise) can affect a listener's ability to discern the identity and naturalness of a voice. Under this approach, each of the enumerated methods of authentication, including calling a witness to the stand who testifies that they recognize the speaker's voice, would become a permissive rule rather than a mandatory one. Judges would still apply the low sufficiency standard, so they would not be substituting their judgment for that of the jury, raising the burden on parties seeking to introduce evidence, or opening the floodgates to a morass of evidentiary disputes.
