Why AI-generated audio is so hard to detect

Fake and misleading content created by artificial intelligence has rapidly gone from a theoretical threat to a startling reality. The technology to produce a convincing audio recording of a person speaking is constantly getting better and has become widely available with a simple online search.

The mere existence of the technology, and the difficulty of detecting content created with it, is already causing chaos.

In January, a robocall impersonating President Joe Biden targeted Democratic voters in New Hampshire. Roger Stone recently used an AI-detection program in an attempt to distance himself from a recording that appeared to feature his voice. And a union representing a high school principal suggested that AI might be to blame for a recording in which he appeared to make racist comments. The district is still investigating.

While dozens of tools and products have popped up to try to detect AI-generated audio, those programs are inherently limited, experts told NBC News, and won't provide a surefire way for anyone to quickly and reliably determine whether the audio they hear is from a real person.

Deepfake detection systems work very differently from how human beings listen. They analyze audio samples for artifacts like missing frequencies that are often left behind when audio is programmatically generated. Often, they focus on particular aspects of speech, like how the speaker seems to breathe or how much the pitch of their voice goes up and down.
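To make that concrete, here is a minimal sketch of the kinds of signals such systems inspect. It is an illustration, not any real detector's method: it assumes NumPy and SciPy are available, and the function names, the `audio` waveform, the `sr` sample rate, and every threshold are hypothetical choices made for the example.

```python
# Toy features of the kind deepfake-audio detectors inspect.
# Illustrative sketch only; all names and thresholds are hypothetical.
import numpy as np
from scipy import signal


def high_band_energy_share(audio, sr, cutoff_hz=8000):
    """Fraction of spectral energy above cutoff_hz.

    Some synthesis pipelines attenuate the top of the spectrum,
    so an unusually empty high band can be one weak clue that
    audio was generated rather than recorded.
    """
    freqs, psd = signal.welch(audio, fs=sr, nperseg=2048)
    total = psd.sum()
    return float(psd[freqs >= cutoff_hz].sum() / total) if total > 0 else 0.0


def pitch_variability(audio, sr, frame_len=2048, hop=512):
    """Standard deviation of a crude per-frame pitch estimate.

    Human pitch naturally wanders; an unnaturally flat contour is
    another artifact detectors sometimes key on. Pitch here comes
    from a simple autocorrelation peak in the 60-400 Hz voice range.
    """
    pitches = []
    lo, hi = sr // 400, sr // 60  # lag bounds for ~400 Hz down to ~60 Hz
    for start in range(0, len(audio) - frame_len, hop):
        frame = audio[start:start + frame_len]
        corr = np.correlate(frame, frame, mode="full")[frame_len - 1:]
        if corr[0] <= 0 or hi >= len(corr):
            continue  # skip silent frames or frames too short for the range
        lag = lo + int(np.argmax(corr[lo:hi]))
        pitches.append(sr / lag)
    return float(np.std(pitches)) if pitches else 0.0


if __name__ == "__main__":
    # A pure synthetic tone: expect near-zero high-band energy and
    # near-zero pitch spread, since its pitch never moves.
    sr = 22050
    t = np.linspace(0, 1.0, sr, endpoint=False)
    tone = np.sin(2 * np.pi * 120 * t)
    print("high-band energy share:", high_band_energy_share(tone, sr))
    print("pitch variability (Hz):", pitch_variability(tone, sr))
```

Real detectors learn far richer representations from large training sets; these two hand-built features only gesture at the categories of artifacts, such as spectral gaps and unnaturally steady pitch, that the experts describe.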

https://www.nbcnews.com/tech/misinformation/ai-generated-audio-detect-tool-model-rcna136634

