Pindrop Expands Deepfake Detection with New Tool
On Thursday, voice authentication vendor Pindrop expanded its deepfake detection capabilities with the preview release of Pindrop Pulse Inspect, a tool designed to detect AI-generated speech in digital audio files.
This new tool builds on Pindrop’s earlier launch of Pindrop Pulse at the start of the year. While Pindrop Pulse initially targeted call centers, Pulse Inspect broadens its reach, catering to media organizations, nonprofits, government agencies, and social networks.
Pindrop Pulse is already integrated with the company’s fraud protection and authentication platform. The new Pulse Inspect tool allows users to upload audio files to the Pindrop platform to determine if they contain synthetic speech, providing deepfake scores in the process.
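Pindrop has not published the Pulse Inspect API details in this article, but as a purely illustrative sketch, an upload-and-score workflow of this kind might look like the following, where the endpoint URL, authentication header, and response fields are assumptions for demonstration, not Pindrop’s actual interface.

```python
# Hypothetical sketch only: the endpoint, auth header, and response fields
# are illustrative assumptions, not Pindrop's published API.
import requests

API_URL = "https://api.example.com/v1/pulse-inspect/analyze"  # placeholder URL
API_KEY = "YOUR_API_KEY"  # placeholder credential


def check_audio_for_deepfake(path: str) -> dict:
    """Upload an audio file and return a (hypothetical) deepfake score response."""
    with open(path, "rb") as audio:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"file": audio},
            timeout=60,
        )
    response.raise_for_status()
    # Assumed response shape, e.g. {"deepfake_score": 0.97, "verdict": "synthetic"}
    return response.json()


if __name__ == "__main__":
    print(check_audio_for_deepfake("suspect_clip.wav"))
```

In practice an organization would follow Pindrop’s own documentation; the sketch simply illustrates the upload-a-file, get-a-score integration the company describes.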
The introduction of Pulse Inspect is timely, coinciding with heightened concerns over deepfakes as the U.S. general election in November approaches.
In recent months, Pindrop has tested its technology on high-profile cases. The company analyzed a deepfake audio clip of presidential candidate Kamala Harris, posted on X by Elon Musk, and discovered partial deepfakes in the audio. Pindrop also examined a deepfake of Elon Musk, released on July 24, identifying voice cloning technology from vendor ElevenLabs as the source. Additionally, Pindrop detected a fake robocall, generated using ElevenLabs’ technology, impersonating President Joe Biden before the January Democratic presidential primary. ElevenLabs has publicly stated its commitment to preventing the misuse of audio AI tools.
“The human ear can no longer reliably distinguish between real and synthetically generated audio,” said Rahul Sood, Pindrop’s Chief Product Officer, during a discussion on the risks deepfakes pose for the upcoming election. “It’s almost impossible to have a high level of confidence without assistance.”
Fighting AI with AI
Analysts emphasize the necessity of tools like Pulse Inspect in the age of generative AI.
“They’re fighting AI with AI,” said Lisa Martin, an analyst at the Futurum Group, highlighting the importance of Pindrop’s technology.
According to Pindrop, its detection technology is trained on audio from more than 350 deepfake generation tools, covering 20 million unique utterances and more than 40 languages.
“We know how powerful generative AI is—it can be used for good, but it can also be weaponized, as we’re seeing,” Martin noted. She added that with the increasing ease of creating deepfakes, the demand for detection tools like Pulse Inspect will only grow.
As deepfakes continue to proliferate, companies like Pindrop and competitors such as Resemble AI are racing to develop detection solutions. With Pulse Inspect, Pindrop is extending its technology beyond call centers.
Pindrop has also partnered with Respeecher, a voice cloning vendor that collaborates with Hollywood. “Respeecher is working with Pindrop to ensure their synthetic voice technology for Hollywood is not misused,” said Martin, stressing the importance of ethical development and use of AI voice cloning technology.
Pulse Inspect is positioned to assist media companies, social media networks, nonprofits, and government organizations in navigating the challenges of AI-generated audio.
The Challenge of Scaling Deepfake Detection
While Pindrop is well-equipped to detect deepfakes, scaling this technology could be costly and complex, according to Forrester Research analyst Mo Allibhai.
“Implementing this technology at scale is expensive, even from an integration standpoint,” said Allibhai. “We need to be selective in how we deploy it.”
Allibhai suggested that edge AI, such as Apple’s upcoming generative AI system for iPhones, could ease these challenges by reducing the reliance on cloud computing, making solutions like Pulse Inspect more viable in the long term.
Pindrop Pulse Inspect offers an API-driven batch-processing platform and user interface, designed to meet the evolving needs of organizations facing the growing threat of deepfake audio.