

Deepfake Voice Attacks Pose Challenge for Biometric Software Companies and Researchers

Deepfake voice attacks are posing a real-world challenge for biometric software companies and public researchers as they strive to detect these deceptive impersonations. Recently, there have been instances of deepfake voice attacks, such as robocalls impersonating President Joe Biden, which have raised concerns about the efficacy of current detection methods.

Against this backdrop, software maker ID R&D, a unit of Mitek, has entered the market with a video demonstrating its voice biometrics liveness code's ability to distinguish real recordings from digital impersonations. The move follows a previous high-profile voice cloning scandal involving pop star Taylor Swift.

However, the recent electoral fraud attempt involving a deepfake audio of Biden has presented a unique challenge. While some detector makers like ElevenLabs and Clarity have weighed in on the situation, there is still uncertainty surrounding the ability to accurately detect deepfake voices.

ElevenLabs, which focuses on creating synthetic voices, recently achieved unicorn status after raising an $80 million Series B funding round. Clarity, for its part, assessed the audio as 80 percent likely to be a deepfake. The lack of consensus among industry players underscores the complexity of the issue.

Amid the uncertainty, a team of students and alumni from the University of California, Berkeley claims to have developed a detection method that operates with minimal errors. Their approach uses deep-learning models to process raw audio and extract multi-dimensional representations, known as embeddings, which are then used to discern real recordings from fakes.
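The Berkeley team's actual models are not public in this article, but the general embedding-plus-classifier pattern can be sketched in a toy form. The snippet below is purely illustrative: the `embed` function here is a hand-built spectral summary standing in for a learned deep model, and the nearest-centroid `is_real` check stands in for whatever classifier the researchers use. None of these names or choices come from the research itself.

```python
import numpy as np

def embed(audio: np.ndarray, dim: int = 8) -> np.ndarray:
    """Toy embedding: pool the log-magnitude spectrum into `dim` bands.
    A real detector would replace this with a learned deep model that
    maps raw audio to a multi-dimensional embedding."""
    spectrum = np.abs(np.fft.rfft(audio))
    bands = np.array_split(spectrum, dim)
    feats = np.log1p(np.array([band.mean() for band in bands]))
    norm = np.linalg.norm(feats)
    return feats / norm if norm else feats

def is_real(audio: np.ndarray,
            real_centroid: np.ndarray,
            fake_centroid: np.ndarray) -> bool:
    """Classify a clip as real if its embedding lies closer to the
    centroid of known-real embeddings than to the known-fake one."""
    e = embed(audio)
    return (np.linalg.norm(e - real_centroid)
            < np.linalg.norm(e - fake_centroid))
```

In use, one would compute `embed` over labeled real and fake clips, average each group into a centroid, and then score new audio against both. The nearest-centroid rule is only the simplest possible back end; any classifier trained on the embeddings would slot in the same way.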

While this research shows promise, the method has so far been tested only in a lab setting and will require further validation in real-world scenarios. As the debate around deepfake voice attacks continues, the industry is grappling with the urgent need for more effective detection methods to combat the growing threat of deceptive audio impersonations.
