Deepfakes can easily deceive many Facial Liveness Verification authentication systems

A team of researchers from Pennsylvania State University (USA) and Zhejiang and Shandong Universities (China) studied how susceptible some of the world’s largest face-based authentication systems are to deepfakes. The results showed that most of these systems are vulnerable to newly developed forms of deepfakes.

During the study, deepfake-based attacks were carried out, using a purpose-built platform, against Facial Liveness Verification (FLV) systems, which are usually supplied by large vendors and sold as a service to downstream customers such as airlines and insurance companies.

Facial Liveness Verification is designed to detect techniques such as image-based attacks, the use of masks and pre-recorded video, so-called “master faces,” and other forms of visual identity spoofing.

The study concludes that the deepfake detection modules in such systems are limited in number and may be tuned to outdated generation techniques, or may be too specific to particular deepfake architectures.

“Even if the processed videos seem unreal to people, they can still bypass the current deepfake detection mechanisms with a very high probability of success,” the researchers noted.

Another conclusion was that general facial verification systems, in their current configuration, are biased in favor of white men: deepfakes of women and people of color were more effective at bypassing verification, exposing clients in these groups to a greater risk of deepfake-based attacks.

The authors offer a number of recommendations to improve the current state of FLV: abandoning single-image authentication (“image-based FLV”), in which authentication relies on a single frame from the client’s camera; updating deepfake detection systems more flexibly and comprehensively across both the image and voice domains; requiring voice authentication in user video to be synchronized with lip movements (a check that, as a rule, is absent today); and requiring users to perform gestures and movements that are currently difficult for deepfake systems to reproduce, such as turning the face to profile or partially obscuring the face.
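The last recommendation amounts to a challenge-response protocol: the server issues a random sequence of gestures and accepts the session only if the client performs them in order, so a pre-recorded or pre-rendered deepfake cannot anticipate the challenge. A minimal sketch of that idea is below; the gesture names, challenge length, and verification logic are illustrative assumptions, not part of the study.

```python
import random

# Hypothetical gesture vocabulary; real systems would map these to
# computer-vision checks on the live video stream.
CHALLENGES = [
    "turn_profile_left",
    "turn_profile_right",
    "cover_half_face",
    "blink_twice",
]

def issue_challenge(n=3):
    """Pick n distinct gestures at random.

    Randomizing per session is what defeats replayed or
    pre-rendered deepfake footage.
    """
    return random.sample(CHALLENGES, n)

def verify_response(expected, observed):
    """Accept only if every requested gesture was observed, in order."""
    return list(expected) == list(observed)
```

In a real deployment, `observed` would come from a gesture classifier running on the camera feed, and the challenge would carry a short expiry so the response cannot be synthesized offline.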
