The Threat of Deepfakes to Facial Recognition Security

Janice_Lin
edited August 2023 in PC Tech

Remember the excitement when facial recognition technology was first introduced? Everyone who had a smartphone could unlock it with just a glance at the screen. Setting up facial recognition felt novel, too, a futuristic nod to how deeply we're integrating technology into our lives. However, with deepfakes becoming more common, it is easier than ever to swap one person's likeness into an image or video of someone else, threatening our security and privacy.

What is facial recognition?  

Facial recognition is a system that identifies a human face in an image or video and matches it to a person's identity. It detects the face's unique features and compares them against the information stored in a database.

While facial recognition has many applications, the best known is Face ID, Apple's authentication system for unlocking iPhones and iPads. The technology analyzes your features and automatically verifies your identity, allowing you to authorize purchases and access personal information. Other practical applications include fraud detection, cybersecurity, airport and border control, banking, and healthcare.

How does facial recognition work? 

Facial recognition works by capturing the biometric facial pattern of the person to be identified. With the help of artificial intelligence and machine learning, the software analyzes the incoming image and matches it against the stored record of the person requesting access. At its core, the technology is an algorithm built to compare and contrast two images. For example, the algorithm can use these distinguishing features of the human face to identify a person:

  • Distance between the eyes 
  • Shape of the jawline 
  • Eyebrow and cheekbone structure 
  • Width of the nose
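To make the comparison step concrete, here is a minimal, illustrative sketch. The feature vectors and threshold below are made-up numbers standing in for the measurements a real system would extract (eye distance, jawline shape, nose width, and so on); production systems use far richer representations produced by trained neural networks.

```python
import math

# Hypothetical, pre-extracted feature vectors: each number stands in for a
# normalized facial measurement (eye distance, jawline shape, nose width...).
enrolled_face = [0.42, 0.31, 0.77, 0.25]   # stored in the database at setup
incoming_face = [0.41, 0.33, 0.75, 0.26]   # extracted from the camera image

def euclidean_distance(a, b):
    """Smaller distance = more similar faces."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

THRESHOLD = 0.1  # tuned per system; below this, the faces count as a match

distance = euclidean_distance(enrolled_face, incoming_face)
is_match = distance < THRESHOLD
print(f"distance={distance:.3f}, match={is_match}")
```

The key design point is the threshold: set it too loose and look-alikes (or deepfakes) slip through; set it too tight and the legitimate owner gets locked out.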

What are deepfakes? 

Deepfakes are AI-generated videos or images that replace an existing person with someone else's physical likeness. The technology uses deep learning to create fake images or footage of events that never happened. Deep learning is a subset of machine learning in which artificial neural networks have additional layers that allow them to discover structure in the data they process. Both are forms of artificial intelligence, but in short: machine learning is AI that can adapt automatically with minimal human intervention, while deep learning is AI that uses artificial neural networks to mimic the learning process of the human brain.

Deepfakes can create realistic footage, images, or audio files that are entirely fake, hence the name. The term was coined in 2017 by a Redditor named "deepfakes," who posted adult video clips on Reddit with female celebrities' faces swapped onto the performers. Deepfakes can also threaten national security and international alliances and damage the reputations of individuals or organizations.

The current state of facial recognition security 

Facial recognition security isn't new, and the technology is also used in marketing: Sephora's Virtual Artist app, for example, lets customers virtually try on colors and products before buying them in-store. However, like any new technology, facial recognition security still poses some privacy and security issues:

  • Lack of consent 
  • Unencrypted faces 
  • Inaccuracy 
  • Lack of transparency

How does deepfake technology threaten facial recognition technology? 

Since deepfake technology allows virtually anyone to create images and videos of another person, attacks on cybersecurity and privacy are a growing concern. Facial liveness verification, a feature of facial recognition systems that uses computer vision to confirm the presence of a live user, can't always detect digitally altered media. Research involving the Penn State College of Information Sciences and Technology found that facial recognition technology is highly vulnerable to deepfake-based cyberattacks; deepfakes are already advanced enough to fool commercial facial recognition systems. In another paper published on the preprint server arXiv.org, researchers from Sungkyunkwan University in South Korea found that APIs (application programming interfaces) from tech companies like Amazon and Microsoft can be fooled with deepfake images and videos.
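To see why liveness checks struggle, consider one simple liveness heuristic, sketched below with made-up numbers: a live face shows small, natural frame-to-frame variation, while a replayed still photo does not. The catch, as the research above suggests, is that a deepfake video also moves naturally, so a check like this passes it.

```python
# Simplified, illustrative liveness heuristic. The frame "feature vectors"
# are hypothetical stand-ins for measurements a real camera pipeline extracts.

def frame_variation(frames):
    """Mean absolute change between consecutive frame feature vectors."""
    diffs = []
    for prev, curr in zip(frames, frames[1:]):
        diffs.append(sum(abs(p - c) for p, c in zip(prev, curr)) / len(prev))
    return sum(diffs) / len(diffs)

MIN_VARIATION = 0.005  # below this, the input looks like a static replay

live_frames = [[0.40, 0.31], [0.41, 0.30], [0.39, 0.32]]    # natural jitter
static_frames = [[0.40, 0.31], [0.40, 0.31], [0.40, 0.31]]  # replayed photo

print(frame_variation(live_frames) > MIN_VARIATION)    # live face passes
print(frame_variation(static_frames) > MIN_VARIATION)  # still photo is flagged
```

A deepfake video would produce exactly the kind of frame-to-frame jitter this check rewards, which is why motion-based liveness alone is not enough against AI-generated media.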

New techniques to detect and counter deepfakes 

Realizing the high stakes and threats to security, tech companies have spearheaded initiatives and projects to fight the spread of deepfakes. Facebook, along with Microsoft, Amazon, and academics from several universities, launched the Deepfake Detection Challenge, with the goal of spurring new methods for detecting and preventing AI-manipulated media. The fight against deepfakes has attracted more than tech companies and academics: the challenge is facilitated and overseen by the Partnership on AI's Steering Committee on AI and Media Integrity, as well as the human rights nonprofit Witness. Microsoft has also launched its own deepfake-combating tool, Video Authenticator, which analyzes a still photo or video and provides a confidence score for whether the media has been artificially manipulated.
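The sketch below illustrates, in hugely simplified form, the kind of scoring a tool like Video Authenticator reports. The per-frame scores are invented numbers standing in for the output of a trained detector network; the aggregation rule (flag when suspicious frames cluster) is one plausible design, not the actual product's method.

```python
# Made-up per-frame manipulation scores from a hypothetical trained detector
# (1.0 = certainly manipulated). A real tool computes these with a neural
# network; here they are illustrative numbers only.
frame_scores = [0.12, 0.08, 0.91, 0.88, 0.10]

def video_confidence(scores, window=2):
    """Report high confidence only when `window` consecutive frames all look
    fake, so a single noisy frame doesn't trigger a false alarm."""
    return max(min(scores[i:i + window])
               for i in range(len(scores) - window + 1))

score = video_confidence(frame_scores)
print(f"manipulation confidence: {score:.2f}")
```

Requiring a run of consecutive suspicious frames trades a little sensitivity for robustness: manipulated regions in real deepfakes tend to persist across frames, while detector noise does not.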

The future of facial recognition security 

While the tools and methods for creating deepfakes will continue to grow and evolve, so will technology designed specifically to combat them. Detection alone won't be enough, though; strategies for certifying authentic media must improve alongside it. Tom Burt, Microsoft's CVP of Customer Security and Trust, said in a 2020 blog post:

“We expect that methods for generating synthetic media will continue to grow in sophistication. As all AI detection methods have rates of failure, we have to understand and be ready to respond to deepfakes that slip through detection methods. Thus, in the longer term, we must seek stronger methods for maintaining and certifying the authenticity of news articles and other media.”  

There are still many challenges ahead for facial recognition security, and the use and development of this technology will only continue to grow. Deepfakes are a persistent problem, but facial recognition still has many positive uses. As mentioned above, it can power smart retail and personalized customer experiences, letting customers interact with products and brands without ever setting foot in a store. With the right technology and intent, facial recognition can also make our lives easier and more convenient, down to simple, everyday tasks like unlocking our phones.

Janice is a contributing writer for Acer with a background in marketing and copywriting. She's passionate about literature, tech, blockchain, and creative trends. She has worked with several clients to grow and position their brands internationally.
