Facial recognition is a high-tech method of identifying or verifying the identity of an individual using their face.
There are 7.6 billion people in the world – and you will never find two identical human faces.
A facial recognition system uses biometrics to map facial features from a photograph or video. It identifies people by matching unique characteristics of their facial patterns to databases of images and can be used for everything from surveillance to marketing.
Use of this technology by governments, law enforcement and the corporate sector is on the rise globally – and Australia is no exception.
If you live in Australia, there’s a good chance at some point you’ve been watched, scanned, or analysed by facial recognition technology – possibly without even realising it.
The technology is being used to screen passengers on international flights, to verify drivers’ licence photos and at universities to monitor class attendance.
It’s also being deployed to observe customers at shopping centres and by businesses to monitor entrances, exits and restricted areas.
Last year, the Metropolitan Police in London announced the rollout of a network of cameras designed to aid police in automating the identification of criminals or missing persons.
And other major cities have piloted or implemented variants of facial recognition – with or without general public consent.
In China, many of the trains and bus systems leverage facial recognition to identify and authenticate passengers as they board or alight. Shopping centres and schools across the country are increasingly deploying similar technology.
And of course, facial recognition enables millions of people to unlock their mobile phones.
The facial recognition market is estimated to grow to US$7.7 billion in 2022 from US$4 billion in 2017.
But how accurate and reliable is this burgeoning technology? That’s the big question.
Airport passport verification systems vulnerable to hacking
Earlier this year, McAfee, a global leader in cyber security, set out to determine how challenging it might be for an individual to bypass a modern facial recognition system.
McAfee’s Advanced Threat Research team released its findings last month, with some surprising results.
The team was able to successfully bypass a facial recognition system through a process they have dubbed “model hacking”.
Sometimes known as “Adversarial Machine Learning” or AML, model hacking is the study and design of adversarial attacks targeting Artificial Intelligence (AI) models.
Basically, it fools the system into thinking it “sees” an entirely different person.
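The general idea can be sketched in a few lines of code. This is a minimal, illustrative example of the adversarial-perturbation technique (in the spirit of the fast gradient sign method) applied to a toy linear classifier – it is not McAfee's actual method, and the "model" and "image" here are made-up stand-ins for a real face-recognition network:

```python
import numpy as np

# Toy sketch of an adversarial ("model hacking") attack. A real attack
# perturbs a photo so a trained neural network misclassifies it; here a
# simple linear model stands in for the network, purely for illustration.

rng = np.random.default_rng(0)

w = rng.normal(size=100)                 # fixed "model" weights
x = 0.5 * np.sign(w) + 0.1 * rng.normal(size=100)  # "image" the model scores as person A

def classify(v):
    # Score above zero means "this is person A".
    return 1 if w @ v > 0 else 0

# For a linear model, the gradient of the score w.r.t. the input is just w.
# The attack nudges every input feature a bounded step against that gradient.
epsilon = 1.0
x_adv = x - epsilon * np.sign(w)

print(classify(x))                  # the original input is accepted as person A
print(classify(x_adv))              # the perturbed input is no longer recognised
print(np.max(np.abs(x_adv - x)))    # each feature changed by at most epsilon
```

The key point, which carries over to real systems, is that the per-pixel change is bounded and can be kept small enough that the altered photo still looks like the original person to a human observer.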
To conduct the research, McAfee designed and built a state-of-the-art facial recognition system closely reflecting the types of airport passport verification systems deployed in the real world.
Ultimately, the McAfee team was able to “model hack” the facial recognition system, causing it to misclassify an individual as a completely different person while maintaining the photorealistic appearance of the original individual.
The implications of this research are significant.
As is the case with many new technologies, it may become a target for future attacks.
So, identifying and mitigating security issues proactively is imperative, says McAfee.
Generating more awareness
The findings signify an opportunity for partnership between the threat research community, developers, and vendors to look deeply into facial recognition technology and generate awareness about the security flaws.
It is vital to build systems that are hardened to these types of attacks, McAfee says.
“The earliest computer facial recognition technology dates back to the 1960s, but until recently, has either been cost-ineffective, prone to false positives or false negatives, or too slow and inefficient for the intended purpose,” says Steve Povolny, Head of McAfee Advanced Threat Research.
However, advancements in technology and breakthroughs in Artificial Intelligence and Machine Learning have enabled several novel applications for facial recognition.
An expert system known as StyleGAN can now generate millions of realistic human images from scratch with varying degrees of photorealism.
“This impressive technology is equal parts a revolution in data science and in emerging technology that can compute faster and cheaper at a scale we’ve never seen before,” says Povolny.
Widespread privacy concerns
“It is enabling impressive innovations in data science and image generation or recognition and can be done in real time or near real time.
“Some of the most practical applications for this are in the field of facial recognition.
“Simply put, a computer system can determine whether two images or other media represent the same person or not.”
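The comparison Povolny describes is typically done on embeddings: a model maps each face image to a numeric vector, and two images are judged to show the same person when their vectors are sufficiently similar. The sketch below assumes hard-coded embeddings and an arbitrary similarity threshold purely for illustration; a real system would compute the embeddings with a trained neural network and calibrate the threshold empirically:

```python
import numpy as np

# Illustrative face-verification decision: compare two face embeddings
# by cosine similarity and accept the pair as a match above a threshold.
# The vectors and the 0.8 threshold are made up for this example.

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(emb_a, emb_b, threshold=0.8):
    return cosine_similarity(emb_a, emb_b) >= threshold

# Two embeddings of the same face differ only slightly...
alice_1 = np.array([0.9, 0.1, 0.4, 0.2])
alice_2 = np.array([0.88, 0.12, 0.41, 0.19])
# ...while a different face yields a distant embedding.
bob = np.array([0.1, 0.9, 0.2, 0.7])

print(same_person(alice_1, alice_2))  # True
print(same_person(alice_1, bob))      # False
```

Framed this way, the attack described above amounts to crafting an image whose embedding lands on the wrong side of that threshold.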
Despite widespread privacy concerns, Povolny says facial recognition has some obvious benefits.
He cites a recent article outlining how facial recognition technology was used in China to track down and reunite a family many years after an abduction.
“Despite this, it remains a highly polarizing issue with significant privacy concerns and may require significant further development to reduce some of the inherent flaws,” he says.
“As vulnerability researchers, we need to be able to look at how things work – both the intended method of operation as well as any oversights.”
Povolny says biometrics are an increasingly relied-upon technology to authenticate or verify individuals, and in many cases are effectively replacing passwords and other potentially unreliable authentication methods.
Cyber criminals may develop unique capabilities to bypass critical systems
“However, the reliance on automated systems and machine learning without considering the inherent security flaws present in the mysterious internal mechanics of face-recognition models could provide cyber criminals unique capabilities to bypass critical systems such as automated passport enforcement,” he says.
“As we reflected on this growing technology and the extremely critical decisions it enabled, we considered whether flaws in the underlying system could be leveraged to bypass the target facial recognition systems.
“More specifically, we wanted to know if we could create ‘adversarial images’ in a passport-style format that would be incorrectly classified as a targeted individual.”
Over a six-month period, McAfee ATR researcher and intern Jesse Chick studied state-of-the-art machine learning algorithms, reviewed and adapted techniques from industry papers, and worked closely with McAfee’s Advanced Analytics team in a bid to “defeat” facial recognition systems.
“To our knowledge, our approach to this research represents the first-of-its-kind application of model hacking and facial recognition,” says Povolny.
You can find the full outcome of the study here.