For some years now, biometric measures have been touted as “The Next Big Thing” in cyber-security, with the potential to do away with the hassle of remembering, storing, and periodically changing passwords or access codes.
Though validation procedures using unique physical and behavioral traits have been adopted as part of two- or multi-factor authentication processes, these technologies have yet to fully displace the more traditional methods of verification and access control.
Much of this is due to the immaturity of biometric hardware and software, which are still in a state of development and evolution. But rapid progress is being made – particularly in the field of facial recognition technology.
The Basics of Facial Recognition Technology
Facial recognition technology employs software built around a mathematical recognition algorithm capable of identifying a person, or verifying their identity, from an image captured by a digital or video camera. Identification rests on measuring several characteristic markers of the human face that are unique to each individual, and comparing them against a database of facial images assembled from previous image captures or from photographs pulled in from external sources.
The accuracy of facial recognition depends on the capabilities of the technology (both hardware and software) to pick out a human face as distinct from any “noise” in its background (including things like spectacles, beards, hats, or clothing), and then to isolate and measure any combination of the unique traits typically used to distinguish one human face from another. These include:
- The distance between the eyes
- The depth of the eye sockets
- The width of the nose
- The shape of the cheekbones
- The length of the jawline
In facial recognition parlance, these traits are referred to as “nodal points”; the system combines them to create a numerical code, or “faceprint”, representing each individual in the database.
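As a toy illustration (not any production system’s actual algorithm), the idea of combining nodal-point measurements into a numerical faceprint, and comparing two faceprints by distance, can be sketched as follows. All names and measurement values here are hypothetical:

```python
import math

def make_faceprint(eye_distance, socket_depth, nose_width,
                   cheekbone_width, jawline_length):
    """Combine nodal-point measurements (millimeters) into a 'faceprint' vector."""
    return [eye_distance, socket_depth, nose_width,
            cheekbone_width, jawline_length]

def faceprint_distance(a, b):
    """Euclidean distance between two faceprints; smaller means more similar."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Two captures of the same (hypothetical) face, plus one different face.
alice = make_faceprint(63.0, 27.5, 34.2, 41.8, 118.0)
alice_again = make_faceprint(63.4, 27.1, 34.5, 41.5, 117.6)
bob = make_faceprint(58.2, 30.1, 37.9, 45.0, 125.3)

# Repeat captures of one face should be closer than two different faces.
print(faceprint_distance(alice, alice_again) < faceprint_distance(alice, bob))
```

Real systems use far richer feature vectors and learned similarity measures, but the principle — reduce a face to numbers, then compare numbers — is the same.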
Facial Recognition – A Checkered History
Attempts at using computers to recognize human faces began in the mid-1960s. Early techniques relied solely on two-dimensional image capture, and accurate recognition could only be assured if the subject being photographed was essentially looking straight at the camera – and wearing a similar expression and in the same lighting conditions as the comparison image in the database.
As lighting conditions can change as often as people’s moods, this obviously didn’t make for a very reliable system.
Advances in digital imaging have now put the equivalent of a 20th century photographic studio in the palm of your hand (your smartphone camera and its imaging applications). Improved corrections for lighting and focus, together with the capacity to store vast amounts of image data, have produced a leap forward for facial recognition technology in recent years. But there were still bugs to be ironed out.
The facial recognition cameras installed in the Ybor City nightlife district of Florida in 2001 by the Tampa Police Department, in an attempt to reduce crime in the area, were removed in 2003 after visitors and residents took to wearing masks, turning their faces away, or making rude gestures at the cameras.
A facial recognition scheme run at Logan Airport in Boston achieved only a 61.4 percent accuracy rate during its three-month trial period, prompting a rethink of the airport’s security strategy.
The current phase in facial recognition technology is pursuing a three-dimensional approach to imaging, which is intended to compensate for the many variables that contributed to the unreliability of flat photography.
3D Facial Recognition: How It Works
Three-dimensional facial recognition relies on the unique topography of the human face to distinguish between individuals: the depth of the nose ridge and eye sockets, the grooves and projections of the cheekbones, and so on. Once a reliable 3D facial model has been created, facial measurements and feature depths are unaffected by lighting, so in theory the technology can be used in darkness. Faces may also be recognized at different viewing angles – potentially up to full profile (a 90-degree turn).
The process for building up that 3D model has several stages:
- An image of the face must first be captured, by digitally scanning a camera image or existing two-dimensional photograph.
- Once a face has been detected, the system determines the position, size, and attitude (pose or posture) of the person’s head. This acts as the baseline for determining how the face will look when viewed from different angles.
- The topography of the face is then measured on a micro- or sub-millimeter scale, giving a detailed template for the individual.
- The template is then translated into a numerical code (3D faceprint).
- For identification purposes, an image is compared against all faces in the system database, and given a score based on each potential match.
- For identity verification, a person’s facial image is compared to an existing image of them in the relevant database (e.g., office network, or vehicle licensing).
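The last two steps — scoring against a whole database versus checking against a single enrolled record — can be sketched in code. This is an illustrative one-to-many (identification) and one-to-one (verification) comparison over hypothetical faceprint vectors, not any vendor’s actual matching engine; the threshold value is invented for the example:

```python
import math

def distance(a, b):
    """Euclidean distance between two faceprint vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, database):
    """1:N identification: score the probe against every enrolled faceprint
    and return candidates ranked best-first (smallest distance)."""
    scores = {name: distance(probe, fp) for name, fp in database.items()}
    return sorted(scores.items(), key=lambda item: item[1])

def verify(probe, enrolled, threshold=2.0):
    """1:1 verification: accept only if the probe is close enough to the
    single enrolled template. The threshold here is purely illustrative."""
    return distance(probe, enrolled) <= threshold

database = {
    "alice": [63.0, 27.5, 34.2, 41.8, 118.0],
    "bob":   [58.2, 30.1, 37.9, 45.0, 125.3],
}
probe = [63.4, 27.1, 34.5, 41.5, 117.6]  # a fresh capture of "alice"

ranked = identify(probe, database)
print(ranked[0][0])                       # best-scoring candidate
print(verify(probe, database["alice"]))   # identity check against one record
```

The design difference matters in practice: identification grows more expensive (and more error-prone) as the database grows, while verification stays a single fixed-cost comparison.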
With databases entirely composed of 3D images, the matching process may occur without any alterations having to be made to the source image. But with databases consisting of two-dimensional photographs or a mix of flat and 3D images, the 3D source image has to be amended in some way to convert it to 2D.
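One simple way to picture that 3D-to-2D conversion is projection: rotate the 3D head model to the desired pose, then drop the depth coordinate. The sketch below is a minimal orthographic projection of toy landmark points, assuming arbitrary units and invented coordinates — real systems use full camera models:

```python
import math

def project_to_2d(points_3d, yaw_degrees=0.0):
    """Orthographically project 3D landmarks onto the image plane after
    rotating the head model about its vertical (y) axis by the given yaw."""
    theta = math.radians(yaw_degrees)
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    projected = []
    for x, y, z in points_3d:
        x_rot = x * cos_t + z * sin_t  # rotate about the y axis
        projected.append((x_rot, y))   # then discard depth
    return projected

# Toy 3D landmarks as (x, y, z), with z as depth: two eyes and a nose tip.
landmarks = [(-31.5, 0.0, 12.0), (31.5, 0.0, 12.0), (0.0, -30.0, 35.0)]

frontal = project_to_2d(landmarks, yaw_degrees=0.0)   # head-on 2D view
quarter = project_to_2d(landmarks, yaw_degrees=45.0)  # three-quarter view
print(frontal)
```

The same 3D model can thus be rendered at whatever pose best matches the flat photograph it is being compared against.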
Practical Applications of Facial Recognition Technology
If you’re an adult citizen of the USA, the odds that your face appears in the National Facial Recognition Database used by law enforcement are around 50%. With CCTV cameras now an accepted and familiar feature on many city streets across the world, facial recognition technology is an increasingly common feature of our daily lives.
A 3D facial recognition system developed jointly by the University of the West of England (UWE Bristol) and the commercial firm Customer Clever is to receive UK government backing to develop applications for high-security commercial deployments in the UK and worldwide.
And in China, facial recognition technology has already been put in place for a variety of applications – ranging from “public rest room monitor” cams which ration out toilet paper (you’re limited to about half a meter of tissue, every nine minutes), to security cameras in the women’s dormitories of Beijing Shifan University, and improving security and customer service at branches of the fast food giant KFC.
As the technology continues to improve, and issues of public acceptance and data privacy are more fully addressed, we can expect facial recognition to become even more widespread.