Facial recognition technology was long the stuff of science fiction. Even ten years ago, the idea of a computer accurately identifying a human face could seem absurd. But with the rise of Facebook, the widespread use of smartphones, and the demands of national security, the idea is no longer foreign. In fact, it is now a normal sight for Facebook to automatically tag you in a photo. As a society, we are used to facial recognition being part of our lives, and as the technology has spread, it has moved beyond photo tagging to authentication, especially the authentication of financial transactions.
Facial recognition by computers dates back to the 1960s. Woodrow Bledsoe developed a system that identified people by measuring specific characteristics of the face, such as the width between the eyes (Tucker). The field progressed in the 1970s with work from Goldstein, Harmon, and Lesk, who used additional factors such as hair color to aid recognition (Face Recognition). Throughout the late 1980s and early 1990s, linear algebra was brought in to assist the technology: Kirby and Sirovich pioneered its use, and Turk and Pentland built on their work by calculating the residual error between a face and the average face. This allowed reliable real-time recognition for the first time, and the technology was first shown off at the 2001 Super Bowl, where attendees’ faces were compared to mugshots (Face Recognition). Biometrics as an industry has grown significantly since then. According to the Boston Globe, the market went from “$400 million in 2001 to $5 billion in 2011, and is projected to increase to $23 billion by 2019”. So beyond its technological importance, the market itself has become a significant money maker for those willing to work on the technology.
Currently, the market is fragmented with regard to how facial recognition is done. The three main approaches are image recognition, iris scanning, and dot projection. Image recognition is the method used by most Android phones that implement facial recognition. Your face is stored on the phone, and every time you unlock it, a secure image is taken and processed on the device; if the new image matches the stored face, the phone unlocks (Hildenbrand). The issue is that this can be easily fooled by a picture of the user, so while it is very fast, it is not very secure. Even Samsung will not use it as an authentication tool for its mobile payment solution, Samsung Pay (Lovejoy).
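In rough outline, this kind of image-based unlock boils down to comparing a feature vector ("embedding") extracted from the camera frame against the one stored at enrollment. The sketch below is purely illustrative: the `extract`-style model, the vectors, and the threshold are all invented stand-ins, not any phone vendor's actual pipeline.

```python
import math

# Illustrative sketch of image-based face unlock: the phone stores an
# embedding of the owner's face, and an unlock attempt succeeds when the
# candidate embedding is close enough to it. Values are made up.

MATCH_THRESHOLD = 0.6  # hypothetical tuning, not a real vendor's value

def embedding_distance(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_match(enrolled, candidate, threshold=MATCH_THRESHOLD):
    """Unlock only if the candidate face is close enough to the enrolled one."""
    return embedding_distance(enrolled, candidate) < threshold

enrolled = [0.12, 0.80, 0.33, 0.54]   # stored when face unlock was set up
candidate = [0.11, 0.79, 0.35, 0.52]  # extracted from the unlock attempt
print(is_match(enrolled, candidate))  # True: the vectors are very close
```

The weakness described above falls out of this design: a good photograph of the owner produces nearly the same embedding as the owner's real face, so a distance check alone cannot tell them apart.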
Iris scanning is mainly found on Samsung phones. On those phones, an infrared illuminator and a dedicated camera are used just for iris scanning. The illuminator shines infrared light on the eye so the camera can read details better; infrared is used because it will not cause the pupil to dilate, yet it still illuminates the eye and offers better contrast than visible light, giving the iris camera more detail to work with. The captured data is processed locally on the phone and compared to the stored data; if there is a match, the phone unlocks. The issue with Samsung’s implementation is that, because it prioritizes speed over accuracy, the scanner can potentially be fooled by a high-quality print of someone’s eye, though this is harder than unlocking a phone with a picture of its owner (Android Central).
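A common way iris matchers work in general (not necessarily Samsung's exact method) is to reduce the iris texture to a bit string, often called an iris code, and compare codes by the fraction of bits that differ. The code length and threshold below are illustrative assumptions only.

```python
# Hypothetical sketch of iris-code matching: two captures of the same eye
# differ in only a few bits, while different eyes disagree on roughly half.
# The 16-bit codes and the threshold are invented for illustration.

HAMMING_THRESHOLD = 0.32  # assumed cutoff, not a vendor's real value

def fractional_hamming(code_a, code_b):
    """Fraction of positions where two equal-length bit strings differ."""
    differing = sum(a != b for a, b in zip(code_a, code_b))
    return differing / len(code_a)

def iris_match(enrolled_code, captured_code):
    """Accept when the codes are similar enough to be the same eye."""
    return fractional_hamming(enrolled_code, captured_code) < HAMMING_THRESHOLD

enrolled = "1011001110100101"
same_eye = "1011001010100101"   # one bit flipped between captures
other_eye = "0100110001011010"  # disagrees in every position

print(iris_match(enrolled, same_eye))   # True
print(iris_match(enrolled, other_eye))  # False
```

This framing also explains the speed/accuracy trade-off mentioned above: a looser threshold matches faster across imperfect captures, but widens the window for a high-quality printed eye to slip through.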
Finally, dot projection came to prominence with the iPhone X and Face ID. Over 30,000 dots are projected onto the user’s face, creating a 3D model of it. That model is handed to a secure onboard chip and compared against the model created at setup; if there is a match, the user is let into the phone. To work in the dark, an infrared flash illuminates the face (Maity). Apple is confident enough in its Face ID technology that it allows it to be used as an authentication tool for Apple Pay. Because Face ID senses depth, it cannot be fooled by a picture of the user; the only way to trick it is with an accurate mask of a person’s face, which is even less plausible than a printed picture of someone’s eye.
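The reason depth sensing defeats photographs can be sketched in a few lines: a printed photo is flat, so every projected dot lands at roughly the same distance, while a real face shows real variation between the nose, cheeks, and chin. The depth values and threshold below are invented for illustration and are not Apple's actual processing.

```python
# Illustrative anti-spoofing check: compare the spread of measured dot
# depths. A flat photo has almost no depth variation; a real face does.
# All numbers here are made up for the sketch.

FLATNESS_THRESHOLD_MM = 5.0  # assumed cutoff, purely illustrative

def depth_range(depth_samples_mm):
    """Spread between the nearest and farthest measured dots."""
    return max(depth_samples_mm) - min(depth_samples_mm)

def looks_three_dimensional(depth_samples_mm):
    """Reject inputs whose depth variation is too small to be a real face."""
    return depth_range(depth_samples_mm) > FLATNESS_THRESHOLD_MM

real_face = [412.0, 398.5, 405.2, 388.9, 420.3]      # nose, cheeks, chin
printed_photo = [500.1, 500.4, 499.8, 500.2, 500.0]  # flat sheet of paper

print(looks_three_dimensional(real_face))      # True
print(looks_three_dimensional(printed_photo))  # False
```

A mask, by contrast, reproduces the depth profile as well as the texture, which is why it remains the one (highly impractical) attack the paragraph above mentions.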
Besides phone unlocking and mobile payment, facial recognition is also being used for payments outside the phone. KFC China has implemented a system known as “Smile-To-Pay”. A customer goes to a self-checkout kiosk and, once they are done ordering, looks at the 3D camera and smiles. The camera identifies the customer, who is then asked to enter their phone number; if both checks succeed, the customer is charged. This works even when the person is wearing heavy makeup or has a different hairstyle and hair color. The technology comes from Alibaba’s affiliate Ant Financial, and the kiosk lets a user pay without a wallet or even a phone (Russell). A startup called Uniqul aims to do something similar: it installs a camera system in a store, and customers can sign up to have the cameras recognize them, so all they have to do is have the camera confirm their identity and they can pay without a wallet or phone (Face recognition payments Experience the world wallet-Free).
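The Smile-To-Pay flow described above is essentially a two-step authorization: a face match followed by a phone-number confirmation. The sketch below captures that shape only; the account data, function names, and matching step are all hypothetical, not Ant Financial's actual system.

```python
# Hypothetical sketch of a two-step kiosk payment: recognize the face,
# then confirm the phone number on file before charging. Everything here
# (accounts, IDs, messages) is invented for illustration.

ACCOUNTS = {
    "face-id-1001": {"phone": "555-0142", "name": "A. Customer"},
}

def identify_face(camera_capture):
    """Stand-in for the kiosk's 3D-camera recognition step."""
    return camera_capture.get("matched_account_id")  # None if no match

def authorize_payment(camera_capture, entered_phone, amount):
    """Charge only when both the face match and phone check succeed."""
    account_id = identify_face(camera_capture)
    if account_id is None:
        return "face not recognized"
    if ACCOUNTS[account_id]["phone"] != entered_phone:
        return "phone number mismatch"
    return f"charged {amount} to {ACCOUNTS[account_id]['name']}"

capture = {"matched_account_id": "face-id-1001"}
print(authorize_payment(capture, "555-0142", "$9.50"))
```

The phone number acts as a cheap second factor: even if the camera misidentifies someone, the charge still requires knowing the number tied to the matched account.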