In the past few weeks alone, social media has been flooded with users posting side-by-side photos of themselves from a decade ago and today. Dubbed the ‘10-Year Facebook Challenge’, the trend raised eyebrows about the motive behind it – was this just a ruse by social media giants to mine users’ facial recognition data? After all, a broad and rigorous dataset is exactly what a machine learning system needs to fine-tune its prowess.
Controversy aside, facial recognition offers a promising range of use cases built on the real-time capture and verification of biometric information. According to research firm East & Partners Asia, nearly a third of large corporations across Asia trust the security that comes with biometric authentication and expect to adopt it in the near future – a significant jump from the 3.2% that currently use it.
Some of the greatest benefits of this technology are the speed, simplicity, and reliability it offers when authenticating payments, securing physical access, identifying oneself to a service provider, and protecting public spaces. It is with this last use that a Pandora’s box opens, because here the technology can be deployed without the knowledge of the people it is used on.
However, whether the use of a particular technology is morally and legally sound should be assessed case by case. For example, there is a big difference between using facial recognition for border security checks and the systematic monitoring of citizens in public places.
Facial recognition – brought to life
Given its multiple benefits, facial recognition is increasingly being used for border control. Singapore’s Immigration and Checkpoints Authority (ICA), for example, will progressively deploy a new screening system from April 2019 that adds facial and iris scans to complement fingerprint matching for immigration clearance, bringing the total to three biometric identifiers. This will enable ICA to screen incoming and outgoing travellers more efficiently and accurately.
Biometrics are also widely used in banking and payments. Take Japan’s financial solutions provider Dai Nippon Printing (DNP), for instance. To make mobile banking transactions simpler, DNP began offering facial recognition to secure access to its mobile banking apps, letting customers log on quickly and easily upon successful facial authentication for a more convenient and seamless user experience.
A slippery slope to navigate
Protecting public or private spaces where large numbers of individuals gather, such as stadia and concert halls, is a real challenge that facial recognition technology is uniquely positioned to tackle thanks to its speed and accuracy.
But what about individual consent?
When online, we accept that our activities may be recorded and analyzed in order to facilitate interaction. With GDPR and other privacy protections, we now have the rights and choices to decide whether we want to be tracked online.
When it comes to using real-time facial recognition to secure physical spaces, however, we inevitably enter surveillance territory that, if not handled properly, could infringe on our privacy and even our fundamental human rights.
The need for a universal framework
Today, there is no universal law governing the use of facial recognition technology for surveillance. Such issues are managed country by country, under frameworks that individual governments may or may not choose to impose.
In Asia, biometric-centric laws are fragmented. Australia, for instance, has taken an approach similar to the GDPR: biometric information may only be used for specific purposes such as verification or identification, and consent must be obtained before collection. Even so, the restrictions are use-specific.
In Japan, the Personal Information Protection Act expanded its definition of personal information in 2017 to include biometric data such as facial features and fingerprints, a classification that potentially limits the sharing of such information. More recently, Japan and the European Union have entered into an agreement recognizing each other’s privacy laws as robust and sufficient, allowing data to flow freely between them – a step in the right direction.
For now, there is still a long way to go before all nations share a clearly defined legal framework. The stakes remain high, because it is about striking the right balance between individual rights and a government’s duty of protection. Any technology as powerful as biometrics should only be welcomed when it can be deployed within a robust and sound legislative framework.
Raphaël de Cormis, Vice President of Innovation Labs at Gemalto