Face recognition technology allows for remote, covert, non-consensual identification. In other words, like other forms of biometric technology, it can easily be used for mass and targeted surveillance. Internet giants such as Google, Facebook and Microsoft have large centralised databases containing photographs and video recordings of our faces. Using machine learning, they can easily identify us if one of their users were to upload an image or begin a live broadcast. As their market shares grow, and as users continue to upload pictures of their faces (including those in response to campaigns such as #10yearchallenge), their artificial intelligence models for each one of us become increasingly accurate.
Once such recognition technology has been deployed at global scale, governments of all hues, democratic and authoritarian, will sooner or later want to use these capabilities for legal and illegal purposes. For example, a terrorist could be identified at an airport, a criminal could be matched with CCTV footage, an intimidated victim of trafficking could be identified without her cooperation, and a missing child who is too young to remember her origins could be united with her parents. Unfortunately, the very same technology can also be used to identify a labour union member on strike, a human rights activist at a demonstration, a sexual minority in a park, and a sex worker at a mall.
Even the employees of these Internet giants seem to be horrified by the potential illegal uses of facial recognition technology by governments. In October last year, 450 Amazon employees protested the licensing of its Rekognition software to US government and law enforcement agencies. Amazon ignored these protests and proceeded to close those deals. Just last week, Microsoft's Satya Nadella indicated that his company would follow suit. He said, “[We] made a principled decision that we're not going to withhold technology from institutions that we have elected in democracies to protect the freedoms we enjoy.”
Digital human rights activists, just like the nine-judge bench in the Puttaswamy judgment, believe that surveillance must be “necessary and proportionate”. A centralised global panopticon capable of identifying billions of humans across the planet fails this test. Therefore, from a human rights perspective, an absolute ban on the provisioning of these technologies to governments makes perfect sense. The Internet giants obviously disagree. Last month, Brad Smith, Microsoft’s chief legal officer, exemplified this position best when he said, “A sweeping ban on all government use clearly goes too far and risks being cruel in its humanitarian effect.”
Face recognition technologies can be life-altering for visually impaired persons. Imagine a visually impaired person attending a book fair or a concert. She would be able to use this technology to identify and speak to her favourite author, expert, commercial partner or friend. Therefore, the optimisation question before us is: How can we provide facial recognition technology to the visually impaired person without letting it be abused by the state? Do remember that all of us so-called “able-bodied” are only temporarily able. Unless we have the double fortune of dying quickly and early, we will spend a part of our lives disabled and will have to depend on similar electronic accessibility technologies. And even if our bodies don’t fail us, our minds will, and many of us will find such recognition technology critical as we age. Another clear example is the use of recognition technology to find a missing child.
How can Internet giants build face recognition technology with technical guardrails in place? Like Apple, they can decide to adopt a decentralised architecture. In other words, the best way for Internet giants to prevent abuse of their platforms is to make abuse technically impossible. The face recognition software can run locally on the user’s device, and the artificial intelligence models and the relevant data can be stored locally on the user’s device. When a visually impaired person is about to attend an event, the event organisers can provide the attendees’ data after securing informed consent. When a child goes missing, the parents could share the data for their child with search parties that have volunteered to scour the neighbouring localities and states where the child is likely to be found. This decentralised architecture makes it impossible for a government to use Internet giants as a global panopticon. It separates surveillance capitalism from the surveillance state.
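To make the decentralised design concrete, here is a minimal, purely illustrative sketch (not any vendor’s actual system; the class and method names are invented for this example). Face embeddings are stored only on the user’s device, can be enrolled only with explicit informed consent, and matching happens locally, so there is no central database for a government to requisition:

```python
import numpy as np

# Illustrative sketch of on-device recognition: the "database" is a plain
# dictionary living in local memory/storage, never uploaded to a server.

class OnDeviceRecognizer:
    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold   # minimum cosine similarity for a match
        self.known = {}              # name -> face embedding, local only

    def enroll(self, name: str, embedding: np.ndarray, consented: bool):
        """Store an embedding only if informed consent was given."""
        if not consented:
            raise PermissionError("embedding shared without informed consent")
        self.known[name] = embedding / np.linalg.norm(embedding)

    def identify(self, embedding: np.ndarray):
        """Match a probe embedding against the local, consented database.

        Returns the best-matching name, or None if nobody clears the
        similarity threshold (i.e. no consented record exists).
        """
        probe = embedding / np.linalg.norm(embedding)
        best_name, best_score = None, self.threshold
        for name, ref in self.known.items():
            score = float(np.dot(probe, ref))  # cosine similarity
            if score > best_score:
                best_name, best_score = name, score
        return best_name
```

The design choice matters more than the code: because enrollment requires a consent flag and the dictionary never leaves the device, an event organiser or a missing child’s parents can share embeddings for a specific, limited purpose, and there is simply no central index to abuse.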
Even with such technical guardrails, there may be unintended consequences. In China, there is the phenomenon of human flesh search engines, wherein online mobs hunt down and punish citizens whose actions have enraged them. Therefore, new technical guardrails and institutional checks and balances will need to be introduced as users use and abuse such platforms.
The writer is Executive Director, Centre for Internet and Society; email@example.com; the Centre for Internet and Society receives grants from Facebook and Google