In the aftermath of the George Floyd killing and protests, IBM, Amazon and Microsoft have announced that they will no longer supply law enforcement agencies with facial recognition software. IBM CEO Arvind Krishna wrote that “We believe now is the time to begin a national dialogue on whether and how facial-recognition technology should be employed by domestic law enforcement agencies.”
The truth is, this decision should have come long ago. Study after study has shown that the facial recognition systems currently on the market don’t work as intended. When the software gets it wrong, whether through a false positive that flags an incorrect match or a false negative that misses a true one, the result is either a missed opportunity to catch a criminal or, perhaps even more troubling, an accusation against the wrong person.
The NIST Study
In 2019 the National Institute of Standards and Technology evaluated 189 software algorithms from 99 different developers. They tested both one-to-one matches, where a person was matched to a specific picture, and one-to-many matches, where a person was compared to a database.
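The difference between the two matching modes can be sketched with toy face embeddings and cosine similarity. Everything here is illustrative: the vectors, names, and threshold are made up, and real systems use embeddings from deep networks rather than hand-written numbers.

```python
import numpy as np

def cosine_sim(a, b):
    # Similarity between two embedding vectors, in [-1, 1]
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical face embeddings (real systems derive these from images)
probe = np.array([0.9, 0.1, 0.3])
enrolled = np.array([0.88, 0.12, 0.28])  # the one specific claimed identity
gallery = {
    "alice": np.array([0.88, 0.12, 0.28]),
    "bob":   np.array([0.10, 0.90, 0.20]),
}

THRESHOLD = 0.99  # illustrative; vendors tune this to trade false positives
                  # against false negatives

# One-to-one (verification): does the probe match a single claimed identity?
verified = cosine_sim(probe, enrolled) >= THRESHOLD

# One-to-many (identification): which database entries exceed the threshold?
candidates = [name for name, emb in gallery.items()
              if cosine_sim(probe, emb) >= THRESHOLD]

print(verified, candidates)
```

The one-to-many case is where scale bites: every additional database entry is another chance for a false positive, which is why identification against large galleries is harder to get right than verification against a single record.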
Depending on the algorithm tested, Asians and Blacks had 10 to 100 times more false positives than Whites. False positives, in addition to potentially putting the focus of an investigation on someone who should be cleared, also open the risk of someone gaining unauthorized access to computer systems that should be closed to them.
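To make the disparity concrete, here is how a false-positive rate (FPR) and a cross-group ratio like the study's 10x to 100x figure are computed. The counts below are invented for illustration and are not NIST's data:

```python
# Hypothetical impostor-comparison counts for two demographic groups;
# a false positive is an impostor pair the algorithm wrongly calls a match.
groups = {
    "group_a": {"false_positives": 2,   "impostor_trials": 100_000},
    "group_b": {"false_positives": 200, "impostor_trials": 100_000},
}

# FPR = false positives / impostor trials, per group
fpr = {g: c["false_positives"] / c["impostor_trials"]
       for g, c in groups.items()}

# Ratio between groups: here group_b is misidentified 100x as often
disparity = fpr["group_b"] / fpr["group_a"]
print(fpr, disparity)
```

Even a rate that looks tiny in absolute terms (0.2% here for group_b) matters when a system runs millions of comparisons a day against large databases.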
That same study showed that algorithms developed in Asian countries had a low rate of false positives for Asians. “These results are an encouraging sign that more diverse training data may produce more equitable outcomes, should it be possible for developers to use such data,” said Patrick Grother, the primary author of the NIST report.
In effect, the NIST study showed that facial recognition software can work, but its current iteration is highly ineffective and shouldn’t be used.
Does it Really Matter if it Works?
Even if companies like IBM, Amazon and Microsoft worked out all the bugs in the algorithm, do we really want to live in a world with ubiquitous facial recognition software?
John Oliver, host of HBO’s Last Week Tonight, pointed out that facial recognition software in the wrong hands is a very dangerous proposition. Hoan Ton-That, founder of Clearview.ai, says his company has scraped over 3 billion publicly available images from Facebook, Twitter and other internet sources. According to Ton-That, anyone with access to the database can scan images to find the identity of individuals in a photo.
While that might seem innocuous, individuals photographed while participating in a protest could be identified through Clearview’s search engine. Without any type of government oversight or regulation, this could have repercussions that follow individuals throughout their lives.
Until governments act to create regulations and oversight over the use of this technology, every effort should be made to prevent invasive abuses of individuals’ privacy from becoming commonplace.
A Good First Step
At D-ID, we applaud the steps taken by these leaders in the industry. However, these steps will likely have a limited effect on facial recognition software abuses. The Washington Post reported that none of these three companies are major players in selling this technology to police.
D-ID board member Richard Purcell, a former CPO at Microsoft, believes that there isn’t any benefit to law enforcement agencies using broken technology.
Purcell added that “D-ID works closely with enlightened companies that want to offer their clients and customers the choice to have online images, still and video, protected from these broken applications and their abusive uses. The benefits of displaying clear images to people while defeating automated facial recognition applications are profound – give people meaningful and effective choice over the display and use of their images.”
Indeed, as long as law enforcement officials continue to use facial recognition systems, individuals need an image privacy solution to protect themselves from being incorrectly identified by a faulty algorithm.
Additionally, as long as companies like Clearview, HiQ, FindFace and dozens of others like them continue to scrape social media and the internet for images to add to their databases, privacy solutions like those offered by D-ID are vital to protecting the privacy of individuals. Without these protections, and lacking any government intervention, facial recognition software has the power to reduce individual freedoms and curtail the willingness of people to take a public stand for the issues they believe in.