Your Face: The Final Frontier of Privacy?

3 June 2019

The top four reasons facial recognition is a threat to privacy


Testifying before the House Committee on Oversight and Reform last week, Joy Buolamwini, a facial recognition expert from MIT, declared that “our faces may well be the final frontier of privacy.”

She knows what she’s talking about. Facial recognition technology has quietly moved from the laboratory to the mainstream, and many of us didn’t even realize it was happening.

Even as civil liberties groups worldwide raise red flags, facial recognition technology continues to infiltrate industry and law enforcement at an alarming rate. Why is this dangerous for privacy? Here are the top four reasons:


Companies Are Misusing It

Companies have introduced facial recognition-based services for convenience, but no one is overseeing how the technology is used. Consider the man who boarded a JetBlue flight last month and was surprised to discover that he didn’t have to scan his boarding pass: his face was scanned and recognized instead. He never consented to having his face scanned, nor to having his facial image stored and linked to his identity. Yet under existing laws and regulations, JetBlue does not need this consent.

And then there’s photo storage app Ever. The company recently added a clause to its standard terms of service agreement (the one that no one ever reads) that lets it use the photos people upload to train a facial recognition system. The data generated can then be sold to other companies, law enforcement and the military. This practice is, of course, completely legal.


Governments Are Misusing It

Lacking concrete legislative guidance, governmental bodies (especially law enforcement) are enthusiastically adopting facial recognition solutions and using them in often questionable ways.

Take the New York Police Department, where officers were tipped off about a suspect resembling Woody Harrelson, then ran a “Hail Mary” search for faces resembling a photograph of the actor to generate leads. Or the London police, who fined a man who covered his face while passing a street facial recognition camera.

There is some good news, though. Not all governmental entities are misusing the technology – and some are outright disavowing it. Earlier this month, San Francisco banned the use of facial recognition software by the police and other agencies. And similar bans are under consideration by other cities.


It’s Not 100% Accurate

Facial recognition technology is incredibly advanced. Yet, in a recent test of Amazon’s Rekognition facial recognition technology, the American Civil Liberties Union (ACLU) found that the software misidentified 28 members of Congress, flagging them as people arrested for crimes. The false matches also fell disproportionately on members of color, exposing a troubling racial disparity in the software’s accuracy. And in the UK, a black driver sued Uber for discrimination after he was barred from the company’s app when facial recognition software could not recognize him. So, is facial recognition technology advanced enough to fulfill the myriad tasks it’s being applied to?


Facial Recognition Databases Are Not Secure

No data is 100% secure. So even facial images that were legally and consensually obtained, and stored in legitimate databases, can be misused if leaked. And once a facial image is matched with PII, there’s no going back: unlike a password, a face cannot be changed or reissued. Consider the example of a corporate employee database. Digital personnel files include employee headshots for identification and use on ID badges. When there’s a breach, these images – which are linked to detailed PII – represent a huge risk to both the company and the individuals themselves.


The Bottom Line

Privacy is at risk from the misapplication of facial recognition, and effective legislation lags far behind the technology’s rampant adoption and misuse. However, organizations and individuals are not powerless to stop the downward slide. Today, organizations that want to mitigate the dangers of facial recognition abuse have new technological options. Solutions like D-ID’s enable full and continuing use of digital facial images for legitimate purposes, while making them unidentifiable to even the most advanced facial recognition engines.
