The Dark Side of Facial Recognition Tech

Facial recognition technologies have become increasingly popular in recent years. Amazon has emerged as a frontrunner in the field, courting customers across the US, including police departments and Immigration and Customs Enforcement (ICE). However, experts say the company is not doing enough to allay fears about bias in its algorithms, particularly when it comes to performance on faces with darker skin.
The latest cause for concern is a study published recently by the MIT Media Lab, which found that the Rekognition system performed worse when identifying an individual’s gender if they were female or darker-skinned. In tests led by MIT’s Joy Buolamwini, Rekognition made no mistakes when identifying the gender of lighter-skinned men, but it mistook women for men 19 percent of the time and mistook darker-skinned women for men 31 percent of the time.
The study follows research Buolamwini conducted last February, which identified similar racial and gender biases in facial analysis software built by Microsoft, IBM, and Chinese firm Megvii. Shortly after Buolamwini shared her results, Microsoft and IBM both said they would improve their software, and according to this latest study, they did just that.
According to a theverge.com report on the topic, since last February a number of tech companies have voiced concern about the problems with facial recognition. As bias in algorithms is often the result of biased training data, IBM published a curated dataset it said would boost accuracy. Microsoft has gone even further, calling for regulation of the technology to ensure higher standards so that the market does not become a “race to the bottom.”
Amazon, by comparison, has done little to engage with this debate. The company has also denied that the recent research says anything about the accuracy of its technology. It noted that the researchers had not tested the latest version of Rekognition, and that the gender identification test was facial analysis (which spots expressions and characteristics like facial hair), not facial identification (which matches scanned faces to mugshots). These are two separate software packages, Amazon says. “It’s not possible to draw a conclusion on the accuracy of facial recognition for any use case based on results obtained using facial analysis,” Matt Wood, general manager of deep learning and AI at Amazon Web Services, said in a press statement.
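To illustrate the distinction Amazon is drawing, the two capabilities are exposed as separate API calls in the Rekognition service. The snippet below is a minimal sketch using the boto3 Python client; the image file names are hypothetical, and the example is not drawn from the MIT study or from Amazon’s statement.

```python
import boto3

# Hypothetical example images; replace with real files.
with open("portrait.jpg", "rb") as f:
    portrait = f.read()
with open("mugshot.jpg", "rb") as f:
    mugshot = f.read()

client = boto3.client("rekognition")

# Facial analysis: estimates attributes such as gender and facial hair
# from a single image. This is the capability the MIT tests exercised.
analysis = client.detect_faces(Image={"Bytes": portrait}, Attributes=["ALL"])
for face in analysis["FaceDetails"]:
    print(face["Gender"], face["Beard"])

# Facial identification: compares one face against another (or against a
# stored collection) and returns similarity scores above a threshold.
matches = client.compare_faces(
    SourceImage={"Bytes": portrait},
    TargetImage={"Bytes": mugshot},
    SimilarityThreshold=80,  # the service default; Amazon has suggested higher thresholds for law-enforcement use
)
for match in matches["FaceMatches"]:
    print(match["Similarity"])
```

The threshold parameter on the identification call is the kind of calibration setting Amazon has pointed to when disputing critical test results.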
Nevertheless, earlier research has found similar problems in Amazon’s facial identification software. A test last year found that while scanning pictures of members of Congress, Rekognition falsely matched 28 individuals with police mugshots. Amazon blamed the results on poor calibration of the algorithm.
Although bias in facial recognition systems has become a rallying point for experts and researchers who are worried about algorithmic fairness, many warn that it shouldn’t overshadow broader issues. As Buolamwini and co-author Inioluwa Deborah Raji note in their recent paper, just because a facial recognition system performs equally well on different skin colors, that doesn’t stop it from being a tool of injustice or suppression.
As Amazon plans to peddle the software to law enforcement agencies, the company’s employees and civil rights groups alike have been outspoken against it, saying it could one day power mass surveillance. “This technology is being implemented in ways that materially benefit society, and we have received no indications of misuse,” Amazon said in a statement quoted on newsweek.com.