Like all tech giants, Amazon is regularly criticized for various policies and practices. But no issue has been as contentious this year as selling its facial recognition tech Rekognition to law enforcement. Never mind that a growing chorus of AI experts is calling for facial recognition systems to be regulated; Amazon is eagerly developing surveillance tech for police. If it’s legal, Amazon says, any government agency can be a client.
During a keynote last month, protesters interrupted Amazon Web Services CTO Werner Vogels five times, demanding that the company “cut ties with ICE.” They played recordings of children being separated from their parents at a U.S. Customs and Border Protection facility.
Are you fucking kidding me?
AWS comes under fire for Rekognition sales to the federal government, who in turn is building concentration camps for children, and AWS's response is to improve "age range estimation" and "fear detection" in the service?
Are you FUCKING KIDDING ME?! https://t.co/RiQ850uAFq
— Corey Quinn (@QuinnyPig) August 12, 2019
So this week, Amazon improved Rekognition’s age range estimation accuracy and added “fear” as the eighth emotion that the facial recognition tech can allegedly detect. Here is the company’s description of the update:
Amazon Rekognition provides a comprehensive set of face detection, analysis, and recognition features for image and video analysis. Today, we are launching accuracy and functionality improvements to our face analysis features. Face analysis generates metadata about detected faces in the form of gender, age range, emotions, attributes such as ‘Smile’, face pose, face image quality and face landmarks. With this release, we have further improved the accuracy of gender identification. In addition, we have improved accuracy for emotion detection (for all 7 emotions: ‘Happy’, ‘Sad’, ‘Angry’, ‘Surprised’, ‘Disgusted’, ‘Calm’ and ‘Confused’) and added a new emotion: ‘Fear’. Lastly, we have improved age range estimation accuracy; you also get narrower age ranges across most age groups.
Improved face analysis models are now available for both Amazon Rekognition Image and Video, and are the new default for customers in all supported AWS regions. No machine learning experience is required to get started.
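To make the update concrete: the face analysis Amazon describes is surfaced through Rekognition’s DetectFaces API, which returns, for each face, an `Emotions` list of type/confidence pairs and an `AgeRange`. Here is a minimal sketch of how a client might read those fields — the sample payload below is hypothetical stand-in data, not a real API response; a live call would go through `boto3.client("rekognition").detect_faces(Image=..., Attributes=["ALL"])`:

```python
# Sketch: parsing a Rekognition DetectFaces-style FaceDetail record.
# The payload below is illustrative sample data, not real output.

sample_face_detail = {
    "AgeRange": {"Low": 23, "High": 31},
    "Emotions": [
        {"Type": "CALM", "Confidence": 62.1},
        {"Type": "FEAR", "Confidence": 21.5},
        {"Type": "CONFUSED", "Confidence": 9.3},
    ],
}

def top_emotion(face_detail):
    """Return the (type, confidence) of the highest-confidence emotion."""
    best = max(face_detail["Emotions"], key=lambda e: e["Confidence"])
    return best["Type"], best["Confidence"]

def age_range(face_detail):
    """Return the estimated (low, high) age bounds for the face."""
    r = face_detail["AgeRange"]
    return r["Low"], r["High"]

emotion, conf = top_emotion(sample_face_detail)
low, high = age_range(sample_face_detail)
print(f"Top emotion: {emotion} ({conf:.1f}%), estimated age {low}-{high}")
```

Note what the confidence number actually means here: it is the model’s certainty about its classification of facial movements, not a measurement of what the person feels.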
Can Amazon be more tone-deaf?
Facial recognition and emotion detection
Amazon doesn’t care about ethics. The company is only interested in what the law dictates. That’s convenient given it often takes years, if not decades, for the law to catch up with bleeding-edge technology.
But how accurate is facial recognition for emotion detection anyway? A study published last month calls into question whether the technology even works. From the abstract:
The available scientific evidence suggests that people do sometimes smile when happy, frown when sad, scowl when angry, and so on, as proposed by the common view, more than what would be expected by chance. Yet how people communicate anger, disgust, fear, happiness, sadness, and surprise varies substantially across cultures, situations, and even across people within a single situation. Furthermore, similar configurations of facial movements variably express instances of more than one emotion category. In fact, a given configuration of facial movements, such as a scowl, often communicates something other than an emotional state.
Scientists agree that facial movements convey a range of information and are important for social communication, emotional or otherwise. But our review suggests an urgent need for research that examines how people actually move their faces to express emotions and other social information in the variety of contexts that make up everyday life, as well as careful study of the mechanisms by which people perceive instances of emotion in one another.
In short, there is no scientific evidence for the common assumption “that a person’s emotional state can be readily inferred from his or her facial movements.” The same emotions are not always expressed in the same way and the same facial expressions do not reliably indicate the same emotions. Not to mention, humans can experience multiple emotions at the same time, or even fake an emotion by making a face.
There are two possible conclusions. Either Amazon’s Rekognition cannot detect human emotion, because doing so is not technically possible using facial recognition, or Rekognition is a breakthrough that can somehow determine human emotion from faces alone.
Neither option sounds great. The former means that Amazon is selling a product that claims to detect emotion when all it’s really detecting is facial movements that may or may not be related. Clients then make decisions based on this.
The latter means Amazon believes all human emotion can be pigeon-holed into eight categories. This week’s Rekognition update even notes that emotion detection now has “improved accuracy.” Yeah? What was it before? What is it now? Given that no system is 100% accurate, what happens when Rekognition wrongly detects an emotion? Clients then make decisions based on this.
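The danger is easy to see in code. Suppose a client gates some action on a “fear” detection above a confidence threshold — a purely hypothetical decision rule, not anything in the AWS API. Even a high-confidence result only means the model is sure of its label, not that the label reflects what the person feels:

```python
# Hypothetical decision gate on a Rekognition-style emotion result.
# The threshold and the notion of "flagging" are illustrative only.

FLAG_THRESHOLD = 80.0  # percent confidence required before acting

def should_flag(emotion_type, confidence, threshold=FLAG_THRESHOLD):
    """Flag a subject only if 'FEAR' is reported above the threshold.

    A high confidence score means the model is confident in its
    classification of facial movements -- not that the person
    actually feels fear. A false positive here still triggers action.
    """
    return emotion_type == "FEAR" and confidence >= threshold

print(should_flag("FEAR", 91.2))  # acted upon, whether right or wrong
print(should_flag("FEAR", 45.0))  # below threshold, ignored
print(should_flag("CALM", 99.0))  # not the target emotion, ignored
```

The threshold changes how often the system acts, but not whether the underlying label was ever a valid proxy for emotion in the first place.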
Not sure if either scenario scares you? Stand by — Rekognition will confirm.
ProBeat is a column in which Emil rants about whatever crosses him that week.