Facial expressions don't tell the whole story of emotion
Interacting with other people is almost always a game of reading cues and volleying back. We think a smile conveys happiness, so we offer a smile in return. We think a frown shows sadness, and maybe we attempt to cheer that person up.
Some businesses are even working on technology to determine customer satisfaction through facial expressions.
But facial expressions might not be reliable indicators of emotion. In fact, it might be more accurate to say we should never trust a person’s face, new research suggests.
What the researchers say: “The question we really asked is: ‘Can we truly detect emotion from facial articulations?’” said the lead author. “And the basic conclusion is, no, you can’t.”
The team, whose work has focused on building computer algorithms that analyze facial expressions, presented their findings at the annual meeting of the American Association for the Advancement of Science (AAAS).
The researchers analyzed the kinetics of muscle movement in the human face and compared those muscle movements with a person’s emotions. They found that attempts to detect or define emotions based on a person’s facial expressions were almost always wrong. The same had earlier been found to be true of “body language.”
“Everyone makes different facial expressions based on context and cultural background,” the researchers said. “And it’s important to realize that not everyone who smiles is happy. Not everyone who is happy smiles. I would even go to the extreme of saying most people who do not smile are not necessarily unhappy. And if you are happy for a whole day, you don’t go walking down the street with a smile on your face. You’re just happy.”
It is also true, they said, that sometimes people smile out of obligation to social norms. This would not inherently be a problem, they said—people are certainly entitled to put on a smile for the rest of the world—but some companies have begun developing technology to recognize facial muscle movements and assign emotion or intent to those movements.
The research group that presented at AAAS analyzed some of those technologies and largely found them lacking.
“Some claim they can detect whether someone is guilty of a crime or not, or whether a student is paying attention in class, or whether a customer is satisfied after a purchase,” the lead author said. “What our research showed is that those claims are complete baloney. There’s no way you can determine those things. And worse, it can be dangerous.”
The danger, obviously, lies in the possibility of missing the real emotion or intent in another person, and then making decisions about that person’s future or abilities.
After analyzing data about facial expressions and emotion, the research team concluded that it takes more than expressions to correctly detect emotion.
In one experiment, the researchers showed study participants a picture cropped to display just a man’s face. The man’s mouth is open in an apparent scream; his face is bright red.
“When people looked at it, they would think, wow, this guy is super annoyed, or really mad at something, that he’s angry and shouting,” the lead researcher said. “But when participants saw the whole image, they saw that it was a soccer player who was celebrating a goal.”
And while the lead author said he is “a big believer” in developing computer algorithms that try to understand social cues and the intent of a person, he added that two things are important to know about that technology.
“One is you are never going to get 100 percent accuracy,” he said. “And the second is that deciphering a person’s intent goes beyond their facial expression, and it’s important that people, and the computer algorithms they create, understand that.”
So, what? OK, so one day we might invent a facial scan system that can accurately show what a person is feeling or even, as some colleagues in the field tell me, broadly what they are thinking. The question is: why do we allow this? What persuades us to let people create this kind of technology and use it to surveil and control us?
Frankly, I don’t care whether the AI systems that judge whether someone will be a repeat offender, will purchase certain goods, or will do as the state tells them are more accurate. That’s not the point. The point is that creating them in the first place is unethical and simply wrong.
None of us should have to live in a world where we’re subject to accurate or inaccurate facial recognition and mood and thought prediction systems.
Got that off my chest!
More from this issue of TR
Facial plastic surgery in men enhances perception of attractiveness, trustworthiness
New research shows how much we judge character and ability by the face.
Our brains are obsessed with being social.
New research, published in Cerebral Cortex, demonstrates how our brains consolidate new social information even during rest. The brain, it finds, may engage in social encoding (learning from a recent social situation) even when it’s not in a social situation at all.
Harmful effects of ageism on older persons' health found in 45 countries
We boomers and Gen Xers are getting on, and as we do, the fixation with youth that we see all around us, in employment and many other areas, becomes more of a problem. We become excluded and seen as a liability rather than the irreplaceable asset that we are.
Join our tribe
Subscribe to Dr. Bob Murray’s Today’s Research, a free weekly roundup of the latest research in a wide range of scientific disciplines. Explore leadership, strategy, culture, business and social trends, and executive health.