In a world where computer-generated images and videos are increasingly difficult to distinguish from reality, a recent study in Scientific Reports sheds light on how our brains respond to emotional expressions on faces we believe to be real versus those we think are computer-generated, or “deepfakes.” The study found that when people think a smiling face is artificial, they react to it less strongly and less positively than to a smile they believe is real, whereas their reactions to angry faces remain the same regardless of whether they believe the face is real or fake.
Deepfakes are hyper-realistic digital creations that use artificial intelligence (AI) and machine learning techniques to manipulate or generate visual and audio content with a high degree of realism. This technology can produce convincing fake content, such as videos of public figures saying or doing things they never actually did. The term “deepfake” combines “deep learning” (a subset of AI that uses neural networks to learn from large amounts of data) and “fake,” reflecting the technology’s capacity to create highly deceptive content.
While much research has focused on detecting deepfakes or exploring their potential for misinformation and manipulation, there has been less emphasis on their psychological effects. Specifically, how do deepfakes influence our perception and emotional reactions when we encounter faces that may or may not be real?
“Computer-generated media, including deepfake images and videos of people that do not exist, are becoming increasingly prevalent,” said study author Anna Eiserbeck, a doctoral researcher at the Abdel Rahman Lab at Humboldt University of Berlin.
“Much of the existing research has concentrated on people’s ability to visually differentiate between authentic and computer-generated images. We were more interested in their psychological impact: do we react in the same way to facial expressions shown by real and presumably fake faces?”
“Since real and deepfake faces are harder and harder to distinguish, we hypothesized that our perception and emotional responses may crucially depend on the prior belief that what one is seeing is, in fact, real or fake. In light of this, our study aimed to explore how such prior information about the real or computer-generated nature of portraits influences the emotional processing of faces.”
To explore the impact of deepfakes on emotional processing, the researchers conducted an experiment with 30 participants (21 female, 9 male) with a mean age of 25.87 years. Before the experiment, participants were briefed on the existence of “deepfake” technology capable of generating highly realistic images of people. They were then led to believe that some of the faces they would see had been generated by such technology, even though all of the images were in fact photographs of real people.
During the experiment, participants viewed images of human faces, each preceded by a label indicating whether the image was “REAL” or “FAKE.” This setup primed participants’ beliefs about each image’s authenticity. After viewing each image, participants rated the emotional expression of the face on a scale, providing a measure of their emotional response.
To capture the neural underpinnings of participants’ emotional processing, the study employed electroencephalography (EEG). This non-invasive technique records electrical activity in the brain, offering insights into how the brain responds to different stimuli over time. The study focused on specific event-related potentials (ERPs), which are brain responses tied to particular sensory, cognitive, or motor events.
A pivotal finding was that belief in a face’s authenticity shaped emotional responses to positive expressions, such as smiles. When participants believed they were viewing a real human face, smiles elicited more positive ratings and a distinct pattern of brain activity, indicative of a typical emotional response to happiness.
In contrast, smiles labeled as “fake” were not rated as positively, and they evoked different brain responses, suggesting that the emotional weight of a smile diminishes when we suspect the face to be artificial.
Interestingly, the study found that negative expressions like anger did not exhibit the same variation in emotional impact based on perceived authenticity. Regardless of whether faces were labeled as real or fake, angry expressions were processed similarly, both in terms of participant ratings and brain activity.
This consistency suggests a primacy of negative emotions in our perceptual system, possibly due to their importance in threat detection and survival. Essentially, our brains seem wired to respond to potential threats, real or imagined, with a consistent level of attention and emotional engagement.
“When we believe a smiling face to be a deepfake, perceptual and emotional responses in our brain are reduced, meaning that the smile looks less intense and creates less emotional resonance,” Eiserbeck told PsyPost. “Angry faces, on the other hand, keep the same perceptual and emotional impact, whether we believe them to be genuine or not. So the impact of information regarding the authenticity of portraits (marking images as real or deepfake) also depends on the emotion they convey.”
Looking ahead, the researchers suggest that further studies could explore less pronounced emotional expressions, investigate individual differences in reactions to deepfakes, and incorporate eye-tracking technology to better understand how we visually process real versus artificial smiles.
“One of our longer-term goals is to gain an in-depth understanding of the psychological aspects of seeing (or suspecting to see) fake content like deepfake faces,” Eiserbeck said. “We hope that insights into the neurocognitive processes within perceivers can help to inform policies of how we deal with deepfakes as a society.”
The study, “Deepfake smiles matter less—the psychological and neural impact of presumed AI-generated faces,” was authored by Anna Eiserbeck, Martin Maier, Julia Baum, and Rasha Abdel Rahman.