Every breakthrough in artificial intelligence holds the potential to alter the course of this technologically driven world.
That’s why researchers at the University of Maryland’s Robert H. Smith School of Business are relentless in identifying the ramifications of developments in artificial intelligence, including discovering its flaws and limitations, and informing the public on how to navigate them effectively.
Contributing to this endeavor are Smith School professors Lauren Rhue and Siva Viswanathan. Rhue has produced work highlighting racial bias in AI facial recognition technology, while Viswanathan has helped develop a patent-pending deepfake method for detecting and mitigating bias in decision-making, flipping the script on narratives surrounding deepfake technology.
Rhue, assistant professor of information systems, has built a body of research exploring the economic and social implications of technology. Her paper on facial recognition, the technology that scans, analyzes, and recognizes faces, finds that emotion-recognition systems display racial disparities, particularly for Black individuals.
The research, cited by NBC News, used a set of photos of NBA players analyzed by the Face++ and Microsoft AI facial recognition services. Rhue found that both platforms interpreted Black players as displaying more negative emotions than white players.
“Face++ consistently interprets Black players as angrier than white players, controlling for their degree of smiling,” says Rhue. “Microsoft registers contempt instead of anger, and it interprets Black players as more contemptuous when their facial expressions are ambiguous. As the players’ smiles widen, the disparity disappears.”
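Rhue’s phrase “controlling for their degree of smiling” refers to a standard statistical adjustment: comparing emotion scores across race while holding smile intensity constant. The sketch below is an illustrative reconstruction of that kind of analysis, not the paper’s actual code; the data file and column names are hypothetical.

```python
# Illustrative sketch only, not the study's actual code. Assumes a
# hypothetical CSV with one row per player photo, holding an
# API-returned anger score, a smile score, and the player's race.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("nba_emotion_scores.csv")  # hypothetical file

# OLS regression of the anger score on race, controlling for the
# degree of smiling: a race coefficient that persists after
# conditioning on smile intensity is the disparity Rhue describes.
model = smf.ols("anger ~ C(race) + smile", data=df).fit()
print(model.summary())
```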
Rhue’s research yields crucial insights into the contentious adoption of the technology by companies and government agencies, most notably in law enforcement, where Black men have been wrongfully arrested in cases that relied on facial recognition.
It also carries implications for everyday professionals navigating the workforce.
“There’s been some interesting work that looks at emotional labor, saying that essentially African Americans often have to have more exaggerated positive emotions in order to be perceived at the same level of positive as others,” says Rhue. “For example, a sales associate might have to be just over the top friendly and have huge smiles in order to be recognized as being pleasant.”
Similar biases have long been prevalent in the corporate world, and in some cases, have subconsciously seeped into actions such as candidate vetting. Viswanathan, working alongside Balaji Padmanabhan, director of Smith’s Center for Artificial Intelligence in Business, and information systems PhD student Yizhi Liu, sought a remedy through deepfake technology.
Despite the technology's reputation for misinformation and digital deception, the researchers developed a patent-pending deepfake method that utilizes AI-generated facial images to detect, measure, and ultimately mitigate bias in high-stakes decision-making, as described in “Deception to Perception: The Surprising Benefits of Deepfakes for Detecting, Measuring, and Mitigating Bias.”
“With deepfakes, we approach this as another opportunity to repurpose a harm-inducing phenomenon for societal good,” Viswanathan says.
In addition to the prevalence of bias in areas such as corporate hiring and criminal justice, the research explores how deepfakes can be employed to eliminate distorted results and conclusions in healthcare. More specifically, the researchers sought to address how physicians assess pain levels in patients; previous research on the issue suggests that “racial and age-based disparities affect medical professionals' evaluations of patient pain levels,” according to Viswanathan.
To do so, they generated deepfake images that retain the key facial action units used to compute the Prkachin and Solomon Pain Intensity (PSPI) score for assessing pain. They then tested whether subtle changes, such as altering a subject’s perceived race or age, would lead to different pain assessments.
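The PSPI score they reference is a published measure that sums the intensities of a handful of pain-related facial action units (AUs). A minimal sketch of the computation and of the study’s counterfactual logic, assuming AU intensities have already been extracted from an image by some facial-analysis model (the extraction step is not shown):

```python
# Sketch of the Prkachin and Solomon Pain Intensity (PSPI) score,
# computed from facial action unit (AU) intensities (0-5 scale,
# except AU43 eye closure, which is binary 0/1).
def pspi(au4, au6, au7, au9, au10, au43):
    """PSPI = AU4 + max(AU6, AU7) + max(AU9, AU10) + AU43."""
    return au4 + max(au6, au7) + max(au9, au10) + au43

# Counterfactual test in the spirit of the study: a deepfake edit
# that alters perceived race or age but preserves these AUs leaves
# the pain-relevant signal unchanged by construction, so any gap in
# pain ratings across the pair is attributable to the edited attribute.
original_score = pspi(2, 3, 1, 0, 2, 1)
counterfactual_score = pspi(2, 3, 1, 0, 2, 1)  # same AUs by design
assert original_score == counterfactual_score
```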
“The results were striking,” Viswanathan says. “White patients were consistently rated as experiencing more pain than Black patients, and older individuals were perceived as suffering more than younger ones, despite the images being otherwise identical.”
The research, he says, not only helps diagnose this bias but serves as a significant step toward its correction. It also reveals a path forward for deepfake technology’s potential role in bringing transparency and accountability to AI-driven decision-making. Viswanathan and his co-authors have also created an agentic AI system to automate the use of deepfakes for bias detection and correction.
“By integrating deepfake-enhanced datasets into AI training models, we show that machine learning systems can be recalibrated to produce a blueprint for reducing decision-compromising bias—not only in AI-assisted medical diagnostics, but also in criminal justice risk assessments and corporate hiring algorithms,” says Viswanathan.
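One way to read the recalibration Viswanathan describes is as counterfactual data augmentation: each training image is paired with deepfake variants that alter a protected attribute while preserving the task-relevant signal, and all variants share the original label. The sketch below illustrates that general idea, not the team’s actual pipeline; generate_deepfake is a hypothetical stand-in for an attribute-editing model.

```python
# Hedged sketch of deepfake-enhanced training data, not the team's
# actual pipeline. `generate_deepfake` is a hypothetical placeholder
# for any attribute-editing model that preserves the task-relevant
# facial features.
def augment_with_counterfactuals(dataset, generate_deepfake, attributes):
    """Pair each (image, label) with attribute-swapped variants that
    keep the same label, so a model trained on the result cannot use
    the attribute as a shortcut for its prediction."""
    augmented = []
    for image, label in dataset:
        augmented.append((image, label))
        for attr in attributes:  # e.g., perceived race or age
            counterfactual = generate_deepfake(image, edit=attr)
            augmented.append((counterfactual, label))  # label unchanged
    return augmented
```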
Media Contact
Greg Muraski
Media Relations Manager
301-405-5283
301-892-0973 Mobile
gmuraski@umd.edu