Can AI Help Create More Equitable Workplaces?

New research may offer a way

Apr 16, 2021
Decision, Operations and Information Technologies

SMITH BRAIN TRUST  Can we partner with artificial intelligence and machine learning tools to build a more equitable workforce? New research from Maryland Smith’s Margrét Bjarnadóttir may offer a way.

Early attempts to incorporate AI into the human resources process haven’t been met with resounding success. Among the best-known failures: In 2018, Amazon.com was forced to abandon an AI recruiting tool it had built, after the tool was found to be discriminating against female job applicants.

In the research, recently awarded the Best White Paper award at the 2021 Wharton Analytics Conference, Bjarnadóttir and her co-authors examine the roots of those AI biases and offer solutions to the challenges they present – solutions that could unlock the potential of analytics tools in the human resources space. For example, they recommend creating a bias dashboard that breaks down a model’s performance for different groups, and they offer checklists for assessing your work – one for internal analytical projects, and another for adopting a vendor’s tool.

“We wanted to look at the question of: How can we do better? How can we build toward these more equitable workplaces?” says Bjarnadóttir, associate professor of management sciences and statistics at the University of Maryland’s Robert H. Smith School of Business.

It wouldn’t be simple. The biases stem from an organization’s HR history. In Amazon’s case, the analytics tool was found to be rejecting resumes from applicants for technical job roles because of phrases like “women’s chess team.” With technical jobs long dominated by men, the model had taught itself that factors correlated with maleness were indicators of potential success.

“We can’t simply translate the analytical approaches that we use in accounting or operations over to the HR department,” Bjarnadóttir says. “What is critically different is that in contrast to, say, determining the stock levels of jeans or which ketchup brand to put on sale, the decisions that are supported in the HR department can have an instrumental impact on employees’ lives: who gets hired, who receives a promotion, or who is identified as a promising employee.”

And because the data an HR model draws from is historical, it’s tough to amend. In other contexts there are more obvious remedies. For example, if a skin-cancer-detecting AI tool fails to detect skin cancers in darker skin tones, one could feed more diverse images into the tool and train it to identify the appropriate markers. In the HR context, organizations can’t go back in time and hire a more diverse workforce.

HR data is also typically what is called unbalanced, meaning that not all demographic groups are equally represented, and this causes issues when algorithms interact with the data. A simple example of that interaction: analytical models will typically perform best for the majority group, because good performance for that group carries the most weight in overall accuracy – the measure that most off-the-shelf algorithms optimize.

So if your data aren’t balanced, even carefully built models won’t lead to equal outcomes for different demographic groups. For example, in a company that has employed mostly male managers, a model is likely to identify men disproportionately as future management candidates – in addition to correctly identifying many men – while it is more likely to overlook qualified women.
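The arithmetic behind this point can be made concrete. The toy sketch below (with made-up numbers, not data from the research) shows how a model can post a healthy overall accuracy while performing poorly for an underrepresented group, simply because the majority group dominates the average:

```python
# Toy illustration with hypothetical numbers: each record is
# (belongs_to_majority_group, model_prediction_was_correct).
# 8 of 10 records come from the majority group.
predictions = [
    (True, True), (True, True), (True, True), (True, True),
    (True, True), (True, True), (True, True), (True, False),
    (False, True), (False, False),
]

def accuracy(records):
    """Share of records where the model's prediction was correct."""
    return sum(correct for _, correct in records) / len(records)

overall = accuracy(predictions)
majority = accuracy([r for r in predictions if r[0]])
minority = accuracy([r for r in predictions if not r[0]])

print(f"overall:  {overall:.0%}")   # 80% -- looks fine
print(f"majority: {majority:.0%}")  # 88%
print(f"minority: {minority:.0%}")  # 50% -- hidden inside the overall number
```

An algorithm that optimizes the overall number has little incentive to close the gap for the smaller group, which is why per-group reporting matters.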

So, how can organizations “do better,” as Bjarnadóttir says, in future hiring and promotion decisions, and ensure that applications are unbiased, transparent and fair? The first step is to apply a bias-aware analytical process that asks the right questions of the data, of modeling decisions and of vendors, and then monitors not only the statistical performance of the models but, perhaps more importantly, the tool’s impact on the different employee groups.
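The paper itself does not publish code, but the bias dashboard the authors recommend – parsing a model’s performance and impact per group – could be sketched along these lines. The field names (`group`, `predicted`, `actual`) and the metrics chosen are illustrative assumptions, not the authors’ specification:

```python
from collections import defaultdict

def bias_dashboard(records):
    """Per-group summary of a model's behavior.

    `records` is a list of dicts with hypothetical keys:
      'group'     -- demographic group label,
      'predicted' -- model flagged the person (e.g. as a promotion candidate),
      'actual'    -- the ground-truth outcome.
    Returns, for each group, its size, the model's selection rate
    (how often the group is flagged), and the model's accuracy.
    """
    by_group = defaultdict(list)
    for r in records:
        by_group[r["group"]].append(r)

    dashboard = {}
    for group, rows in by_group.items():
        n = len(rows)
        dashboard[group] = {
            "n": n,
            "selection_rate": sum(r["predicted"] for r in rows) / n,
            "accuracy": sum(r["predicted"] == r["actual"] for r in rows) / n,
        }
    return dashboard

# Hypothetical example data:
records = [
    {"group": "men", "predicted": True, "actual": True},
    {"group": "men", "predicted": True, "actual": False},
    {"group": "men", "predicted": False, "actual": False},
    {"group": "women", "predicted": False, "actual": True},
    {"group": "women", "predicted": False, "actual": False},
]
for group, stats in bias_dashboard(records).items():
    print(group, stats)
```

Comparing selection rates and accuracy side by side across groups is what lets a team spot a model that, say, flags men far more often than equally qualified women – the kind of disparity an aggregate accuracy number hides.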


About the Expert(s)

Margrét Bjarnadóttir

Margrét Vilborg Bjarnadóttir is an associate professor of management science and statistics in the DO&IT department. Bjarnadóttir graduated from MIT's Operations Research Center in 2008, defending her thesis titled "Data-Driven Approach to Health Care, Application Using Claims Data". Bjarnadóttir specializes in operations research methods using large-scale data. Her work spans applications ranging from analyzing nationwide cross-ownership patterns and systemic risk in finance to drug surveillance and practice patterns in health care. She has consulted with health care start-ups on risk modeling using health care data, as well as with governmental agencies such as a central bank on data-driven fraud detection algorithms.
