Machine Learning Has a Flaw. It’s Gullible.

In new research, how humans can shield ML from manipulation

May 06, 2020
As Featured In 
Strategic Management Journal

Artificial intelligence and machine learning technologies are poised to supercharge productivity in the knowledge economy, transforming the future of work.

But they’re far from perfect.

Machine learning (ML) – technology in which algorithms “learn” from existing patterns in data to make statistically driven predictions and facilitate decisions – has been found to exhibit bias in multiple contexts. Remember when a major technology company came under fire for a hiring algorithm that revealed gender and racial bias? Such biases often result from slanted training data or skewed algorithms.

And in other business contexts, there’s another potential source of bias. It arises when outside individuals stand to benefit from biased predictions and work to strategically alter the inputs. In other words, they’re gaming the ML systems.

It happens. Two of the most common contexts are job applications and insurance claims.

ML algorithms are built for these contexts. They can review resumes far faster than any recruiter, and can comb through insurance claims faster than any human processor.

But people who submit resumes and insurance claims have a strategic interest in getting positive outcomes – and some of them know how to outthink the algorithm.

This had researchers at the University of Maryland’s Robert H. Smith School of Business wondering, “Can ML correct for such strategic behavior?”

In new research, Maryland Smith’s Rajshree Agarwal and Evan Starr, along with Harvard’s Prithwiraj Choudhury, explore the potential biases that limit the effectiveness of ML process technologies and the scope for human capital to be complementary in reducing such biases. Prior research in so-called “adversarial” ML looked closely at attempts to “trick” ML technologies, and generally concluded that it’s extremely challenging to prepare the ML technology to account for every possible input and manipulation. In other words, ML is trickable.

What should firms do about it? Can they limit ML prediction bias? And, is there a role for humans to work with ML to do so?

Starr, Agarwal and Choudhury honed their focus on patent examination, a context rife with potential trickery.

“Patent examiners face a time-consuming challenge of accurately determining the novelty and nonobviousness of a patent application by sifting through ever-expanding amounts of ‘prior art,’” or inventions that have come before, the researchers explain. It’s challenging work.

Compounding the challenge: patent applicants are permitted by law to create hyphenated words and assign new meaning to existing words to describe their inventions. It’s an opportunity, the researchers explain, for applicants to strategically write their applications in ways that target the ML.

The U.S. Patent and Trademark Office is generally wise to this. It has brought in ML technology that “reads” the text of applications, with the goal of spotting the most relevant prior art more quickly and leading to more accurate decisions. “Although it is theoretically feasible for ML algorithms to continually learn and correct for ways that patent applicants attempt to manipulate the algorithm, the potential for patent applicants to dynamically update their writing strategies makes it practically impossible to adversarially train an ML algorithm to correct for this behavior,” the researchers write.
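As a hedged, toy illustration (not the USPTO’s actual algorithm, and not from the study): a simple bag-of-words similarity check flags applications that overlap heavily with prior art, and coined hyphenated terms can collapse that overlap, letting a strategically worded application slip past.

```python
# Toy sketch: word-overlap (Jaccard) similarity between an
# application and a piece of prior art. All texts are invented.

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity between the word sets of two texts."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

prior_art = "a wireless sensor that transmits temperature readings"

# Same invention, described plainly vs. with coined hyphenated terms.
honest = "a wireless sensor that transmits temperature readings remotely"
gamed = "a cord-free sense-node that beams thermo-metric readings remotely"

print(jaccard(prior_art, honest))  # heavy overlap: flagged as similar
print(jaccard(prior_art, gamed))   # coined terms: overlap collapses
```

The coined vocabulary leaves the invention unchanged but starves a text-matching algorithm of the shared words it relies on.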

In their study, the team conducted both observational and experimental research. They found that patent language changes over time, making it highly challenging for any ML tool to operate perfectly on its own. The ML benefited strongly, they found, from human collaboration.

People with skills and knowledge accumulated through prior learning within a domain complement ML in mitigating bias stemming from applicant manipulation, the researchers found, because domain experts bring relevant outside information to correct for strategically altered inputs. And individuals with vintage-specific skills – skills and knowledge accumulated through prior familiarity with tasks involving the technology – are better able to handle the complexities in ML technology interfaces.
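One hedged sketch of what such complementarity could look like (an illustration, not the study’s method): a domain expert supplies a glossary that maps coined terms back to standard vocabulary, restoring the word overlap a text-matching algorithm needs.

```python
# Toy sketch: an expert-curated glossary (invented for illustration)
# normalizes coined terms before the similarity check runs.

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity between the word sets of two texts."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

# Hypothetical expert knowledge: what the coined terms really mean.
glossary = {"cord-free": "wireless", "sense-node": "sensor",
            "thermo-metric": "temperature"}

def normalize(text: str) -> str:
    """Replace coined terms with the expert's standard vocabulary."""
    return " ".join(glossary.get(w, w) for w in text.lower().split())

prior_art = "a wireless sensor that transmits temperature readings"
gamed = "a cord-free sense-node that beams thermo-metric readings"

print(jaccard(prior_art, gamed))             # low: manipulation works
print(jaccard(prior_art, normalize(gamed)))  # higher after expert input
```

The point of the sketch is the division of labor: the algorithm does the fast comparison, while the human contributes outside knowledge the algorithm cannot learn from the manipulated text alone.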

They caution that although the provision of expert advice and vintage-specific human capital increases initial productivity, it remains unclear whether constant exposure and learning-by-doing by workers would cause the relative differences between the groups to grow or shrink over time. They encourage further research into the evolution in the productivity of all ML technologies, and their contingencies.

Accolades: The research paper won the 2019 Best Conference Paper Award from the Strategic Management Society and the 2019 Best Interdisciplinary Paper Award from the society’s Strategic Human Capital Interest Group.

Read more: “Machine Learning and Human Capital Complementarities: Experimental Evidence on Bias Mitigation,” by Prithwiraj Choudhury of Harvard Business School, and Evan Starr and Rajshree Agarwal of the University of Maryland’s Robert H. Smith School of Business, is forthcoming in Strategic Management Journal.


About the Author(s)

Rajshree Agarwal

Rajshree Agarwal is the Rudolph Lamone Professor of Entrepreneurship and Strategy and director of the Ed Snider Center for Enterprise and Markets at the University of Maryland. She studies the evolution of industries, firms and individual careers, as fostered by the twin engines of innovation and enterprise. Agarwal's scholarship uses an interdisciplinary lens to provide insights on strategic innovation for new venture creation and for firm renewal. She routinely publishes in leading journals in strategy and entrepreneurship. An author of more than 60 studies, her research has been cited more than 10,000 times, received numerous best paper awards, and funded by grants from various foundations, including the Kauffman Foundation, the Rockefeller Foundation and the National Science Foundation. She is currently the co-editor of the Strategic Management Journal and has previously served in co-editor and senior editor roles at Strategic Entrepreneurship Journal and Organization Science respectively.

Evan Starr

Evan Starr is an Assistant Professor of Management & Organization at the Robert H. Smith School of Business at the University of Maryland. He received a Ph.D. in economics from the University of Michigan and a bachelor's degree from Denison University. He originally hails from Claremont, California. Starr's current research examines issues at the intersection of human capital accumulation, employee mobility, entrepreneurship, and innovation.
