Maryland Smith Research / March 10, 2026

Where GenAI Consumer Research is Likely Headed

Generative AI expands access to consumer research but risks biased, generic findings detached from real behavior, Roland Rust and Ming-Hui Huang say. They identify democratization, the “average trap” and model collapse as growing threats, urging human-centered methods to prevent synthetic, nonhuman results.

Generative artificial intelligence (GenAI) makes consumer research accessible to more researchers than ever. But that increased access could lead to study results that are generic and, eventually, detached from actual consumer behavior.

Three phenomena form that trajectory: democratization, the average trap and model collapse. They are the focus of research from Roland Rust, Distinguished University Professor, David Bruce Smith Chair in Marketing and executive director of the Center for Excellence in Service at the University of Maryland's Robert H. Smith School of Business, and Ming-Hui Huang, the center's Distinguished Research Fellow and Distinguished University Professor at National Taiwan University.

They find that although GenAI enables democratization, allowing more people, including the marginalized, to use technology like ChatGPT or OpenAI's Operator to buy and use products, bias permeates generative AI analysis of that consumption. "It happens because people are biased, and so if you try to predict what people do based on the database, your predictions about them would be biased," says Rust. "If the data are biased before you even start researching, then you don't really have a chance."

And as more researchers rely on GenAI for this democratized data, their conclusions are neither new nor unique. That's because generative AI models, especially large language models, are pre-trained on huge amounts of data to predict the next token, the most likely next occurrence. As examples, Huang points to how in the UK the sky is more likely to be predicted as "gray," while in Taiwan, the sky is likely to be "blue." This popularity prediction leads to the average trap, with researchers coming to similar conclusions based on prevailing results from past studies.

When more studies are based on GenAI data or predictions, the data are considered synthetic. "AI is starting to generate its own data and then analyze that," says Huang. "It leads to model collapse, which is where the results of consumer research don't look human-like at all." The predictions will lack human sense because GenAI does not think in a human way; it is based on next-token prediction. "People don't really realize the risk involved with that," she adds.
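The dynamic Huang describes can be sketched in a few lines. The following toy simulation is an illustration, not the authors' model: the segment shares, the sharpening exponent `alpha`, and the `regenerate` function are all hypothetical. Each "generation" retrains on output that over-weights already-likely outcomes, a crude stand-in for mode-seeking generation from synthetic data. Minority segments fade until only the majority remains.

```python
import numpy as np

def regenerate(p, alpha=1.5):
    """One generation: retrain on model output that favors likely outcomes.

    alpha > 1 sharpens the distribution toward its mode, loosely mimicking
    a model trained on its own most-probable generations.
    """
    q = p ** alpha
    return q / q.sum()  # renormalize to a valid probability distribution

# Hypothetical consumer-segment shares (largest segment first).
p = np.array([0.40, 0.30, 0.15, 0.10, 0.05])

for generation in range(20):
    p = regenerate(p)

# After repeated self-training, virtually all mass sits on the one
# largest segment: the minority "tails" of consumer behavior vanish.
print(np.round(p, 3))
```

The point of the sketch is that no single step looks dramatic, yet the compounding loop erases exactly the human differences Rust and Huang argue research methods must preserve.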

What will prevent consumer studies from going down this path? 

Watch out for bland and average results from AI (the “average trap”). Rust and Huang say GenAI consumer research needs to be human-centric and nurture human differences as much as similarities. One way to do that is to come up with methodologies that reinforce human relevance.

Read “The GenAI Future of Consumer Research,” published in the Journal of Consumer Research.

Media Contact

Greg Muraski
Media Relations Manager
301-405-5283  
301-892-0973 Mobile
gmuraski@umd.edu 
