When scholars turn to research, they are seeking evidence on a given question. But in most cases, what they get is an author’s testimony about that evidence – an interpretation rather than the evidence itself – and that’s a problem, says new research from the University of Maryland’s Robert H. Smith School of Business.
Maryland Smith’s Brent Goldfarb, along with two co-authors from Boston University, produced the paper with hopes of helping scholars in all fields reconsider how to learn from empirical reports and how their approach must change based on the context of their research.
“Sometimes you begin looking at data without knowing exactly what answer it points to and have to figure out the best explanation for that data,” says Goldfarb, associate professor of management and entrepreneurship and the academic director of the Dingman Center for Entrepreneurship. “As scientists, we want to make general statements, but really what we find is that companies in particular industries at specific moments in time behaved a certain way, but we’ve little statistical basis to make general statements about whether this pattern will appear elsewhere.”
Relying on testimony about quantitative research poses a two-pronged problem, Goldfarb says. The first is that social science research frequently relies on convenient data sets. The second arises when scholars have to dig into the data itself to make sense of it. Once they are forced to do that, the data can no longer be properly interpreted using common tools such as p-values, he says.
Goldfarb likens what happens to someone trying to hit a bullseye. One approach, he says, is to aim at the target and hope to hit it; the other is to shoot the arrow first and then draw a bullseye around wherever it lands.
“What we, as social scientists, are often doing is the latter, because we’ve run tests on the data, reasoned about what it means, found theories that explain the results and then done more analysis to try to ‘test’ the theory,” Goldfarb says. “In reality, what we’re doing is calibrating and matching an explanation to the data as opposed to testing an explanation with the data.”
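The statistical pitfall behind this bullseye analogy can be illustrated with a toy simulation (not from the paper itself): if a researcher searches pure noise for the candidate explanation that fits best and then "tests" that same explanation on the same data, apparently significant results appear far more often than the nominal 5% rate a p-value promises. The candidate counts and thresholds below are illustrative assumptions, not parameters from Goldfarb's study.

```python
import random
import statistics

random.seed(0)

def post_hoc_hit(n_candidates=20, n_obs=30):
    """Generate pure-noise data, then 'discover' the candidate variable
    most correlated with the outcome and test it on the same data.
    Returns True if that post-hoc test looks significant (|t| > 2,
    roughly the two-sided 5% threshold)."""
    outcome = [random.gauss(0, 1) for _ in range(n_obs)]
    my = statistics.mean(outcome)
    sy = statistics.stdev(outcome)
    best_t = 0.0
    for _ in range(n_candidates):
        x = [random.gauss(0, 1) for _ in range(n_obs)]
        mx = statistics.mean(x)
        sx = statistics.stdev(x)
        # Pearson correlation and its t statistic
        r = sum((a - mx) * (b - my) for a, b in zip(x, outcome)) / (
            (n_obs - 1) * sx * sy
        )
        t = r * ((n_obs - 2) / (1 - r * r)) ** 0.5
        best_t = max(best_t, abs(t))
    return best_t > 2.0

trials = 500
hits = sum(post_hoc_hit() for _ in range(trials))
print(f"'Significant' findings in pure noise: {hits / trials:.0%}")
# With 20 candidate explanations per data set, well over 5% of
# trials yield an apparently significant result, even though every
# variable is random noise.
```

With about 20 candidate explanations, the chance that at least one clears the significance bar by luck alone is roughly 1 − 0.95²⁰ ≈ 64% – which is why a p-value computed after the explanation was chosen from the data no longer means what it appears to mean.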
This is OK and even important, Goldfarb says. Scholars might not have a choice given the data at their disposal, and developing explanations for patterns in the data is an important part of theorizing. However, when studying different industries, strategic settings or financial markets, scholars should be straightforward about this calibration exercise, he says.
“We have been trying to say that every paper we write has some replicable result that is generally true,” says Goldfarb. “Instead, we should devote a lot more attention to careful thinking about whether an explanation from one industry is likely to hold in another. For example, if someone has an interesting observation on the computer industry, let’s try to imagine what it might mean in other industries. People can agree or disagree, but that approach is much different than saying the data points to a general law about how the world works.”
Goldfarb hopes the research helps scholars correct their misinterpretations and misapplications of empirical data, engage in better research practices and report results to readers in a more forthright manner.
“When somebody tells us something, why are we inclined to believe them? Under what circumstances do we believe them? From the outside, academics are given a lot of credence, but we have to be careful that this credence is earned,” says Goldfarb.
“Hopefully, with some luck, this will cause people in this field to write and describe their work differently.”
Read More: “Learning from Testimony on Quantitative Research in Management” is published in the Academy of Management Review.