A New Way To Find The Right Ad Words
How can advertisers better gauge how well their paid search ads are doing?
New research from Michael Trusov, associate professor of marketing at the University of Maryland’s Robert H. Smith School of Business, and two co-authors proposes a new, two-part approach.
First, the researchers suggest collecting primary data on hundreds of ads, through paired comparisons of their relative ability to generate interest and clicks. Trusov and co-authors Oliver J. Rutz from the University of Washington and Garrett P. Sonnier of the University of Texas at Austin used a statistical model calibrated on paired comparisons, the Elo algorithm, to score the full set of ads. The estimated scores confirm a clear link between how an ad is perceived and how it performs.
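The Elo idea can be illustrated with a minimal sketch. This is not the authors' exact model, only the basic mechanism: each paired comparison is treated like a match, the ad that drew more interest "wins," and ratings are nudged toward the observed outcome. The ad names and comparison data below are hypothetical.

```python
# Minimal Elo-style scoring from paired comparisons of ads.
# Not the study's calibrated model; an illustration of the mechanism.

def expected(r_a, r_b):
    """Probability ad A beats ad B under the Elo logistic model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(ratings, winner, loser, k=32):
    """Nudge both ratings toward the observed outcome of one comparison."""
    e_win = expected(ratings[winner], ratings[loser])
    ratings[winner] += k * (1 - e_win)
    ratings[loser] -= k * (1 - e_win)

# Hypothetical comparisons: (winner, loser) pairs from a copy test.
comparisons = [("ad_A", "ad_B"), ("ad_A", "ad_C"), ("ad_B", "ad_C")]
ratings = {ad: 1500.0 for pair in comparisons for ad in pair}
for w, l in comparisons:
    update(ratings, w, l)

ranking = sorted(ratings, key=ratings.get, reverse=True)
print(ranking)  # ad_A ranks first after winning both its comparisons
```

Because each update uses only one pair at a time, the scheme handles exactly the situation the study faces: the full set of ads is never compared exhaustively, yet every ad ends up with a comparable score.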
Second, the authors predict the perceptions and performance of new ads relative to the existing set, using textual metrics extracted from ad content via text-mining methods. This predictive model allows for direct effects and interactions of the text metrics.
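A predictive model with direct effects and interactions can be sketched as follows. The specific metrics here (word count, presence of a price, presence of a brand term) and the brand name are invented for illustration; the study's actual text metrics and estimation method are richer. The sketch only shows how main-effect and pairwise-interaction terms would be assembled into a feature row, which one would then regress against observed Elo scores.

```python
# Sketch: build main effects plus pairwise interactions from toy
# text metrics mined from an ad's copy. Metric names are hypothetical.
import itertools

def text_metrics(ad):
    """Toy feature extraction from ad copy."""
    words = ad.lower().split()
    return {
        "length": float(len(words)),
        "has_price": float(any(w.startswith("$") for w in words)),
        "has_brand": float("acme" in words),  # "acme" is a made-up brand
    }

def design_row(metrics):
    """Main effects followed by all pairwise interaction terms."""
    names = sorted(metrics)
    row = [metrics[n] for n in names]
    row += [metrics[a] * metrics[b] for a, b in itertools.combinations(names, 2)]
    return row

row = design_row(text_metrics("Acme widgets from $9.99 free shipping"))
print(row)  # [1.0, 1.0, 6.0, 1.0, 6.0, 6.0]
```

Fitting a linear model on such rows, with scores from the existing ad set as the response, would let a manager score a brand-new ad from its text alone, before running it.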
Paid search text ads — those that float to the top of the browser results when you’ve searched for certain keywords — are big business. And they’re a powerful advertising tool. In 2015, total spending on internet advertising amounted to $59.6 billion, according to PricewaterhouseCoopers, with nearly half of that sum related to search engine marketing.
“Traditional advertising often aims to build awareness or create unique and favorable brand image associations,” the study explains. “In contrast, the main goal of a text ad is to entice the consumer to respond immediately by clicking on the ad.”
Text ads are relatively simple and cheap to create. Typical text ads that appear on Google, for example, are formatted to consist of a headline, a display URL and two lines of text. There’s little content and, arguably, little creativity involved. In some cases, the ad is created by machine-based dynamic keyword insertion.
“Whether created by humans or machines, it is relatively inexpensive to generate a large corpus of text advertisements,” the study says. “Copy testing these ads, however, is not as straightforward.”
The study’s modeling approach enables managers to assess the relative performance of text ads. While many managers use A/B or multivariate testing to evaluate different paid search ad copies, those methods aren’t well suited to evaluating large numbers of ads.
Even with a relatively small number of ads, such tests might take several months to generate enough data to yield reliable results. “Our approach complements A/B and multivariate testing by effectively pre-screening a large number of ads, and yields a ranking of the set of ads on perceptions and performance based on paired comparisons,” the study says.
To evaluate a large number of ads, the researchers borrowed a tactic used in sports to rank-order hundreds of players and teams when only nonexhaustive paired comparisons are available and the comparisons are not generated by an experimental design. The method “ably rank-orders the ads in terms of perceptions and performance,” the study says.
“Furthermore,” the researchers concluded, “we show that the scores generated by this approach are internally stable, closely approximate observed click behavior for paid search ads and are consistent with theory that suggests a positive relationship between perceptions and performance.”
Read more: A New Method to Aid Copy Testing of Paid Search Text Advertisements appears in the Journal of Marketing Research.