Contextual advertising involves matching features of ads to features of the media context where they appear. We propose AdGazer, a new machine learning procedure to support contextual advertising. It comprises a theoretical framework organizing high- and low-level features of ads and contexts, feature engineering models grounded in this framework, an XGBoost model predicting ad and brand attention, and an algorithm optimally assigning ads to contexts. AdGazer includes a Multimodal Large Language Model to extract high-level topics that predict the ad-context match. Our research uses a unique eye-tracking database containing 3,531 digital display ads and their contexts, with aggregate ad and brand gaze times. We compare AdGazer's predictive performance to two feature learning models, VGG16 and ResNet50. AdGazer predicts highly accurately, with hold-out correlations of 0.83 for ad gaze and 0.80 for brand gaze, outperforming both feature learning models and generalizing better to out-of-distribution ads. Context features jointly contributed at least 33% to predicted ad gaze and about 20% to predicted brand gaze, which is good news for managers practicing or considering contextual advertising. We demonstrate that the theory-informed AdGazer effectively matches ads to advertising vehicles and their contexts, optimizing ad gaze more than current practice and alternatives like text-based and native contextual advertising.
Michel Wedel (UMD Smith); Jianping Ye (UMD PhD student); and Rik Pieters (Tilburg University, the Netherlands)
Journal of Marketing
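The sketch below illustrates, under simplifying assumptions, the two quantitative steps the abstract describes: predicting gaze time from engineered ad and context features with an XGBoost regressor, and then assigning ads to contexts to maximize total predicted ad gaze. The synthetic data, feature layout, and the use of a linear-assignment solver are illustrative choices, not the paper's implementation.

```python
# Illustrative sketch (not the authors' code): gradient-boosted gaze
# prediction over engineered ad/context features, followed by an optimal
# one-to-one assignment of ads to contexts. All data here is synthetic.
import numpy as np
import xgboost as xgb
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)

# Hypothetical engineered features: columns 0-2 describe the ad (e.g., brand
# surface, visual complexity), columns 3-5 describe the context (e.g., topic
# match, page clutter). The target is observed ad gaze time in seconds.
X_train = rng.normal(size=(500, 6))
y_train = 2.0 + X_train[:, 0] + 0.5 * X_train[:, 3] + 0.3 * rng.normal(size=500)

model = xgb.XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.05)
model.fit(X_train, y_train)

# Score every (ad, context) pair by concatenating the ad's features with the
# context's features and predicting gaze for that pairing.
n_ads, n_contexts = 8, 8
ad_feats = rng.normal(size=(n_ads, 3))
ctx_feats = rng.normal(size=(n_contexts, 3))
gaze = np.zeros((n_ads, n_contexts))
for i in range(n_ads):
    for j in range(n_contexts):
        pair = np.concatenate([ad_feats[i], ctx_feats[j]])[None, :]
        gaze[i, j] = model.predict(pair)[0]

# Assign ads to contexts so total predicted gaze is maximized (negate the
# matrix because linear_sum_assignment minimizes cost). The paper's own
# assignment algorithm may differ.
rows, cols = linear_sum_assignment(-gaze)
for ad, ctx in zip(rows, cols):
    print(f"ad {ad} -> context {ctx}: predicted gaze {gaze[ad, ctx]:.2f}s")
```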