Ilya O. Ryzhov
4322 Van Munching Hall
Decision, Operations, and Information Technologies
Robert H. Smith School of Business
University of Maryland
College Park, MD 20742

(curriculum vitae)


I am an Assistant Professor in the Department of Decision, Operations, and Information Technologies at the Robert H. Smith School of Business, University of Maryland. I received my Ph.D. from Princeton University in 2011. I am a co-author (with W.B. Powell) of the book Optimal Learning, available now.

I study the role that information plays in optimization and decision analysis. There are countless situations in which we have to make decisions under uncertainty. We may be pricing a new product, with only a rough idea of the demand curve. Or we may have to decide on a production plan for a new product, with only a guess at its potential profit margin. In supply chain management, our service costs are affected by the reliability of our suppliers, but reliability can only be evaluated after we sign a contract with a supplier for a period of time.

In all of these problems, we have to make decisions based only on an incomplete or approximate understanding of our environment. We are thus faced with a problem known as exploration vs. exploitation, or learning vs. earning. Each decision carries immediate economic benefits (revenue from sales), but it also provides information that will bring about future benefits (observed demand). It may even be desirable to sacrifice short-term gains for the sake of information (for example, setting a higher price to explore the possibility of high demand). I optimize this tradeoff by modeling the economic value of information and incorporating these models into the decision-making process.
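As a minimal illustration of the learning-vs-earning tradeoff (this is a generic sketch, not a method from my research; the two "prices," their true mean profits, and the exploration rate are all invented for the example), consider an epsilon-greedy rule that usually chooses the price currently believed best, but occasionally experiments with a random price to gather demand information:

```python
import random

# Hypothetical two-price problem: the true mean profit of each price
# is unknown to the decision maker and must be learned from noisy sales.
true_means = [1.0, 1.5]   # made-up "true" profits, hidden from the agent
estimates = [0.0, 0.0]    # running average observed profit per price
counts = [0, 0]           # number of times each price has been tried
epsilon = 0.1             # chance of exploring a random price

random.seed(0)
total_profit = 0.0
for t in range(1000):
    if random.random() < epsilon:
        arm = random.randrange(2)  # explore: sacrifice short-term profit for information
    else:
        arm = max(range(2), key=lambda a: estimates[a])  # exploit the best estimate
    reward = random.gauss(true_means[arm], 1.0)          # noisy profit observation
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # update belief
    total_profit += reward
```

The exploration steps lower revenue in the short run, but the information they produce lets the exploitation steps settle on the more profitable price.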

I use the term optimal learning to describe strategies that explicitly consider the value of information, and the tradeoff between learning and earning, when making decisions. The book Optimal Learning discusses how to model and solve many different types of learning problems, beginning with the classic models of ranking and selection and multi-armed bandits, and moving on to more sophisticated decision problems.
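One well-known way to value information in ranking and selection is the knowledge-gradient policy, which measures each alternative by the expected improvement in the best posterior mean that one more observation of it would produce. The sketch below implements the standard formula for independent normal beliefs with known measurement noise; the prior means, variances, and noise level in the usage example are made up for illustration:

```python
import math

def norm_pdf(z):
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def norm_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def knowledge_gradient(mu, sigma2, noise_var):
    """Knowledge-gradient values for ranking and selection with independent
    normal beliefs (prior means mu, prior variances sigma2) and known
    measurement noise variance noise_var."""
    kg = []
    for x in range(len(mu)):
        best_other = max(m for i, m in enumerate(mu) if i != x)
        # predictive std. dev. of the change in the posterior mean of x
        sigma_tilde = sigma2[x] / math.sqrt(sigma2[x] + noise_var)
        zeta = -abs(mu[x] - best_other) / sigma_tilde
        kg.append(sigma_tilde * (zeta * norm_cdf(zeta) + norm_pdf(zeta)))
    return kg

# Illustrative beliefs: three alternatives with equal means but different
# uncertainty; the policy measures the alternative with the largest KG value.
kg_values = knowledge_gradient([1.0, 1.0, 1.0], [1.0, 1.0, 4.0], 1.0)
```

With equal means, the most uncertain alternative has the largest knowledge gradient, which captures the intuition that measurements are most valuable where our beliefs are weakest.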