Experiments in a Data-Rich, Information-Poor World
Scalable web technology has greatly reduced the marginal cost of serving users, so an individual business today may support a very large user base. With so much data, one might imagine that it is easy to obtain statistical significance in live experiments. However, this is not always the case. Often, the very business models enabled by the web require answers for which our data is information poor.
In this talk, Amir Najmi from Google will use a simple mathematical framework to discuss how experiment sizing interacts with the business model of some large-scale online services.
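To make the "data-rich, information-poor" tension concrete, here is a minimal sketch (not taken from the talk) of the standard two-proportion sample-size formula for an A/B test. The baseline rate, lift, and function name are illustrative assumptions; it shows how a small relative effect on a rare event can demand millions of users per arm even when the service has far more users overall.

```python
from statistics import NormalDist

def samples_per_arm(base_rate: float, rel_lift: float,
                    alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate users needed per arm of an A/B test to detect a
    relative lift in a conversion rate (two-sided z-test, normal approx.).
    Illustrative helper, not from the talk."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for two-sided test
    z_beta = NormalDist().inv_cdf(power)            # critical value for desired power
    delta = base_rate * rel_lift                    # absolute effect size
    variance = 2 * base_rate * (1 - base_rate)      # variance of the rate difference
    return int(round((z_alpha + z_beta) ** 2 * variance / delta ** 2))

# A 2% conversion rate with a 1% relative lift requires millions of users
# per arm -- "big data" does not guarantee statistical significance.
print(samples_per_arm(0.02, 0.01))
```

The key point the sketch illustrates: required sample size grows with the inverse square of the effect size, so halving the detectable lift quadruples the users needed.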
Amir Najmi is Principal Quantitative Analyst at Google. He received a PhD in Electrical Engineering from Stanford University under Robert Gray and Richard Olshen. Amir works on statistical modeling and prediction methodology for large-scale high-dimensional data. He is interested in a critical understanding of mathematical models, and the role of human insight in machine learning.
This talk was given at the SF Data Engineering meetup in May 2016.