Calvin French-Owen

Data is critical to building great apps. Engineers and analysts can understand how customers interact with their brand at any time of day, from any place, on any device, and use that information to build a product customers love. But there are countless ways to track, manage, transform, and analyze that data. And when companies are also trying to understand experiences across devices and the effect of mobile marketing campaigns, data engineering gets even trickier. What's the right way to use data to help customers better engage with your app?

In this all-star panel, hear from mobile experts at Instacart, Branch Metrics, Pandora, Invoice2Go, Gametime, and Segment about the best practices they use for tracking mobile data and powering their analytics.


Nick Chamandy

Simple “random-user” A/B experiment designs fall short in the face of complex dependence structures. These can come in the form of large-scale social graphs or, more recently, spatio-temporal network interactions in a two-sided transportation marketplace. Naive designs are susceptible to statistical interference, which can lead to biased estimates of the treatment effect under study.

In this talk we discuss the implications of interference for the design and analysis of live experiments at Lyft. A link is drawn between design choices and a spectrum of bias-variance tradeoffs. We also motivate the use of large-scale simulation for two purposes: as an efficient filter on candidate tests, and as a means of justifying the assumptions underlying our choice of experimental design.
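
To make the interference problem concrete, here is a toy Monte Carlo sketch in Python. It is purely illustrative and not Lyft's model: riders share a fixed pool of drivers, treatment raises request propensity, and a naive user-level split overstates the global treatment effect because treated users crowd out controls. All parameters and the `completed` helper are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def completed(requests, supply):
    """Randomly fulfil at most `supply` of the requested rides."""
    fulfilled = np.zeros_like(requests)
    requesters = np.flatnonzero(requests)
    served = rng.permutation(requesters)[:supply]
    fulfilled[served] = 1
    return fulfilled

def simulate(n_users=10_000, supply=6_000, lift=0.2, n_reps=200):
    """Toy two-sided market: treatment raises request propensity, but
    completed rides are capped by shared driver supply, so treated
    users' extra demand crowds out control users (interference)."""
    naive, truth = [], []
    for _ in range(n_reps):
        base = rng.uniform(0.4, 0.8, n_users)  # request propensity
        treated_p = np.clip(base * (1 + lift), 0, 1)
        # Ground truth: everyone in control vs. everyone treated.
        truth.append(
            completed(rng.binomial(1, treated_p), supply).mean()
            - completed(rng.binomial(1, base), supply).mean())
        # Naive 50/50 user-level experiment on the *shared* supply.
        z = rng.binomial(1, 0.5, n_users)
        comp = completed(
            rng.binomial(1, np.where(z == 1, treated_p, base)), supply)
        naive.append(comp[z == 1].mean() - comp[z == 0].mean())
    return np.mean(naive), np.mean(truth)

naive_est, true_effect = simulate()
print(f"naive user-level estimate: {naive_est:+.4f}")
print(f"true global effect:        {true_effect:+.4f}")
```

Because supply is constrained, the true global effect is close to zero while the naive estimate reports a sizeable lift, which is exactly the kind of bias the talk addresses.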


Ben Packer

With the world’s largest residential energy dataset at their fingertips, Opower is uniquely situated to use Machine Learning to tackle problems in demand-side management. Their communication platform, which reaches millions of energy customers, allows them to build those solutions into their products and make a measurable impact on energy efficiency, customer satisfaction and cost to utilities.

In this talk, Opower surveys several Machine Learning projects they've been working on, ranging from predicting customer propensity to clustering load curves for behavioral segmentation, and leveraging both supervised and unsupervised techniques.
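
As a rough sketch of what the load-curve clustering might look like (illustrative only, not Opower's pipeline), the snippet below generates synthetic hourly load curves and groups them by shape with k-means in scikit-learn:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

rng = np.random.default_rng(42)

# Synthetic hourly load curves for 1,000 households: a "morning peak"
# shape and an "evening peak" shape, plus noise.
hours = np.arange(24)
morning = np.exp(-0.5 * ((hours - 7) / 2.0) ** 2)
evening = np.exp(-0.5 * ((hours - 19) / 2.5) ** 2)
shapes = np.stack([morning, evening])
labels = rng.integers(0, 2, size=1000)
loads = shapes[labels] + 0.1 * rng.random((1000, 24))

# Normalize each curve so clustering captures *shape*, not total volume.
X = normalize(loads, norm="l1")

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for k in range(2):
    centroid = km.cluster_centers_[k]
    print(f"cluster {k}: peak hour = {centroid.argmax()}h, "
          f"{(km.labels_ == k).sum()} households")
```

The resulting clusters (morning-peakers vs. evening-peakers here) are the kind of behavioral segments that can drive targeted efficiency programs.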

Ben Packer is the Principal Data Scientist at Opower. Ben earned a bachelor's degree in Cognitive Science and a master's degree in Computer Science at the University of Pennsylvania. He then spent half a year living in a cookie factory before coming out to the West Coast, where he did his Ph.D. in Machine Learning and Artificial Intelligence at Stanford.

Justine Kunz is a Data Scientist at Opower. She recently completed her master’s degree in Computer Science at the University of Michigan with a concentration in Big Data and Machine Learning. Now she works on turning ideas into products from the initial Machine Learning research to the production pipeline.

This talk is from the Data Science for Sustainability meetup in June 2016.

Calvin French-Owen

Segment’s API has scaled significantly over the past three years and has grown from processing a trickle of events to tens of thousands per second. Today, Segment processes tens of billions of events each month and sends them to hundreds of partner APIs.

This is a hostile environment: partners fail frequently, customers send highly variable data, and instances die regularly. As a result, Segment has invested heavily in tools for monitoring, failover, and fairness when routing events through its system.
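
One common building block for this kind of resilience is retrying transient partner failures with exponential backoff and jitter. The sketch below shows the general pattern only; `send`, `TransientError`, and the parameters are hypothetical stand-ins, not Segment's code:

```python
import random
import time

class TransientError(Exception):
    """Retryable delivery failure (timeout, 5xx from a partner API)."""

def send_with_backoff(send, event, max_attempts=5, base_delay=0.5):
    """Deliver `event` via `send()`, retrying transient partner
    failures with exponential backoff and full jitter."""
    for attempt in range(max_attempts):
        try:
            return send(event)
        except TransientError:
            if attempt == max_attempts - 1:
                raise  # in practice, hand off to a dead-letter queue
            # Full jitter: sleep uniformly in [0, base_delay * 2^attempt].
            time.sleep(random.uniform(0, base_delay * 2 ** attempt))
```

In practice a caller would wrap each partner API client in something like this, with per-partner queues so that one failing destination cannot starve the others.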

In this talk, CTO Calvin French-Owen will discuss how Segment continues to maintain a high quality of service, how its infrastructure has evolved over time and where it's heading in the future.

This talk is from DataEngConf SF in April 2016.

Karthik Ramasamy

Twitter generates billions and billions of events per day, and analyzing these events in real time presents a massive challenge. To meet it, Twitter designed and deployed a new streaming system called Heron. Heron has been in production for nearly two years and is widely used by several teams for diverse use cases. Twitter open-sourced Heron this year.
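
For readers new to the model, here is a toy, single-process illustration of the spout-to-bolt dataflow that Storm-style systems such as Heron use. This is plain Python, not the Heron API; real topologies run distributed across many containers:

```python
from collections import Counter

def tweet_spout():
    """Spout: emits a stream of raw events (here, canned tweets)."""
    for text in ["heron scales", "heron streams events", "events scale"]:
        yield text

def split_bolt(stream):
    """Bolt: splits each tweet into word tuples."""
    for text in stream:
        yield from text.split()

def count_bolt(stream):
    """Bolt: maintains running counts per word."""
    counts = Counter()
    for word in stream:
        counts[word] += 1
        yield word, counts[word]

# Wire the toy topology together: spout -> split -> count.
for word, count in count_bolt(split_bolt(tweet_spout())):
    print(f"{word}: {count}")
```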

In this talk, you will learn about the operating experiences and challenges of running Heron at scale and the approaches that the team at Twitter took to solve those challenges.


Amir Najmi

Scalable web technology has greatly reduced the marginal cost of serving users, so an individual business today may support a very large user base. With so much data, one might imagine that it is easy to obtain statistical significance in live experiments. However, this is not always the case. Often, the very business models enabled by the web require answers for which our data is information-poor.

In this talk, Amir Najmi from Google will use a simple mathematical framework to discuss how experiment sizing interacts with the business model of some large-scale online services.
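
As a rough illustration of why a large user base does not guarantee statistical power, the standard two-sample sizing formula (a textbook calculation, not taken from the talk) shows that a small effect relative to metric variance still demands enormous samples:

```python
from math import ceil
from scipy.stats import norm

def required_n_per_arm(mde, sigma, alpha=0.05, power=0.8):
    """Sample size per arm to detect a difference in means `mde`
    with noise sd `sigma` (standard two-sided z-test formula)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return ceil(2 * ((z_alpha + z_beta) * sigma / mde) ** 2)

# Detecting a 0.001 shift in a metric with unit variance takes
# about 15.7 million users per arm:
print(required_n_per_arm(mde=0.001, sigma=1.0))
```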

Amir Najmi is Principal Quantitative Analyst at Google. He received a PhD in Electrical Engineering from Stanford University under Robert Gray and Richard Olshen. Amir works on statistical modeling and prediction methodology for large-scale high-dimensional data. He is interested in a critical understanding of mathematical models, and the role of human insight in machine learning.

This talk was given at the SF Data Engineering meetup in May 2016.

Greg Dingle

Tech businesses know how they're doing by the numbers on a screen. The weakest link in the process of analysis is usually the part in front of the keyboard: people are not designed to think about abstract quantities. Decision scientists have spent decades describing exactly how people go wrong, and you can overcome your biases only by being aware of them. Greg Dingle will walk you through some common biases, examples, and corrective measures.

Greg Dingle's Bio: "My first love was science. I was happily ensconced in a PhD program in evolutionary psychology when Y Combinator came calling. I moved to SF, lived the startup life for two years, and then Facebook bought my two-person company. I rode that rocketship for 7 years and wrote lots of code, ending up specializing in building tools for data analysis: query tools, visualization tools, and workflow tools. This past March, I quit Facebook and joined a young startup, ParseHub, as a co-founder. We make web scraping easy."

This talk was given at the SF Data Science Meetup at Galvanize in May 2016.

Ramesh Johari

A/B testing is a hallmark of Internet services: from e-commerce sites to social networks to marketplaces, nearly all online services use randomized experiments as a mechanism to make better business decisions. Such tests are generally analyzed using classical frequentist statistical measures: p-values and confidence intervals.

Despite their ubiquity, these reported values are computed under the assumption that the experimenter will not continuously monitor the test; in other words, there should be no repeated "peeking" at the results that affects the decision of whether to continue the test. On the other hand, one of the greatest benefits of advances in information technology, computational power, and visualization is precisely that experimenters can watch experiments in progress, with greater granularity and insight than ever before.
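
A quick simulation makes the peeking problem concrete. The sketch below (illustrative, not Optimizely's method) runs A/A tests with no true effect, checks the p-value at fixed intervals, and stops at the first "significant" result; the realized false positive rate lands well above the nominal 5%:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def peeking_false_positive_rate(n_experiments=2000, n_obs=1000,
                                peek_every=100, alpha=0.05):
    """Simulate A/A tests (no true effect) where the experimenter
    peeks after every `peek_every` observations and declares a win
    at the first p < alpha."""
    false_positives = 0
    for _ in range(n_experiments):
        a = rng.normal(size=n_obs)
        b = rng.normal(size=n_obs)
        for n in range(peek_every, n_obs + 1, peek_every):
            _, p = stats.ttest_ind(a[:n], b[:n])
            if p < alpha:
                false_positives += 1
                break
    return false_positives / n_experiments

print(f"nominal alpha: 0.05, realized: {peeking_false_positive_rate():.3f}")
```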

What You Will Learn:
Based on some of Ramesh's work at Optimizely, you'll learn how their optimization platform addresses continuous monitoring of experiments.

Prerequisites:
Basic statistics would be helpful.

Where To Learn More:
- Nontechnical blog post at Optimizely.com
- Technical post (PDF) from Optimizely
- Full paper on arXiv.org

These slides are from a talk given at the SF Data Engineering meetup.

Yunliang Jiang

Yunliang Jiang, an engineer at Thumbtack, shares data mining techniques from his PhD research that he has applied to the wealth of unstructured health data available online.

The development of Web 2.0 techniques has led to the prosperity of online communities, which have spread to many domains and areas of our daily lives. In the medicine and healthcare domain, online services such as Yahoo! Groups, WebMD, and MedHelp offer patients and physicians a platform to discuss health problems, e.g., diseases and drugs, diagnoses and treatments, and they also provide a large volume of data for researchers to analyze and explore. However, the nature of these personal messages, e.g., unclean, unstructured, and isolated from clinical practice, hinders users' effective digestion of information on the front end and challenges data analysis on the back end. In this scenario, the objective of Yunliang's thesis is to apply advanced data mining, information retrieval, and natural language processing techniques to effectively analyze and reorganize this rich source of personal health messages from online medical communities, in order to satisfy patients' information needs and support physicians' clinical practice.

Yunliang introduces an SVM-based multi-class classification method which utilizes term-appearance, lexical, and semantic features to effectively classify health messages, sampled from a unique dataset of Yahoo! Health Groups, into three categories: News, User Comments, and Spam. He also describes a comprehensive system, with an extensive evaluation framework, that organizes and clusters patient outcomes using topic models, grouping large collections of personal comments into a series of topics guided by expert comments. In the third part, Yunliang addresses a novel and promising topic, Comparative Effectiveness Research (CER) hypothesis prediction, presenting a study that evaluates patients' opinions on different treatments via machine-enabled sentiment analysis and human analysts using the MedHelp dataset. By comparing such opinions with three different methods, reliable and consistent conclusions can be drawn about patients' preferences among treatments, which in turn speak to the treatments' effectiveness.
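
As a rough sketch of the classification step (illustrative; the actual method also uses lexical and semantic features beyond term counts), a TF-IDF bag-of-words model with a linear SVM in scikit-learn might look like this, with toy messages standing in for the Yahoo! Health Groups data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-ins for health-group messages; real training data would
# be labeled samples from the actual corpus.
messages = [
    "FDA approves new diabetes drug after trial",
    "My doctor switched my dosage and the side effects got worse",
    "CLICK HERE for cheap meds no prescription needed!!!",
    "Study links exercise to lower blood pressure",
    "Has anyone else had headaches on this treatment?",
    "Buy now, limited offer, miracle cure guaranteed",
]
labels = ["news", "comment", "spam", "news", "comment", "spam"]

# TF-IDF features over unigrams and bigrams, fed to a linear SVM.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(messages, labels)

print(clf.predict(["New study shows promising results for arthritis"]))
```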


This video was recorded at the SF Bay Area Machine Learning meetup at Thumbtack in SF.

Max Sklar


Max Sklar and Maryam Aly from Foursquare lead this session at Tech@NYU's Startup Week. They cover the theory and history of natural language processing (NLP), as well as Foursquare's own journey in dealing with the millions of "tips" that users write for one another.

