## Author Archives: Matt Gershoff

## Getting Past Statistical Significance: Foundations of AB Testing and Experimentation

How often is AB Testing reduced to the following question: ‘what sample size do I need to reach statistical significance for my AB Test?’ On the face of it, this question sounds reasonable. However, unless you know why you want to run a test at a particular significance level, or what the relationship is between sample […]

## What is the value of my AB Testing Program?

Occasionally we are asked by companies how they should best assess the value of running their AB testing programs. I thought it might be useful to put down in writing some of the points to consider if you find yourself asked this question. With respect to hypothesis tests, there are two main sources of value: […]

## Do No Harm or AB Testing without P-Values

A few weeks ago I was talking with Kelly Wortham during her excellent AB Testing webinar series. During the conversation, one of the attendees asked whether, if they just wanted to pick between A and B, they really needed to run standard significance tests at a 90% or 95% confidence level. The simple answer is […]

## Thompson Sampling or how I learned to love Roulette

Multi-armed bandits, Bayesian statistics, machine learning, AI, predictive targeting blah blah blah. So many technical terms, morphing into buzz words, that it gets confusing to understand what is going on when using these methods for digital optimization. Hopefully this post will give you a basic idea of how adaptive learning works, at least here at […]

## Going from AB Testing to AI: Optimization as Reinforcement Learning

In this post we are going to introduce an optimization approach from artificial intelligence: Reinforcement Learning (RL). Hopefully we will convince you that it is both a powerful conceptual framework for organizing how to think about digital optimization and a set of useful computational tools to help us solve online optimization problems. Video […]

## Machine Learning and Human Interpretability

The key idea behind Conductrics is that marketing optimization is really a reinforcement learning problem, a class of machine learning problem, rather than an AB testing problem. Framing optimization as a reinforcement learning problem allowed us to provide, from the very beginning, not just AB and multivariate testing tools, but also multi-armed bandits, predictive targeting, and a type of multi-touch decision attribution […]

## Conductrics 3.0 Release

Today I am happy to announce Conductrics 3.0, the third major release of our universal optimization platform. Conductrics 3.0 represents the next generation of personalized optimization technology, blending experimentation with machine learning to help deliver the best customer experiences across every marketing channel. Conductrics 3.0 highlights include: Conductrics Express – You asked and we listened. While many […]

## Segmentation and Shrinkage

In our last post, we introduced the idea of shrinkage. In this post we are going to extend that idea to improve our results when we segment our data by customer. Often what we really want is to discover what digital experience is working best for each customer. A major problem is that as we segment […]

## Prediction, Pooling, and Shrinkage

As some of you may have noticed, there are little skirmishes that occasionally break out in digital testing and optimization. There is the AB test vs. multi-armed bandit debate (both are good, depending on the task), the standard vs. multivariate testing debate (same, both good), and the Frequentist vs. Bayesian testing argument (also, both good). In the […]

## Easy Introduction to AB Testing and P-Values

A version of this post was originally published over at Conversion XL. For all of the talk about how awesome (and big, don’t forget big) Big Data is, one of the favorite tools in the conversion optimization toolkit, AB Testing, is decidedly small data. Optimization, winners and losers, Lean this, that, or the other thing, at […]