Getting Past Statistical Significance: Foundations of AB Testing and Experimentation

How often is AB Testing reduced to the following question: 'What sample size do I need to reach statistical significance for my AB Test?' On the face of it, this question sounds reasonable. However, unless you know why you want to run a test at a particular significance level, or what the relationship is between sample size and that significance level, you are most likely missing some basic concepts that will help you get even more value out of your testing programs.

There are also a fair number of questions around how to run AB Tests, which methods are best, and the various 'gotchas' to look out for. In light of this, I thought it might be useful to step back and review some of the very basics of experimentation and why we are running hypothesis tests in the first place. This is not a how-to guide, a collection of different types of tests to run, or a list of best practices.

What is the problem we are trying to solve with experiments?

We are trying to isolate the effect, if any, of taking some action on our website (mobile apps, call centers, etc.) on some objective result. For example, if we change the button color to blue rather than red, will that increase conversions, and if so, by how much?

What is an AB test?

AB and multivariate tests are versions of randomized controlled trials (RCTs). An RCT is an experiment where we take a sample of users and randomly assign them to control and treatment groups. The experimenter then collects performance data, for example conversions or purchase values, for each of the groups (control, treatment).

I find it useful to think of RCTs as having three main components:  1) data collection; 2) estimating effect sizes; and 3) assessing our uncertainty of the effect size and mitigating certain risks around making decisions based on these estimates.

 

Collection of the Data

What do you mean by sample?

A sample is a subset of the total population under investigation. Keep in mind that in most AB testing situations, while we randomly assign users to treatments, we don't randomly sample. This may seem surprising, but in the online situation the users present themselves to us for assignment (e.g. they come to the home page). This can lead to selection bias if we don't try to account for this nonrandom sampling in our data collection process. Selection bias will make it more difficult, if not impossible, to draw conclusions about the population we are interested in from our test results. One often-effective way to mitigate this is to run our experiments over full weeks, or months, etc., to try to ensure that our samples look as much as possible like our user/customer population.

Why do we use randomized assignments?

Because of “Confounding”. I will repeat this several times, but confounding is the single biggest issue in establishing a causal relation between our treatments and our performance measure.

What is Confounding?

Confounding is when the treatment effect gets mixed together with the effects from any other outside influence. For example, suppose we are interested in the treatment effect of Button Color (or Price, etc.) on conversion rate (or average order size, etc.). When assigning users to button color, we give everyone who visits on Sunday the 'Blue' button treatment, and everyone on Monday the 'Red' button treatment. But now the 'Blue' group is comprised of both Sunday users and the Blue Button, and the 'Red' group is both Monday users and the Red Button. Our data looks like this:

Sunday:Red 0%   Monday:Red 100%
Sunday:Blue 100%   Monday:Blue 0%

We have mixed together the data such that any of the user effects related to day are tangled together with the treatment effects of button color.

What we want is for each of our groups to both: 1) look like one another except for the treatment selection (no confounding); and 2) to look like the population of interest (no selection bias).

If we randomly assign the treatments to users, then we should on average get data that looks like this:

Sunday:Red 50%   Monday:Red 50%
Sunday:Blue 50%   Monday:Blue 50%

Where each day we have a 50/50 split of button color treatments. Here the relationship between day and button assignment is broken, and we can estimate the average treatment effects without having to worry as much about the influence of outside effects. (This isn't perfect, of course, since it holds only on average – it is possible, due to sampling error, that any given sample isn't balanced over all of the cofactors/confounders.)
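To make the contrast concrete, here is a small simulation sketch in R (the rates and effect sizes are made up purely for illustration). Both versions share the same true button effect, but only the randomized assignment recovers it:

set.seed(42)
n <- 20000                                 # visitors: half arrive on Sunday, half on Monday
sunday <- rep(c(1, 0), each = n / 2)       # 1 = Sunday visitor, 0 = Monday visitor

# Hypothetical truth: Sunday visitors convert better, and Blue adds a small lift
base_rate   <- 0.05
day_effect  <- 0.03 * sunday               # Sunday users convert 3 points higher
blue_effect <- 0.01                        # true treatment effect of Blue vs Red

# Confounded assignment: everyone on Sunday gets Blue, everyone on Monday gets Red
blue_conf <- sunday
y_conf <- rbinom(n, 1, base_rate + day_effect + blue_effect * blue_conf)
mean(y_conf[blue_conf == 1]) - mean(y_conf[blue_conf == 0])    # ~0.04: day effect mixed in

# Randomized assignment: Blue assigned by coin flip, independent of day
blue_rand <- rbinom(n, 1, 0.5)
y_rand <- rbinom(n, 1, base_rate + day_effect + blue_effect * blue_rand)
mean(y_rand[blue_rand == 1]) - mean(y_rand[blue_rand == 0])    # ~0.01: the true button effect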

Of course, this mixing need not be this extreme – it is often much more subtle. When Ronny Kohavi advises being alert to 'sample ratio mismatch' (see https://exp-platform.com/hbr-the-surprising-power-of-online-experiments/), it is because of confounding. For example, say a bug in the treatment arm breaks the experience in such a way that some users don't get assigned. If this happens only for certain types of users, perhaps just for users on old browsers, then we no longer have a fully randomized assignment. The bug breaks randomization and lets the effect of old browsers leak in and mix with the treatment effect.

Confounding is the main issue one should be concerned about in AB Testing.  Get this right and you are most of the way there – everything else is secondary IMO.

Estimating Treatment Effects

We made sure that we got random selections, now what?

Well, one thing we might want to do is use our data to get an estimate of the conversion rate (or AOV etc.) for each group in our experiment.  The estimate from our sample will be our best guess of what the true treatment effect will be for the population under study.

For most simple experiments we usually just calculate the treatment effect using the sample mean from each group and subtract the control from the treatment: (Treatment Conversion Rate) – (Control Conversion Rate) = Treatment Effect. For example, if we estimate that the Blue Button has a conversion rate of 0.10 and the Red Button (our control) has a conversion rate of 0.11, then the estimated treatment effect is 0.10 – 0.11 = -0.01.

Estimating Uncertainty

Of course the goal isn’t to calculate the sample conversion rates, the goal is to make statements about the population conversion rates.  Our sample conversion rates are based on the particular sample we drew. We know that if we were to have drawn another sample, we almost certainly would have gotten different data, and would calculate a different sample mean (if you are not comfortable with sampling error, please take a look at https://conductrics.com/pvalues).

One way to assess uncertainty is by estimating a confidence interval for each treatment and control group's conversion rate. The main idea is that we construct an interval that is guaranteed to contain, or trap, the true population conversion rate with a frequency determined by the confidence level. So a 95% confidence interval procedure will, over repeated samples, contain the true population conversion rate 95% of the time. We could also calculate the difference in conversion rates between our treatment and control groups and calculate a confidence interval around this difference.
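As a quick sketch of what that looks like in practice (the counts below are made up), base R's prop.test gives you both the estimated conversion rates and a confidence interval around their difference:

# Hypothetical results: 500/10,000 conversions in treatment, 450/10,000 in control
test <- prop.test(x = c(500, 450), n = c(10000, 10000), conf.level = 0.95)
test$estimate                         # sample conversion rates (treatment, control)
test$estimate[1] - test$estimate[2]   # estimated treatment effect
test$conf.int                         # 95% confidence interval for the difference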

Notice that so far we have been able to: 1) calculate the treatment effect; and 2) get a measure of uncertainty in the size of the treatment effect with no mention of hypothesis testing.

Mitigating Risk

If we can estimate our treatment effect sizes and get a measure of our uncertainty around the size, why bother running a test in the first place? Good question.  One reason to run a test is to control for two types of error we can make when taking an action on the basis of our estimated treatment effect.

Type 1 Error – a false positive.

  1. One Tail: We conclude that the treatment has a positive effect size (it is better than the control) when it doesn’t have a real positive effect (it really isn’t any better).
  2. Two Tail: We conclude that the treatment has a different effect than the control (it is either strictly better or strictly worse) when it doesn’t really have a different effect than the control.

Type 2 Error – a false negative.

  1. One Tail: We conclude that the treatment does not have a positive effect size (it isn’t better than the control) when it does have a real positive effect (it really is better).
  2. Two Tail: We conclude that the treatment does not have a different effect than the control (it isn’t either strictly better or strictly worse) when it really does have a different effect than the control.

How to specify and control the probability of these errors?

Controlling Type 1 errors – the probability that our test will make a Type 1 error is called the significance level of the test. This is the alpha level you have probably encountered. An alpha of 0.05 means that we want to run the test so that we only make Type 1 errors up to 5% of the time. You are of course free to pick whatever alpha you like – perhaps an alpha of 1% makes more sense for your use case, or maybe an alpha of 0.1%. It is totally up to you! It all depends on how damaging it would be for you to take some action based on a positive result when the effect doesn't exist. Also keep in mind that this does NOT mean that if you get a significant result, only 5% (or whatever your alpha is) of the time it will be a false positive. The rate at which a significant result is a false positive will depend on how often you run tests that have real effects. For example, if you never run any experiments where the treatments are actually any better than the control, then all of your significant results will be false positives. In this worst-case situation, you should expect to see significant results in up to 5% (alpha%) of your tests, and all of them will be false positives (Type 1 errors).

You should spend as much time as needed to grok this idea, as it is the single most important idea you need to know in order to thoughtfully run your AB Tests.
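If it helps, here is a quick way to see this in R by simulating a world where the null is always true (the rates and sample sizes are made up for illustration):

set.seed(7)
n <- 5000       # visitors per arm
p <- 0.04       # both arms share the same true conversion rate, i.e. the null is true

p_values <- replicate(10000, {
  a <- rbinom(1, n, p)
  b <- rbinom(1, n, p)
  prop.test(c(b, a), c(n, n))$p.value
})

mean(p_values < 0.05)   # roughly 0.05 – and every one of these 'wins' is a false positive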

Controlling Type 2 errors – this is based on the power of the test, which in turn is based on beta. For example, a beta of 0.2 (power of 0.8) means that of all the times the treatment is actually superior to the control, your test would fail, on average, to discover this up to 20% of the time. Of course, like alpha, it is up to you, so maybe a power of 0.95 makes more sense, so that you make a Type 2 error only up to 5% of the time. Again, this will depend on how costly you consider this type of mistake. This is also important to understand well, so spend some time thinking about what this means. If this isn't totally clear, see a more detailed explanation of Type 1 and Type 2 errors here: https://conductrics.com/do-no-harm-or-ab-testing-without-p-values/.

What is amazing, IMO, about hypothesis tests is that, assuming that you collect the data correctly, you are guaranteed to limit the probability of making these two types of errors based on the alpha and beta you pick for the test. Assuming we are mindful about confounding, all we need to do is collect the correct amount of data. When we run the test after we have collected our pre-specified sample, we can be assured that we will control these two errors at our specified levels.

 


 

What about Sample Size?

There is a relationship between alpha, beta, and the associated sample size. In a very real way, the sample size is the payment we must make in order to control Type 1 and Type 2 errors. Increasing the error control on one means you either have to lower the control on the other or increase the sample size. This is what power calculators are doing under the hood – calculating the sample size needed based on a minimum treatment effect size and the desired alpha and beta.
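In R, power.prop.test makes this tradeoff explicit. For example, using an illustrative 4% baseline and a 1 point minimum effect, tightening either alpha or beta pushes the required sample size up:

# Required n per arm to detect a 4% -> 5% lift (two-sided test)
power.prop.test(p1 = 0.04, p2 = 0.05, sig.level = 0.05, power = 0.80)
power.prop.test(p1 = 0.04, p2 = 0.05, sig.level = 0.05, power = 0.95)   # more power, larger n
power.prop.test(p1 = 0.04, p2 = 0.05, sig.level = 0.01, power = 0.95)   # stricter alpha, larger n still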

 

What about Sample Size for continuous conversion values, like average order value?

Calculating sample sizes for continuous conversion variables is really the same as for conversion rates/proportions. For both we need some guess of both the mean of the treatment effect and the standard deviation of the effect. However, because the standard deviation of a proportion is determined by its mean, we don't need to bother providing it for most calculators. For continuous conversion variables, we need an explicit guess of the standard deviation in order to conduct the sample size calculation.
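For example, R's power.t.test plays the same role for a continuous metric, but you have to supply the standard deviation guess yourself (the numbers below are just placeholders):

# Suppose we care about a $2 lift in average order value and our best guess for its standard deviation is $40
power.t.test(delta = 2, sd = 40, sig.level = 0.05, power = 0.80)   # solves for n per arm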

What if I don’t know the standard deviation?

This isn't exact, and in fact it might not be that close, but in a pinch you can use the range rule as a hack for the standard deviation. If you know the minimum and maximum values that the conversion variable can take (or some reasonable guess), you can use standard deviation ≈ (Max – Min)/4 as a rough guess.
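For instance, if you believe order values mostly fall between $0 and $400 (a made-up range), the range rule gives a quick standard deviation guess you can feed into the calculation above:

sd_guess <- (400 - 0) / 4      # range rule: roughly $100
power.t.test(delta = 2, sd = sd_guess, sig.level = 0.05, power = 0.80)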

What if I make a decision before I collected all of the planned data?

You are free to do whatever you like. Trust me, there is no bolt of lightning that will come out of the sky if you stop a test early or make a decision early. However, it also means that the Type 1 and Type 2 risk guarantees that you were looking to control will no longer hold. So to the degree that they were important to you and the organization, that will be the cost of taking an early action.

What about early stopping with sequential testing?

Yes, there are ways to run experiments in a sequential way.  That said, remember how online testing works. Users present themselves to the site (or app or whatever), and then we randomly assign them to treatments. That is not the same as random selection.

Why does that matter?

Firstly, because of selection bias. If users are self-selecting when they present themselves to us, and if there is some structure to when different types of users arrive, then our treatment effect will be a measure of only the users we have seen, and won't be a valid measure of the population we are interested in. As mentioned earlier, often the best way to deal with this is to sample over the natural period of your data – normally this is weekly or monthly.

Secondly, while there are certain types of sequential tests that don't bias our Type 1 and Type 2 error rates, they do, ironically, bias our estimated treatment effect – especially when stopping early, which is the very reason you would run a sequential test in the first place. Early stopping can lead to a type of magnitude bias, where the absolute value of the reported treatment effects will tend to be too large. There are ways to try to adjust for this, but it adds even more approximation and complexity into the process.
See https://www.ncbi.nlm.nih.gov/pubmed/22753584  and http://journals.sagepub.com/doi/abs/10.1177/1740774516649595?journalCode=ctja

So the fix for dealing with the bias in Type 1 error control due to early stopping/peeking CAUSES bias in the estimated treatment effects, which, presumably, are also of importance to you and your organization.

The Waiting Game

However, if all we do is just wait –  c’mon, it’s not that hard 😉 – and run the test after we collect our data based on the pre-specified sample size, and in weekly or monthly blocks, then we don’t have to deal with any issues of selection bias or biased treatment effects. This is one of those cases where just doing the simplest thing possible gets you the most robust estimation and risk control.

What if I have more than one treatment?

If you have more than one treatment, you may want to adjust your Type 1 error control to 'know' that you will be making several comparisons at once. Think of each test as a lottery ticket. The more tickets you buy, the greater the chance you will win the lottery, where 'winning' here means making a Type 1 error.

The chance of making at least one Type 1 error over all of the treatments is called the Familywise Error Rate (FWER). The more tests, the more likely you are to make a Type 1 error at a given alpha. I won't get into the details here, but to ensure that the FWER is not greater than your alpha, you can use any of the following methods: Bonferroni, Sidak, Dunnett's, etc. Bonferroni is the least powerful (in the Type 2 error sense), but it is the simplest with the fewest assumptions, so it is a good safe bet, especially if Type 1 error is a very real concern. One can argue over which is best, but it will depend, and for just a handful of comparisons it won't really matter what correction you use IMO.

A related error rate is the False Discovery Rate (FDR) (see https://en.wikipedia.org/wiki/False_discovery_rate). To control the FDR, you could use something like the Benjamini–Hochberg procedure. While controlling the FDR means a more powerful test (less Type 2 error), there is no free lunch, and it comes at the cost of allowing more Type 1 errors. Because of this, researchers often use the FDR as a first step to screen for possibly interesting treatments in cases where there are many (thousands of) independent tests. Then, from the set of significant results, more rigorous follow-up testing occurs. Claims about preference for controlling either FDR or FWER are really implicit statements about the relative risk of Type 1 and Type 2 error.
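In R, p.adjust will apply these corrections to the p-values from your treatment comparisons; here is a small sketch with made-up p-values:

p_values <- c(0.004, 0.020, 0.035, 0.300)   # hypothetical: one p-value per treatment vs control

p.adjust(p_values, method = "bonferroni")   # controls the FWER; simple and conservative
p.adjust(p_values, method = "BH")           # Benjamini-Hochberg; controls the FDR, more powerful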

Wrapping it up

The whole point of the test is to control risk – you don't have to run any tests to get estimates of treatment effects, or a measure of uncertainty around those effects. However, it is often a good idea to control for these errors, so the more you understand their relative costs, the better you can determine how much you are willing to pay to reduce the chances of making them. Rather than looking at the sample size question as a hassle, perhaps look at it as an opportunity for you and your organization to take stock and discuss the goals, assumptions, and expectations for the new user experiences under consideration.

 



What is the value of my AB Testing Program?

Occasionally we are asked by companies how they should best assess the value of running their AB testing programs. I thought it might be useful to put down in writing some of the points to consider if you find yourselves asked this question.

With respect to hypothesis tests, there are two main sources of value:
1) The Upside – reducing Type 2 error. 
This is implicitly what people tend to think about in Conversion Rate Optimization (CRO) – the gains view of testing. When looking to value testing programs, they tend to ask something along the lines of 'What gains would we miss if we didn't have a testing program in place?' One possible approach is to reach back into the testing toolkit and create a test-program control group. The users assigned to this control group are shielded from any of the changes made based on outcomes from the testing process. This control group is then used to estimate a global treatment effect for the bundle of changes over some reasonable time horizon (6 months, a year, etc.). The calculation looks something like:

Total Site Conversion – Control Group Conversion – cost of the testing program.

Of course, this may not be so easy to do, as forming a clean control group will often be complicated, if not impossible, and determining how to value the lift in conversion can also be tricky in most non commerce applications.

2) The Downside – mitigating Type 1 loss.
A major limitation of the above approach is that it ignores a major reason for testing – the mitigation of Type 1 errors. Type 1 errors are changes or behaviors that lead to harm, or loss. To estimate the value of mitigating this possible loss, we would need to expose our control group to the changes that we WOULD have made on the site had they not been rejected. So we would need to make all the changes that we think, based on our test results, will harm the customer experience. Of course this is almost certainly a bad idea, and it highlights why certain types of questions are not amenable to the randomized controlled trials (RCTs) that are the backbone of AB Testing (anyone out there using instrumental variables or other counterfactual methods? Please comment if you are).

(for a refresher on Type 1 and Type 2 errors please see https://conductrics.com/do-no-harm-or-ab-testing-without-p-values/)

But even if we did go down this route (bad idea), it still wouldn't give us a proper estimate of the true value of testing, since even if we don't encounter harmful events, we still had protection against them. For example, you may have collision car insurance but have had no accidents over the course of a year. Was the value of the collision insurance zero? The value of insurance isn't equal to what is paid out. Insurance doesn't work like that – it is a good that is consumed whether it pays out or not. What you are paying for is the reduction in downside risk, and that is something testing provides regardless of whether the adverse event ever occurs. The difficult part for you is to assess the probability (is this risk or Knightian uncertainty?) and severity of the potential negative results.

The value of testing is in both optimization (finding the improvements); and in mitigating downside risk. To value the latter, we need to be able to price what is essentially a form of insurance against whatever the org considers to be intolerable downside risk. So it is sort of like asking what is the value of insurance, or privacy policies, or security policies. You may get by in any given year without them, but as you scale up, the risks of downside events grow, making it more and more likely that a significant adverse event will occur.

One last thing. Testing programs tend to jumble together the related, but separate, concepts of hypothesis testing – the binary Reject/Fail-to-Reject outcome – and the estimation of effect sizes – the best guess of the 'true' population conversion rates. I mention this because we often just think about the value of the actions taken based on the hypothesis tests, rather than also considering the value of robust estimates of the effect sizes for forecasting, ideation, and helping allocate future resources. (As an aside, one can run an experiment that has a robust hypothesis test but also yields a biased estimate of the effect size (magnitude error) – sequential testing, I'm looking at you!)

Ultimately, testing can be seen as both a profit (upside discovery) AND a cost (downside mitigation) center. Just focusing on one will lead to underestimating the value your testing program can provide to the organization. That said, it is a fair question to ask, and one that hopefully will help lead to extracting even more value from your experimentation efforts.

What are your thoughts? Is there anything we are missing, or should consider? Please feel free to comment and let us know how you value your testing program.

 



Do No Harm or AB Testing without P-Values

A few weeks ago I was talking with Kelly Wortham during her excellent AB Testing webinar series. During the conversation, one of the attendees asked: if they just wanted to pick between A and B, did they really need to run standard significance tests at a 90% or 95% confidence level?

The simple answer is no.  In fact, in certain cases, you can avoid dealing with p-values (or priors and posteriors) altogether and just pick the option with the highest conversion rate.

Even more interesting, at least to me, is that this simple approach can be viewed either as a form of classical hypothesis testing or as an epsilon-first solution to the multi-armed bandit problem.

Before we get into our simple testing trick, it might be helpful to first revisit a few important concepts that underpin why we are running tests in the first place.

The reason we run experiments is to help determine how different marketing interventions will affect the user experience.  The more data we collect, the more information we have, which reduces our uncertainty about the effectiveness of each possible intervention.  Since data collection is costly, the question that always comes up is ‘how much data do I really need to collect?’

 

In a way, every time we run an experiment, we are trying to balance: 1) the cost of making a Type 1 error; 2) the cost of making a Type 2 error; and 3) the cost of the data collection needed to reduce our risk of making either of these errors.

To help answer this question, I find it helpful to organize optimization problems into two high-level classes:

 

1) Do No Harm
These are problems where there is either:
  1. an existing process, or customer experience, that the organization is counting on for at least a certain minimum level of performance.
  2. a direct cost associated with implementing a change.
For example, while it would be great if we could increase conversions from an existing check out process, it may be catastrophic if we accidentally reduced the conversion rate. Or, perhaps we want to use data for targeting offers, but there is a real direct cost we have to pay in order to use the targeting data.  If it turns out that there is no benefit to the targeting, we will incur the additional data cost without any upside, resulting in a net loss.
So, for the ‘Do No Harm’ type of problem we want to be pretty sure that if we do make a change, it won’t make things worse. For these problems we want to stay the current course unless we have strong evidence to take an alternative action.

 

2) Go For It
In the ‘Go For It’ type of problem there often is no existing course to follow.  Here we are selecting between two or more novel choices AND we have symmetric costs, or loss, if we make a Type I error (reviewed below).

A good example is headline optimization for news articles.  Each news article is, by definition, novel, as are the associated headlines.  Assuming that one has already decided to run headline optimization (which is itself a ‘Do No Harm’ question), there is no added cost, or risk to selecting one or the other headlines when there is no real difference in the conversion metric between them. The objective of this type of problem is to maximize the chance of finding the best option, if there is one. If there isn’t one, then there is no cost or risk to just randomly select between them (since they perform equally as well and have the same cost to deploy).  As it turns out, Go For It problems are also good candidates for Bandit methods.

State of the World
Now that we have our two types of problems defined, we can ask what situations we might find ourselves in when we finally make a decision (i.e. select 'A' or 'B'). There are two possible states of the world when we make our decisions:
  1. There isn’t the expected effect/difference between the options
  2. There is the expected effect/difference between the options
It is important to keep in mind that in almost all cases we won't be entirely certain what the true state of the world is, even after we run our experiment (you can thank David Hume for this). This is where our two error types, Type I and Type II, come into play. You can think of these two error types as just two situations where our beliefs about the world are not consistent with the true state of the world.
A Type I error is when we pick the alternative option (the ‘B’ option), because we mistakenly believe the true state of the world is ‘Effect’.  Alternatively, a Type II error is when we pick ‘A’ (stay the course), thinking that there is no effect, when the true state of the world is ‘Effect’.

 

The difference between the ‘Do No Harm’ and ‘Go For It’ problems is in how costly it is to make a Type I error.
The table below is the payoff matrix for the 'Do No Harm' problem:

Payoff: Do No Harm        The True State of the World (unknown)
Decision                  Expected Effect        No Expected Effect
Pick A                    Opportunity Cost       No Cost
Pick B                    No Opportunity Cost    Cost

Notice that if we pick B when there is no effect, we make a Type I error and suffer a cost. Now let's look at the payoff table for the 'Go For It' problem:

Payoff: Go For It         The True State of the World (unknown)
Decision                  Expected Effect        No Expected Effect
Pick A                    Opportunity Cost       No Cost
Pick B                    No Opportunity Cost    No Cost

Notice that the payoff tables for Do No Harm and Go For It are the same when the true state of the world is that there is an effect.  But, they differ when there is no effect. When there is no effect, there is NO relative cost in selecting either A or B.

Why is this way of thinking about optimization problems useful? 
Because it can help determine what type of approach to take for a given problem.
In the Do No Harm problem we need to be mindful about Type I errors, because they are costly, so we need to factor in the risk of making them when we design our experiments. Managing this risk is exactly what classical hypothesis testing does.
That is why for 'Do No Harm' problems it is best practice to run a classic, robust AB Test. This is because we care more about minimizing our risk of doing harm (the cost of a Type I error) than about any benefit we might get from rushing through the experiment (the cost of information).

However, it also means that if we have a 'Go For It' problem and there is no effect, we don't really care how we make our selections. Picking randomly when there is no effect is fine, since each of the options has the same value. It is in this case that our simple test of just picking the highest-value option makes sense.

Go For It: Tests with no P-Values

Finally we can get to the simple, no p-value test. This test guarantees that if there is a true difference at least as large as the minimum detectable effect (MDE), you will choose the better-performing arm X% of the time, where X is the power of the test.
Here are the steps:

1) Calculate the sample size
2) Collect the data
3) Pick whichever option has the highest raw conversion value. If a tie, flip a coin.

Calculate the sample size almost exactly the same way as in a standard test: 1) pick a minimum detectable effect (MDE) – this is our minimum desired lift; and 2) select the power of the test.

Ah, I hear you asking, 'What about the alpha, don't we need to select a confidence level?' Here is the trick. We want to select randomly when there is no effect. By setting alpha to 0.5, the test rejects the null 50% of the time when the null is true (no effect).

Let's go through a simple example to make this clear. Let's say your landing page tends to have a conversion rate of around 4%. You are trying out a new alternative offer, and a meaningful improvement for your business would be a lift to a 5% conversion rate. So the minimum detectable effect (MDE) for the test is 0.01 (an absolute lift of 1 percentage point).

We then estimate the sample size needed to find the MDE if it exists. Normally we would pick an alpha of 0.05, but now we are instead going to use an alpha of 0.5. The final step is to pick the power of the test; let's use a good one, 0.95 (often folks pick 0.8, but for this case we will use 0.95).

You can now use your favorite sample size calculator (for Conductrics users this is part of the setup workflow).

If you use R, this will look something like:

power.prop.test(n = NULL, p1 = 0.04, p2 = 0.05, sig.level = 0.5, power = 0.95,
                alternative = "one.sided", strict = FALSE)

This gives us a sample size of 2,324 per option, or 4,648 in total. If we were to run this test with a confidence of 95% (alpha = 0.05), we would need almost four times the traffic: 9,299 per option, or 18,598 in total.

The following is a simulation of 100K experiments, where each experiment selected each arm 2,324 times. The conversion rate for B was set to 5% and A to 4%. The chart below plots the difference in the conversion rates between A and B. Not surprisingly, it is centered on the true difference of 0.01. The main thing to notice is that if we pick the option with the highest conversion rate, we pick B 95% of the time, which is exactly the power we used to calculate the sample size!
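In R, that simulation looks something like this (a sketch – the exact output will bounce around a little from run to run):

set.seed(123)
n_sims <- 100000
n <- 2324                                # per-arm sample size from the power calculation

conv_a <- rbinom(n_sims, n, 0.04) / n    # observed conversion rate for A in each experiment
conv_b <- rbinom(n_sims, n, 0.05) / n    # observed conversion rate for B in each experiment

mean(conv_b - conv_a)                                   # centered near the true difference of 0.01
mean(conv_b > conv_a) + 0.5 * mean(conv_b == conv_a)    # ~0.95: the simple rule picks B about 95% of the time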

Notice – no p-values, just a simple rule to pick whatever is performing best, yet we still get all of our power goodness! And we only needed about a quarter of the data to reach that power.

Now let's see what our simulation looks like when both A and B have the same conversion rate of 4% (the null is true).

Notice that the difference between A and B is centered at '0', as we would expect. Using our simple decision rule, we pick B 50% of the time and A 50% of the time.

Now, if we had a Do No Harm problem, this would be a terrible way to make our decisions, because half the time we would select B over A and incur a cost. So you still have to do the work and determine your relative costs around data collection, Type I errors, and Type II errors.

While I was doing some research on this, I came across Georgi Z. Georgiev's Analytics Toolkit. It has a nice calculator that lets you select your optimal risk balance between these three factors. He also touches on running tests with an alpha of 0.5 in this blog post. Go check it out.

What about Bandits?

As I mentioned above, we can also think of our Go For It problems as a bandit. Bandit solutions that first randomly collect data and then apply the 'winner' are known as epsilon-first (to be fair, all AB Testing for decision making can be thought of as epsilon-first). Epsilon stands for how much of your time you spend in the data collection phase. In this way of looking at the problem, the sample size output from our sample size calculation (based on MDE and power) is our epsilon – how long we let the bandit collect data in order to learn.

What is interesting is that, at least in the two-option case, this easy method gives us roughly the same results as an adaptive bandit method will. Google has a nice blog post on Thompson Sampling, which is a near-optimal way to dynamically solve bandit problems. We also use Thompson Sampling here at Conductrics, so I thought it might be interesting to compare their results on the same problem.

In one example, they run a bandit with two arms, one with a 4% conversion rate and the other with a 5% conversion rate – just like our example. While they show the bandit performing well, needing only an average of 5,120 samples, you will note that this is still slightly higher than the fixed amount we used (4,648 samples) in our super simple method.

This doesn't mean that we never want to use Thompson Sampling for bandits. As we increase the number of possible options, and many of those options are strictly worse than the others, running Thompson Sampling or another adaptive design can make a lot of sense. (That said, by using a multiple comparison adjustment, like the Šidák correction, I found that one can include K > 2 arms in the simple epsilon-first method and still get Type 2 power guarantees. But, as I mentioned, this becomes a much less competitive approach if there are arms that are much worse than the MDE.)

The Weeds

You may be hesitant to believe that such a simple rule can accurately help detect an effect. I checked in with Alex Damour, the Neyman Visiting Assistant Professor over at UC Berkeley, and he pointed out that this simple approach is equivalent to running a standard t-test of the following form. From Alex:

“Find N such that P(meanA-meanB < 0 | A = B + MDE) < 0.05. This is equal to the N needed to have 95% power for a one-sided test with alpha = 0.5.

Proof: Setting alpha = 0.5 sets the rejection threshold at 0. So a 95% power means that the test statistic is greater than zero 95% of the time under the alternative (A = B + MDE). The test statistic has the same sign as meanA-meanB. So, at this sample size, P(meanA – meanB > 0 | A = B + MDE) = 0.95.”
To help visualize this, we can rerun our simulation, but run our test using the above formulation.

Under the null (no effect) we have the following result.

We see that the T-scores are centered around '0'. At alpha = 0.5, the critical value will be '0'. So any T-score greater than '0' will lead to a rejection of the null.

If there is an effect, then we get the following result.

Our distribution of T-scores is shifted to the right, such that only 5% of them (on average) fall below '0'.

A Few Final Thoughts

What is interesting, at least to me, is that the alpha = 0.5 way of solving 'Go For It' problems straddles two of the main approaches in our optimization toolkit. Depending on how you look at it, it can be seen either as: 1) a standard t-test (albeit one with the critical value set to '0'); or 2) an epsilon-first approach to solving a multi-armed bandit.

Looking at it this way, the 'Go For It' way of thinking about optimization problems can help bridge our understanding between the two primary ways of solving them. It also suggests that as one moves from Go For It toward Do No Harm (higher Type 1 costs), classic, robust hypothesis testing is probably the best approach; as we move toward Go For It, one might want to rethink the problem as a multi-armed bandit.

Have fun, don't get discouraged, and remember: optimization is HARD – but that is what makes all the effort required to learn it worth it!


Thompson Sampling or how I learned to love Roulette

Multi-armed bandits, Bayesian statistics, machine learning, AI, predictive targeting blah blah blah. So many technical terms, morphing into buzz words, that it gets confusing to understand what is going on when using these methods for digital optimization.  Hopefully this post will give you a basic idea of how adaptive learning works, at least here at Conductrics, without getting stuck in the AI hype rabbit hole.

Forget Dice, Let’s Play Roulette

tl;dr  Select higher value options more often than lower valued ones, and adjust the relative frequency depending on the user.

What is the difference between basic AB Testing and adaptive selection? Adaptive selection differs from AB Test selection by dynamically weighting the probability of selection to favor the higher-value decision options. This has the effect of selecting options with higher predicted values more often, and selecting the lower-value options less often. In AB Testing, each experience has an equal chance to be selected (or, if not equal, the chance of selection has been fixed by the test designer).

To help visualize this, imagine a roulette wheel. Under a “fair” random policy, the roulette wheel assigns an equal area to each option.  For three options, our roulette wheel looks like this:

We spin the roulette wheel and select the option that the wheel stops on. Because each option has an equal share of the wheel, each option has an equal chance to be selected from each spin.

In adaptive/predictive mode, however, Conductrics increases the probability of selecting the options that have higher predicted conversion values, and lowers the probability of options with lower predicted conversion values. This is equivalent to Conductrics constructing a biased roulette wheel to spin for the selections. For example, the roulette wheel below is biased.

If we were to spin this wheel repeatedly, option ‘A’ would, on average, be selected 67% of the time, option ‘B’ 8%, and option ‘C’ only 25%.
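In code, that biased wheel is nothing more than a weighted random draw; a minimal sketch:

arms    <- c("A", "B", "C")
weights <- c(0.67, 0.08, 0.25)               # the biased wheel above

sample(arms, size = 1, prob = weights)       # one spin = one selection
table(sample(arms, 10000, replace = TRUE, prob = weights)) / 10000   # long-run shares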
Of course, this raises the question: how does Conductrics decide how to weight each option?

From Bayesian AB Testing to Bandits: How Conductrics calculates the probabilities

Conductrics uses a modified version of Thompson Sampling to make adaptive selections. Thompson sampling is an efficient way to make selections based on the probability that an option is the best one. These selection probabilities are analogous to the areas we assign on the roulette wheel.

Before we take draws using the Thompson sampling method, we first need to construct a probability distribution over the possible conversion values for each option. How do we do that?

For those of you who already know about Bayesian AB Testing, this section will be familiar. Without getting too much into the weeds (we will skip over the use and selection of the prior distribution), we will estimate both the conversion values and a measure of uncertainty (similar to a standard error), and construct a probability distribution based on those values. For example, let's say we were using conversion rate as a goal, and we had two options, Grey and Blue, both with a 5% conversion rate. However, for the Grey option we have 3,000 samples, while for the Blue one we only have 300. The predicted conversion rate of the Grey option has less uncertainty than the Blue one because we have 10 times more experience with it.

The uncertainty of our estimates is encoded in the width, or spread, of our distribution of the predicted values. Notice that the values for Grey are clustered mostly between 4% and 6%, whereas Blue, while also centered at 5%, has a spread between 2% and 8%.

Now let's say that rather than having the same average conversion rate, we estimate a conversion rate of 5.5% for our Blue option. Now our distribution for Blue is shifted slightly to the right – centered at 5.5% rather than at 5%.

Just by looking at the two distributions, we can see that while Blue has a higher estimated conversion rate, there is so much uncertainty in Blue (as well as some in Grey) that we can't really be certain which option will be the best one going forward.

One simple approach to get an idea of how likely it is that Blue is better than Grey is to take many pairs of random draws from each distribution, compare the results, and then mark how often the result from Blue was greater than Grey's.

You can think of Grey and Blue's conversion rate distributions (those curves in the chart above) as custom random number generators. You can use them just like you would call Excel's rand() function. Except unlike rand(), where we get back values uniformly distributed between zero and one, calling Dist(Grey) gives us back values that are near 0.05, plus or minus a small amount. If we call Dist(Blue), we get back values near 0.055, plus or minus a larger amount than Grey – reflecting the greater uncertainty in Blue's predicted conversion rate.

If we called Dist(Grey) and Dist(Blue) 10,000 times each, we would get a good idea of how much more likely it is that Blue is better than Grey – in this case Blue comes up higher approximately 62% of the time. We could then conclude, based on the data so far, that Blue has an approximately 62% chance of being the higher-valued option and Grey has a 38% chance of being the best option.
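Here is a rough sketch of that calculation in R, using Beta distributions built from the observed counts (Conductrics' actual priors, discussed in the next section, differ a bit, so treat the output as approximate):

set.seed(1)
draws <- 10000

# Grey: ~5% conversion rate on 3,000 samples; Blue: ~5.5% on 300 samples
grey <- rbeta(draws, 0.050 * 3000, (1 - 0.050) * 3000)
blue <- rbeta(draws, 0.055 * 300,  (1 - 0.055) * 300)

mean(blue > grey)   # lands in the low-to-mid 60s percent, in the same ballpark as the 62% above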

Advanced: Priors and AB Testing

If you have experience with Bayesian AB Testing you may be wondering about the selection of the prior distribution. Conductrics uses an approximate empirical Bayesian approach to estimate the prior distributions. The choice of prior can lead to different results than those found in many online testing calculators (which tend to use a uniform prior, whereas Conductrics uses the empirical grand mean for shrinkage/regularization). However, the basic approach is similar to the calculation in many Bayesian AB Testing reports (please see our posts on Shrinkage if you would like to learn more ).

Note: While there is reasonable debate on the best approach for AB Testing, Conductrics recommends the use of error-statistical methods (e.g. t-tests) when the objective is statistical error control (see: Mayo ).  Consider applying Bayesian methods, for example the use of posterior distributions discussed in this post, for when the objective is estimation and prediction (of course, regularization and shrinkage aren’t just the purview of Bayesian statistics, so use what you like and works for your problem 🙂 ).

Thompson Sampling and Bandit Selection

Rather than just report on how likely each option is to be the best one, we can modify this approach to make the selections for each user. This modified approach is called Thompson sampling. The way it works is to take random draws from each option's associated distribution, just as we did before, but instead of marking down the option with the highest drawn value, we go ahead and select that option and serve it to the user. That is all there is to it.
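A bare-bones sketch of that selection step, assuming Beta-Bernoulli arms with a uniform prior (Conductrics' production version, with the empirical-Bayes priors discussed below, is more involved):

# One Thompson-sampling selection: draw once from each arm's posterior, serve the arm with the highest draw
thompson_pick <- function(successes, failures) {
  draws <- rbeta(length(successes), successes + 1, failures + 1)
  which.max(draws)
}

# Hypothetical state so far: conversions and non-conversions for three options
successes <- c(A = 120, B = 95, C = 110)
failures  <- c(A = 2380, B = 2405, C = 2390)

thompson_pick(successes, failures)   # index of the option to serve to this visitor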

To make this clearer, let's take a look at an example. Below we have three options, each with its associated posterior distribution over its predicted conversion value. Thompson Sampling implicitly makes draws based on the probability that each option is best: it takes a random draw from each of the three options' distributions and then selects the option with the highest-valued draw.

Let's now take a random draw from the distribution of each option. Notice that even though 'A' has a higher predicted value ('A' is shifted to the right of both 'B' and 'C'), some of the left-hand side of its distribution overlaps with the right-hand sides of both 'B' and 'C'. Since the width of the distribution indicates our uncertainty around the predicted value (the more uncertain, the wider the distribution), it isn't inconceivable that option B or C is actually the best.

In this case we see that our draw from ‘B’ happened to have the highest value (0.51 vs. 0.49 and 0.46). So we would select ‘B’ for this particular request.

This process is repeated with each selection request. If we were to get another request we might get draws such that ‘A’ was best.

Over repeated draws the frequency that each option is selected will be consistent with the probability that the option is the best one.

In our example we would select ‘A’ approximately 67% of the time, ‘B’ 8% of the time, and ‘C’ 25% of the time. Our roulette selection wheel looks like this:

As we collect more data we can update our distributions to reflect the information in the new data. For example, if as new data streams in, option ‘A’ continues to have higher conversion rates, then we would recreate our wheel to reflect this (note: warnings about confounding apply).

Where can I see these results in the Reporting?

What is neat is that Conductrics provides reporting and data visualizations based on data from your own experiments and adaptive projects.
If you head over to the audience report you will see, by default, both the predicted value of each option and, represented by the heights of the bars, the probability that each option is best. For example, the report below shows that after 10,000 visitors, option 'A' has a predicted value of 4.96% and a 67% chance of being a better option than both 'B' and 'C'.

But wait, there is more. We even provide a data visualization view that displays the posterior distributions that Conductrics uses in the Thompson Sampling draws.

This is the same data as before, but with a slightly different view. Now, instead of the bar chart showing the chance that each option is the best, we show the actual distributions that are used to make the adaptive selections. This view can be useful for understanding why, even though an option has a higher predicted value than the others, it still won't always be selected. The width of the distributions gives you an idea of the relative uncertainty in the data around the predicted values, and the degree of overlap indicates how much uncertainty there is about which option to select.

Adaptive Selection with Predictive Targeting

Conductrics also uses the same Thompson Sampling approach for targeting. When Conductrics determines that a user feature or trait is useful, it recalculates both the predicted values as well as the probabilities that each option is best. In the example below visitors on Apple Devices have a much higher predicted value under the ‘A’ option than either the ‘B’ or ‘C’ options. So much so that if Conductrics was to select options for users on Apple Devices, the ‘A’ option would always get selected.

However, for users not on Apple Devices, option 'A' performs much worse than either option 'B' or 'C', and hence will not be selected ('A' Best-Pick Prob = 0%).
You will notice that you still have access to the ‘Everyone’ audience, so that you can see how well each option performs overall, and what the overall probability of selection would be without any targeting.
If you would like, you can also see the visualization of the posterior distributions for each option, in each audience.

Now that you understand Thompson sampling, it's obvious why, for Apple Device users, option 'A' has essentially a 100% chance of being selected. Even if we were to always pick pessimistically from the very left-hand side (low-value side) of option 'A''s distribution, and always pick optimistically from the right-hand sides (high-value sides) of both 'B' and 'C', option 'A' would still, essentially, always have a higher value.
For visitors not on Apple Devices, the reverse is true. Even if we were to use the most optimistic draws from option 'A' and the most pessimistic draws from 'B' and 'C', option 'A' would still produce draws that were less than those from 'B' and 'C'. As an aside, pruning out very poor options is often where much of the value of adaptive selection comes from.

If Thompson sampling and posterior distributions are tripping you up, you can also just think of it as Conductrics creating a different roulette wheel for each audience. When a member of that audience enters into an agent, Conductrics will first check which wheel to use, and then spin that wheel in order to make a selection for them. So based on the audience report for Apple and non Apple Device users, the logic would look like:

We can apply this logic to any number of audiences, such that there can be 10, 20, 100 etc. such audiences, each with their own custom roulette wheel.

In a way, when you peel back all of the clever machine learning, and multi-armed bandit logic, all Conductrics is doing is constructing these targeted roulette wheels, and using them to select experiences for your users.

One last thing: Uniform Selection

Even when the agent is set to make adaptive selections, there is still a portion of the traffic that is assigned by our fair roulette wheel. The reason for doing this is twofold:

  1. To ensure a random baseline. In order to evaluate the efficacy of adaptive learning, it is important to ensure there is an unbiased comparison group to compare the results against.
  2. To ensure that Conductrics still tries out all of the options in case there is some external change, or seasonality effect, that results in a change in the conversion rate/value of the decision options.

One last, last thing:

There are actually many different ways to tackle adaptive selection. Along with Thompson Sampling (TS), we have also used UCB methods, Boltzmann, simple e-greedy (those with sharp eyes will have noticed that what I have described in the post should be called an e-Thompson method), and even mixtures of them.   In terms of performance, all of them are probably fine to use. We found that TS works better for us because it:

  1. uses similar ideas as Bayesian AB Testing, hence it is perhaps easier to understand by our clients than some of the other methods (“What, what?! What is this temperature parameter?”).
  2. fits gracefully with the approximate empirical Bayesian methods that Conductrics uses in its machine learning.
  3. is simple(ish) to implement.

So if you are using some other method and it works, then great. Assuming you are paying attention to the possibility of non-stationary data and confounding, I am not convinced that for most use cases there is much extra juice to squeeze from one method over another.

* I actually don't like gambling and am a total buzz kill at casinos.



Going from AB Testing to AI: Optimization as Reinforcement Learning

In this post we are going to introduce an optimization approach from artificial intelligence: Reinforcement Learning (RL).

Hopefully we will convince you that it is both a powerful conceptual framework to organize how to think about digital optimization, as well as a set of useful computational tools to help us solve online optimization problems.

Video Games as Digital Optimization

Here is a fun example of RL from Google’s Deepmind. In the video below, an RL agent learned how to play the classic Atari 2600 game of Breakout. To create the RL agent, Deepmind used a blend of Deep Learning (to map the raw game pixels into a useful feature space) and a type of RL method called Temporal-Difference learning.

The object of Breakout is to remove all of the bricks in the wall by bouncing the ball off of a paddle, while ensuring that you don't miss the ball and let it pass the paddle (losing a life). The only control, or action, is to move the paddle left or right. Notice that at first, the RL agent is terrible. It is just making random movements left or right. However, after only a few hours of play it begins to learn, based on the position of the ball, how to move the paddle to take out the bricks. After even more play, the RL agent learns a higher-level strategy: removing bricks such that the ball can pass through and travel behind the wall, a more efficient way to clear the wall of bricks. This higher-level strategy emerges from the agent factoring in the long-term effects of each decision on how to move the paddle.

This is the same type of behavior that we want to learn for our multi-touch optimization problems. We want to learn not just what the direct (or last touch) effects are, but also the longer term impact across the set of relevant marketing touch-points.

AB Testing as Building Block to AI

To make sure we are all on the same page, let's start with something we are already familiar with from optimization: AB Testing. AB Testing is often used as a way to decide which is the best experience to present to your customers so that they have the highest probability of achieving a particular objective. In order to smooth the transition from AB testing to thinking about RL, it's going to be helpful to think about AB testing as having the following elements:

  1. A touch-point – the place, or state, where we need to make the decision, for example on a web page;
  2. A decision – this is the set of possible competing experiences to present to the customer (the ‘A’ and ‘B’ in AB Testing); and
  3. A payoff or objective – this is our goal, or reward, it is often an action we would like the customer to take (buy a product, sign-up etc.)

In the image below we have a decision on a web page where we can either present ‘A’ or ‘B’ to the customer, and the customer can either convert or not convert after being exposed to either ‘A’ or ‘B’.

So far, nothing new. Now, let's make this a little more complicated. Instead of making a decision at just one touch-point, let's add another page where we want to test another set of experiences. We now have the following:

From each of the pages we see that some of the customers will convert (represented by the path through the green circle with ‘$$$’ signs) before exiting the site, while others will directly exit the site without converting (represented by the direct paths to the ‘Exit Site’ node).  We could just treat this as two separate AB tests, and just evaluate the conversion from each of our options (‘A’ and ‘B’ on Page 1, and ‘C’ and ‘D’ on Page 2).

Attribution = Dynamics

However, as we go from one touch-point to multiple touch-points we add additional complexity to our problem.  Now, not only do we need to keep track of how often users convert before exiting the site after each of our touch-points, but we also need to account for how often they transition from one touch-point to another.

Here, the red lines represent the users that transition from one touch-point to another after being exposed to one of our treatment experiences.  These transitions from touch-point to touch-point represent how our experiments are not only affecting the conversion rates, but also how they affect the larger dynamics of our marketing systems.

Accounting for the impact of these changes to the user dynamics is the attribution problem. Attribution is really about accounting for the non-direct impact of our marketing interventions when we no longer just have single decisions, but a system of sequential marketing decisions to consider.

Reinforcement learning

One simple way to handle this is to combine both tests into one multivariate test. In our example, we would have four treatment options: 1) 'AC'; 2) 'AD'; 3) 'BC'; and 4) 'BD'. However, since not all users will wind up going to each touch-point, the users that we assign to 'AC', for example, will really be comprised of three groups: those exposed to 'AC', 'Aω', and 'ωC', where 'ω' represents a null decision. This approach is inefficient because it doesn't easily let us take into account the sequential and dynamic nature of our problem, since users may not even wind up being exposed to certain treatments based on how they flow through the process.

This is where reinforcement learning can help us.  Realizing that we have sequential decisions, we can recast our AB testing problem as a Reinforcement Learning Problem. From Sutton and Barto:

“Reinforcement learning is learning what to do—how to map situations to
actions—so as to maximize a numerical reward signal. The learner is not
told which actions to take, as in most forms of machine learning, but instead
must discover which actions yield the most reward by trying them. In the most
interesting and challenging cases, actions may affect not only the immediate
reward but also the next situation and, through that, all subsequent rewards.
These two characteristics—trial-and-error search and delayed reward—are the
two most important distinguishing features of reinforcement learning. ” (Page 4 http://people.inf.elte.hu/lorincz/Files/RL_2006/SuttonBook.pdf)

Mapping situations to actions so as to maximize reward by trial-and-error learning is the marketing optimization problem. RL is powerful not only as a machine learning approach, but also because it gives us a concise and unified framework for thinking about experimentation, personalization, and attribution.

Coordinated Bandits through TD-Learning

In our two touch-point problem we have the standard conversion behavior, just like in AB testing. In addition, we have the transition behavior, where a user can go from one touch-point to another. What would make our attribution problem much easier to solve is if we could treat the transition behavior just like the standard conversion behavior. I like to think of this as a sort of lead-gen model, where each touch-point can either try to make the sale directly or pass the customer on to another touch-point in order to close the sale. Just like any other lead-gen approach, each decision agent needs to communicate with the others and credit the leads that are sent its way.

What is cool is that we can use an approach similar to the one DeepMind uses, treating the transition events like conversion events. This will let us hack our AB testing, or bandit, approach to solve the multi-touch credit attribution problem.

Q-Learning

One version of TD-Learning, which is what DeepMind used for Breakout, is Q-Learning. Q-learning uses both the explicit rewards (e.g. the points after removing a brick) and an estimated value of transitioning from one touch-point to a new touch-point.

A simple version of the Q-learning (TD(0)) conversion reward looks like:

Reward(t+1) + γ · max_a Q(s(t+1), a)

Don’t let the math throw you.  In words,

Reward(t+1) is just the value of the conversion event after the user is exposed to a treatment. This is exactly the same thing we measure when we run an AB test.

max_a Q(s(t+1), a) – this is a little trickier, but actually not too tricky. This bit is how we make the long-term, attribution part of the calculation. It just says: find the highest-valued option at the NEW touch-point, and use its value as the credit to attribute back to the action selected at the originating touch-point.

γ – this is really a technicality; it is just a discount rate (it's the same type of calculation that banks use to compute the present value of payments over time). For our example below we will just set γ=1, but normally γ is set between 0 and 1.
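
To make this concrete, here is a minimal Python sketch of that credit calculation. The function name and structure are mine, purely for illustration, not any particular product's API.

def td_target(reward, next_state_values, gamma=1.0):
    """Reward(t+1) + gamma * max_a Q(s(t+1), a).

    reward: value of any direct conversion after the treatment (0 if none)
    next_state_values: dict of option -> estimated value at the NEW touch-point
                       (empty if the user exits the site instead of transitioning)
    gamma: the discount rate, normally between 0 and 1
    """
    future_credit = max(next_state_values.values()) if next_state_values else 0.0
    return reward + gamma * future_credit

# e.g. no direct conversion, but the user moves to a touch-point whose best option is worth $0.60
print(td_target(0, {"A": 0.60, "B": 0.20}))   # -> 0.6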

Let's run through a quick example to nail this down. Let's go back to our simple two touch-point example. On Page 1 we select either ‘A’ or ‘B’, and on Page 2 we select between ‘C’ and ‘D’. For simplicity's sake, let's say the first 10 customers go only to Page 1 during their visit. Half of them get exposed to ‘A’ and the other half get exposed to ‘B’. Let's say three customers convert after ‘A’ and only one customer converts after ‘B’, and let's assume a conversion is worth $1. Based on this traffic and these conversions, the estimated value for Page1:A is $3.0/5, or $0.60, and for Page1:B it is $1.0/5, or $0.20. So far nothing new. These are exactly the same types of conversion calculations we make all of the time with simple AB tests (or bandits).

Now, let's say customer 11 comes in, but this time the customer hits Page 2 first. We then randomly pick experience ‘C’. After being exposed to ‘C’, the customer, rather than converting, goes to Page 1. This is where the magic happens. Page 2 now ‘asks’ Page 1 for a reward credit for sending it a customer lead. Page 1 then calculates the owed credit as the value of its highest-valued option, which is ‘A’, with a value of $0.60. The value of Page2:C is now $0.60/1, since the TD reward is calculated as Reward(t+1) + γ · max_a Q(s(t+1), a) = 0 + 1 × 0.60 = 0.60.
So now we have:
Page1:A=0.6
Page1:B=0.2
Page2:C=0.6
Page2:D=0.0 (no data yet)

Let's say customer 11, now that they are on Page 1, is exposed to ‘A’, but they don't convert and leave the site. We then update the value of Page1:A to $3/6 = $0.50. What is interesting is that even though customer 11 didn't convert, we were still able to increase the value of Page2:C. This makes sense, since in the long term, getting a user to Page 1 is worth something (certainly more than ‘0’, as would be the case with a first-click attribution method).
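
Here is a small Python sketch that walks through the same numbers. The helper functions and the way estimates are stored are illustrative only, not how a production system would do it.

values = {}   # (page, option) -> [total credit, number of exposures]

def update(page, option, credit):
    total, n = values.get((page, option), [0.0, 0])
    values[(page, option)] = [total + credit, n + 1]

def estimate(page, option):
    total, n = values.get((page, option), [0.0, 0])
    return total / n if n else 0.0

# First 10 customers hit Page 1 only: 3 of 5 convert on 'A', 1 of 5 on 'B' ($1 per conversion).
for converted in [1, 1, 1, 0, 0]:
    update("Page1", "A", converted)
for converted in [1, 0, 0, 0, 0]:
    update("Page1", "B", converted)
print(estimate("Page1", "A"), estimate("Page1", "B"))        # 0.6 0.2

# Customer 11 hits Page 2 first, sees 'C', then transitions to Page 1 without converting.
# TD credit for Page2:C = reward + gamma * max_a Q(Page1, a) = 0 + 1 * max(0.6, 0.2)
credit = 0 + 1.0 * max(estimate("Page1", "A"), estimate("Page1", "B"))
update("Page2", "C", credit)

# On Page 1 the customer sees 'A' but does not convert, so Page1:A becomes 3/6 = 0.5.
update("Page1", "A", 0)
print(estimate("Page2", "C"), estimate("Page1", "A"))        # 0.6 0.5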

While there is more detail, this is mostly all there is to it. By continuously updating the values of each option in this way, our estimates will tend to converge towards the true long term values.

What is awesome is that RL/TD-Learning lets us: 1) blend optimization with attribution; 2) calculate a version of time-decay attribution while needing only an augmented version of a last-click approach; and 3) interpret the transitions from one touch-point to another as a sort of intermediate, or internal, conversion event.

In a follow up post, we will cover how to include Targeting, so that we can learn the long term value of each touch-point option/action by customer.

If you would like to learn more, please review our Datascience Resources 2  blog post.



Machine Learning and Human Interpretability

The key idea behind Conductrics is that marketing optimization is really a reinforcement learning problem, a class of machine learning problem, rather than an AB testing problem. Framing optimization as a reinforcement learning problem allowed us to provide, from the very beginning, not just AB and multivariate testing tools, but also multi-armed bandits, predictive targeting, and a type of multi-touch decision attribution using Q-learning.  Our first release back in 2010 was in many ways more advanced than what the rest of the industry is providing today.

However, we discovered that no matter how accurate the machine learning might be, many clients and potential clients were uncertain about ceding control, even of low-risk aspects of the user experience, to automated systems if they couldn't understand how the machine learning was making decisions.

This led us to reconsider what makes a good ML solution, especially for customer-facing applications, and to redesign our machine learning engine to solve both for accuracy and for human interpretability.

What is Machine Learning?

Machine learning can be thought of as sitting at the intersection of computer science and statistics. ‘Computer Science has focused primarily on how to manually program computers, Machine Learning focuses on the question of how to get computers to program themselves …’ – Tom Mitchell (see: http://www.cs.cmu.edu/~tom/pubs/MachineLearning.pdf ).

For many tasks it makes relatively little difference if these programs are opaque to human introspection. We are almost exclusively concerned with performance for some task (‘is this a cat?’; how to win at Donkey Kong, etc.). Here, high capacity models, like deep learning, suffer little penalty for marginal increases in representational complexity.

However, for several valid reasons, marketers tend to be wary about ceding control of their customers’ experiences to black box methods. Firstly, they need assurances that they can trust automation to make reasonable decisions over a large space of possible environments. Secondly, companies often have internal and legal regulatory requirements around how, and which, data may be used for making marketing decisions.  With the EU’s GDPR coming into effect in May 2018, this will be even more of a concern.

Why Human Understanding matters

1) Trust – People often need to understand something before they can trust it. This is not only true for marketers; it is also true of ML diagnostic systems for doctors, who are much more likely to consider the recommendation or decision from an automated system if the reasoning behind it can be explained.

2) Insights – Not only do people need to trust these systems, they also want to be able to glean insights from them – ‘I didn't realize this type of offer would convert better on the weekends’.

3) Review and Accountability – Will this decision from our machine learning be consistent with our stated data policies? Can we be sure it is fair, or non-discriminatory? Is it even legal?

4) Explanation – Can you communicate and explain the prediction, or decision, to users if they ask? You may be thinking, ‘who cares if you can explain it to users?’ Well, as of May 2018, the General Data Protection Regulation goes into effect in the EU. Not only does it change the rules around how personal data can be used, stored, etc., it also places regulation on automated decision systems based on machine learning.

Under Article 22 of the GDPR, EU citizens will have the right to an explanation for any ‘significant’ decision that was made via an automated ML system. While there is still ambiguity around what ‘significant’ will mean, and what will count as a satisfactory explanation, the main takeaway is that automated decisions are contestable. The penalty for being in breach is up to 4% of a company's worldwide gross revenue, so it is a huge risk.

Human readable machine learning representation

While lacking the expressiveness, or capacity, of more complex representations, sparse decision trees have many appealing properties for marketing use cases:

1) Human readable – You can see, based on the features of the users, what action or output the system will take.

2) Discrete rule set – The rules cover all possible use cases. This is useful for organizational and legal review before deploying, since one can predetermine the outcome for every possible input.

3) Loggable – Since the decision policy is represented as rules, rather than as a function (as in neural nets, deep or shallow, or regression), each decision can be logged with the exact policy/rule that was used. This lets the organization recall, for customer support or for a GDPR request for explanation, the exact reason for any particular decision that was made.

Conductrics’ Machine Learning Rules

The following is a simple example of a tree view for a set of decision rules that select between three possible user experiences (the ‘A’, ‘B’, and ‘C’ experiences are represented as bars in the audience nodes) generated by Conductrics’ predictive targeting engine:

The model begins with everyone in a single group (the Everyone audience on the far left) – just as you would with simple AB testing. Then, one by one, Conductrics finds the user features that are most useful for discriminating between the user experiences (tech note: unlike standard tree algorithms, which build decision rules from the data, Conductrics unpacks a sparse set of rules that are implicitly represented within a predictive model, i.e. a function approximation). This is very similar to the game of 20 questions you may have played as a child. Conductrics tries to ‘ask’ the fewest questions possible while still finding the best solution. In this case, we only need to include three user features: Rural or not; Mobile Device or not; and Registered or not.

It is also useful to see each targeted audience side by side, with additional details, such as audience size, as well as the predicted values of each option and the probability that each option is the best one.

 

When Conductrics is making automated decisions, it will follow the rules implied by this report, which as a program looks something like:
if ( Rural ) { return A; }
else if ( Mobile ) { return A; }
else if ( Registered ) { return C; }
else { return B; }
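
As a rough illustration of the ‘loggable’ property discussed above, the same rules can also be stored as an explicit, ordered rule list so that every decision is recorded together with the rule that produced it. The Python below is a sketch of the idea only, not Conductrics' actual implementation, and the user attributes are made up.

# Sketch: the targeting policy as an ordered rule list, so each decision can be
# logged with the exact rule used (e.g. for customer support or a GDPR request).
RULES = [
    ("Rural",      lambda user: user.get("rural"),      "A"),
    ("Mobile",     lambda user: user.get("mobile"),     "A"),
    ("Registered", lambda user: user.get("registered"), "C"),
    ("Default",    lambda user: True,                   "B"),
]

def decide(user):
    for rule_name, predicate, option in RULES:
        if predicate(user):
            # In practice this would go to a decision log, not stdout.
            print(f"user={user} rule={rule_name} option={option}")
            return option

decide({"rural": False, "mobile": False, "registered": True})   # -> "C" via the Registered rule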

But how do you know if the targeting rules are working? Well, we can evaluate our little targeting program by running a type of AB test in which our predictive program is one of the test's options. By default, Conductrics will take a random sample to use as a baseline to compare against the predictive targeting rules. For example, our targeting program returned about $2.77 per user, whereas a simple random play of the four base options returned just $2.10, for a lift of 31.4%.

 

Conductrics also includes each of the individual options under random selection, and each of these can also be used as a baseline (just keep in mind to specify sample sizes beforehand).

Using the decision tree representation for the predictive control logic enables the machine learning to be both machine and human readable. As we see ever-increasing demand for customer-facing applications that embed AI and machine learning automation, expect to see a greater focus on the human interpretability of these systems.

If you would like to learn more about how we use ML at Conductrics, please get in touch.

 



Conductrics 3.0 Release

Today I am happy to announce Conductrics 3.0, our third major release of our universal optimization platform. Conductrics 3.0 represents the next generation of personalized optimization technology, blending experimentation with machine learning to help deliver the best customer experiences across every Marketing channel. Conductrics 3.0 highlights include:

Conductrics Express – You asked and we listened. While many of you were happy using our APIs, agencies and front-line marketers often wanted a more robust point-and-click tool to help set up tests and personalized experiences for web sites. Conductrics 3.0 introduces Express, a novel, self-hosted WYSIWYG test creation tool that lets non-developers easily create advanced web optimization campaigns. Following our deep belief that long-term optimization requires Marketing and IT to work in harmony, we have designed Express so that it can be self-hosted, without the need for third party tags, allowing IT to ensure proper Q/A and management of the overall operation of your digital properties.

Interpretable AI – Conductrics 3.0 was designed so that clarity is now a core element of our machine learning and automation algorithms. Our new platform converts complex optimization logic into easily digestible, human-readable decision rules. Conductrics is not alone in applying machine learning to optimize clients’ Marketing applications. However, to be truly effective, marketers must also be able to quickly understand the who and why of predictive analytics. This will be especially true in European markets covered under the upcoming GDPR regulations.

New Flavor of our API – Conductrics has provided APIs as a web service since our first release, back in 2010. Conductrics 3.0 adds an even-faster JavaScript API to our classic web service API. Now it is even easier to integrate across almost any channel, either server or client side (or both) with Conductrics’ APIs.

We have been looking forward to this day for a while now, but what we are most excited about is to see all of the original and amazing experiences you will discover and provide for your customers.  We can’t wait to be part of that journey with you. Please reach out if you would like to learn more or just chat about the future of optimization.

For more on the Conductrics 3.0 platform features, visit http://conductrics.com/features/



Segmentation and Shrinkage

In our last post, we introduced the idea of shrinkage. In this post we are going to extend that idea to improve our results when we segment our data by customer.

Often what we really want is to discover which digital experience is working best for each customer. A major problem is that as we segment our customers into finer and finer audiences, we have less and less data to use for each segment. In a way, segmentation is the opposite of pooling – we want to analyze the data in small chunks, rather than analyzing it as one big blob.

In the previous post we showed how partial-pooling can help improve our estimates of individual items by adjusting them back toward a pooled, or global mean.

 

[Figure: partial pooling pulls each individual estimate back toward the pooled mean]

We noted that this is especially effective when we don’t have much data to estimate each individual mean.

Segmentation: Unpooled

For our segmentation use case we will keep it simple and assume we know just two things about our users. They are either a new or repeat customer, and they either live in a rural or suburban neighborhood.

We want to estimate the conversion rate for each test option in our AB Test for each of the four possible user segments (Repeat+Suburban; Repeat+Rural; New+Suburban; New+Rural). This is going to give us eight total conversion rates – two conversion rates for each segment.  So you can see, even in a super simple case, we already have a bunch of individual means we need to estimate.

In our little scenario, the null hypothesis is actually going to be true, so there is no difference in conversion rate between A and B.  In addition, neither of the user features (New/Repeat; Suburban/Rural) will have any impact on conversion rate – they are going to be useless.

We set the conversion rate for both A and B to be 20% (0.2) and then draw 100 random samples and we get the following results:

[Figure: simulated AB test results broken out by segment]

The unsegmented conversion rates are A=16.7% and B=23.1%. If we just stopped at 100 observations, we would find that B has a 78% probability of being better than A. Not conclusive by any means, but it might be enough to lead some astray into concluding that B is best.

When we segment our results, we find the conversion rates are all over the place:
1) Repeat+Suburban A=10.4%, B=17.2%; Probability B is best 72%
2) Repeat+Rural A=17.1%, B=31.7%; Probability B is best 88%
3) New+Suburban A=17.2%, B=15.8%; Probability B is best 49%
4) New+Rural A=23.9%,B=30.3%; Probability B is best 51%

Just looking at these numbers, we might be tempted to think that Rural customers have higher conversion rates and that option B is probably best for Repeat+Rural customers.

Remember, the real conversion rate for both options, A and B, regardless of segment is 20%.

Segmentation: Partial Pooling

Now let's rerun our simulation, but this time we will calculate the partial-pooled values by shrinking our option results back toward the grand/pooled average.

[Figure: partial-pooled results by segment]

Notice that our unsegmented A and B estimates are pulled toward 20%, with A=20.6% and B=19.2%. After 100 trials, the probability that A is best is just 60%.

We can add another level of partial pooling by shrinking each of the segment results back toward the respective option mean:
1) Repeat+Suburban A=20.4%, B=19.0%
2) Repeat+Rural A=20.7%, B=19.2%
3) New+Suburban A=20.5%, B=19.2%
4) New+Rural A=20.8%,B=19.4%

Now, within each segment, each option has roughly a 50/50 probability of being the best, which is what we would expect given that there really is no difference in the conversion rates.

Just to recap this section: we first shrunk our unsegmented A and B estimates toward the average conversion rate over both options. Then we used these partial-pooled, unsegmented option values as a grand average for the segments, and shrank each segment's values toward them.
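
Here is a minimal Python sketch of that two-level shrinkage. The simple count-based weighting (n / (n + k)) and the per-segment sample counts are assumptions of mine, not the exact scheme used above, and because the post's partial-pooled figures come from a separate simulation run, the outputs here won't match them exactly.

# Minimal two-level shrinkage sketch; k controls how hard small samples get pulled in.
def shrink(rate, n, prior, k=100):
    """Weighted average of an observed rate and a prior (pooled) rate."""
    return (n * rate + k * prior) / (n + k)

grand_mean = 0.20                 # pooled conversion rate over both options

# Level 1: shrink the unsegmented option estimates toward the grand mean
# (rates from the unpooled run above; a 50/50 traffic split is assumed).
option_a = shrink(rate=0.167, n=50, prior=grand_mean)
option_b = shrink(rate=0.231, n=50, prior=grand_mean)

# Level 2: shrink a segment estimate toward its (already shrunk) option mean
# (the segment count of 12 is made up for illustration).
repeat_rural_b = shrink(rate=0.317, n=12, prior=option_b)

print(round(option_a, 3), round(option_b, 3), round(repeat_rural_b, 3))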

Shrinkage, Targeting, and Multi-Armed Bandits

Where this really gets useful is when you are running a targeted optimization problem with an adaptive method, such as contextual bandits or RL with function approximation.

With multi-armed bandits, rather than selecting options with equal probability, as we do with AB testing, we can instead make our selections based on the probability that each option is best. So, for example, based on the unpooled results we would select B 88% and A 12% of the time for Repeat+Rural customers. In this use case it doesn't really matter, but as you can imagine, in a real-world scenario any intelligent optimization algorithm will need to sort through hundreds of possible segment and option combinations. What we don't want is to confuse the learning system with all of the noise that is bound to creep into the results when there are so many combinations under consideration. By feeding the bandit algorithm the partial-pooled estimates, rather than the unpooled ones, we are able to stabilize the optimization process in the early, most uncertain portion of the learning.

Maybe even more valuable is when we introduce new options into an ongoing optimization effort. So let's say we added a ‘C’ option. By using shrinkage, we automatically get a reasonable initial estimate for our new option: we just shrink it to the current grand mean of the problem, which is almost certainly a better initial guess than zero. This way the new option has a better chance of being selected by our bandit method (where we make draws from the posterior distributions).
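
Here is a sketch of how shrunk estimates might feed a Thompson-sampling-style bandit, with a new ‘C’ option starting at the grand mean. The pseudo-count construction, the prior strength, and the observation counts are illustrative assumptions, not how Conductrics actually parameterizes its posteriors.

import random

def posterior_draw(shrunk_rate, n_observations, prior_strength=20):
    """Draw from a Beta whose mean is the shrunk rate; prior_strength adds stability."""
    alpha = shrunk_rate * (n_observations + prior_strength)
    beta = (1 - shrunk_rate) * (n_observations + prior_strength)
    return random.betavariate(alpha, beta)

options = {
    "A": (0.206, 50),   # (shrunk conversion rate, observations) -- illustrative
    "B": (0.192, 50),
    "C": (0.200, 0),    # new option: no data yet, so it starts at the grand mean
}

# Serve whichever option wins this round of posterior draws.
choice = max(options, key=lambda o: posterior_draw(*options[o]))
print("serve option", choice)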

If you would like to learn more about how Conductrics blends partial pooling, predictive targeting, testing, and bandits, please reach out.



Prediction, Pooling, and Shrinkage

As some of you may have noticed, there are little skirmishes that occasionally break out in digital testing and optimization. There is the AB testing vs. multi-armed bandits debate (both are good, depending on the task), standard vs. multivariate testing (same, both good), and the Frequentist vs. Bayesian testing argument (also, both good). In the spirit of bringing folks together, I want to introduce the concept of shrinkage. It has both Frequentist and Bayesian interpretations, and it is useful in sequential, Bayesian, and bandit-style testing. It is also useful for building predictive models that work better in the real world.

Before we get started talking about shrinkage, let's first step back and revisit our coffee example from our post on P-Values (http://conductrics.com/pvalues). In one of our examples, we wanted to know which place in town has the hotter coffee: McDonald's or Starbucks. In order to answer that, we needed to estimate the average temperatures, and the variances in temperature, for both McDonald's and Starbucks coffees. After we calculated the temperature of our samples of cups of coffee from each store, we got something like this:

[Figure: unpooled average temperature estimates for each store]

(Perhaps because McDonald's got burned (heh) in the past for serving scalding coffee, they are now serving lower average coffee temps than Starbucks.)

In most A/B testing situations we calculate unpooled averages. Unpooled means that we calculate a separate set of statistics based only on the data from each collection. In our coffee example, we have two collections – Starbucks and McDonald's. Our unpooled estimates don't co-mingle the data between stores. One downside of unpooled averages is that each estimated average is based on just a portion of the data. And if you remember, the sampling error (how good or bad our estimate of the true mean is) is a function of how much data we have. So less data means much less certainty around our estimated value.

Data Pooling

We could ignore that the coffee comes from two different stores and pool the data together. Here is what that looks like:

[Figure: the pooled estimate over all cups of coffee]

The pooled estimate gives us the grand average temperature for any cup of coffee in town.  You may wonder why this pooled average is interesting to us, since what we really care about is finding the average coffee temperature at each store.

As it turns out, we can use our pooled average to help us reduce uncertainty, in order to get better estimates for each store, especially when we have only a small amount of data with which to make our calculations. This is often the case when we have many different collections for which we need to come up with individual estimates.

Partial Pooling

You can think of partial pooling as a way of averaging together our pooled and our unpooled estimates.  This lets us share information across all of the collections, while also being able to estimate individual results.  By using partial pooling, we get the pooled benefit of tapping into all of the coffee data to calculate our store level coffee temperatures, and the unpooled advantage of having a unique temperature value for each store. Since we are effectively averaging the pooled and unpooled values together, this has the effect of pulling each estimated value back toward each other – toward the grand/pooled mean.

When we take our average of the pooled and unpooled values, we factor in how much data we have for each component, so it is a weighted average. If we have lots of data on the unpooled portion, then we weight the unpooled estimate more, and the pooled value has very little impact. If, on the other hand, we have very little data for the unpooled estimate, then the pooled value makes up a larger share of our partial-pooled estimate.
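
Here is a minimal Python sketch of that weighted average, using the coffee example. The temperatures and cup counts are made up, and the simple count-based weight is one common choice rather than the only way to set the weights.

def partial_pool(store_mean, n_store, pooled_mean, k=10):
    """Pull a store's estimate toward the pooled mean; small samples get pulled more."""
    weight = n_store / (n_store + k)
    return weight * store_mean + (1 - weight) * pooled_mean

starbucks = [155, 160, 149]                                   # only 3 cups sampled
mcdonalds = [138, 141, 136, 140, 139, 137, 142, 138]          # 8 cups sampled

pooled = sum(starbucks + mcdonalds) / len(starbucks + mcdonalds)
print(partial_pool(sum(starbucks) / len(starbucks), len(starbucks), pooled))
print(partial_pool(sum(mcdonalds) / len(mcdonalds), len(mcdonalds), pooled))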

The pooled value is like the gravitational center of your data. Collections with sparse data don’t have much energy to pull away from that center and get squeezed, or shrunk toward it.

[Figure: partial-pooled estimates sit between the unpooled values and the pooled mean]

However, as we get more data on each option, the partial pooled values will start to move away from the center value and move toward their unpooled values.

Partial pooling can improve our estimates when we have lots of separate collections we want to estimate and when the collections are in some sense from the same family.

Imagine that instead of two or three coffee shops, we had 50 or even 100 places that sell coffee. We might only be able to sample a handful of cups from each store. Because of our small samples, our unpooled estimates will all be very noisy, with some having very high temps and some very low temps. By using partial pooling we smooth out the noise and pull all of the outliers back toward the grand average.  This tugging-in of our individual results toward the center – called shrinkage – is a tradeoff of bias for less variance (noise) in our results.

If you are not convinced, here is another example. Let's say we wanted to know what the average temperature of a cup of coffee is from the town's local cafe. Even before we test our first cup of coffee, we already have strong evidence that the temperature will be somewhere around 130 degrees, since the coffee temp of the independent cafe belongs to the family of all coffee-shop coffee temperatures. If we then tested a cup of coffee from the cafe and found it was only 90 F, we might think that while the cafe's average temperature is probably less than the average coffee temperature, it is probably higher than 90 F.

Partial Pooling and Optimization

We can use this same idea when we run sequential, empirical Bayesian, or bandit-style tests. By using partial pooling, we help stabilize our predictions in the early stages of a campaign. It also allows us to make reasonable predictions for new options added mid-test.

Shrinking the individual means back to the grand mean is also consistent with the null hypothesis from standard testing – we assume that all of the different test options are drawn from the same distribution. Partial pooling implicitly includes that assumption directly in the estimated values. However, as we collect more data, each result moves away from the center. Partial-pooled results sit along the continuum between pooled and unpooled estimates. What is nice is that the data determines where we are on that continuum.

Want to learn more about partial pooling? If you are a Frequentist, look up random effects models. If you are a Bayesian, please see hierarchical modelling. And if you are cool with either, see empirical Bayes and James-Stein estimators.

If you want to learn more about how Conductrics uses shrinkage, and how we can help you provide better digital experiences for your customers please reach out.

 



Easy Introduction to AB Testing and P-Values

A version of this post was originally published over at Conversion XL

For all of the talk about how awesome (and big, don't forget big) Big Data is, one of the favorite tools in the conversion optimization toolkit, AB testing, is decidedly small data. Optimization, winners and losers, Lean this, that, or the other thing – at the end of the day, A/B testing is really just an application of sampling.

You take a couple of alternative options (e.g. ‘50% off’ vs. ‘Buy One Get One Free’) and try them out with a portion of your users. You see how well each one did, and then make a decision about which one you think will give you the most return. It sounds simple, and in a way it is, yet there seem to be lots of questions around significance testing – in particular, what the heck the p-value is, and how to interpret it to help make sound business decisions.

These are actually deep questions, and in order to begin to get a handle on them, we will need to have a basic grasp of sampling. 

A few preliminaries

Before we get going, we should quickly go over the basic building blocks of AB testing. I am sure you know most of this stuff already, but it can't hurt to make sure everyone is on the same page:

The Mean – often informally called the average. This is a measure of the center of the data. It is a useful descriptor, and predictor, of the data, if the data under consideration tends to clump near the mean AND if the data has some symmetry to it.  

The Variance – This can be thought of as the average variability of our data around the mean (center) of the data. For example, consider we collect two data sets with five observations each: {3,3,3,3,3} and {1,2,3,4,5}. They both have the same mean (it's 3), but the first group has no variability, whereas the second group does take values different from its mean. The variance is a way to quantify just how much variability we have in our data. The main takeaway is that the higher the variability, the less precise the mean will be as a predictor of any individual data point.
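
A quick check of those two little data sets in Python (using the population variance for simplicity; a sample variance would divide by n − 1 instead):

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

print(mean([3, 3, 3, 3, 3]), variance([3, 3, 3, 3, 3]))   # 3.0 0.0  -> no variability
print(mean([1, 2, 3, 4, 5]), variance([1, 2, 3, 4, 5]))   # 3.0 2.0  -> same mean, real spread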

The Probability Distribution – this is a function (if you don’t like ‘function’, just think of it as a rule) that assigns a probability to a result or outcome.  For example, the roll of a standard die follows a uniform distribution, since each outcome is assigned an equal probability of occurring (all the numbers have a 1 in 6 chance of coming up).  In our discussion of sampling, we will make heavy use of the normal distribution, which has the familiar bell shape.  Remember that the probability of the entire distribution sums to 1 (or 100%).

The Test Statistic or Yet Another KPI
The test statistic is the value that we use in our statistical tests to compare the results of our two (or more) options, our ‘A’ and ‘B’. It might be easier to just think of the test statistic as another KPI. If our test KPI is close to zero, then we don't have much evidence that the two options are really that different. However, the further from zero our KPI is, the more evidence we have that the two options are not really performing the same.

Our new KPI combines the differences in the averages of our test options with the variability in our test results. The test statistic looks something like this:

[Figure: the test statistic – the difference between the means of ‘B’ and ‘A’, divided by the standard error of that difference]

While it might look complicated, don't get too hung up on the math. All it is saying is: take the difference between ‘A’ and ‘B’ – just like you normally would when comparing two things – but then shrink that difference by how much variability (uncertainty) there is in the data.

So, for example, say I have two cups of coffee, and I want to know which one is hotter and by how much. First, I would measure the temperature of each coffee. Next, I would see which one has the highest temp. Finally, I would subtract the lower temp coffee from the higher to get the difference in temperature. Obvious and super simple.

Now, let’s say you want to ask, “which place in my town has the hotter coffee, McDonald’s or Starbucks?” Well, each place makes lots of cups of coffee, so I am going to have to compare a collection of cups of coffee. Any time we have to measure and compare collections of things, we need to use our test statistics.

The more variability in the temperature of coffee at each restaurant, the more we weigh down the observed difference to account for our uncertainty. So even if we have a pretty sizable difference on top, if we have lots of variability on the bottom, our test statistic will still be close to zero. As a result, the more variability in our data, the greater an observed difference we will need in order to get a high score on our test KPI.

Remember, high test KPI -> more evidence that any difference isn’t just by chance.
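
Here is a rough Python sketch of that test KPI – a Welch-style two-sample t statistic – computed on simulated coffee temperatures. The temperatures are made up purely for illustration.

import math
import random

def t_statistic(a, b):
    """Difference in means, shrunk by the variability (standard error) of that difference."""
    mean_a, mean_b = sum(a) / len(a), sum(b) / len(b)
    var_a = sum((x - mean_a) ** 2 for x in a) / (len(a) - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (len(b) - 1)
    return (mean_b - mean_a) / math.sqrt(var_a / len(a) + var_b / len(b))

random.seed(1)
mcdonalds = [random.gauss(140, 8) for _ in range(30)]   # cooler on average
starbucks = [random.gauss(150, 8) for _ in range(30)]
print(t_statistic(mcdonalds, starbucks))   # sizable difference + modest variability -> big KPI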

Always Sample before you Buy

Okay now that we have that out of the way, we can spend a bit of time on sampling in order to shed some light on the mysterious P-Value.

For the sake of illustration, let's say we are trying to promote a conference that specializes in web analytics and conversion optimization. Since our conference will be a success only if we have at least a certain minimum number of attendees, we want to incent users to buy their tickets early. In the past, we have used ‘Analytics200’ as our early-bird promotional discount code to reduce the conference price by $200. However, given that AB testing is such a hot topic right now, maybe if we use ‘ABTesting200’ as our promo code we will get even more folks to sign up early. So we plan on running an AB test between our control, ‘Analytics200’, and our alternative, ‘ABTesting200’.

We often talk about AB Testing as one activity or task. However, there are really two main parts of the actual mechanics of testing.

Data Collection – this is the part where we expose users to either ‘Analytics200’ or ‘ABTesting200’. As we will see, there is going to be a tradeoff between more information (less variability) and cost. Why cost? Because we are investing time and foregoing potentially better options, in the hope that we will find something better than what we are currently doing. We spend resources now in order to improve our estimates of the set of possible actions that we might take in the future. AB testing, in and of itself, is not optimization. It is an investment in information.

Data Analysis – this is where we select a method, or framework, for drawing conclusions from the data we have collected. For most folks running AB tests online, it will be the classic null significance testing approach. This is the part where we pick a significance level, calculate the p-values, and draw our conclusions.

The Indirect logic of Significance Testing

Sally and Bob are waiting for Jim to pick them up one night after work. While Bob catches a ride with Jim almost every night, this is Sally's first time. Bob tells Sally that on average he has to wait about 5 minutes for Jim. After about 15 minutes of waiting, Sally is starting to think that maybe Jim isn't coming to get them. So she asks Bob, ‘Hey, you said Jim is here in 5 minutes on average; how often do you have to wait 15 minutes?’ Bob replies, ‘Don't worry, with the traffic it is not uncommon to have to wait this long or even a bit longer. I'd say, based on experience, a wait like this, or worse, probably happens about 15% of the time.’ Sally relaxes a bit, and they chat about the day while they wait for Jim.

Notice that Sally only asked about the frequency of long wait times. Once she heard that her observed wait time wasn't too uncommon, she felt more comfortable that Jim was going to show up. What is interesting is that what she really wants to know is the probability that Jim is going to stand them up. But this is NOT what she learns. Rather, she just learns, given all the times that Jim has picked up Bob, what the probability is of having to wait 15 minutes or more. This indirect, almost contrarian, logic is the essence of the p-value and classical hypothesis testing.

Back to our Conference

For the sake of argument, let’s say that the ‘Analytics200’ promotion has a true conversion rate of 0.1, or 10%.  In the real world, this true rate is hidden from us – which is why we go and collect samples in the first place –  but in our simulation we know it is 0.1. So each time we send out ‘Analytics200’, approximately 10% sign up.

If we go out and offer 50 prospects our ‘Analytics200’ promotion, we would expect, on average, to have 5 conference signups. However, we wouldn't really be that surprised if we saw a few less or a few more. But what is a few? Would we be surprised if we saw 4? What about 10, or 25, or zero? It turns out that the p-value answers the question, ‘How surprising is this result?’

Extending this idea, rather than taking just one sample of 50 conference prospects, we take 100 separate samples of 50 prospects (so a total of 5,000 prospects, but selected in 100 buckets of 50 prospects each).  After running this simulation, I plotted the results of the 100 samples (this plot is called a histogram) below:

[Figure: histogram of the conversion rates from 100 samples of 50 prospects each]

Our simulated results ranged from 2% to 20% and the average conversion rate of our 100 samples was 10.1% – which is remarkably close to the true conversion rate of 10%.

Amazing Sampling Fact Number 1

The mean (average) of repeated samples will equal the mean of the population we are sampling from.

Amazing Sampling Fact Number 2

Our sample conversion rates will be distributed roughly according to a normal distribution – this means most of the samples will be clustered around the true mean, and samples far from the mean will occur very infrequently. In fact, because we know that our samples are distributed roughly normally, we can use the properties of the normal (or Student's t) distribution to tell us how surprising a given result is.

This is important, because while our sample conversion rate may not be exactly the true conversion rate, it is more likely to be closer to the true rate than not.  In our simulated results, 53% of our samples were between 7 and 13%. This spread in our sample results is known as the sampling error. 

Ah, now we are cooking, but what about sample size you may be asking? We already have all of this sampling goodness and we haven’t even talked about the size of each of our individual samples. So let’s talk:

There are two components that will determine how much sampling error we are going to have:

  • The natural variability already in our population (different coffee temperatures at each Starbucks or McDonald’s)
  • The size of our samples

We have no control over the variability of the population; it is what it is.

However, we can control our sample size. By increasing the sample size we reduce the error and hence can have greater confidence that our sample result is going to be close to the true mean.

Amazing Sampling Fact Number 3

The spread of our samples decreases as we increase the ‘N’ of each sample.  The larger the sample size, the more our samples will be squished together around the true mean.

For example, if we collect another set of simulated samples, but this time increase the sample size to 200 from 50, the results are now less spread out – with a range of 5% to 16.5%, rather than from 2% to 20%. Also, notice that 84% of our samples are between 7% and 13% vs just 53% when our samples only included 50 prospects.
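If you want to play with this yourself, here is a small Python sketch of the simulation: repeated samples of prospects, each offered ‘Analytics200’ with a true conversion rate of 10%. Your exact numbers will differ from run to run, but the pattern – a wide spread at a sample size of 50 and a noticeably tighter one at 200 – should hold.

import random

def simulate(sample_size, n_samples=100, true_rate=0.10):
    """Return the conversion rate from each of n_samples independent samples."""
    rates = []
    for _ in range(n_samples):
        conversions = sum(random.random() < true_rate for _ in range(sample_size))
        rates.append(conversions / sample_size)
    return rates

random.seed(7)
small = simulate(sample_size=50)    # wide spread around the true 10%
large = simulate(sample_size=200)   # squished much closer to the true 10%
print(min(small), max(small), sum(small) / len(small))
print(min(large), max(large), sum(large) / len(large))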

We can think of the sample size as a sort of control knob that we can turn to increase or decrease the precision of our estimates.  If we were to take an infinite number of our samples we would get the smooth normal curves below. Each centered on the true mean, but with a width (variance) that is determined by the size of each sample.

[Figure: sampling distributions centered on the true mean, narrowing as the sample size increases]

 

Why Data doesn’t always need to be BIG

Economics often takes a beating for not being a real science, and maybe it isn’t ;-). However, it does make at least a few useful statements about the world. One of them is that we should expect, all else equal, that each successive input will have less value than the preceding one. This principle of diminishing marginal returns is at play in our AB Tests. 

[Figure: sampling error falls at a decreasing rate as sample size grows]

Reading right to left, as we increase the size of our sample, our sampling error falls. However, it falls at a decreasing rate – which means that we get less and less information from each addition to our sample. So in this particular case, moving to a sample size of 50 drastically reduces our uncertainty, but moving from 150 to 200 decreases our uncertainty by much less. Stated another way, we face increasing costs for any additional precision in our results. This notion of the marginal value of data is an important one to keep in mind when thinking about your tests. It is why it is more costly and time-consuming to establish differences between test options that have very similar conversion rates. The hardest decisions to make are often the ones that make the least difference.

As noted earlier, our test statistic accounts both for how much difference we see between our results and for how much variability (uncertainty) we have in our data. As the observed difference goes up, our test statistic goes up. However, as the total variance goes up, our test statistic goes down.

[Figure: the test statistic grows with the observed difference and shrinks as the variance grows]

Now, without getting into more of the nitty gritty, we can think of our test statistic in essentially the same way we did when we drew samples for our means. Whereas before we were looking at just one mean, now we are looking at the difference of two means, B and A. It turns out that our three amazing sampling facts apply to differences of means as well.

Whew – okay, I know that might seem like TMI, but now that we have covered the basics, we can finally tackle the p-value.

Assume There is No Difference

Here is how it works. We collect our data for both the ABTesting200 and Analytics200 promotions. But then we pretend that we really ran an A/A test, rather than an A/B test. So we look at the results as if we had just presented everyone with the Analytics200 promotion. Because of what we now know about sampling, we know that both groups should be centered on the same mean and have the same variance – remember, we are pretending that both samples are really from the same population (the Analytics200 population). Since we are interested in the difference, we expect that, on average, Analytics200 − Analytics200 will be ‘0’, since on average the two groups should have the same mean.

[Figure: the distribution of differences under the imagined A/A test, centered at zero]

So, using our three facts of sampling, we can construct how the imagined A/A test will be distributed, and we expect that our A/A test will, on average, show no difference between the samples. However, because of sampling error, we aren't that surprised when we see values that are near zero but not quite zero. Again, how surprised we are by a result is determined by how far away from zero it is. We will use the fact that our data is normally distributed to tell us exactly how probable it is to see a result a given distance from zero. Something way out to the right of zero, like at 3 or greater, will have a low probability of occurring.

Contrarians and the P-Value, Finally!

The final step is to see where our test statistic falls on this distribution. For many researchers, if it lands somewhere between -2 and 2, that wouldn't be too surprising to see if we were running an A/A test. However, if we see something beyond -2 or 2, then we start getting into fairly infrequent results. One thing to note: what counts as ‘surprising’ is determined by you, the person running the test. There is no free lunch; at the end of the day, your judgement is still an integral part of the testing process.

Now let's place our test statistic (t-score, z-score, etc.) on the A/A test distribution. We can then see how far away it is from zero, and compare it to the probability of seeing that result if we had really run an A/A test.

[Figure: our test statistic placed on the A/A test distribution, landing in the surprise region]

Here our test statistic is in the surprising region. The probability of the surprise region is the p-value. Formally, the p-value is the probability of seeing a result at least as far from zero as the one we observed, assuming that the null hypothesis is TRUE. If ‘null hypothesis is true’ is tripping you up, just think instead, ‘assuming we had really run an A/A test’.

If our test statistic is in the surprise region, we reject the null (we reject that it was really an A/A test). If the result is within the not-surprising area, then we fail to reject the null. That's it.
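
For the curious, here is a minimal Python sketch that turns this logic into a number: a two-sided p-value for the difference between two conversion rates, using the usual normal approximation under the null of no difference (the ‘A/A’ view). The counts are made up for illustration.

import math

def p_value(conv_a, n_a, conv_b, n_b):
    rate_a, rate_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)                       # assume one shared rate (the null)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))    # standard error under the null
    z = (rate_b - rate_a) / se                                     # our test statistic
    # Probability of a result at least this far from zero, in either direction.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(p_value(conv_a=100, n_a=1000, conv_b=130, n_b=1000))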

Conclusion: 7 Points

Here are a few important points about p-values that you should keep in mind:

  • What is ‘Surprising’ is determined by the person running the test. So in a real sense, the conclusion of the test will depend on who is running the test.  How often you are surprised is a function of how high a p-value you need to see (or related, the confidence level in a Pearson-Neyman approach, eg. 95%) for when you will be ‘surprised’.
  • The logic behind the use of the p-value is a bit convoluted and contrarian. We need to assume that the null is true in order to evaluate the evidence that might suggest we should reject the null. This is kind of weird and an evergreen source of confusion.
  • It is not the case that the p-value tells us the probability that B is better than A. Nor does it tell us the probability that we will make a mistake in selecting B over A. These are both extraordinarily common misconceptions, but they are false. This is an error that even ‘experts’ often make, so now you can help explain it to them ;-). Remember, the p-value is just the probability of seeing a result as extreme or more extreme, given that the null hypothesis is true.
  • While many folks in the industry will tout classical significance testing as some sort of gold standard, there is actually debate in the scientific community about the value of p-values for drawing testing conclusions. Along with Berger's paper below, also check out Andrew Gelman's blog for frequent discussions around the topic: http://andrewgelman.com/2013/02/08/p-values-and-statistical-practice/
  • You can almost always reach statistical significance (a low p-value) by collecting more data. Remember that the standard error is one part variation in the actual population and one part sample size. The population variation is fixed, but there is nothing stopping us, if we are willing to ‘pay’ for it, from collecting more and more data. The question really becomes: is this result useful? Just because a result has a low p-value (i.e. is statistically significant in the Neyman-Pearson approach) doesn't mean it has any practical value.
  • Don’t sweat it, unless you need to. Look, the main thing is to sample stuff first to get an idea if it might work out. Often the hardest decisions for people to make are the ones that make the least difference. That is because it is very hard to pick a ‘winner’ when the options lead to similar results, but since they are so similar it probably means there is very little up or downside to just picking one.  Stop worrying about getting it right or wrong.  Think of your testing program more like a portfolio investment strategy. You are trying to run the bundle of tests, whose expected additional information will give you the highest return. 
  • The p-value is not a stopping rule. This is another frequent mistake. In order to get all of the goodness from sampling that lets us interpret our p-value, you select your sample size first, and then you run the test. (There are some who advocate the use of Wald's sequential tests (SPRT, or similar), but these are not robust in the presence of nonexchangeable data – which is often the case in online settings.)

This could be another entire post or two, and it is a nice jumping-off point for looking into the multi-armed bandit problem (see Conductrics' post at http://conductrics.com/balancing-earning-with-learning-bandits-and-adaptive-optimization/).

*One final note: what makes all of this even more confusing is that there isn't just one agreed-upon approach to testing. For more, check out Berger's paper comparing the different approaches (http://www.stat.duke.edu/~berger/papers/02-01.pdf) and Baiu et al. (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2816758/).

 
