AB Testing: When Tests Collide


Normally, when we talk about AB Tests (standard or Bandit style), we tend to focus on things like the different test options, the reporting, the significance levels, etc.  However, once we start implementing tests, especially at scale, it becomes clear that we need a way to manage how we assign users to each test.  There are two main situations where we need to manage users:

1)      Targeted Tests – we have a single test that is appropriate only for some of our users.

2)      Colliding Tests – we have multiple separate tests running that could potentially affect each other’s outcomes.

The Targeted Test

The most obvious reason for managing a test audience is that some tests may not be appropriate for all users. For example, you might want to include only US visitors in an Offer test that is based on a US-only product.  That is pretty simple to do with Conductrics. You just set up a rule that filters visitors out of the specified test (or tests) if they do not have the US-Visit feature. The setup in the UI looks like this:

[Screenshot: Offer Test – Exclude Non-US Visitors rule]

What this is telling Conductrics is that, for the Offer test, if the user is not from the US, do not put them into the test and serve them the default option instead.  Keep in mind that only US visitors will be eligible for the test, and they will be the only users who show up in the reporting.  If you just want to report the test results for different types of users, run the test normally and include the targeting features you want to report over.
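If you wanted to reproduce this kind of eligibility gate in your own integration code, the logic is straightforward. Here is a minimal sketch in TypeScript; it is illustrative only, not the Conductrics API or rule syntax, and the Visitor shape, the "US-Visit" feature check, and the option names are assumptions:

// Illustrative sketch only -- not the Conductrics API or UI rule syntax.
// Assumes a visitor record that carries the targeting features we know about.
interface Visitor {
  id: string;
  features: Set<string>; // e.g. "US-Visit" when a geo lookup places the visitor in the US
}

const DEFAULT_OPTION = "offer-default";                      // hypothetical option name
const OFFER_OPTIONS = ["offer-default", "offer-new-bundle"]; // hypothetical option names

// Gate the Offer test on the US-Visit feature: non-US visitors are never
// entered into the test and simply receive the default experience.
function assignOfferTest(visitor: Visitor): { option: string; inTest: boolean } {
  if (!visitor.features.has("US-Visit")) {
    return { option: DEFAULT_OPTION, inTest: false }; // excluded, so never reported on
  }
  const option = OFFER_OPTIONS[Math.floor(Math.random() * OFFER_OPTIONS.length)];
  return { option, inTest: true }; // eligible, will show up in the test report
}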

Colliding Tests

Unless you are just doing a few one-off tests, you probably have situations where multiple tests are running at the same time. Depending on the specific situation, you will need to make some decisions about how to control how and when users can be exposed to a particular test.

We can use the same basic approach we used for US visitors to establish some flow control over how users are assigned to multiple concurrent tests.

For example, perhaps the UX team wants to run a layout test, which will affect the look and feel of every page on the site.  The hypothesis is that the new layout will make the user experience more compelling and lead to increased sales conversion on the site.

At the same time, the merchandising team wants to run an offer test on just a particular product section of the site. The merchandising team thinks that their new product offer will really incentivize users to buy and will increase sales in this particular product category.  

Strategy One: Separate Tests

The most common, and easiest, strategy is to assume that the different tests don’t really impact one another, and to run each test without accounting for the other.  In reality, this is often fine, especially if there is limited overlap between the tests.

Strategy Two: The Multivariate Test

We could combine both tests into one multivariate test with two decisions: Layout and Offer.  This could work, but once you start to think about it, it may not be the best way to go. For one, the test really only makes sense as a multivariate test if the user comes to the product section; otherwise, they are never exposed to the product offer component of the test. Also, it assumes that both the UX and merchandising teams plan to run each test for the same amount of time. What do you do if the UX team was only going to run the layout test for a week, but the merchandising team was planning to run the Offer test for two weeks?
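To make the idea concrete, a two-decision multivariate test is just the cross of the two option sets: every visitor gets assigned a (Layout, Offer) cell. A rough sketch, using hypothetical option names and plain random assignment rather than Conductrics’ own logic:

// Illustrative sketch only. A two-decision multivariate test assigns each
// visitor one cell from the cross-product of the two option sets.
const LAYOUTS = ["layout-current", "layout-new"];       // hypothetical option names
const OFFERS  = ["offer-default", "offer-new-bundle"];  // hypothetical option names

function assignMultivariateCell(): { layout: string; offer: string } {
  return {
    layout: LAYOUTS[Math.floor(Math.random() * LAYOUTS.length)],
    offer:  OFFERS[Math.floor(Math.random() * OFFERS.length)],
  };
}
// The catch described above: the offer half of the cell only matters if the
// visitor ever reaches the product section, and both decisions are forced to
// run on the same schedule.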

Strategy Three: Mutually Exclusive Tests

Rather than trying to force what are really two conceptually different tests into one multivariate test, we can instead run two mutually exclusive tests.  There are several ways to set this up in Conductrics.  As an example, here is one simple way to make sure users are assigned to just one of our tests.

Since the Layout test touches every page on the site, let’s deal with that first. A simple approach to keep some visitors free for the Offer test is to first randomly assign a certain percentage of users to the Layout test.  We can do that by setting up the following filter:

[Screenshot: LayoutTest_50pct rule]

This rule will randomly assign 50% of the site’s visitors into the Layout test. The other 50% will not be assigned to the test (the percentage can be customized).  We now just need to set up a filter for the Offer test that excludes visitors who have been placed into the Layout test.

[Screenshot: Offertest_excludeLayout rule]

This rule just says: exclude visitors who are in the Layout test from the Offer test.  That’s it! Now you will be able to read your results without having to worry that the other test is influencing them.
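If you wanted to mimic these two rules in your own code, the flow looks roughly like the sketch below. It uses a deterministic hash of the visitor id so the 50% split stays sticky across visits; the helper names, salt, and threshold are assumptions for illustration, not Conductrics internals:

import { createHash } from "crypto";

// Map a visitor id to a stable number in [0, 100) so assignment is sticky across visits.
function bucket(visitorId: string, salt: string): number {
  const hash = createHash("sha256").update(salt + ":" + visitorId).digest();
  return hash.readUInt32BE(0) % 100;
}

// Rule 1: LayoutTest_50pct -- roughly half of all visitors enter the Layout test.
function inLayoutTest(visitorId: string): boolean {
  return bucket(visitorId, "layout-test") < 50; // the 50% threshold is configurable
}

// Rule 2: Offertest_excludeLayout -- only visitors NOT in the Layout test are
// eligible for the Offer test, so the two test audiences never overlap.
function eligibleForOfferTest(visitorId: string): boolean {
  return !inLayoutTest(visitorId);
}

Because each visitor lands in at most one of the two audiences, the two reports can be read independently.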

What is neat is that by combining these assignment rules you can customize your testing experience almost any way you can think of.

Strategy Four: Multiple Decision Point Agents 

These eligibility rules make a lot of sense when we are running what are essentially separate tests. However, if we have a set of related tests that are all working toward the same set of goals, we can instead use a multiple decision point agent.  With multi-point agents, Conductrics will keep track of both user conversions and when the user goes from one test to another.  Multi-point agents are where you can take advantage of Conductrics’ integrated conversion attribution algorithms to solve these more complex joint optimization problems. We will cover multi-point agents separately and in detail in an upcoming post.

Thanks for reading and we look forward to hearing from you in the comments.

 

