
Mitigating Testing Risks for Paid Media Marketers

Evaluating how big of a test to run

Testing is a huge part of what we do at Seer to improve paid media performance.

With user behavior always changing and new competitors entering the market, it's important to run tests consistently to improve your paid media campaign performance.

However, every test you put into market carries risk. There is always a chance the test will fail and conversion volume will be lost - it's simply part of the game.

As paid media professionals, it's our responsibility to calculate that risk, present it to our clients, and make recommendations based on that data and our clients' goals.

Determine how much you're willing to risk

Risk tolerance is going to differ from test to test and from client to client. However, how much you're willing to risk should always be top of mind when evaluating how big of a test you are going to run.

Start by calculating the risk and identifying which campaigns to include in the test.

Check out the "weighing out the risk" section of this blog post I wrote. It will help you figure out how to identify which campaigns to include in a test and how to calculate conversion risk.

An important rule of thumb: both your control and test campaigns should have a minimum of 20 conversions in the last 30 days if you want to include them in your test.
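To make that check concrete, here's a minimal Python sketch that flags which campaigns clear the bar. The campaign names and conversion counts are hypothetical; in practice you'd pull these numbers from your ad platform or reporting tool.

```python
# Flag which campaigns are big enough to test, using the
# 20-conversions-in-30-days rule of thumb described above.

MIN_CONVERSIONS = 20  # minimum 30-day conversions per campaign

# Hypothetical last-30-day conversion counts per campaign
campaigns_30d = {
    "Brand - Exact": 140,
    "Non-Brand - Broad": 45,
    "Competitor Terms": 12,
}

for name, conversions in campaigns_30d.items():
    status = "eligible for testing" if conversions >= MIN_CONVERSIONS else "too small to test"
    print(f"{name}: {conversions} conversions in last 30 days -> {status}")
```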

Start small, end big

Even if a client has a very high risk tolerance, I always suggest starting small and increasing the size of the test if it drives positive results.

For example, if you are running an A/B test on a campaign that has driven 100 conversions in the last 30 days, you'd set the traffic split to 80/20: 80% of traffic goes to the control campaign, while only 20% goes to the test campaign.

This allows for a more conservative test: the test campaign can expect roughly 20 conversions (20% of 100), hitting the 20-conversion testing minimum while minimizing the number of conversions put at risk.
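Here's the same arithmetic as a small sketch, using the illustrative 100-conversion campaign from above. "Conversions at risk" here means the conversions you'd expect to lose if the test arm failed outright.

```python
# Split math for the example above: a campaign with 100 conversions
# in the last 30 days, tested at an 80/20 traffic split.

total_conversions = 100  # illustrative last-30-day conversion count
test_share = 0.20        # share of traffic sent to the test campaign

expected_test = total_conversions * test_share            # ~20 conversions
expected_control = total_conversions * (1 - test_share)   # ~80 conversions

print(f"Expected control conversions: {expected_control:.0f}")
print(f"Expected test conversions:    {expected_test:.0f}")
print(f"Conversions at risk if the test fails outright: {expected_test:.0f}")
```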

The 80/20 rule: test slow, then ramp up

If you are seeing positive performance from the test, you can then slowly increase the amount of traffic driven to the test campaign over time.

This could look like increasing the 80/20 traffic split to 70/30 after 2 weeks of strong performance, then continuing all the way up to a 50/50 split, assuming the positive trend holds.

Once you hit a 50/50 split and are still seeing increased conversions (or improvements against your KPI goals) from the test campaign, you can fully launch the testing tactic into the market with confidence.
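One way to sketch that ramp logic in Python, assuming (from the example above) that you step the test share up 10 points after every positive 2-week period and cap at a 50/50 split. The step size, cadence, and pass/fail inputs here are illustrative, not a prescribed formula.

```python
# Ramp schedule sketch: 80/20 -> 70/30 -> 60/40 -> 50/50,
# stepping up only after a positive 2-week period.

def next_test_share(current_share: float, period_was_positive: bool) -> float:
    """Return the test-traffic share for the next 2-week period."""
    if not period_was_positive:
        return current_share  # hold steady (or roll back) on weak results
    return min(current_share + 0.10, 0.50)  # cap at a 50/50 split

share = 0.20  # start conservative at 80/20
for period, positive in enumerate([True, True, True], start=1):
    share = next_test_share(share, positive)
    control = int(round((1 - share) * 100))
    print(f"After period {period}: {control}/{100 - control} split")
# Once you're at 50/50 and the test still wins, launch it fully.
```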

This method allows you to start with a more conservative test to minimize risk, then increase the size of the test over time until the tactic is fully implemented in your campaign strategy.

This works for any kind of test. Whether you are testing bid strategies, ad copy, new campaign structures, or anything else, you can always start small and increase over time to minimize the risk of conversion loss.

Test size is important

The size of a test is important and can make or break performance for your client or your business. Take your time ramping up to an appropriate test size - it will also make your data more accurate.

Instead of starting off with a huge test that puts half (or all) of your account conversions at risk, you can now calculate and mitigate that risk by working through the process above.

This process starts with identifying which campaigns to include in your test, calculating the number of conversions at risk, and evaluating how large of a test to launch.

Now get out there and keep on testing!


 


David Lacamera
Sr. Manager, Paid Media