Ad Copy testing can seem like a daunting task, especially when your account has multiple campaigns/ad groups and runs across multiple platforms. However, when you take a step back and really think about the goal behind Ad Copy testing, you start to see the light at the end of the tunnel.
Let’s start at Step One: Why do we run Ad Copy tests?
The answer is simple: to find the best-performing ads, the ones that drive the most conversions.
While all PPC marketers (I hope) realize that this is the end goal of Ad Copy testing, many get overwhelmed by the number of ads in their accounts and get stuck on where to start and how to achieve the goal. It really does not need to be such a stressful task.
Though it would be great to be able to run ad copy tests across all campaigns/ad groups, the fact is there is just not enough time in the day to tackle it all.
Simply (Step Two) start by identifying the under-performing campaigns/ad groups where optimized ad copy could have a positive impact on performance. I recommend using a 60-90 day time frame, though depending on the volume of your account, 30 or even 120 days may be more appropriate. "Under-performing" can mean various indicators, for example: over CPA/CPL/ROAS/ROI goals, low CTR, low conversions, etc.
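If you export your campaign stats to a spreadsheet, this kind of flagging is easy to script. Here is a minimal sketch in Python; the campaign names, numbers, and the CPA/CTR goal values are all hypothetical, and you would swap in your own data and thresholds:

```python
# Sketch: flagging under-performing campaigns from a 60-90 day export.
# All campaign data and goal thresholds below are hypothetical examples.

campaigns = [
    # (name, clicks, impressions, conversions, cost)
    ("Brand",      1200, 24000, 90, 1800.00),
    ("Generic",     800, 40000, 12, 2400.00),
    ("Competitor",  150, 15000,  3,  900.00),
]

CPA_GOAL = 50.00   # example cost-per-acquisition goal
CTR_GOAL = 0.02    # example click-through-rate goal

for name, clicks, impressions, conversions, cost in campaigns:
    ctr = clicks / impressions
    cpa = cost / conversions if conversions else float("inf")
    flags = []
    if cpa > CPA_GOAL:
        flags.append(f"CPA ${cpa:.2f} over goal")
    if ctr < CTR_GOAL:
        flags.append(f"CTR {ctr:.1%} under goal")
    if flags:
        print(f"{name}: {'; '.join(flags)}")
```

A campaign can trip more than one indicator at once; those are usually the best candidates to test first.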
Let’s look at an example of how to choose your under-performing campaigns for your Ad Copy testing. Below I’ve indicated the three campaigns within this data set that I would choose to focus on when running an ad copy test: one has the lowest CTR, one has the highest CPA, and the last has the fewest conversions.
Now that I know where I want to focus my time and energy to improve performance through testing Ad Copy, Step Three is pulling an Ad Report for those campaigns (use the same time frame as you did when selecting under-performing campaigns).
Sort & filter the report by Campaign/Ad group. From there, create a new column with the formula CTR * Conversion Rate. With this formula, you are not just analyzing ads based on CTR, which is not an indicator of conversions, or Conversion Rate, which is not an indicator of traffic. You are able to find the optimal pairing of CTR (the traffic indicator) and Conversion Rate (the conversion indicator). One important caveat: the formula won’t work for evaluating ads with low or no conversions. In that instance, simply base your decisions on CTR.
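The same column is trivial to compute outside a spreadsheet. A minimal sketch, using hypothetical ad data, shows why the combined metric matters: since CTR × Conversion Rate works out to conversions per impression, an ad with a lower CTR can still win if it converts its clicks better:

```python
# Sketch: ranking ads by CTR * Conversion Rate.
# The ad IDs and numbers are hypothetical examples.

ads = [
    # (ad_id, clicks, impressions, conversions)
    ("Ad A", 150, 5000, 12),
    ("Ad B", 210, 5000,  9),
]

def combined_score(clicks, impressions, conversions):
    """CTR * Conversion Rate -- equivalent to conversions per impression."""
    ctr = clicks / impressions
    conv_rate = conversions / clicks if clicks else 0.0
    return ctr * conv_rate

for ad_id, clicks, impressions, conversions in ads:
    print(ad_id, round(combined_score(clicks, impressions, conversions), 5))
```

Here Ad B has the higher CTR, but Ad A scores higher overall because more of its clicks convert.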
Once you have the new formula calculated for all ads, Step Four is ensuring there is statistical significance between ads. Here at SEER, we have an amazing developer who has built us a proprietary internal tool to automatically determine statistical significance between ads based on the formula CTR * Conversion Rate (screen shot below).
If you aren’t as lucky as we are, there are various stat checker tools out there you can use for the same thing. A few of our favorites are: Split Tester, SEO Book Split Tester and Super Split Tester. While you may not be able to check the statistical significance of the formula itself, you can still use these tools to check Clicks/CTR or Conversions/Conversion Rate to ensure the outcome of the formula is statistically significant.
95% is the ideal confidence level to reach before declaring a “winning ad.” However, if your ads consistently hover at an 85-90% confidence level, it may mean they are so similar that they will keep performing neck-and-neck, in which case it’s still time to launch a new ad test.
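If you want to see what these split-testing tools are doing under the hood, a standard two-proportion z-test is one common way to compute the confidence level. This is a sketch of that general technique, not SEER's internal tool; the click and conversion counts are hypothetical:

```python
# Sketch: two-proportion z-test on conversion rates, the kind of check
# split-testing tools perform. Click/conversion counts are hypothetical.
from math import erf, sqrt

def confidence(conv_a, clicks_a, conv_b, clicks_b):
    """Two-sided confidence (0-1) that the two conversion rates differ."""
    p_a = conv_a / clicks_a
    p_b = conv_b / clicks_b
    # Pooled rate under the null hypothesis that both ads convert equally
    p_pool = (conv_a + conv_b) / (clicks_a + clicks_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / clicks_a + 1 / clicks_b))
    z = abs(p_a - p_b) / se
    return erf(z / sqrt(2))  # equals 1 minus the two-sided p-value

conf = confidence(40, 400, 22, 380)  # 10.0% vs ~5.8% conversion rate
print(f"Confidence: {conf:.1%}")     # act only once this reaches ~95%
```

The same function works for Clicks/CTR if you pass clicks and impressions instead of conversions and clicks.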
Now that you have your statistically significant winner in each ad group, you can confidently pause the under-performing ad and replace it with a new ad to test against the winner (Step Five). However, to ensure you are running a well-structured ad test, it is critical to isolate the test to one variable. When writing your new ad, keep all copy consistent with the winning ad, with the exception of a single variable, perhaps the headline, value proposition, call to action, or even display URL. Here is an example of two ads that would adhere to ideal ad testing practices:
Notice the only variable I changed was the value proposition in Line 2: Free Shipping vs. Price Point. Testing just one variable at a time lets you pinpoint exactly why the ads are performing differently.
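This one-variable discipline can even be sanity-checked in code before a test launches. A minimal sketch, with hypothetical ad copy, that compares two variants line by line and complains if more than one line differs:

```python
# Sketch: verifying two ad variants differ in exactly one line, so the
# test isolates a single variable. Ad copy below is hypothetical.
control = ["Buy Running Shoes", "Free Shipping on All Orders", "Shop Now"]
variant = ["Buy Running Shoes", "Shoes Starting at $29.99",    "Shop Now"]

changed = [i for i, (a, b) in enumerate(zip(control, variant)) if a != b]
assert len(changed) == 1, "Test isolates more than one variable!"
print("Variable under test: Line", changed[0] + 1)
```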
One of the best things about optimizing and testing your ads with this strategy is that you settle into a rotation. While your first test (say, the under-performers identified in Step Two) is live and accumulating data, you can start the process all over again with a new round of campaigns. If your first round targeted under-performing campaigns, perhaps round two should tackle your top-converting campaigns, as you can always improve performance by tweaking aspects of your campaigns!
So, any time you want to optimize and test your ads follow the five simple steps for a stress-free testing strategy!
Step One: Define why you run Ad Copy tests (to find the ads that drive the most conversions)
Step Two: Identify your under-performing campaigns/ad groups (or other set of factors to test a specific set of ads)
Step Three: Pull an Ad Report for those campaigns and add the CTR*Conversion Rate formula (where/when applicable)
Step Four: Ensure there is statistical significance between ads
Step Five: Pause the under-performing ad and replace it with a new ad to test against the winner (remembering to isolate just one variable for accurate testing)