Back in October, I wrote a post on how to use Google's ACE tool for landing page testing. In December, Google launched a new feature for this innovative tool that makes ad copy testing with ACE possible. They describe the new feature in detail in their blog post linked here.
As the post explains, "In the past, you may have used ad rotation to test ad variations within an ad group." The main difference between ACE and ad rotation is that ACE allows you to decide the percentage of traffic on which you want your experiment to run - giving you more control over which ads show when.
In addition to that benefit, ACE automatically calculates whether or not statistical significance has been achieved, and it is very useful to see this right in the interface.
Recently I took over an account for a client who had been managing paid search in-house. They had some really strong ad description lines in the account that I wanted to keep running, so I decided to test the existing description line vs. a new description line for each ad group.
I planned to analyze whether the new variations outperformed the existing ones. I could have done this by setting my ad copy to rotate evenly; once data was collected, I would export all the ad copy into Excel and use a Pivot Table to compare the Control ads vs. the Experiment ads. Instead, I decided to take advantage of the new ad copy feature that Google launched for ACE. ACE made it easy to test my ad copy and see which ad variations were statistically significant winners at the campaign level or within individual ad groups.
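For anyone curious what that Excel pivot-table comparison looks like, it can be sketched in a few lines of Python. The Control/Experiment labels mirror the ACE split, but the per-ad click and impression numbers below are invented purely for illustration:

```python
# Hypothetical per-ad stats: (label, clicks, impressions).
# In practice these rows would come from an AdWords ad report export.
ads = [
    ("Control", 120, 4000),
    ("Control", 95, 3500),
    ("Experiment", 150, 4200),
    ("Experiment", 88, 3100),
]

# Aggregate clicks and impressions by label, like a pivot table would.
totals = {}
for label, clicks, impressions in ads:
    c, i = totals.get(label, (0, 0))
    totals[label] = (c + clicks, i + impressions)

# Compare CTR for each group.
for label, (clicks, impressions) in sorted(totals.items()):
    print(f"{label}: CTR = {clicks / impressions:.2%}")
```

This gives the per-group CTR comparison, but unlike ACE it says nothing about statistical significance on its own.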
Below is a step-by-step guide to how I used ACE for ad copy testing:
- Once you have your campaign and ad copy uploaded, navigate to the campaign settings. Scroll to the bottom, click the plus box, and add the details of your experiment. If you are testing new ads vs. current winners, you may want your control to receive a greater percentage of traffic than your experiment ads. If you expect your ad groups to get a lot of volume, you can run the experiment on a smaller share of traffic - for example, 10% vs. 90% for your control. If you do not expect much volume, choose a split closer to 50/50 so that each variation gathers enough data to reach statistical significance. Next, add the date you want to launch your test and the date you would like the experiment to automatically stop running, and hit save. Set a reminder for yourself noting the end date, because your experiment ads will stop running on that date.
- Now that you have started an experiment, navigate to view all ads within the campaign. Create a filter so that only the ads you plan on setting as the Control are visible. For instance, in my example all of my new ads contained BBB in the description line, while the client's existing description lines did not. So under Filter I selected Ad Text > does not contain > BBB and hit apply.
- Only the ads that will be set as the Control should now be visible. Select all ads, and under Status select Control only.
- Repeat the previous step, but now create a filter that shows only the ads you would like to set as your Experiment. In my example, I set the filter to Ad Text > contains > BBB. Select all visible ads and change their status to Experiment.
- Now let your campaign run as normal and wait for data to accumulate. You can view your results at the campaign level, to see how your Experiment ads are performing as an entire group, or at the ad group level.
- Viewing results at the campaign level: The example below shows that no statistically significant winner has been found for CTR when comparing all Experiment ads vs. Control ads as a group, since the up and down arrows are greyed out.
- Viewing results at the ad group level: The example below shows that a statistically significant winner for CTR and clicks has been found for the one ad group detailed.
The Experiment ad actually lost in this example. The single down arrow for clicks and CTR signifies, with 95% confidence, that the Experiment ad performs worse on these metrics than the Control.
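The 95% confidence call behind those arrows is essentially a two-proportion significance test on CTR, and the same math explains the earlier traffic-split advice: detecting a small CTR difference takes many impressions per variation, which is why low-volume ad groups need a split closer to 50/50. A minimal sketch - the function names, click counts, and impression counts below are my own invented illustration, not Google's actual implementation or the real account's data:

```python
import math

def ctr_significance(clicks_a, impr_a, clicks_b, impr_b):
    """Two-proportion z-test on CTR; returns (z, significant_at_95)."""
    p_a, p_b = clicks_a / impr_a, clicks_b / impr_b
    p_pool = (clicks_a + clicks_b) / (impr_a + impr_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / impr_a + 1 / impr_b))
    z = (p_b - p_a) / se
    return z, abs(z) > 1.96  # 1.96 = two-sided 95% confidence threshold

def impressions_needed(ctr_a, ctr_b):
    """Rough per-variation impressions to detect a CTR difference at
    95% confidence with 80% power (normal approximation)."""
    z_alpha, z_beta = 1.96, 0.84
    variance = ctr_a * (1 - ctr_a) + ctr_b * (1 - ctr_b)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (ctr_a - ctr_b) ** 2)

# Invented counts: the Experiment ad's CTR (2.2%) trails the Control's (3.0%).
# A negative z past the threshold means the Experiment is significantly worse.
z, significant = ctr_significance(300, 10000, 220, 10000)
print(f"z = {z:.2f}, significant at 95%: {significant}")

# Detecting a smaller lift (2.0% -> 2.5% CTR) takes roughly 14,000
# impressions per variation - hence the near-50/50 advice for low volume.
print(impressions_needed(0.020, 0.025))
```

ACE does all of this for you in the interface; the sketch is only meant to show what the arrows are summarizing.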
Using ACE to test ad copy is very easy, and results and statistical significance can be seen right in the interface without any extra computation. Below are two ideas that would make this tool even more useful for ad copy testing:
- The ability to export experiment results. Right now it is not possible to pull a report with the experiment segments broken out. This would be helpful for reporting findings to clients, as well as for keeping historical data, since once you stop your experiment this data is lost.
- Automatic alerts: It would be great if Google added the functionality to alert you when you have a winning ad copy variation or when your experiment is coming to an end. This would be helpful if you were running tests across multiple accounts, campaigns, or ad groups, so you do not have to remember to check results or keep track of the end dates you set.
As this tool is still in Beta, I am confident that Google will eventually add more features to ACE. For now, I still find it incredibly useful for testing ad copy.