Optimization testing is the hot thing everyone wants to tap into. Some boast that they’ve done it forever: weekly subject line split tests sent to 10% of their email list, with the winner going out to the remaining 90%.
Maybe you read something about color psychology and tested a red vs. green CTA button. Or perhaps you’ve gotten the buy-in and invested in a fancy tool but struggle to sustain a full testing program. No matter where you are on that spectrum, we can all admit there is room for improvement – otherwise we wouldn’t need testing at all.
This post will walk you through the 6 basic questions you must answer when you enter the Hypothesis Phase. This phase begins when you take an idea or observation about your website and identify a way to change it that impacts your business. Whether the idea comes from an intern or an executive, it is important that you can answer these questions before agreeing to start A/B testing.
- What do you see?
- What do you think?
- How will you know?
- What will it look like?
- What will you do?
- BONUS: Why should we care?
Asking these 6 questions can help you get A/B testing buy-in from key stakeholders.
Example: Embedding a form on a landing page rather than a call-to-action button that produces a pop-up form can improve form conversion rate.
Every test you run must be grounded in some sort of research, whether it is first-hand observation, third-party research, or even something you see your competition doing. In a future post, we’ll explore the top reports and patterns you can find in your analytics that could point to a testing opportunity.
Example: By embedding a form onto the landing page, users will be more likely to complete it.
Make your high-level hypothesis. Broken down to its most basic structure, your hypothesis should read: “By [changing this], we expect [audience] to [do this].”
Example: Success would yield a higher number of pageviews per session while also improving the bounce rate of the landing page.
Determine the primary way you will measure the success of your test. In this example, we ultimately want to see more people stay on the site and read more, so that will be our primary measure of success. You can also mention secondary metrics that round out the picture. While broad goals like Awareness, Engagement, and Monetization are all worth improving, it is important to identify one clearly measurable determiner of success.
This mock-up was made in Slides, but I’ve used tools like Visio to illustrate user flow for non-visual tests.
Establish your control and variants. It does not have to be fully mocked-up at this point, but even a rough sketch would be helpful (although you might want to be more polished than me…)
Scenario 1: Embedded Form Increases Signup Rate +%
At a confidence level of 95%*, we will add an embedded form on the landing page. This will be the new standard layout for all landing pages using this template.
Scenario 2: Embedded Form Decreases Signup Rate -%
We will keep the existing email signup button in place and test a different treatment of the button or call-to-action at a later date.
Scenario 3: Embedded Form Doesn’t Change Signup Rate –%
We will keep the existing email signup button in place and test an additional placement of the button or embedded form on the landing page.
If your hypothesis proves to be correct, what are you going to do about it? And if it doesn’t, are you going to keep the control in place and test another visual treatment or content variation? Put it in the backlog to revisit at another time?
Establishing next steps at this early stage is helpful should your test results require any implementation from developers or designers. It can also shorten approval times between the analysis of test results and the push to production. At the very least, always plan to document and share your findings so others don’t waste their time testing in an area that did not influence user behavior.
It would be a waste of your time if you conduct this test without doing anything with the results!
*Confidence levels are set by you, the test creator, at the beginning of the test (not by what the data tells you after you get the results). In this case, setting a 95% confidence level means you will only accept a result if there is less than a 5% chance that a difference that large would show up by random chance alone. Generally, 95% is the recommended level because it corresponds to roughly two standard deviations (1.96) of difference from your mean.
But, depending on your test situation (the amount of data you’re collecting, the risk/effort/impact of implementing a change, timing considerations, etc.), it can make sense to lower your confidence level to 90% or 80%. Regardless, be sure to have a plan to monitor results of any changes you make as result of a test to make sure you got it right!
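If you want to sanity-check whether a result actually clears the confidence level you set, a two-proportion z-test is the standard calculation behind most testing tools. Here is a minimal sketch using only Python's standard library; the visitor and signup counts are hypothetical, not from a real test.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z score, two-sided p-value) for the difference between
    two conversion rates: conversions / sessions for control and variant."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the assumption that both groups convert the same
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical data: control 300 signups / 10,000 sessions (3.0%),
# variant 345 / 10,000 (3.45%)
z, p = two_proportion_z_test(300, 10_000, 345, 10_000)
confidence_level = 0.95
significant = p < (1 - confidence_level)
```

With these made-up numbers the p-value lands around 0.07, so the lift would clear a 90% confidence level but not 95% – exactly the kind of situation where the confidence level you committed to up front decides the call.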
Overall site email signup rate is 3%. Average value per email is $20. By embedding the email signup form into the landing page, we remove one step from the signup process. If we can improve the email signup rate by a relative 5% (from 3% to 3.15%), we will get an additional 1.5 email signups per thousand sessions. That’s an additional $30 per thousand sessions driven to the site.
This is the doozy, and it’s where you need to do some research. Why should your company test this? Should it be prioritized? What will it impact? Offering a couple of scenarios (a conservative case and an optimistic case) provides a range of outcomes that is easier to comprehend than a single hard number. Do the math, because numbers will prove the value of this test, especially in comparison to others fighting for a chance to see the light of day.
Got a stakeholder who likes to get into the weeds or just has more questions (sample sizes, test dates, confidence levels)? Tell us about it in the comments below.
Subscribe to our newsletter to receive monthly digital marketing updates!