
The Most Basic Framework for Developing an A/B Test Hypothesis

Optimization testing has become the hot thing everyone wants to tap into. Some boast that they’ve done it forever – weekly subject line split testing with 10% of their email list, then sending the winner to the remaining 90%.

Maybe you read some variation on color psychology and tested red vs. green on your CTA button. Or perhaps you’ve gotten the buy-in and have invested in a fancy tool but struggle to sustain a full testing program.

No matter where you are on that spectrum, we can all admit that there is room for improvement – otherwise we wouldn’t need testing at all.

“Okay, I have some ideas about improving my conversion rate, I’ve done the research, and I know I need to run an A/B test. What else do I need to get buy-in to start testing?”

You’re now entering the Hypothesis Phase, where you take an idea or observation and figure out why it would impact your business. Whether the idea comes from an intern or an executive, it is important that you can answer these questions before agreeing to test.

  1. What do you see?
  2. What do you think?
  3. How will you know?
  4. What will your test look like?
  5. What will you do?
  6. BONUS: Why should we care?

1) What do you see?

Example: Embedding a form on a landing page rather than a call-to-action button that produces a pop-up form can improve form conversion rate.

Every test you run must be based on some sort of research, whether it is first-hand observation, third-party research, or even something you see your competition doing. In a future post, we’ll explore the top reports and patterns in your analytics that could point to a testing opportunity.

2) What do you think?

Example: By embedding a form onto the landing page, users will be more likely to complete it.

Make your high-level hypothesis. Broken down to its most basic structure, your hypothesis should read: “By [changing this], we expect [audience] to [do this].”

3) How will you know?

Example: Success would yield a higher number of pageviews per session while also lowering the bounce rate of the landing page.

Determine the primary way you will measure the success of your test. In this example, we ultimately want to see more people stay on the site and read more, so that will be our primary measure of success. You can also mention secondary metrics that round out the picture. While Awareness, Engagement, and Monetization are all worthy things to improve, it is important to identify one clearly measurable determiner of success.
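To make that concrete, here is a minimal sketch of computing a primary metric (pageviews per session) and a secondary one (bounce rate) from session data. The records and field names are invented for illustration, not taken from any particular analytics tool:

```python
# Hypothetical session records; field names are illustrative only
sessions = [
    {"pageviews": 1, "bounced": True},
    {"pageviews": 4, "bounced": False},
    {"pageviews": 2, "bounced": False},
    {"pageviews": 1, "bounced": True},
]

# Primary metric: average pageviews per session
pageviews_per_session = sum(s["pageviews"] for s in sessions) / len(sessions)

# Secondary metric: share of sessions that bounced
bounce_rate = sum(s["bounced"] for s in sessions) / len(sessions)

print(f"{pageviews_per_session:.2f} pageviews/session, bounce rate {bounce_rate:.0%}")
```

Whatever the metric, the point is that it should be computable from data you already collect, before the test starts.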

4) What will your test look like?

This mock-up was made in Slides, but I’ve used tools like Visio to illustrate user flow for non-visual tests.

Establish your control and variants. It does not have to be fully mocked-up at this point, but even a rough sketch would be helpful (although you might want to be more polished than me…)

5) What will you do?

Scenario 1: If the embedded form improves email signup rate…

At a confidence level of 95%*, we will add an embedded form on the landing page. This will be the new standard layout for all landing pages using this template.

Scenario 2: If the embedded form decreases email signup rate…

…we will keep the existing email signup button in place and test a different treatment of the button or call-to-action at a later date.

Scenario 3: If the embedded form generates the same email signup rate as the CTA button…

…we will keep the existing email signup button in place and test an additional placement of the button or embedded form on the landing page.

If your hypothesis proves to be correct, what are you going to do about it? And if it doesn’t, are you going to keep the control in place and test another visual treatment or content variation? Put it in the backlog to revisit at another time?

Establishing next steps at this early stage is helpful should your test results require any implementation from developers or designers. It can also shorten approval times between the analysis of test results and the push to production. At the very least, always plan to document and share these findings so others don’t waste their time testing in an area that did not influence user behavior.

It would be a waste of your time if you conduct this test without doing anything with the results!
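One way to hold yourself to the three scenarios above is to write them down as an explicit decision rule before the test runs. A hedged sketch – the `lift` and `p_value` inputs and the action strings are placeholders of mine, not from any specific testing tool:

```python
def next_step(lift, p_value, alpha=0.05):
    """Map a test result to a pre-committed action, mirroring the three scenarios."""
    if p_value >= alpha:
        # Scenario 3: no statistically detectable difference
        return "keep control; test an additional placement"
    if lift > 0:
        # Scenario 1: variant won at the chosen confidence level
        return "roll out embedded form to all landing pages"
    # Scenario 2: variant lost
    return "keep control; test a different CTA treatment"

print(next_step(lift=0.05, p_value=0.01))
```

Deciding this in code (or even just in writing) before seeing the data makes it harder to rationalize an ambiguous result after the fact.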

*Confidence levels are set by you, the test creator, at the beginning of the test (not by what the data tells you after you see the results). Setting a 95% confidence level means you will only declare a winner if there is less than a 5% chance that a difference this large could have arisen from random variation alone. A 95% level is the common recommendation because it corresponds to roughly two standard deviations (1.96, to be precise) from the mean of a normal distribution.

But, depending on your test situation (the amount of data you’re collecting, the risk/effort/impact of implementing a change, timing considerations, etc.), it can make sense to lower your confidence level to 90% or 80%. Regardless, be sure to have a plan to monitor the results of any changes you make as a result of a test, to make sure you got it right!
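If you want to sanity-check significance yourself rather than rely on a tool, a standard approach for comparing two conversion rates is a two-proportion z-test. A sketch using only the Python standard library – the session and signup counts below are invented for illustration:

```python
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, p) comparing two conversion rates, two-sided."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both variants convert equally
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Control: 300 signups in 10,000 sessions; variant: 360 in 10,000
z, p = two_proportion_z_test(300, 10_000, 360, 10_000)
alpha = 0.05  # 95% confidence level, chosen before the test started
print(f"z = {z:.2f}, p = {p:.4f}, significant: {p < alpha}")
```

At a 90% or 80% confidence level you would simply compare `p` against `alpha = 0.10` or `0.20` instead.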

6) Why should we care?

Overall site email signup rate is 3%. Average value per email is $20. Embedding the email signup form in the landing page removes one step for the user. If we can improve the email signup rate by a relative 5% (from 3% to 3.15%), we will get an additional 1.5 email signups per thousand sessions. That’s an additional $30 per thousand sessions driven to the site.

This is the doozy, and it’s where you need to do some research. Why should your company test this? Should it be prioritized? What will it impact? Offering a couple of scenarios (a conservative case and an optimistic case) provides a range of outcomes that is easier to digest than a single hard number. Do the math, because numbers will prove the value of this test, especially in comparison to other tests fighting for a chance to see the light of day.
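The arithmetic in the example above is simple enough to capture in a few lines so stakeholders can plug in their own conservative and optimistic figures. A sketch – the function name and default session count are mine, not from the post:

```python
def incremental_value(base_rate, value_per_conversion, relative_lift, sessions=1000):
    """Extra revenue per `sessions` visits if the conversion rate improves by `relative_lift`."""
    extra_conversions = sessions * base_rate * relative_lift
    return extra_conversions * value_per_conversion

# Figures from the example: 3% signup rate, $20 per email, 5% relative lift
conservative = incremental_value(0.03, 20, 0.05)   # 1.5 extra signups -> $30 per 1,000 sessions
optimistic = incremental_value(0.03, 20, 0.10)     # a more hopeful 10% lift scenario
print(f"conservative: ${conservative:.2f}, optimistic: ${optimistic:.2f}")
```

Running both cases side by side gives stakeholders the range the text recommends instead of a single promised number.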

These six questions form the basic framework for the Hypothesis phase of your A/B test. By answering them, you’ve organized your ideas into a pitch to get buy-in from your key stakeholders.

Got a stakeholder who likes to get into the weeds or just has more questions (sample sizes, test dates, confidence levels)? Tell us about it in the comments below! We’d love to cover some of these and more in our next post about the Prep phase.