It's no news that more companies are running experiments every day to accelerate their growth.
But a common problem among them is that their tests stay confined to isolated moments in their customers' journey.
It's usually easier to start experimenting at the start of the journey, like on landing pages. But expanding experimentation to more advanced steps brings additional complexities that relate to technology and even culture.
This "short-sightedness" can lead to a number of strange situations. For instance:
In the work we do at Seer helping in-house Growth, Marketing, and CRO teams evolve their Experimentation programs, I've run into the following scenario a few times over the years, at otherwise amazing SaaS companies:
The Marketing/Growth team devoted most of their time to a grinding battle of tests chasing marginal conversion-rate gains (5%, 3%, 1%... sometimes less) on sites and landing pages that were already well optimized. Meanwhile, over 80% of people who started the SaaS trial never returned … 🤔
Strange situation, I know. But surprisingly common. And the reason relates to what I said above: it is instinctively more common (and technologically easier) for Growth professionals to try to optimize customer acquisition.
But often the biggest growth opportunities are further ahead in the customer journey.
And if there is one thing that hasn't been explored much out there, it is applying marketing Experimentation techniques and culture to customer retention.
Therefore, doing Experimentation to increase Retention is our main subject in this post.
Learn how to use experimentation for retention with these four steps:
- Visualize the customer journey
- After your analysis, decide on the next experiment
- Create your hypothesis
- Run the test, learn, and start again
Retain or Die
It is rare to see retention treated as the key metric of a business. Metrics related to new customer acquisition typically take priority because their short-term impact is greater. And since most teams work toward short-term results (monthly or quarterly goals, for instance), this ends up being the natural trend.
But looking at it objectively, it makes no sense to not prioritize retention.
It is mathematically indisputable that the key to the accelerated and sustainable growth of a business lies in its ability to retain customers. After all, winning new customers tends to be considerably more expensive than keeping the ones you've already won engaged.
Not to mention the positive impact that high retention has on metrics vital to competitiveness, such as LTV (Lifetime Value: the revenue a customer generates for a company over the course of their relationship) and MRR (Monthly Recurring Revenue).
Notice my emphasis on the word "competitiveness" above.
This is because a company that faces challenges in retention also tends to be run over by its competitors when it comes to acquisition. And do you know why?
Businesses with low retention tend to have low LTVs. And when your LTV is low, you can't spend much to win over a new customer. In other words, you have to work with a strained CAC (Customer Acquisition Cost).
The problem is that CAC naturally tends to increase over time. And the first companies to see CAC reach an unsustainable point are the ones with the lowest retention.
Low Retention —> Low LTV —> Strained CAC —> Limited Acquisition = 💀
The CAC becomes unsustainable particularly fast if your market is getting more competitive online (as most are). After all, you end up competing for attention and advertising space with more companies, some of them backed by investors and spending money like crazy.
I bet a lot of readers here are going through this situation right now, huh?
This is a dangerous battle where the vast majority of players lose. To ensure your survival, you need to not only increase your conversion rate at the top of the funnel, but also your LTV (retention).
The higher your LTV, the more room you have to survive the trend of increasing CAC.
So ironically, even for those who want to prioritize acquisition, one of the best things to do is to invest in retention.
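The chain above is easy to see with some back-of-the-envelope math. A minimal sketch, using a simple subscription model (LTV = ARPU / monthly churn) and entirely hypothetical numbers:

```python
# Illustrative sketch: how retention (churn) drives LTV, which in turn
# caps how much you can afford to spend acquiring a customer (CAC).
# All figures are hypothetical; real models are more nuanced.

def ltv(arpu: float, monthly_churn: float) -> float:
    """Lifetime value under a simple model: ARPU divided by churn rate."""
    return arpu / monthly_churn

def max_cac(ltv_value: float, target_ratio: float = 3.0) -> float:
    """Max affordable CAC under a common LTV:CAC target (e.g. 3:1)."""
    return ltv_value / target_ratio

arpu = 50.0  # hypothetical $50/month subscription

low_retention = ltv(arpu, monthly_churn=0.10)   # 10% monthly churn
high_retention = ltv(arpu, monthly_churn=0.02)  # 2% monthly churn

print(f"LTV at 10% churn: ${low_retention:,.0f} -> max CAC ${max_cac(low_retention):,.0f}")
print(f"LTV at  2% churn: ${high_retention:,.0f} -> max CAC ${max_cac(high_retention):,.0f}")
```

Same product, same price: the company with 2% churn can afford to outbid the 10% churn company several times over for the same customer.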
How to Expand Experimentation to Increase Your Retention
Now that we're aligned on how unsustainable it is to focus your tests only on the top of the funnel, let's move on to practical actions: how can you start including customer retention opportunities in your A/B testing strategy?
Visualize the customer journey
In order to act on every stage of your customer's journey, the first thing you need to do is visualize the journey.
In the data visualization software of your choice (Google Data Studio, Power BI, Looker...), create a "real-time" and reliable report showing the conversion rate data for each step of the path taken by the customer in your company: from the initial traffic acquisition to the steps where they are already a brand evangelist.
Each action that works to advance the user on this path deserves to have its conversion rate monitored: payments, upsells, use of important features, a-ha moments…
If you've created a customer journey map before, you know it can feature a lot more information, like customer emotions, thoughts, pains, etc. But the focus in this visualization is the conversion rate at each moment. You want to know where your customers are losing engagement.
The journey can be visualized in several ways. To illustrate, the image above shows one of the visualizations we created for an international client I've worked with. The gray circles represent the various conversion rates that were relevant to the business. This dashboard is the starting point of every experiment.
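Whatever visualization tool you use, the underlying numbers are simple: for each step, divide the people who completed it by the people who completed the previous step. A minimal sketch with hypothetical step names and counts:

```python
# Step-to-step conversion rates for a customer journey dashboard.
# Step names and counts below are hypothetical examples.

journey = [
    ("Visited site", 100_000),
    ("Started trial", 8_000),
    ("Completed onboarding", 3_200),
    ("Upgraded to paid", 640),
]

def step_conversion_rates(steps):
    """Conversion rate of each step relative to the previous one."""
    rates = []
    for (prev_name, prev_n), (name, n) in zip(steps, steps[1:]):
        rates.append((f"{prev_name} -> {name}", n / prev_n))
    return rates

for label, rate in step_conversion_rates(journey):
    print(f"{label}: {rate:.1%}")
```

Feed these rates into your dashboard of choice; the step with the weakest (or recently worsening) rate is your candidate for the next experiment.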
After your analysis, decide on the next experiment
Before deciding which will be the next experiment that you (or your team) will work on, first assess on which moment of the journey you should focus. Is there a moment where the conversion rate drops more than usual? Has anything gotten worse lately?
This may sound silly, but if you don't define the moment that should be prioritized before the experiment itself, you run the risk of being seduced by a hypothesis that sounds interesting but happens at a moment in the funnel that is currently not the most pressing issue.
Here's an analogy for soccer fans:
Focusing on a "cool" hypothesis in a stage of the funnel that already outperforms the others is like Raheem Sterling trying to become a better player by improving his speed and dribbling when it's actually his finishing that's lacking.
Is it nice to be faster and dribble better? Of course. But that's not where the biggest bottleneck for growth lies.
By focusing on the right problem, you can resist the natural urge to run yet another improvement on that landing page while your onboarding is still terrible, or while almost no one upgrades from your free plan to the paid version.
Let's look at an example where this full journey analysis can be very useful.
Say you're on Pinterest's Growth/CRO team. Your Business Intelligence (BI) team has identified a strong correlation between the number of pins users save on the platform and the retention of those users.
You immediately pull up your customer journey report to see how you're doing at this step. And unfortunately, you notice things are not great:
Users who save a pin are 3 times more likely to stay active. However, 70% of all new users never save a single pin.
This is a huge gap in the customer journey, and it deserves to be prioritized. The decision is made: the moment of the journey that will be the focus of the next experiment is when new users save their first pin.
Create your hypothesis
With the focus area defined, you (or your team) already know you need to concentrate on experiments that get more users to save their first pin. No landing pages or anything like that for now.
Notice that this focus allows the team to analyze the problem in more depth, which naturally tends to lead to better hypotheses and solutions.
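The prioritization logic in this example can also be sketched as back-of-the-envelope arithmetic. The only figures taken from the example are the 3x retention multiplier for pin savers and the 70% of new users who never save a pin; every other number is hypothetical:

```python
# Rough estimate of the retention upside of closing the "first pin" gap.
# Baseline retention rates below are hypothetical; the saver rate is 3x
# the non-saver rate, per the example.

new_users = 10_000
retention_non_saver = 0.10  # hypothetical baseline
retention_saver = 0.30      # 3x more likely to stay active

def retained(share_savers: float) -> float:
    """Expected retained users for a given share of pin savers."""
    savers = new_users * share_savers
    non_savers = new_users - savers
    return savers * retention_saver + non_savers * retention_non_saver

baseline = retained(0.30)  # today: 70% never save a pin
# If an experiment converts 10 more percentage points of users into savers:
improved = retained(0.40)
print(f"Retained users: {baseline:.0f} -> {improved:.0f} "
      f"(+{(improved / baseline - 1):.0%})")
```

Even a modest shift in the share of savers moves the retained-user count more than most landing-page wins ever could, which is exactly why this moment of the journey wins the prioritization.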
Run the test, learn, and start again
- With a defined hypothesis, implement a test and monitor performance
- With the finished test, analyze results, both quantitatively and qualitatively
- Have new ideas emerged from the analysis? Excellent! Document them as points to explore in the future
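For the quantitative part of the analysis, most testing tools report significance for you, but it helps to understand what they compute. A minimal sketch of a two-proportion z-test comparing control and variant conversion rates, with hypothetical numbers:

```python
# Two-proportion z-test: is the variant's conversion rate significantly
# different from control's? Illustrative only; in practice, rely on your
# testing tool's stats engine and a pre-registered sample size.
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Z statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical results: 300/5000 conversions in control, 360/5000 in variant
z = two_proportion_z(conv_a=300, n_a=5000, conv_b=360, n_b=5000)
print(f"z = {z:.2f}")  # |z| > 1.96 is significant at the 95% level
```

The qualitative side (session recordings, user feedback, support tickets) is what tells you *why* the number moved, so always do both.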
Now it's time to restart the testing cycle.
Always remember to start by defining the moment in the journey you want to prioritize, rather than discussing specific hypotheses.
Even if you decide to continue working on the same stage of the funnel because it still deserves to be prioritized, it is essential to look at the entire journey first, with current data, to know if any other relevant problems arose.
The process looks like this:
- Document conversion rates from the customer journey in a real-time dashboard
- Define, while analyzing the dashboard, which stage of the customer journey should be prioritized
- Hold a focused team discussion to define how to improve the conversion rate in the chosen stage of the customer journey (create a hypothesis)
- Implement the hypothesis and execute the experiment
- Analyze the results and record any new hypotheses that emerge
It is important to make it clear that it is not always simple to run tests further ahead in your funnel. You may face challenges such as:
- Difficulty setting up an A/B testing tool for your product. Especially if the product is a mobile app
- Difficulty tracking results for some more bottom-of-funnel metrics, which are not always available in tools like Google Analytics
- Difficulty running tests for features that rely on the back end, such as a recommendation algorithm
If you use Google Optimize, you may have a hard time with all three of the examples above. Not that it isn't a good tool for web testing. It is. And its native integration with Google Analytics is a huge plus. But for more advanced experimentation programs, it unfortunately has some limitations.
In no way should you feel intimidated by these technical barriers, though. None of them is insurmountable.
Besides, there are already several A/B testing tools ready to help you get past common problems like these.
To choose correctly, look for tools that:
- Allow server-side testing
- Allow in-app testing (if you have an app)
- Integrate with your data source to run tests with more bottom-of-funnel goals (e.g. number of active customers after 30 days)
- Integrate with your data source to run tests for more complex segmentations (e.g. only subscribers of plan X)
💡 Remember! There are several tools on the market, and the best one for you will depend on the specifics of your scenario. So do your research before you choose.
Anyway, to help you get started, some examples of tools that can help cover your entire funnel are Kameleoon, Oracle Maxymiser, and VWO.
And of course, you can also create your own built-in tool exactly the way you need it, in case you have a real need and internal resources to do so. In fact, tools for experimentation make for a complex topic that deserves its own article. If you would be interested in an issue of the newsletter on this topic alone, let me know!
Because it is the most efficient way to evolve customer experiences today, experimentation should not be limited to landing pages or other parts of the top of your funnel. It needs to be the standard method for evolving your entire customer journey, including your product as a whole.
This approach will lead your business to have better retention results, which is increasingly necessary to survive the growing competitiveness of the market.
Don't get complacent by only optimizing certain points of your conversion funnel. Look at your company and business as a whole. Make experimentation knowledge and technology available to all teams, and watch their growth accelerate.
This is not a theory. It is what companies with the best results in all highly competitive markets do.
Need help identifying areas within your funnel to optimize? Contact us to learn what CRO services are offered at Seer, and how we can work together to increase your retention through experimentation and testing.