Inside Growth at Wistia: The Process Behind Our A/B Tests
March 28, 2018
Over the past three years here at Wistia, we've run over 150 A/B tests to improve our conversion rates and funnel metrics. We've run tests at all phases of the funnel and across most of the channels we use to communicate with our customers. We've tested website pages, content, product design, email flows, our sales experience, and even the way our plans are set up.
I'll admit it: at first we had no idea what we were doing. We didn't know anything about setting data-driven hypotheses, much less analyzing the results. Naturally, we made tons of mistakes. But by running all these tests, we actually avoided making some pretty bad decisions, increased our conversion rates, and also learned a ton along the way.
As time went on, we started to add more process to the mix. That process ultimately drove more learnings, and slowly, those learnings turned into results. As a video company, we naturally run many tests that involve video in some way. But the process is exactly the same whether you're testing video, copy, images, complete page layouts, navigation updates, user onboarding flows, pricing models, you name it. Regardless of what you choose to A/B test, when it comes down to it, you need a thorough process, and that's what I'm here to talk about in this post!
Most marketers aren't running experiments
As a startup marketer and data nerd, I used to dream about working at a company that had a big focus on testing and optimization. I read articles from "famous" marketers, hung out on Growthhackers, watched tons of videos from growth pioneers, and even took online classes from Reforge. But I never actually ran many tests, and I started to notice that a lot of the marketers I was following only shared insights into why they ran A/B tests, but not how to actually do it.
How do you balance your priorities, choose between ideas, implement a winner, or even know when to cut your losses and move on? I was convinced it had to be just me; surely every other marketing team on the planet had this stuff figured out. Right?
Turns out, I was wrong.
"How do you balance your priorities, choose between ideas, implement a winner, or even know when to cut your losses and move on?"
After being a bit vulnerable with like-minded, data-driven folks in the space, I learned that tons of marketers out there just don't have the systems and tools in place to establish a culture of experimentation and A/B testing. In other words, I wasn't alone.
As we started to learn more and refine our testing, we began sharing our results in blog posts and at conferences. It became clear pretty fast that this was something marketers really wanted to learn more about, so we thought we'd share what goes on here behind the scenes of our own growth experiments.
Three must-have documents
So you're ready to start A/B testing and hopefully learn a bit more about what makes your customers tick. These are the three documents I recommend setting up first to lay the groundwork for all of your current and future experiments.
1. The Master Ideas log
We use this document to keep track of every experiment or testing opportunity we've ever had. Along with each idea, we include a one-sentence description, its ICE score (we'll cover that later), who suggested the idea, why they think it will be successful, and any blockers. When we're set to run an experiment, we also include a link to the individual experiment doc.
This document should have three important tabs:
- Completed work: a record of what you've done and the results
- In progress: an indication of what tests are currently running
- Backlog of ideas: a living, breathing tab that folks can contribute to
We consider this spreadsheet the home base for all of our A/B testing work. It's a great log of everything we're working on and can easily be shared with everyone from our executive team to new hires. Need a high-level look at what growth work is being done and how successful the tests are? Check the Master Ideas log.
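If it helps to picture the columns, here's a minimal sketch of what a row in a log like this might look like. The field names, example entries, and the idea of treating the three tabs as status filters are all illustrative assumptions, not our actual spreadsheet.

```python
from dataclasses import dataclass

@dataclass
class Idea:
    """One row in a hypothetical Master Ideas log (field names are illustrative)."""
    description: str     # one-sentence description of the idea
    ice_score: float     # average of Impact, Confidence, and Effort (covered later)
    suggested_by: str    # who proposed the idea
    rationale: str       # why they think it will be successful
    blockers: str        # anything standing in the way
    status: str          # "backlog", "in progress", or "completed"
    experiment_doc: str  # link to the experiment doc once the test is scheduled

ideas = [
    Idea("Swap the hero image for a short product video", 4.3, "teammate A",
         "Survey responses suggest visitors don't get what the product does",
         "", "backlog", ""),
    Idea("Shorten the signup form", 3.7, "teammate B",
         "Session recordings show drop-off on the last field",
         "needs design time", "in progress", "link-to-experiment-doc"),
]

# The three tabs are effectively views of the same list, filtered by status.
backlog = [idea for idea in ideas if idea.status == "backlog"]
in_progress = [idea for idea in ideas if idea.status == "in progress"]
completed = [idea for idea in ideas if idea.status == "completed"]
```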
2. The A/B Test roadmap
This document lays out our overall A/B testing roadmap. It covers the next tests we're going to run, how long we're going to run them, and the time needed for design, copy, and build. We typically review it once a month and use it to drive our weekly sprints.
As you start to run more tests, one of the questions that almost always comes up is: "How can I optimize the number of tests that are running at any given time without overlap?" By setting up a roadmap, you can easily get a bird's-eye view of those potential conflicts so you can effectively communicate with your design and development teams right from the start.
3. An Experiment Test document
This is where we log all of the granular, detailed pieces of information about a specific test. The experiment doc should include your hypothesis, what you hope to achieve, the design of the test, and what you'll learn, whether it's a win or a loss.
As a general best practice, you should always document your tests. With so many moving pieces and sometimes several tests running at the same time, having a record of everything you're doing and, eventually, what you learned from it will help fuel future ideas for other areas of the business. Plus, being explicit with your documentation makes it easy for anyone else at the company to see what you're up to without constantly asking for updates on how a test is progressing.
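To make that concrete, here's a rough skeleton of the fields an experiment doc might capture, kept as a simple Python dict so it's easy to copy for each new test. The field names are one interpretation of the pieces described above, not a template we publish.

```python
# A hypothetical experiment doc skeleton; duplicate and fill one in per test.
experiment_doc = {
    "name": "",                  # short experiment name (see the naming convention later)
    "hypothesis": "",            # what we believe will happen, and why
    "success_metric": "",        # what we hope to achieve, e.g. free-account signups
    "design": "",                # control vs. variant, and where the test runs
    "planned_duration_days": 0,  # how long the test needs to run
    "results": "",               # filled in once the test is complete
    "learnings": "",             # what we learned, whether it's a win or a loss
}
```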
Putting the process to work
Identify areas in the funnel that aren't working well
In general, we try to identify areas we know we have a shot at improving. For example, if there's an important page on our site, like a product page, that doesn't seem to be converting well, we'll run a survey and ask something like, "What's stopping you from creating a free account right now?" Or similarly, if it's a page that's driving sales conversions, we might ask, "Why haven't you signed up to chat with a member of our sales team yet?"
Surveys really help us gain insight into the temperature of folks on the page, which is valuable information we can use to drive new experiments. In other instances, however, we'll use tools like FullStory or CrazyEgg to watch our users interact with our content. These tools let you physically see where your customers get stuck or distracted in your product. That text link you thought was super obvious? Think again.
"Surveys really help us gain insight into the temperature of folks on the page, which is valuable information we can use to drive new experiments."
At the end of the day, real users are our best source of insights. We use these insights to identify key problems. We then take those problems and come up with ways to solve them, which leads to further experimentation and even more learning.
Brainstorm ideas to solve the problems
When it comes to creative ideas for solving problems, your tendency will likely be to jump right to the perfect solution first. And if the problem is super obvious, that might actually work just fine.
But most of the time, you'll probably start with the wrong solution. Or at least, that's what we did here at Wistia before we developed a system to help guide us. We'd come up with an obvious, quick fix to a problem only to be totally let down by the results. So after lots of failures and time spent guessing, we found that the following process helped us pick the ideas that are likely to have the biggest impact.
First, we get in a room and brainstorm a bunch of different solutions to the problem at hand (this is the really fun part). We typically invite folks from departments across the business (like customer happiness, customer success, sales, marketing, and creative) to help solicit ideas.
Rank your ideas to find top contenders
We rank our ideas using the ICE framework, which was pioneered by Sean Ellis from Growthhackers. It isn't rocket science, but we've found that it's super helpful to have some way to determine which idea is actually worth your time. Here's a quick overview of the process.
- The "I" stands for Impact. Does the idea have the potential to make a big impact on your conversion rates? If it's a little change, it will probably have a little impact. Bigger changes tend to produce bigger results.
- The "C" stands for Confidence. Do we think the idea has a high chance of succeeding, or is it just a random idea that somebody tossed out? This one is usually the hardest to be honest about, and it's where the user research and information we collected in the previous stages become super helpful.
- The "E" stands for Effort. Is this an idea that we can execute in a couple of hours, or is it something that will take weeks or months of design and engineering time?
Together, we rank each of the ICE components on a 1 to 5 scale, then take the average to determine which ideas are the strongest.
The result? A prioritized list of ideas that will help us improve conversion rates. Typically, we do this on a whiteboard at first to keep things fluid, and then transcribe it into the backlog tab of our Master Ideas log. That way, we always have a running list of good ideas that have already been ranked, which makes it easier for us to determine which projects we want to work on sooner rather than later.
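For illustration, here's a small sketch of the scoring math. One assumption worth flagging: for a straight average to reward easy wins, Effort has to be scored as ease (5 meaning a couple of hours, 1 meaning months of work). The idea names and scores below are made up.

```python
def ice_score(impact: float, confidence: float, effort: float) -> float:
    """Average the three 1-5 component scores (effort scored as ease: 5 = trivial)."""
    return (impact + confidence + effort) / 3

# Hypothetical brainstorm output: (idea, impact, confidence, effort-as-ease)
candidates = [
    ("Rewrite the pricing page headline", 3, 4, 5),
    ("Add a product video above the fold", 4, 3, 3),
    ("Redesign the entire onboarding flow", 5, 2, 1),
]

# Rank the ideas from strongest to weakest before moving them into the backlog.
for name, i, c, e in sorted(candidates, key=lambda x: ice_score(*x[1:]), reverse=True):
    print(f"{ice_score(i, c, e):.2f}  {name}")
```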
Start your test document
Creating a test document is a critical part of this process. We write down our hypothesis and why we believe it, what success looks like, and all the supporting information we have about the test. We also leave space for how long the test will need to run, the results of the experiment, and what we learned once the test is complete.
Another key component you should consider (and get ready to put your dorky hat on for this one) is naming each experiment with an alphanumeric string. For us, this usually looks something like WC001, WC002, etc. (the WC in this example refers to wistia.com), but we also use an EMAIL prefix for email tests and APP for product tests. When you're scaling up your testing, these names will become more helpful than you know.
We kept having conversations that sounded a little something like this: "You know the test I'm talking about, right? The one with the funny video. Or wait, are you talking about the one from December with Lenny?" Referencing a test can be confusing without the proper descriptors in place, so do it right the first time and make a naming convention.
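If you'd rather mint those names programmatically than by hand, a tiny helper like the sketch below does the trick. The prefixes mirror the examples above; the zero-padding width and the per-prefix counter are assumptions about how you might track the running count.

```python
from collections import defaultdict

_counters: defaultdict[str, int] = defaultdict(int)

def next_experiment_name(prefix: str) -> str:
    """Return the next name for a prefix, e.g. WC001, WC002, EMAIL001."""
    _counters[prefix] += 1
    return f"{prefix}{_counters[prefix]:03d}"

print(next_experiment_name("WC"))     # WC001
print(next_experiment_name("WC"))     # WC002
print(next_experiment_name("EMAIL"))  # EMAIL001
```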
Run the test, document your results, and discuss
I won't get into the nuance of actually setting up the A/B test (check out this post I wrote on doing A/B tests specifically with video if you're interested), but I will focus on what is arguably the most important part of our growth process: documenting your results. I've said it before and I'll say it again: taking the time to document your results and share them with your teammates is super important. Win or lose, these results can help impact a number of areas of the business in ways you might not even realize. You'll make better decisions over time, which will lead to success for the business, and ultimately, that's what testing is all about.
Get ready to grow
There you have it! While it might not seem glamorous, putting process behind your testing efforts can lead to serious business impact. If you're just getting started with testing and experimentation, the best thing you can do is get your ducks in a row right from the start: set up your documents, establish a standard naming convention, and create a process you feel confident in. This will make the entire effort run more smoothly, so you can spend less time tracking down information and more time coming up with awesome ideas you can actually work with. Start brainstorming with your team today and get rollin' on those tests.
Have you found a process that works well for your team when it comes to A/B testing? Any resources or information you've found particularly helpful? Share them in the comments below!