There are plenty of articles on prioritization, but few on what happens after a test has become “the chosen one.” This post focuses on the step after prioritization but before test development starts. By the end of this article you’ll understand the components of a good test plan and why it’s critical to building and scaling a test-and-learn culture across your organization.
Optimization test plans: Why you need them
When searching for “conversion rate optimization test plan templates,” the results aren’t surprising. You’ll find screenshots of prioritization framework spreadsheets and Gantt charts.
While roadmap planning is understandably a big part of getting a test launched on your site, it’s far from the only part. Often, roadmap planning is straightforward and already has established processes for getting work into the test queue.
But time and time again, the challenge our clients face most after a hypothesis has been selected is the cross-functional, multi-stakeholder test planning—that is, planning the scope, design, measurement, development, and implementation of a test.
Without a good test plan and a strategy for proper documentation, you don’t just lose time in “handoffs” between stakeholders; tasks get dropped, and everyone scrambles to pick up the work and move it forward. The purpose of a good test plan document is to create smooth transitions that save time and keep teams in their own lanes while still moving tests forward at full speed!
A note on test plan documentation
It’s critical! Without proper documentation of test design, rationale, and decision planning, you may increase conversion rate, but you won’t be fostering a test-and-learn culture within your organization.
It’s this culture that turns the relatively small impact of a single test into a long-term organizational benefit and digital transformation. How? As insights and recommendations derived from experimentation begin to impact the business, they are shared and scaled across the organization. These learnings can be applied across every customer touchpoint, from the website to digital media to even traditional marketing or product development.
Ten components of an optimal test plan
- A hypothesis statement
- Logistical information
- Change log (this may be able to be put at the end if approvals “stick” within your organization)
- Business case detailing the problem and why this hypothesis was prioritized
- Experimental design & integration specifications
- Visuals of both the original version and the challenger variation(s)
- A data dictionary for all metrics and segments
- Duration and statistical standards
- Decision matrix
- Test setup & metrics tracking QA checklist
A hypothesis statement

It’s a good idea to make this the FIRST component of your plan. I’ve learned that intentionally putting it first, big and highlighted, anchors the reader to the objective: testing a new treatment on visitors to see whether their behavior changes in the way we hypothesize it will.
Logistical information

Logistical information should include key stakeholder contact info, the must-go-live date, related documents, potential blockers or interferences, and expectations.
Change log

This section is beneficial for two reasons. First, by dedicating a section to changes made since the original document was approved, you make sure everyone is aware of each change. Think of it as supplemental to the document’s version history. Second, having and using the change log brings some finality to the original document. You shouldn’t keep adjusting components after approval, because those components should be part of the approval process! Obviously, things come up and the test plan must change to accommodate them. That’s what this section is for.
Business case

This section explains why we’re running the experiment. Not only can it detail business context, like whether the hypothesis relates to a company goal, but it also denotes any previous research or testing that influenced the hypothesis.
Experimental design & integration specifications
This test plan section covers the other four W’s:
- Who will see the experiment? Only Desktop? PPC traffic?
- What type of test is this? A/B/n? MVT?
- Where on the site (pages, site elements like header) will this experiment run?
- When in the user journey? After a certain amount of time?
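To make these specifications concrete, the four W’s can be captured in a small structured spec the whole team reviews before development starts. This is a minimal sketch; the field names, values, and the `qualifies` helper are illustrative assumptions, not tied to any particular testing tool.

```python
# Illustrative experiment spec capturing the four W's.
# All field names and values here are hypothetical examples.
experiment_spec = {
    "who":   {"devices": ["desktop"], "traffic_sources": ["ppc"]},
    "what":  {"type": "A/B/n", "variants": ["control", "v1", "v2"]},
    "where": {"pages": ["/quote"], "elements": ["pre-quote form"]},
    "when":  {"trigger": "page_load"},
}

def qualifies(visitor, spec):
    """Return True if a visitor matches the 'who' portion of the spec."""
    who = spec["who"]
    return (visitor["device"] in who["devices"]
            and visitor["source"] in who["traffic_sources"])
```

Writing the audience rules this explicitly means QA can later check qualification logic against the plan rather than against someone’s memory.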
A data dictionary for all metrics and segments

This section details what is being tracked. Ideally, it contains screenshots for any tracking that supplements conventional funnel steps. The data dictionary also denotes WHERE KPIs will be tracked and whether that tracking exists today or needs a developer to write code before launch. This step is often glossed over, but the more definition you give your measurement, the more meaningful and digestible the resulting data story becomes.
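A data dictionary can be as simple as a list of structured entries. The sketch below shows one possible shape, assuming a quote-start experiment; the metric names, fields, and the `needs_dev_work` helper are all hypothetical.

```python
# Hypothetical data dictionary entries for a quote-start experiment.
data_dictionary = [
    {
        "metric": "quote_start",
        "definition": "Visitor clicks 'Start my quote' on the pre-quote form",
        "where_tracked": "testing tool + analytics platform",
        "tracking_exists": True,   # already fires today
        "role": "primary KPI",
    },
    {
        "metric": "qualified_quote",
        "definition": "Quote completed with valid contact details",
        "where_tracked": "analytics platform only",
        "tracking_exists": False,  # needs developer work before launch
        "role": "secondary KPI",
    },
]

def needs_dev_work(entries):
    """List the metrics that still need tracking code before launch."""
    return [e["metric"] for e in entries if not e["tracking_exists"]]
```

Capturing `tracking_exists` per metric surfaces development dependencies early, before they block a launch date.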
Duration and statistical standards

In this section, you’ll estimate the cost of a test in terms of traffic and time, and you’ll weigh that cost against the potential value the change could yield. You’ll need an estimate of the traffic that will qualify for the test, the number of primary KPI conversions among those who qualify, and your favorite runtime calculator. We’ve got some great free calculators to check out if you don’t have a favorite yet.
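If you’d rather see what such a calculator does under the hood, here is a minimal sketch using the standard two-proportion sample-size formula. The baseline rate, lift, and traffic figures in the example are made up for illustration.

```python
from statistics import NormalDist

def sample_size_per_variant(baseline_rate, mde_relative, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant to detect a relative lift
    (mde_relative) over baseline_rate with a two-sided z-test."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + mde_relative)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(round(numerator / (p2 - p1) ** 2))

def estimated_runtime_days(n_per_variant, n_variants, daily_qualifying_traffic):
    """Rough runtime: total required sample divided by daily qualifying traffic."""
    return (n_per_variant * n_variants) / daily_qualifying_traffic

# Example: 5% baseline quote-start rate, detecting a 10% relative lift,
# 8,000 qualifying visitors/day split across control + one variant.
n = sample_size_per_variant(0.05, 0.10)
days = estimated_runtime_days(n, 2, 8000)
```

Even a rough estimate like this, agreed on in the plan, prevents the all-too-common practice of stopping a test the moment the numbers look good.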
Decision matrix

| | No change in rate of quote starts | Quote rate declines | Increase in rate of quote starts (quote rate flat/positive) |
| --- | --- | --- | --- |
| What it means | Changes to pre-quote form have not been found to impact performance | Changes to pre-quote form may increase quote starts but discourage higher-qualified traffic | Changes to pre-quote form encourage more people to start quoting |
| What we will do | Look for other ways to improve pre-quote form | Test reverting to old pre-quote form on homepage and elsewhere to increase quote rate | Roll out new pre-quote form as widely as possible on site & determine next step for iteration |
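Because the decision matrix is agreed on before launch, it can even be codified so results map mechanically to next steps. This is a simplified sketch keyed off the primary KPI only; the outcome labels, threshold logic, and `decide` helper are illustrative assumptions.

```python
# The pre-agreed decision matrix as a mapping from outcome to next step.
decision_matrix = {
    "no_change": "Look for other ways to improve the pre-quote form",
    "quote_rate_declines": "Test reverting to the old pre-quote form",
    "quote_starts_increase": "Roll out the new pre-quote form and iterate",
}

def decide(lift, significant):
    """Map a measured lift in quote starts (and whether it reached the
    agreed significance threshold) to an outcome key in the matrix."""
    if not significant:
        return "no_change"
    return "quote_starts_increase" if lift > 0 else "quote_rate_declines"
```

The point is not automation for its own sake: writing the decision down removes the temptation to reinterpret results after the fact.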
Test setup & metrics tracking QA checklist

This is where you write out how visitors qualify for the experiment, what the experiment should do, and what data should be sent to the testing tool and/or the analytics platform. From there, QA verifies that those things actually happen.
Common concerns about test plan documentation
You might be thinking, “Do I really need ALL this just to test one hypothesis?” Yes, you really do. The time you spend increasing transparency and gaining alignment on the test plan will save you time and reduce the risk of running into the post hoc fallacy or issues that arise when data is cherry-picked to support a desired outcome.
In short, companies that document their methods have more successful programs than those that simply use the testing tool and coordinate by email.
Documentation limits the back-and-forth between departments and the need for games of telephone between stakeholders because the most up-to-date information can be found in the plan. Further, with people flowing in and out of organizations all the time, having these ten items documented for every test ensures people can get up to speed quickly on what’s been tested and how.
You might also be thinking, “This is going to take too much time!” to which my response is, “Creating a test plan is a lot less effort than you might imagine, especially when you use a template.”
Yes, it takes time. But the time saved on the back end, along with the quality assurance, alignment, and statistical rigor the plan ensures, more than makes up for it. Once you have a template and a couple of completed test plans, many of these sections become ten-second copy-and-paste activities from previous experiments.
Coming up next
In our next post, we’ll be covering in-depth the most strategic component of a test plan—the data dictionary. We’ll discuss the different types of metrics, how to leverage them to craft rich data stories, AND how those stories can truly ignite a test & learn culture within your organization.