Objective Approaches to Test Prioritization

Got more ideas than you have people to test them on and resources to support them?

You’re not alone. The most common issue we see today is making the most of limited resources and ensuring that the most important ideas get prioritized first.

There are so many methodologies out there for prioritizing. There’s the PIE method:

  • Potential – is the page/tool/experience already optimized, or is there lots of room for improvement?
  • Importance – is the targeted audience large / important to the business?
  • Ease – how easy will it be to develop (and get approval for) this test?

Then there’s the ICE method:

  • Impact – what happens if the idea wins?
  • Confidence – how likely is it that this idea will win?
  • Ease – how easy would this be to implement?

Both of these methods suffer from the same problem – subjectivity. How do you objectively value impact or potential? Even ease could be up for debate in some cases. And here’s the kicker – if you can say with great confidence that idea B is better than existing experience A and will provide lift C, why are you testing it in the first place? Just go do it!

Many have come across these problems and developed complicated scoring methodologies to address them. However, most of those scoring methodologies rely on the same kinds of subjective measures – just more of them, combined with some objective metrics. The end result is a number and the ability to sort ideas – but rarely any understanding of (or buy-in for) how the score works from our business partners.

I’d like to offer up a different way of prioritizing. What happens when we build the scoring model out of objective questions? Here’s an example:

Supporting Data

  • None/Weak
  • Supported by analysis
  • Iteration

KPI

  • Interaction or micro conversion
  • Conversion (order, lead, etc)
  • Quality of conversion (RPV, qualified lead, etc)

Target Audience

  • Less than 25% of customer traffic would be impacted
  • 25-50% of customer traffic would be impacted
  • More than 50% of customer traffic would be impacted
  • Targeted to key customer segment(s)

Effort

Scores should be provided by dev, design, and legal based on their understanding of the required effort. Effort should be bucketed by hours (e.g., 0 = 0-2 hours, 1 = >2-8 hours, etc.), with the buckets determined by each group. There should be an effort “score” from each group.
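As a rough sketch of how those effort buckets might translate into a score, here is one way to do it in code. The specific hour ranges beyond the first two buckets, and the idea of summing one score per team, are assumptions for illustration – each group should define its own:

```python
def effort_score(hours: float) -> int:
    """Map an estimated-hours figure to an effort bucket score.
    Bucket boundaries beyond 0-2 and >2-8 hours are illustrative."""
    if hours <= 2:
        return 0
    if hours <= 8:
        return 1
    if hours <= 24:
        return 2
    return 3

# One effort score per group, e.g. dev, design, legal.
team_estimates = {"dev": 12, "design": 3, "legal": 0.5}
effort_scores = {team: effort_score(h) for team, h in team_estimates.items()}
total_effort = sum(effort_scores.values())  # 2 + 1 + 0 = 3
```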

Impact

This is a fun one to make objective – instead of a subjective “high, med, low” type assessment, focus on likelihood to be seen. For example, if you have page-scroll metrics, give a different score to elements on your page that are seen by at least 75% of visitors versus those likely to be seen by fewer than 25%. If you have historical learning library data, you can aggregate it to measure the average impact of various types of tests (adding/removing elements, changing templates, forms, checkout funnel, redirects, etc.) – then you can score the test by the historical impact of similar tests! (Hmm…seems like another blog idea!)
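To make the whole model concrete, here is a minimal sketch of how the objective answers above could roll up into a single priority score. The specific point values, the visibility thresholds, and the choice to subtract effort rather than divide by it are all assumptions you would tune to your own program:

```python
# Illustrative point values for each objective question; adjust to your data.
SUPPORTING_DATA = {"none_or_weak": 0, "supported_by_analysis": 1, "iteration": 2}
KPI = {"interaction_or_micro_conversion": 0, "conversion": 1, "quality_of_conversion": 2}
TARGET_AUDIENCE = {"under_25_pct": 0, "25_to_50_pct": 1, "over_50_pct": 2, "key_segment": 2}

def impact_score(pct_visitors_seeing_element: float) -> int:
    """Objective 'impact' proxy based on likelihood the change is seen,
    e.g. from page-scroll data. Thresholds are illustrative."""
    if pct_visitors_seeing_element >= 75:
        return 2
    if pct_visitors_seeing_element >= 25:
        return 1
    return 0

def priority_score(supporting_data, kpi, audience, visibility_pct, total_effort):
    """Higher is better: add the objective value signals, subtract effort."""
    return (
        SUPPORTING_DATA[supporting_data]
        + KPI[kpi]
        + TARGET_AUDIENCE[audience]
        + impact_score(visibility_pct)
        - total_effort
    )

# Example idea: an iteration targeting a conversion KPI, hitting 40% of traffic,
# on an element seen by 80% of visitors, with a combined effort score of 3.
print(priority_score("iteration", "conversion", "25_to_50_pct", 80, total_effort=3))  # 2+1+1+2-3 = 3
```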

Most important to any prioritization scoring is that it be flexible and customized to your specific data and needs. There is no one-size-fits-all approach here. As with any optimization effort, start with your business and the specific needs and goals of your organization. Take into consideration the data you have available today, the bottlenecks inherent in your existing process, and your available traffic. Then get prioritizing! And don’t settle for your first scoring model. Keep revising it, and test your new model against your old one – see how it changes which tests get prioritized. That’s the great thing about optimization programs – you can apply your optimization efforts to more than just the website!
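One lightweight way to test a new scoring model against the old one, as suggested above, is to score the same backlog with both and compare how the ranking changes. This sketch assumes you already have two scoring functions (your old and new models) and a list of idea records:

```python
def rank_backlog(ideas, score_fn):
    """Sort ideas (dicts of objective attributes) by score, highest first."""
    return sorted(ideas, key=score_fn, reverse=True)

def compare_models(ideas, old_score_fn, new_score_fn):
    """Report how each idea's rank shifts when switching scoring models."""
    old_order = [idea["name"] for idea in rank_backlog(ideas, old_score_fn)]
    new_order = [idea["name"] for idea in rank_backlog(ideas, new_score_fn)]
    for name in new_order:
        shift = old_order.index(name) - new_order.index(name)
        print(f"{name}: moved {shift:+d} places")

# Usage (hypothetical): compare_models(backlog, old_model_score, new_model_score)
```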

If you get stuck or just don’t have the time or energy (or internal support) to create a custom prioritization scoring model yourself – give us a call. We’re happy to help!
