Objective Approaches to Test Prioritization

Dec 13, 2017

Got more ideas than you have people to test them on and resources to support them?

You’re not alone. The most common issue we see today is making the most of limited resources and ensuring that the most important ideas are prioritized.

There are so many methodologies out there for prioritizing. There’s the PIE method:

  • Potential – is the page/tool/experience already optimized, or is there lots of room for improvement?
  • Importance – is the targeted audience large / important to the business?
  • Ease – how easy will it be to develop (and get approval for) this test?

Then there’s the ICE method:

  • Impact – what happens if the idea wins?
  • Confidence – how likely is it that this idea will win?
  • Ease – how easy would this be to implement?
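For illustration, here’s a minimal sketch of how an ICE-style score is often computed. The 1–10 scale, the averaging, and the sample ideas are assumptions for the example, not part of any formal definition of the framework:

```python
from dataclasses import dataclass

@dataclass
class Idea:
    name: str
    impact: int      # 1-10, subjective guess at the upside if the idea wins
    confidence: int  # 1-10, subjective guess at the likelihood of winning
    ease: int        # 1-10, subjective guess at how easy it is to build

def ice_score(idea: Idea) -> float:
    # A common convention is to average the three inputs (some teams multiply instead).
    return (idea.impact + idea.confidence + idea.ease) / 3

ideas = [
    Idea("New hero banner", impact=7, confidence=5, ease=8),
    Idea("Checkout redesign", impact=9, confidence=4, ease=2),
]
for idea in sorted(ideas, key=ice_score, reverse=True):
    print(f"{idea.name}: {ice_score(idea):.1f}")
```

Notice that every input to that score is a gut-feel rating, which is exactly the issue discussed next.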

Both of these methods suffer from the same problem – subjectivity. How do you objectively value impact or potential? Even ease can be up for debate in some cases. And here’s the kicker – if you can say with great confidence that idea B is better than existing experience A and will provide lift C, why are you testing it in the first place? Just go do it!

Many have come across these problems and developed complicated scoring methodologies to address them. However, most of those scoring methodologies use the same types of subjective measures – just more of them, combined with some objective metrics. So the end result is a number and the ability to sort ideas – but rarely any understanding (or buy-in) from our business partners regarding how the score works.

I’d like to offer up a different way of prioritizing. What happens when we combine objective questions with the scoring model? Here’s an example:

Supporting Data

  • None/Weak
  • Supported by analysis
  • Iteration

KPI

  • Interaction or micro conversion
  • Conversion (order, lead, etc)
  • Quality of conversion (RPV, qualified lead, etc)

Target Audience

  • Less than 25% of customer traffic would be impacted
  • 25–50% of customer traffic would be impacted
  • More than 50% of customer traffic would be impacted
  • Targeted to key customer segment(s)

Effort

Score should be provided by dev, design, and legal based on their understanding of the required effort. Effort should be bucketed by hours (e.g., 0 = 0–2 hours, 1 = >2–8 hours, etc.), with the buckets to be determined by each group. There should be an effort “score” from each group.
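As a sketch of what that bucketing might look like in practice (the hour thresholds beyond the two mentioned above are assumptions, and each group would define its own):

```python
# One possible effort-bucketing scheme. The first two thresholds follow the
# example above (0 = 0-2 hours, 1 = >2-8 hours); the rest are assumed and
# should be set by each group (dev, design, legal) for itself.
EFFORT_BUCKETS = [
    (2, 0),    # 0-2 hours    -> score 0
    (8, 1),    # >2-8 hours   -> score 1
    (40, 2),   # >8-40 hours  -> score 2 (assumed threshold)
]

def effort_score(hours: float) -> int:
    """Map an estimated number of hours to an effort bucket score."""
    for max_hours, score in EFFORT_BUCKETS:
        if hours <= max_hours:
            return score
    return len(EFFORT_BUCKETS)  # anything bigger lands in the top bucket

print(effort_score(6))    # -> 1
print(effort_score(120))  # -> 3
```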

Impact

This is a fun one to make objective – instead of a subjective “high, med, low” assessment, focus on likelihood to be seen. For example, if you have page scroll metrics, give a different score to elements on your page that are seen by at least 75% of visitors versus those likely to be seen by fewer than 25%. If you have historical learning library data, you can aggregate it to measure the average impact of various types of tests (adding/removing elements, changing templates, forms, checkout funnel, redirects, etc.) – then you can score the test by the historical impact of similar tests! (Hmm… seems like another blog idea!)
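Putting the pieces together, here’s a minimal sketch of how these criteria could roll up into a single priority score. The point values, the benefit-divided-by-effort formula, and the sample ideas are all assumptions for illustration; your own weights should be driven by your data and process:

```python
from dataclasses import dataclass

# Point values per criterion; the specific numbers below are illustrative assumptions.
SUPPORTING_DATA = {"none/weak": 0, "supported by analysis": 1, "iteration": 2}
KPI = {"interaction/micro conversion": 0, "conversion": 1, "quality of conversion": 2}
AUDIENCE = {"<25%": 0, "25-50%": 1, ">50%": 2, "key segment": 2}

@dataclass
class TestIdea:
    name: str
    supporting_data: str
    kpi: str
    audience: str
    effort_scores: dict          # e.g. {"dev": 1, "design": 0, "legal": 0}
    pct_visitors_seeing: float   # from scroll/heatmap analytics, 0-100

def impact_score(pct_seen: float) -> int:
    # Objective proxy for impact: how likely the changed element is to be seen at all.
    if pct_seen >= 75:
        return 2
    if pct_seen >= 25:
        return 1
    return 0

def priority(idea: TestIdea) -> float:
    benefit = (
        SUPPORTING_DATA[idea.supporting_data]
        + KPI[idea.kpi]
        + AUDIENCE[idea.audience]
        + impact_score(idea.pct_visitors_seeing)
    )
    effort = sum(idea.effort_scores.values())
    # Higher benefit and lower effort float to the top; +1 avoids dividing by zero.
    return benefit / (effort + 1)

ideas = [
    TestIdea("Simplify lead form", "supported by analysis", "quality of conversion",
             ">50%", {"dev": 1, "design": 1, "legal": 0}, 80),
    TestIdea("Footer link reorder", "none/weak", "interaction/micro conversion",
             "<25%", {"dev": 0, "design": 0, "legal": 0}, 15),
]
for idea in sorted(ideas, key=priority, reverse=True):
    print(f"{idea.name}: {priority(idea):.2f}")
```

Dividing benefit by effort is only one way to fold effort in; you could just as easily subtract it, weight individual criteria differently, or keep separate benefit and effort rankings.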


Most important to any prioritization scoring is that it be flexible and customized to your specific data and needs. There is no one-size-fits-all approach that works here. As with any optimization effort, start with your business and the specific needs and goals of your organization. Take into consideration the data you have available today, the bottlenecks inherent in your existing process, and your available traffic. Then – get prioritizing! And don’t settle for your first scoring model. Keep revising it and test your new model against your old models – see how it changes which tests get prioritized. That’s the great thing about optimization programs – you can apply your optimization efforts to more than just the website!
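If you want a quick, concrete way to see how a revised model reshuffles the backlog, a small sketch along these lines can help. Here score_old and score_new stand in for whatever two scoring functions you’re comparing (they’re placeholders, not anything defined above):

```python
def rank(ideas, score_fn):
    """Return idea names ordered from highest to lowest score."""
    return [idea.name for idea in sorted(ideas, key=score_fn, reverse=True)]

def compare_models(ideas, score_old, score_new, top_n=10):
    """Show which ideas enter or leave the top N when the scoring model changes."""
    old_top = set(rank(ideas, score_old)[:top_n])
    new_top = set(rank(ideas, score_new)[:top_n])
    return {
        "promoted": sorted(new_top - old_top),   # newly prioritized ideas
        "demoted": sorted(old_top - new_top),    # ideas that fell out of the top N
        "unchanged": sorted(old_top & new_top),
    }
```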

If you get stuck or just don’t have the time or energy (or internal support) to create a custom prioritization scoring model yourself – give us a call. We’re happy to help!