Objective Approaches to Test Prioritization

Dec 13, 2017

Got more ideas than you have people to test them on and resources to support them?

You’re not alone. The most common issue we see today is making the most of limited resources and ensuring that the most important ideas are prioritized.

There are so many methodologies out there for prioritizing. There’s the PIE method:

  • Potential – is the page/tool/experience already optimized, or is there lots of room for improvement?
  • Importance – is the targeted audience large or important to the business?
  • Ease – how easy will it be to develop (and get approval for) this test?

Then there’s the ICE method:

  • Impact – what happens if the idea wins?
  • Confidence – how likely is it that this idea will win?
  • Ease – how easy would this be to implement?
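
Both boil down to simple arithmetic over team-supplied ratings. Here’s a minimal sketch of an ICE calculation – the ideas, the 1–10 scales, and the plain average are illustrative assumptions, not a fixed standard:

```
# A minimal ICE scoring sketch. The ideas, the 1-10 scales, and the
# equal weighting are illustrative assumptions, not a fixed standard.
ideas = [
    # (name, impact, confidence, ease) - each rated 1-10 by the team
    ("Simplify checkout form", 8, 6, 4),
    ("New homepage hero banner", 5, 4, 9),
    ("Reorder navigation items", 3, 7, 8),
]

def ice_score(impact, confidence, ease):
    # A common convention: the plain average of the three ratings.
    return (impact + confidence + ease) / 3

# Rank ideas from highest to lowest ICE score.
for name, i, c, e in sorted(ideas, key=lambda idea: -ice_score(*idea[1:])):
    print(f"{name}: {ice_score(i, c, e):.1f}")
```

Notice that every input is a rating someone made up on the spot – which is exactly the problem.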

Both of these methods suffer from the same problem – subjectivity. How do you objectively value impact or potential? Even ease could be up for debate in some cases. And here’s the kicker – if you can say with great confidence that idea B is better than existing experience A and will provide lift C, why are you testing it in the first place? Just go do it!

Many have come across these problems and developed complicated scoring methodologies to address them. However, most of those scoring methodologies use the same types of subjective measures – just more of them, combined with some objective metrics. So the end result is a number and the ability to sort ideas – but rarely any understanding of (or buy-in on) how the score works from our business partners.

I’d like to offer up a different way of prioritizing. What happens when we combine objective questions with the scoring model? Here’s an example:

Supporting Data

  • None/Weak
  • Supported by analysis
  • Iteration

KPI

  • Interaction or micro-conversion
  • Conversion (order, lead, etc.)
  • Quality of conversion (RPV, qualified lead, etc.)

Target Audience

  • Less than 25% of customer traffic would be impacted
  • 25–50% of customer traffic would be impacted
  • More than 50% of customer traffic would be impacted
  • Targeted to key customer segment(s)

Effort

Scores should be provided by dev, design, and legal based on their understanding of the required effort. Effort should be bucketed by hours (0 = 0–2 hours, 1 = >2–8 hours, etc.), with the buckets determined by each group. There should be an effort “score” for each group (see the sketch below).

Impact

This is a fun one to make objective – instead of a subjective “high, med, low” assessment, focus on the likelihood that the change will be seen. For example, if you have page-scroll metrics, give a different score to elements that are seen by at least 75% of visitors versus those likely to be seen by fewer than 25%. And if you have a historical learning library, you can aggregate that data to measure the average impact of various types of tests (adding/removing elements, changing templates, forms, checkout funnel, redirects, etc.) – then you can score the test by the historical impact of similar tests! (Hmm… seems like another blog idea!)

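To make this concrete, here’s a minimal sketch of how answers to these questions might roll up into a single score – every point value, bucket cutoff, and the decision to subtract effort are assumptions to replace with your own data and process:

```
# Sketch of an objective prioritization score built from the criteria
# above. All point values and weights are illustrative assumptions.

SUPPORTING_DATA = {"none/weak": 0, "supported by analysis": 1, "iteration": 2}
KPI = {"interaction": 0, "conversion": 1, "quality of conversion": 2}
AUDIENCE = {"<25%": 0, "25-50%": 1, ">50%": 2, "key segment": 3}

def effort_bucket(hours):
    # Buckets per the Effort section: 0 = 0-2 hours, 1 = >2-8 hours,
    # then wider buckets (cutoffs to be chosen by each group).
    if hours <= 2:
        return 0
    if hours <= 8:
        return 1
    if hours <= 40:
        return 2
    return 3

def impact_points(pct_seen):
    # Likelihood-to-be-seen from scroll metrics, per the Impact section.
    if pct_seen >= 0.75:
        return 2
    if pct_seen >= 0.25:
        return 1
    return 0

def score(data, kpi, audience, effort_hours_by_team, pct_seen):
    # Higher effort should push an idea down the list, so subtract it.
    effort = sum(effort_bucket(h) for h in effort_hours_by_team.values())
    return (SUPPORTING_DATA[data] + KPI[kpi] + AUDIENCE[audience]
            + impact_points(pct_seen) - effort)

# Example: an iteration targeting a key segment, seen by 80% of visitors,
# needing 6 dev hours, 2 design hours, and no legal review.
print(score("iteration", "conversion", "key segment",
            {"dev": 6, "design": 2, "legal": 0}, 0.80))
```

Because each answer maps to a fixed point value, two people scoring the same idea get the same number – which is the whole argument for objective inputs.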

Most important to any prioritization scoring is that it be flexible and customized to your specific data and needs. There is no one-size-fits-all approach that works here. As with any optimization effort, start with your business and the specific needs and goals of your organization. Take into consideration the data you have available today, the bottlenecks inherent in your existing process, and your available traffic. Then – get prioritizing! And don’t settle for your first scoring model. Keep revising it, and test your new model against your old ones – see how it changes what tests get prioritized. That’s the great thing about optimization programs – you can apply your optimization efforts to more than just the website!
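
For example, one quick way to test a new model against an old one is to score the same backlog with both and watch which ideas move – a minimal sketch, assuming each model is just a scoring function whose outputs you’ve collected:

```
# Sketch: compare how two scoring models rank the same backlog.
# The ideas and both sets of scores are illustrative assumptions.
backlog = ["idea A", "idea B", "idea C", "idea D"]
old_scores = {"idea A": 5, "idea B": 3, "idea C": 8, "idea D": 4}
new_scores = {"idea A": 2, "idea B": 6, "idea C": 7, "idea D": 4}

def rank(scores):
    # 1 = highest-priority idea under that model.
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {idea: pos + 1 for pos, idea in enumerate(ordered)}

old_rank, new_rank = rank(old_scores), rank(new_scores)
for idea in backlog:
    moved = old_rank[idea] - new_rank[idea]
    print(f"{idea}: #{old_rank[idea]} -> #{new_rank[idea]} ({moved:+d})")
```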

If you get stuck or just don’t have the time or energy (or internal support) to create a custom prioritization scoring model yourself – give us a call. We’re happy to help!