Adobe Target Standard v. Premium: Help me choose!

Mar 15, 2019

Confused about whether Adobe Target Standard or Premium is best for you? Read on for use case clarity. Finally!



The differences between Adobe Target Standard and Premium were the topic of a recent Test & Learn Community (TLC) conversation that led to the creation of this post. At a very simplistic level, the primary difference between the two packages is that Target Premium offers enhanced automated personalization features: Target Premium has all the capabilities of Standard, plus three additional personalization capabilities. In this post, we’ll outline five key capabilities of Target and clarify the benefits, drawbacks, and considerations of each. When you understand the what and how of each capability, and the use cases for when and why to use them, you’ll be better able to determine (and articulate) the best choice for your specific program.*


The most important decision-making differences between Standard and Premium involve five key activities (consider this your TL;DR):

- Experience Targeting (Standard & Premium)
- Auto-Allocate (Standard & Premium)
- Auto-Target (Premium only)
- Automated Personalization (Premium only)
- Recommendations (Premium only)

Let’s break these down further… (Note: for the purposes of this post, we are focusing only on the machine learning and personalization capability differences between Standard and Premium. We will also not touch on shared capabilities such as split testing (A/B/n) or multivariate testing (MVT). For a deep-dive discussion of A/B/n vs. MVT, see Multivariate Testing 101.)

Experience Targeting (Target Standard & Premium)

What it does:

Experience targeting (aka rules-based targeting) is the first step of most personalization efforts. When business users think of “personalization,” they’re typically thinking of targeting, unless they’re thinking of Amazon (and then they’re thinking of recommendations!). Experience targeting is simply setting up business rules to indicate what content/experience to deliver to each user, based on the segment to which that user belongs.
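The rules-based idea can be sketched in a few lines. This is an illustrative sketch only, not Adobe code; the segment names and content paths below are made up:

```python
# Minimal sketch of rules-based (experience) targeting: the business, not a
# machine learning model, writes the rules. Segment names and content paths
# below are hypothetical.

RULES = {
    "returning_customer": "loyalty_banner.html",
    "prospect": "intro_offer.html",
}
DEFAULT_EXPERIENCE = "default_hero.html"

def experience_for(segment):
    """Return the content to deliver for a visitor's segment."""
    return RULES.get(segment, DEFAULT_EXPERIENCE)
```

The maintenance cost mentioned below shows up here directly: every new targeted segment adds one more rule and one more piece of content that someone has to create and keep current.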


Experience targeting can use the Visual Experience Composer (VEC) for design and can utilize Analytics for Target (A4T) for reporting. Content creation and maintenance demand will depend on the number of segments that businesses choose to target with different experiences. Experience targeting does not utilize any machine learning for automation of experience delivery.

Use Cases:

Rules-based targeting, like Adobe’s Experience targeting, is typically used when we have information about segments that indicates a preference for a different user experience, OR we have a reason to show different experiences based on some segment information (such as offers based on location, customer vs. prospect, etc.).

Auto-Allocate (Target Standard & Premium)

What it does:

Auto-Allocate is a machine learning split test (A/B/n) that employs a non-contextual multi-armed bandit. (For a multi-armed bandit deep dive, see here.) Auto-Allocate begins with an equal distribution of all traffic (regardless of context) among all trial groups, but, as particular variations begin to perform better, visitors are automatically shifted to the higher-performing variations, regardless of visitor traits or segments.

How does Auto-Allocate affect optimization?

By its nature, a split test has winners and losers. Normally, you use statistical analysis to calculate, before the test, how much traffic you need, and you leave a test live (even the losing variations) until test completion, with the same ratio of traffic going into all variations (again, even the losing variations) throughout the life of the test. In practice, you are knowingly delivering a subpar experience to a set percentage of customers for the duration of the test.
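For a sense of that up-front traffic calculation, here is the standard two-proportion sample-size formula for a fixed-horizon A/B test. This is generic statistics, not anything specific to Adobe Target, and the 5%/10% figures in the usage note are only an example:

```python
from math import ceil, sqrt
from statistics import NormalDist

# Standard fixed-horizon sample-size calculation for a two-proportion A/B
# test: visitors needed per variation to detect a relative lift at a given
# significance level (alpha, two-sided) and power.

def sample_size_per_variation(p_base, lift, alpha=0.05, power=0.8):
    p_new = p_base * (1 + lift)                  # expected variant rate
    z_a = NormalDist().inv_cdf(1 - alpha / 2)    # significance z-score
    z_b = NormalDist().inv_cdf(power)            # power z-score
    p_bar = (p_base + p_new) / 2                 # pooled rate
    n = ((z_a * sqrt(2 * p_bar * (1 - p_bar))
          + z_b * sqrt(p_base * (1 - p_base) + p_new * (1 - p_new))) ** 2
         / (p_new - p_base) ** 2)
    return ceil(n)
```

For example, at a 5% baseline conversion rate, detecting a 10% relative lift at 95% confidence and 80% power requires roughly 31,000 visitors per variation, which is why keeping losing variations live for the whole test is expensive.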

However, with Auto-Allocate, up to 80% of visitors will be shifted from a subpar experience to the winning variation(s) automatically. The remaining 20% of visitors will be assigned (evenly and randomly) to all variations, in order to keep checking for potential shifts in preferences. Auto-Allocate doesn’t eliminate the impact of lower-performing variations, but it does reduce it, while exploiting the impact of higher-performing variations.
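That exploit/explore behavior can be sketched as an epsilon-greedy bandit. This is an illustration of the general technique, not Adobe’s published algorithm (Adobe doesn’t document Auto-Allocate’s exact internals), using the ~20% exploration hold-back described above:

```python
import random

# Illustrative epsilon-greedy multi-armed bandit (NOT Adobe's algorithm):
# ~20% of visitors explore all variations at random; the rest are sent to
# the variation with the best observed conversion rate so far.

EXPLORE_RATE = 0.2  # the hold-back that keeps checking for preference shifts

class EpsilonGreedyAllocator:
    def __init__(self, variations):
        self.stats = {v: {"visitors": 0, "conversions": 0} for v in variations}

    def choose(self):
        if random.random() < EXPLORE_RATE:
            return random.choice(list(self.stats))      # explore: random
        return max(self.stats, key=self._rate)          # exploit: best so far

    def record(self, variation, converted):
        s = self.stats[variation]
        s["visitors"] += 1
        s["conversions"] += int(converted)

    def _rate(self, v):
        s = self.stats[v]
        return s["conversions"] / s["visitors"] if s["visitors"] else 0.0
```

Notice that the losing arms keep shrinking: they only receive exploration traffic, which is exactly why (as discussed next) you can’t say much statistically about which loser is worst.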


Auto-Allocate, like other multi-armed bandits, lets you quickly determine the best variation among a set of variations and gain benefits along the way. However, it doesn’t let you identify low-performing variations, because a multi-armed bandit only differentiates among high performers: it shifts traffic to higher performers to give enough statistical power (samples) to find out which one is best more quickly. To do that, it reduces the power (samples) of lower performers, making it nearly impossible (statistically speaking) to know which of the lower performers is worst.

Similarly, conducting a deep dive into Auto-Allocate results is more difficult, because the reporting source cannot be Analytics for Target (A4T), as many have gotten used to. Since machine learning requires real-time decisioning, Auto-Allocate (as well as Auto-Target and Automated Personalization) uses Target, not Adobe Analytics, as its reporting source. This is a bitter pill for some to swallow and can lead to losses in productivity, as users are repeatedly asked to explain results in light of data not available within the Target interface, or, worse, asked to explain discrepancies between tools.

Lastly, Auto-Allocate does not differentiate by segment or user preference. It will not identify when Chrome users prefer experience A and Safari users prefer B. Instead, it will deliver the best overall variation that the majority of users prefer, similar to an A/B/n experiment.

Use case:

Use Auto-Allocate when you’re trying to find the best overall variation as quickly as possible, since it exploits gains and minimizes risk to find the best default, faster. For example, many programs hesitate to test during peak holiday season. Using Auto-Allocate, you can quickly test offers against each other and let the tool shift visitors to the highest-performing variations, so you get the benefit while the test is running, with minimal risk.

Auto-Target (Target Premium)

What it does:

Auto-Target is a one-to-few algorithm. Like Auto-Allocate, Auto-Target delivers an automated form of A/B/n testing. However, unlike Auto-Allocate, Auto-Target is a contextual bandit: it considers a user’s context (e.g., which variation wins for this time of day, this day of week, your gender, or whether or not you’re a gamer) as it decides which variation to give a user. Auto-Target automatically collects information about visitors to build personalization profiles (segments). It isn’t trying to find the best variation for everyone; it’s trying to find the best variation for each segment.

Auto-Target initially distributes users evenly between variations, while learning which combination of variables (context) seems to impact variation preference and conversion. It subsequently uses this learning to group users with like variables (that lead to successful outcomes) into segments, and then it assigns the higher-performing variation(s) to each segment. In this way, Auto-Target delivers the optimal variation to each segment.
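That per-segment learning is the essence of a contextual bandit. The sketch below is illustrative only (Adobe doesn’t publish Auto-Target’s internals, which are considerably more sophisticated); it simply keeps separate statistics per (context, variation) pair so each segment can converge on its own winner:

```python
import random
from collections import defaultdict

# Illustrative contextual bandit (NOT Adobe's implementation): separate
# win/loss statistics per (context, variation) pair, so each segment can
# converge on its own winning variation.

EXPLORE_RATE = 0.2  # random hold-back for continual learning

class ContextualAllocator:
    def __init__(self, variations):
        self.variations = variations
        # stats[context][variation] = [visitors, conversions]
        self.stats = defaultdict(lambda: {v: [0, 0] for v in variations})

    def choose(self, context):
        if random.random() < EXPLORE_RATE:
            return random.choice(self.variations)       # explore
        table = self.stats[context]                     # exploit per segment
        return max(self.variations,
                   key=lambda v: table[v][1] / table[v][0] if table[v][0] else 0.0)

    def record(self, context, variation, converted):
        cell = self.stats[context][variation]
        cell[0] += 1
        cell[1] += int(converted)
```

With this structure, Chrome visitors and Safari visitors can end up with different winning variations, which is exactly what the non-contextual Auto-Allocate cannot do.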

How does Auto-Target affect optimization?

Auto-Target automates rules-based targeting. That is, the machine, rather than the business, decides what the rule should be, based on actual user preference. This allows for “more intelligent” targeting, or targeting at scale.


Auto-Target will benefit you only when there is a difference in variation preference between segments. In other words, if everyone or no one likes a variation, you won’t learn anything. You must have multiple variations that different segments prefer in order to get any value from the tool. Auto-Target allows for use of the Visual Experience Composer (VEC) in designing the experiment. However, like Auto-Allocate, it does not allow for the use of A4T as a reporting source. Also like Auto-Allocate, a percentage of the population remains randomly distributed for continual learning, so all the statistical implications that apply to Auto-Allocate affect Auto-Target. Unlike Auto-Allocate, Auto-Target will place an increased demand on your design and development resources for the creation and maintenance of different experiences.

Use Case:

Use Auto-Target if you already have evidence to suggest that different segments may respond differently to the same experience. If you have no reason to suspect a difference in preference, run an A/B/n test first, then do a segmentation analysis to see if there are any differences in preference by segment. Only if you see that difference would you then follow up with an Experience Targeting (Standard) or Auto-Target (Premium) campaign.

Automated Personalization (Target Premium)

What it does:

Automated Personalization is a step beyond Auto-Target’s one-to-few option and not quite at Recommendations’ one-to-one option. It works the same way as Auto-Target, in that it is a contextual bandit, but at the MVT level. So, instead of choosing the best single variation for a particular segment, it chooses the best combination of individual elements (offers) on a page or within an experience for a particular user. This fine-tuning of experience is what sets Automated Personalization apart from Auto-Target.

How does Automated Personalization affect optimization?

Automated Personalization uses the same statistical methodologies as Auto-Target but allows for the creation of new experiences, based on the likelihood that specific elements within those variations (offers) will be more or less compelling to each visitor (based on the propensity models).


Automated Personalization activities historically had to be set up using the Visual Experience Composer (VEC), so designs were limited to those the VEC could handle; e.g., Automated Personalization didn’t let the user make structural changes or rearrange elements on a page. If you want that kind of freedom, you’re better off using Auto-Target. However, as of June 2017, Automated Personalization activities can also be created using a form-based editor. Additionally, like the previous two automation capabilities, Automated Personalization does not allow for use of A4T as a reporting source. Lastly, note that Automated Personalization frequently requires more traffic than Auto-Allocate or Auto-Target, simply due to the increased number of variations. As with Auto-Target, content creation and management demands (highest with Automated Personalization) can be a blocker for some organizations.

Use Case:

A frequent use case for Automated Personalization involves high-volume entry pages, including homepages with easily modularized elements, where multiple different types of users likely have different reasons for coming to a site. For example, a returning customer would never need to see a hero banner showcasing an offer only relevant to new customers. Similarly, a new user would not appreciate a quick login option as a hero image. Both users might like to see an easy visual nav, while a mobile visitor might prefer a fly-out nav. Of course, with Automated Personalization, we can combine some of these elements into an uber-customized experience for the new visitor on a mobile device (new-customer hero offer + fly-out nav).
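The hero/nav example can be sketched as a combination search: score each offer within each page slot for the current visitor, then assemble the page from the best-scoring offer per slot. Everything here (slot names, offers, scores) is hypothetical, and the hard-coded scores stand in for the propensity models, which Adobe does not publish:

```python
# Illustrative sketch of the combination idea behind Automated
# Personalization: rather than one model per whole-page variation, score
# each offer within each slot for this visitor, then assemble the page from
# the best offer in every slot. Slot/offer names and scores are made up.

def best_combination(slots, score):
    """slots: {slot_name: [offer, ...]}; score(slot, offer) -> propensity."""
    return {slot: max(offers, key=lambda o: score(slot, o))
            for slot, offers in slots.items()}

# Hypothetical scores standing in for a per-visitor propensity model:
slots = {"hero": ["new_customer_offer", "login_shortcut"],
         "nav":  ["visual_nav", "flyout_nav"]}
scores = {("hero", "new_customer_offer"): 0.8, ("hero", "login_shortcut"): 0.2,
          ("nav", "visual_nav"): 0.4, ("nav", "flyout_nav"): 0.6}
page = best_combination(slots, lambda s, o: scores[(s, o)])
# page == {"hero": "new_customer_offer", "nav": "flyout_nav"}
```

Because the number of possible combinations multiplies across slots, this also illustrates why Automated Personalization tends to need more traffic than Auto-Target.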

Recommendations (Target Premium)

What it does:

Recommendations uses an algorithm-driven, rules-based approach to deliver recommended content at a near one-to-one level. It draws on a variety of available algorithms, such as popularity, last-viewed items, and similar products, to surface offers, content, or products that users are most likely to find compelling and might not otherwise be exposed to.
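Two of the algorithm families named above, popularity and “similar products” via co-views, can be sketched simply. These are illustrative toy versions, not Adobe’s implementations:

```python
from collections import Counter

# Toy versions of two common recommendation algorithms: "most popular"
# (item view counts) and "also viewed" (co-view similarity). Adobe's
# production algorithms are more sophisticated; this shows the core idea.

def most_popular(view_events, n=3):
    """view_events: list of (visitor, product). Top-n most-viewed products."""
    counts = Counter(product for _, product in view_events)
    return [p for p, _ in counts.most_common(n)]

def also_viewed(view_events, product, n=3):
    """Products most often viewed by visitors who also viewed `product`."""
    visitors = {v for v, p in view_events if p == product}
    co_views = Counter(p for v, p in view_events
                       if v in visitors and p != product)
    return [p for p, _ in co_views.most_common(n)]
```

Even at this toy scale, the quality of the recommendations depends entirely on the quality and coverage of the behavioral data feeding the algorithm, which is the point made below about solid algorithms.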

How does Recommendations affect optimization?

The key impact on optimization is related to ensuring that the area of the page where the Recommendations are running is not impacted by other experiments running concurrently. Of course, it’s also important to measure potential interaction effects for customers exposed to both the Recommendations and other activities.


Unlike the three previously described automation capabilities of Adobe Target, Recommendations does allow for A4T integration, giving analysts the ability to conduct deep-dive impact assessments or cohort analyses based on algorithms. The content demand (articles, products, offers, videos, etc.) on producers shouldn’t be much more than what is already required for creating different products or offer elements on the site. However, in some cases, the Recommendations engine could increase demand for previously unmonitored content. As is true with all personalization activities, Recommendations will only be impactful if the algorithms supporting the recommendations are solid, and if the recommendations themselves are presented in a compelling way in a high-visibility area of the page. Recommendations reporting lets you evaluate the success of your algorithms and test different options for both the algorithms and the creative. For development, you can use the form-based Experience Composer or, if you prefer, the VEC.

Use Case:

On the retail side, we’re familiar with a number of great use cases, thanks to the ubiquity of Amazon. But there are also plenty of non-retail use cases. Netflix expertly suggests what movie or TV show we might want to watch, based on what we’ve watched in the past, and media sites do a good job determining what article to show next. Financial services can offer the right credit card, loan, or savings account, and non-profit organizations can suggest a giving amount or cadence most likely to resonate with specific users.


The choice between Target Standard and Premium truly comes down to your organization’s unique situation. If you have the use cases for Adobe’s Premium automation options, AND the traffic to support them, AND the resources to create and manage the increased content demand, AND you’re okay with letting the reporting live within Target instead of Analytics, then you should at least consider a trial with Premium. If any ONE of these requirements does not fit your situation, you might want to focus on Experience Targeting and Auto-Allocate in Target Standard.

One further consideration that did not fit neatly under any one of the personalization options described above: privacy and risk concerns. Specifically, with GDPR and other privacy laws in the works, many companies need to be able to customize which variables/traits a machine learning model uses to learn and make decisions. For example, in financial services, there may be behaviors or known traits that are illegal to use in creating a model. Or in the EU, companies might want to be able to quickly point to the exact formula that led to customer A receiving offer X. If either of these examples may be an issue for your organization, keep in mind that Adobe Target does not currently support the selection (or de-selection) of variables for your model within their GUI. Nor do they currently have a transparent model for easy communication of all selection decisions, which is a requirement for many in high-risk or compliance industries (or with customers in the EU). (Note: recent updates allow for identification of traits that influence decisions, a step in the right direction!) I’m confident both are in the works and will update this post when they go live!


  1. CORRECTION: an earlier version of this post incorrectly stated that Automated Personalization did not use the Visual Experience Composer and did not control for “junk” offer combinations.
  2. CORRECTION: an earlier version of this post incorrectly stated that Adobe Target did not support the selection (or de-selection) of variables for your model. The post has been updated to be more accurate by adding “within their GUI,” as Client Care is able to remove variables from a model with a documented request that includes compliance concerns.
  3. Automated Personalization has a form composer now, as of 22 June 2017. Release notes are here.
  4. Recommendations can now use the “Custom Code” option in the VEC. With the new capability as of Feb, Recommendations can be embedded inside A/B and XT activities and so has full access to that composer as well. Read more here.
  5. Other key differences worth considering: Premium unlocks the Enterprise Permissions capabilities with Properties and Workspaces. This is a valuable addition to any brand that requires completely discrete working environments for different teams, countries, technology channels, brands, lines of business, or business units. Premium also allows for all of the IoT+ use cases beyond Web, App, and Email.

* Disclaimer: There are other differences between Target Standard & Premium beyond the scope of this blog, but in this post I bring to light the differences that continue to come up in my conversations with clients and the Test & Learn Community (TLC).

