The differences between Adobe Target Standard and Premium were the topic of a recent Test & Learn Community (TLC) conversation that led to the creation of this post. At the simplest level, the primary difference between the two packages is that Target Premium offers enhanced automated personalization features: Target Premium has all the capabilities of Standard, plus three additional personalization capabilities. In this post, we’ll outline five key capabilities of Target and clarify the benefits, drawbacks, and considerations of each. Once you understand the what and how of each capability, and the use cases for when and why to use them, you’ll be better able to determine (and articulate) the best choice for your specific program.*
The most important decision-making differences between Standard and Premium involve five key activities (consider this your TL;DR):
Let’s break these down further. (Note: for the purposes of this post, we are focusing only on the differences in machine learning and personalization capabilities between Standard and Premium. We will not touch on shared capabilities such as split testing (A/B/n) or multivariate testing (MVT). For a deep-dive discussion of A/B/n vs. MVT, see Multivariate Testing 101.)
Experience Targeting (Target Standard & Premium)
What it does:
Experience targeting (aka rules-based targeting) is the first step of most personalization efforts. When business users think of “personalization,” they’re typically thinking of targeting, unless they’re thinking of Amazon (and then they’re thinking of recommendations!). Experience targeting is simply setting up business rules to indicate what content/experience to deliver to each user, based on the segment to which that user belongs.
Experience targeting can use the Visual Experience Composer (VEC) for design and can utilize Analytics for Target (A4T) for reporting. Content creation and maintenance demand will depend on the number of segments that businesses choose to target with different experiences. Experience targeting does not utilize any machine learning for automation of experience delivery.
Rules-based targeting, like Adobe’s Experience targeting, is typically used when we have information about segments that indicates a preference for a different user experience, OR when we have a reason to show different experiences based on some segment information (such as offers based on location, customer vs. prospect, etc.).
Auto-Allocate (Target Standard & Premium)
What it does:
Auto-Allocate is a machine learning split test (A/B/n) that employs a non-contextual multi-armed bandit. (For a multi-armed bandit deep dive, see here.) Auto-Allocate begins with an equal distribution of all traffic (regardless of context) among all trial groups, but, as particular variations begin to perform better, visitors are automatically shifted to the higher performing variations, regardless of visitor traits or segments.
How does Auto-Allocate affect optimization?
By its nature, a split test has winners and losers. Normally, you use statistical analysis before the test to calculate how much traffic you need, and you leave the test live (even the losing variations) until completion, with the same ratio of traffic going into all variations (again, even the losing variations) throughout the life of the test. In practice, you are knowingly delivering a subpar experience to a set percentage of customers for the duration of the test.
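The pre-test traffic calculation just described can be sketched with a standard two-proportion power formula. This is a generic statistical illustration, not Adobe’s exact calculator; the function name and defaults are mine:

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variation(base_rate, relative_lift, alpha=0.05, power=0.8):
    """Approximate visitors needed per variation to detect a relative lift
    over a baseline conversion rate (two-sided two-proportion z-test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # e.g. 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # e.g. 0.84 for 80% power
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p1 - p2) ** 2
    return ceil(n)

# Detecting a 10% relative lift on a 4% baseline takes tens of thousands
# of visitors per variation -- which is why fixed-horizon tests run so long.
n = sample_size_per_variation(0.04, 0.10)
```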
However, with Auto-Allocate, up to 80% of visitors will be shifted automatically from subpar experiences to the winning variation(s). The remaining 20% of visitors will be assigned (evenly and randomly) across all variations, in order to keep checking for potential shifts in preferences. Auto-Allocate doesn’t eliminate the impact of lower-performing variations, but it reduces it, while exploiting the impact of higher-performing variations.
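That 80/20 exploit/explore split behaves much like an epsilon-greedy multi-armed bandit. A minimal sketch follows; the function names and the simple leader-takes-all rule are illustrative assumptions, and Adobe’s actual allocation algorithm is more sophisticated:

```python
import random

def choose_variation(stats, epsilon=0.20):
    """Epsilon-greedy allocation: a fixed slice of traffic (default 20%)
    explores all variations at random; the rest exploits the current leader.
    `stats` maps variation name -> [conversions, visitors]."""
    if random.random() < epsilon:
        return random.choice(list(stats))   # keep checking for preference shifts
    def rate(v):
        conv, vis = stats[v]
        return conv / vis if vis else 0.0
    return max(stats, key=rate)             # shift traffic to the leader

def record(stats, variation, converted):
    """Update observed conversions/visitors after each visit."""
    stats[variation][1] += 1
    stats[variation][0] += int(converted)

# Example state after 3,000 visits: B is leading and absorbs most traffic.
stats = {"A": [30, 1000], "B": [55, 1000], "C": [20, 1000]}
```

Note how the losing arms keep only the small exploration share, which is exactly why (as discussed below) you rarely accumulate enough samples to rank the losers against each other.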
Auto-Allocate, like other multi-armed bandits, allows you to quickly determine the best variation among a selection and gain benefits along the way. However, it doesn’t allow you to identify low-performing variations, because a multi-armed bandit only differentiates among high performers: it shifts traffic to higher performers to give enough statistical power (samples) to find the best one more quickly. To do that, it reduces the power (samples) of lower performers, making it nearly impossible (statistically speaking) to know which of the lower performers is worst.
Similarly, conducting a deep dive into Auto-Allocate results is more difficult, because the reporting source cannot be Analytics for Target (A4T), as many have come to expect. Since machine learning requires real-time decisioning, Auto-Allocate (as well as Auto-Target and Automated Personalization) uses Target, not Adobe Analytics, as its reporting source. This is a bitter pill for some to swallow and can lead to losses in productivity, as users are repeatedly asked to explain results in light of data not available within the Target interface, or, worse, asked to explain discrepancies between tools.
Lastly, Auto-Allocate does not differentiate by segment or user preference. It will not identify that Chrome users prefer experience A while Safari users prefer B. Instead, it will deliver the best overall variation, the one the majority of users prefer, similar to an A/B/n experiment.
Use when you’re trying to find the best overall variation as quickly as possible, since Auto-Allocate exploits gains and minimizes risk to find the best default, faster. For example, many programs hesitate to test during peak holiday season. Using Auto-Allocate, you can quickly test offers against each other and let the tool shift visitors to the highest performing variations, so you get the benefit while the test is running, with minimal risk.
Auto-Target (Target Premium)

What it does:
Auto-Target is a one-to-few algorithm. Like Auto-Allocate, Auto-Target delivers an automated form of A/B/n testing. However, unlike Auto-Allocate, Auto-Target is a contextual bandit: it considers a user’s context (e.g., time of day, day of week, gender, or whether the visitor is a gamer) as it decides which variation to serve that user. Auto-Target automatically collects information about visitors to build personalization profiles (segments). It isn’t trying to find the best variation for everyone; it’s trying to find the best variation for each segment.
Auto-Target initially distributes users evenly between variations, while learning which combinations of variables (context) seem to impact variation preference and conversion. It then uses this learning to group users with like variables (those that lead to successful outcomes) into segments, and assigns the higher-performing variation(s) to each segment. In this way, Auto-Target delivers the optimal variation to each segment.
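The mechanics above can be sketched as a toy contextual bandit that tracks performance per (segment, variation) pair. The class, its epsilon parameter, and the simple tabular approach are my own simplifications; Adobe’s models learn segments from raw visitor traits rather than taking them as given:

```python
import random
from collections import defaultdict

class ContextualBandit:
    """Toy contextual bandit: learns the best variation per segment.
    A crude stand-in for Auto-Target's per-segment decisioning."""

    def __init__(self, variations, epsilon=0.20):
        self.variations = variations
        self.epsilon = epsilon
        # (segment, variation) -> [conversions, visitors]
        self.stats = defaultdict(lambda: [0, 0])

    def choose(self, segment):
        """Pick a variation for a visitor in `segment`."""
        if random.random() < self.epsilon:
            return random.choice(self.variations)   # exploration traffic
        def rate(v):
            conv, vis = self.stats[(segment, v)]
            return conv / vis if vis else 0.0
        return max(self.variations, key=rate)       # best for THIS segment

    def record(self, segment, variation, converted):
        """Update stats after observing an outcome."""
        s = self.stats[(segment, variation)]
        s[1] += 1
        s[0] += int(converted)
```

Unlike the non-contextual bandit, this learner can conclude that mobile visitors prefer B while desktop visitors prefer A, and serve each segment accordingly.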
How does Auto-Target affect optimization?
Auto-Target automates rules-based targeting. That is, the machine—rather than the business—is deciding what the rule should be, based on actual user preference. This allows for “more intelligent” targeting or targeting at scale.
Auto-Target will benefit you only when there is a difference in preference of variation between segments. In other words, if everyone or no one likes a variation, you won’t learn anything. You must have multiple variations that different segments prefer in order to get any value from the tool. Auto-Target allows for use of the Visual Experience Composer (VEC) in designing the experiment. However, like Auto-Allocate, it does not allow for the use of A4T as a reporting source. Also like Auto-Allocate, a percentage of the population remains randomly distributed for continual learning, so all the statistical implications that apply to Auto-Allocate affect Auto-Target. Unlike Auto-Allocate, Auto-Target will place an increased demand on your design and development resources for the creation and maintenance of different experiences.
Use Auto-Target if you already have evidence to suggest that different segments may respond differently to the same experience. If you have no reason to suspect a difference in preference, run an A/B/n test first, then do a segmentation analysis to see if there are any differences in preference by segment. Only if you see that difference would you then follow it up with an Experience Targeting (Standard) or Auto-Target (Premium) campaign.
Automated Personalization (Target Premium)

What it does:
Automated Personalization is a step beyond Auto-Target’s one-to-few option, though not quite at the level of Recommendations’ one-to-one option. It works the same way as Auto-Target, in that it is a contextual bandit, but at the MVT level: instead of choosing the best single variation for a particular segment, it chooses the best combination of individual elements (offers) on a page or within an experience for a particular user. This fine-tuning of experience is what sets Automated Personalization apart from Auto-Target.
How does Automated Personalization affect optimization?
Automated Personalization uses the same statistical methodologies as Auto-Target but allows for the creation of new experiences, based on the likelihood (per its propensity models) that specific elements (offers) within those variations will be more or less compelling to each visitor.
Automated Personalization activities historically had to be set up using the Visual Experience Composer (VEC), so designs were limited to those the VEC could support; e.g., Automated Personalization didn’t allow the user to make structural changes or rearrange elements on a page. If you wanted that kind of freedom, you were better off using Auto-Target. However, as of June 2017, Automated Personalization activities can also be created using a form-based editor. Additionally, like the previous two automation capabilities, Automated Personalization does not allow for use of A4T as a reporting source. Lastly, note that Automated Personalization frequently requires more traffic than Auto-Allocate or Auto-Target, simply due to the increased number of variations. As with Auto-Target, content creation and management demands (highest with Automated Personalization) can be a blocker for some organizations.
A frequent use case for Automated Personalization involves high-volume entry pages, including homepages with easily modularized elements where multiple different types of users likely have different reasons for coming to a site. For example, a returning customer would never need to see a hero banner showcasing an offer only relevant for new customers. Similarly, a new user would not appreciate a quick login option as a hero image. Both users might like to see an easy visual nav, while a mobile visitor might prefer a fly-out nav. Of course, with Automated Personalization, we can combine some of these elements into an uber-customized experience for the new visitor on a mobile device (new customer hero offer + fly out nav).
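The homepage example above can be sketched as per-visitor combination selection. The slot names, offer names, and scores are hypothetical, and for simplicity this picks each slot’s best offer independently, whereas Automated Personalization’s models are more sophisticated than this:

```python
def best_combination(scores):
    """Assemble an experience by picking the highest-scoring offer for each
    page slot (a simplification of Automated Personalization's decisioning).
    `scores` maps slot -> {offer: propensity score for this visitor}."""
    return {slot: max(offers, key=offers.get) for slot, offers in scores.items()}

# Hypothetical propensity scores for a new visitor on a mobile device.
visitor_scores = {
    "hero": {"new_customer_offer": 0.72, "quick_login": 0.18},
    "nav":  {"visual_nav": 0.40, "flyout_nav": 0.61},
}

experience = best_combination(visitor_scores)
# -> {"hero": "new_customer_offer", "nav": "flyout_nav"}
```

A returning desktop visitor would carry different scores and receive a different assembled combination, which is the one-to-few-to-many fine-tuning the section describes.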
Recommendations (Target Premium)

What it does:
Recommendations uses an algorithm-driven, rules-based approach to deliver recommended content at a near one-to-one level. It offers a variety of algorithms (popularity, last-viewed items, similar products, etc.) that allow you to surface offers, content, or products that users are most likely to find compelling and might not otherwise be exposed to.
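To make the algorithm menu concrete, here is a minimal sketch of the simplest of those options, a popularity-based recommender. The function and data shapes are illustrative, not Adobe’s implementation:

```python
from collections import Counter

def popularity_recommendations(view_events, exclude, k=3):
    """'Most viewed' algorithm: rank items by total view count,
    skipping anything the visitor has already seen (`exclude`)."""
    counts = Counter(item for _, item in view_events)
    ranked = [item for item, _ in counts.most_common() if item not in exclude]
    return ranked[:k]

# (visitor, item) view events across the whole site
events = [("u1", "A"), ("u2", "A"), ("u3", "B"),
          ("u1", "B"), ("u2", "C"), ("u4", "A")]
```

Swapping in "similar products" or "last viewed" changes the ranking logic, but the shape of the problem (rank a catalog, filter, take the top k) stays the same.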
How does Recommendations affect optimization?
The key impact to optimization is related to ensuring that the area of the page where the Recommendations are running is not impacted by other experiments running concurrently. Of course, it’s also important to measure potential interaction effects for customers exposed to both the Recommendations and other activities.
Unlike the three previously described automation capabilities of Adobe Target, Recommendations does allow for A4T integration, giving analysts the ability to conduct deep-dive impact assessments or cohort analyses based on algorithms. The content demand (articles, products, offers, videos, etc.) for producers shouldn’t be much more than what is already required for creating different products or offer elements on the site. However, in some cases, the Recommendations engine could increase demand for previously un-monitored content.

As is true of all personalization activities, Recommendations will only be impactful if the algorithms supporting the recommendations are solid, and if the recommendations themselves are presented in a compelling way in a high-visibility area of the page. Recommendations reporting allows you to evaluate the success of your algorithms and test different options for both the algorithms and the creative. For development, you can use the form-based Experience Composer or, if you prefer, the VEC.
On the retail side, we’re familiar with a number of great use cases, thanks to the ubiquity of Amazon. But there are also plenty of non-retail use cases. Netflix expertly suggests what movie or TV show we might want to watch, based on what we’ve watched in the past, and media sites do a good job determining what article to show next. Financial services can offer the right credit card, loan, or savings account, and non-profit organizations can suggest a giving amount or cadence most likely to resonate with specific users.
The choice between Target Standard and Premium truly comes down to your organization’s unique situation. If you have the use cases for Adobe’s Premium automation options, AND the traffic to support them, AND the resources to create and manage the increased content demand, AND you’re okay with letting the reporting live within Target instead of Analytics, then you should at least consider a trial with Premium. If any ONE of these requirements does not fit your situation, you might want to focus on Experience Targeting and Auto-Allocate in Target Standard.
One further consideration that did not fit neatly under any one of the personalization options described above: privacy and risk concerns. Specifically, with GDPR and other privacy laws in the works, many companies need to be able to customize which variables/traits a machine learning model uses to learn and make decisions. For example, in financial services, there may be behaviors or known traits that are illegal to use in building a model. Or, in the EU, companies might want to be able to quickly point to the exact formula that led to customer A receiving offer X. If either of these examples may be an issue for your organization, keep in mind that Adobe Target does not currently support the selection (or de-selection) of variables for your model within its GUI. Nor does it currently have a transparent model for easy communication of all selection decisions, which is a requirement for many in high-risk or compliance-heavy industries (or with customers in the EU). (Note: recent updates allow for identification of traits that influence decisions, a step in the right direction!) I’m confident both are in the works and will update this post when they go live!
- CORRECTION: an earlier version of this post incorrectly stated that Automated Personalization did not use the Visual Experience Composer and did not control for “junk” offer combinations.
- CORRECTION: an earlier version of this post incorrectly stated that Adobe Target did not support the selection (or de-selection) of variables for your model. The post has been updated to be more accurate by adding “within their GUI” as client care is able to remove variables from a model with a documented request that includes compliance concerns.
- Automated Personalization has a form-based composer now, as of 22 June 2017. Release notes are here.
- Recommendations can now use the “Custom Code” option in the VEC. With the new capability released in February, Recommendations can be embedded inside A/B and XT activities and so has full access to that composer as well. Read more here.
- Other key differences worth considering: Premium unlocks the Enterprise Permissions capabilities with Properties and Workspaces, a valuable addition for any brand that requires completely discrete working environments for different teams, countries, technology channels, brands, lines of business, or business units. Premium also allows for all of the IoT+ use cases beyond Web, App, and Email.
* Disclaimer: There are other differences between Target Standard & Premium beyond the scope of this blog, but in this post I bring to light the differences that continue to come up in my conversations with clients and the Test & Learn Community (TLC).