Are You Down with RCT? The Solution to the 360-degree Fallacy

Marketers and analysts have operated under a set of mistaken assumptions since the very beginning of digital when it comes to the completeness and accuracy of the data they have and what they can do with it. This post considers a complementary, but fundamentally different, means of measuring the impact of marketing investments that addresses the issue: the randomized controlled trial, or "RCT."

The emergence and then steady and rapid growth of “digital” in the world of the consumer—and as a result, the world of the marketer—prompted a belief that, while understandable, was never true:

The digital world would provide visibility into all of the customer's interactions with a brand, and, as a result, would empower brands with a "360-degree view" of their customers and prospects.

This promise was quite the siren song, and countless marketers have blissfully navigated their marketing budgets towards that enchanting tune.


This is entirely understandable, as access to a complete view of a customer’s interactions would yield several amazing benefits:

  • Marketing attribution models could be run that would show the exact contribution of each marketing channel, and of each combination of channels, to the brand's bottom line.
  • This would enable calculating a true and definitive marketing ROI.
  • Predictive models could be built that used the past interactions of other prospects and customers to deliver a personalized experience.

In short, take that, Mr. Wanamaker!


This Promise Was Always a Pipe Dream

Unfortunately, the growth of digital never delivered on this promise, and it never was going to. While the trackability of consumers is certainly enhanced in a digital world, we are not yet living in 1984. Despite martech and media agency hype to the contrary:
  • Marketers cannot track and quantify the impact of a consumer’s lifetime of past experiences with the brand (“I grew up using Colgate, and I’ve always been happy with it.” “My two best friends in high school drove Fords, and they were both always breaking down!”).
  • Tracking non-authenticated users (which is the norm!) across devices (their phones, their work laptops, their home computers…and the replacements of any and all of those when they update their hardware) only happens at the margins (martech claims notwithstanding).
  • Marketers cannot track—at a person level—the exposure to and consumption of billboards, magazine ads, circulars, broadcast TV, radio, or countless other non-digital channels that may influence their behavior (often in concert with digital channels).
  • Even in digital channels, tracking of a user’s exposure (impression) to an ad is often not possible, either due to the nature of the medium (e.g., podcast advertising) or due to the nature of where the ad is placed (e.g., Facebook’s “walled garden” approach to ads).
  • Most cross-session and cross-channel tracking of marketing interactions relies on cookies and cookie gymnastics to persistently link different activities back to the same user, and cookies-for-tracking has always been a pretty brittle strategy: roughly as reliable as giving an 8-year-old a new pair of winter gloves at the onset of winter and expecting both gloves to still be in their possession come spring.

And this 360-degree ideal is growing more distant by the week. Marketers need to consider complementary, but fundamentally different, means of measuring the impact of their investments.

Privacy: You Sorta’ Saw Me, But Now You Don’t / Can’t / Shouldn’t

Understandably, consumers have gotten pretty antsy about all of this tracking that marketers were so excited about. There is some pretty rich irony here:

  • Thanks to high-profile stories in the media and even a mainstream movie, consumers are convinced that every brand knows exactly what they're doing at all times, with incredible precision, which is exactly the capability marketers would love to have.
  • In reality, most brands still invest heavily in digital channels and simply hope that the magic of “programmatic” is getting their messages in front of their intended audiences. And while they pay a premium for the promise of the magic, there’s little evidence it’s paying off.

Most brands are not realizing the benefits of all of this detailed user tracking that digital was supposed to provide, but, rather than recognizing that those expectations were unrealistic, they assume they are simply behind their competition, and they just need to find the right agency and the right martech to let them catch up.

Consumers’ concerns have generated an unstoppable wave that is going to move that goal even farther out of reach. That wave is being powered by two completely different forces:

  • Regulation—GDPR, CCPA, and numerous other regulations are putting stringent constraints on what behavior brands can track, whose behavior they can track, and when, as well as what they can do with the data they've collected through that tracking. Stiff penalties are in the offing for brands that don't take these regulations seriously.
  • Consumer Technology—all of the major browser, operating system, and device manufacturers are racing to prove that they are the most privacy-friendly tech. Cookies are shifting from being unreliable to being ephemeral to being blocked outright. And, as the martech ecosystem scrambles to find workarounds, those same players find ways to shut them down (it’s generally easier to block a new form of tracking than it is to design and enable a new form of tracking).

The future is very clear on this:

Consumers do not want to be tracked, and they have politicians, regulators, and the providers of their means of accessing the internet all in their corner for the fight.

User-level tracking at scale was never comprehensive, and it’s rapidly heading towards being point-in-time only.

User-Level Tracking Isn’t Needed to Answer Some Big Questions

There are multiple underlying, structural reasons as to why we’ve gotten to this point. Tim Hwang’s Subprime Attention Crisis: Advertising and the Time Bomb at the Heart of the Internet is a short (141 pages) and thorough exploration of some of these. The Freakonomics Radio podcast ran a two-part series (episodes 440 and 441) that also got to the roots of the issue, but also pointed to some potential solutions.

At its core, there is a fundamentally backwards approach: the “promise of digital” was “we’ll have all this data…so then we will be able to put it to use to answer all of our biggest questions.” That started with a solution rather than starting with a clearly articulated business problem.

This is the classic, “when all you have is a hammer, all the world looks like a nail.” The hammer, in this case, is wildly-incomplete-and-becoming-even-more-incomplete user-level tracking.

Let’s, instead, start with one of the larger questions that this data was intended to answer:

What is the return I am getting from my marketing investments?

This question can be interpreted in several ways or, as tends to happen, interpreted in all ways:

  • What is the ROI from marketing overall?
  • What is the ROI for each of my marketing channels?
  • How can I tweak and tune specific marketing tactics day in and day out to maximize the return on my marketing spend?

These are actually different questions. But the “promise of digital” was that if we can just answer the last question with precision and accuracy, then the first and second questions will also be readily answerable.

That would be true, but we can’t answer the third question with the precision and accuracy that is envisioned…and promised by martech and Big Media, both of which have incentives that are not aligned with the wants and needs of advertisers. Martech and media agencies are incentivized to convince advertisers to spend more money on advertising, while advertisers are incentivized to identify which tactics are and are not working so that they can invest as efficiently as possible.

As it turns out, the first and second questions can be answered well without user-level tracking at all! Put that hammer back in the toolbox and pull out the screwdriver labeled “randomized controlled trials” (RCT) or, possibly, “experimentation” (it’s the same tool—just labeled differently depending on who you’re speaking to).

The Solution to the 360-Degree Fallacy

What are the right things to measure, and how can we measure the impact of our marketing investments using results-based evidence? Of course, we should ensure we've got clarity on what success looks like (outcome-oriented KPIs), and we need to capture and manage data as efficiently and comprehensively as is warranted.

But, beyond that, we can design robust experiments—using well-established and mature methodologies—to get at causal effects.

Think of marketing attribution as an equation:

Results ($) = Base Results + Marketing Investments ($) + Noise

Results, which would be represented as revenue or profit, are the dependent variable in an equation that is made up of three parts:

Base Results are the results that would have been realized even if there was no marketing whatsoever (for any statisticians following along, this is the intercept, or β0).

Marketing Investments are the money spent on marketing/advertising (again, for any statistical types following along, these are the independent variables, which wind up being a series of βₙxₙ terms).

Noise is the reality of dealing with humans—the formula won’t be perfect, but the magnitude of the noise can be quantified.

There is no user-level tracking involved here. But, if we can figure out what the actual values for this equation are, we can quantify the impact of individual channels and of marketing overall (marketing doesn’t get to take credit for Base Results or for Noise).
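As a rough sketch of what that equation looks like in practice (all numbers, channel counts, and coefficients here are hypothetical, purely for illustration), an ordinary least squares fit on aggregate weekly data can recover the base results and per-channel contributions with no user-level tracking at all:

```python
import numpy as np

# Hypothetical weekly aggregates: spend per channel (columns) and results ($).
# No user-level data involved -- just totals per period.
rng = np.random.default_rng(42)
weeks = 104
spend = rng.uniform(0, 10_000, size=(weeks, 2))  # e.g., two channels

base_results = 50_000                # beta_0: results with no marketing at all
true_betas = np.array([3.0, 1.5])    # true (unknown) return per dollar spent
noise = rng.normal(0, 5_000, size=weeks)
results = base_results + spend @ true_betas + noise

# Fit: Results = beta_0 + beta_1 * x_1 + beta_2 * x_2
X = np.column_stack([np.ones(weeks), spend])
betas, *_ = np.linalg.lstsq(X, results, rcond=None)

print(f"estimated base results: {betas[0]:,.0f}")
print(f"estimated return per $ by channel: {betas[1:].round(2)}")
```

In this toy version the "data" is simulated, but the mechanics are the same with real aggregates: the intercept estimates Base Results, the coefficients estimate each channel's contribution, and the residuals quantify the Noise.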

This is not an impossible task, but it does require designing an experiment (or a series of experiments) and then diligently executing against that plan. In an admittedly oversimplified way, this can be thought of like an A/B test on a website landing page…but using different geographic regions to test “exposed” vs. “not exposed.”
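A minimal sketch of that geo-based design (region names, counts, and dollar figures are all hypothetical): randomly assign regions to "exposed" vs. "holdout," run the campaign only in the exposed group, and compare average results between the two groups:

```python
import random
import statistics

# Hypothetical geographic blocks (e.g., DMAs), randomly split into two groups.
random.seed(7)
regions = [f"region_{i}" for i in range(20)]
random.shuffle(regions)
exposed, holdout = regions[:10], regions[10:]

# Illustrative per-region results ($) after the test period. In a real test,
# these come from sales data; here, the exposed group gets a simulated lift.
results = {r: random.gauss(100_000, 8_000) for r in holdout}
results.update({r: random.gauss(112_000, 8_000) for r in exposed})

lift = (statistics.mean(results[r] for r in exposed)
        - statistics.mean(results[r] for r in holdout))
print(f"estimated incremental results per region: ${lift:,.0f}")
```

Because regions were assigned at random, the difference in means is a causal estimate of the campaign's incremental impact; a real analysis would also put a confidence interval around it.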

Using RCT as an Alternative Solution

While the example above shows a pure “on/off” design, this is not required. A test can be run that simply “heavies up” and “reduces” spend on different channels in different “blocks” (geographies). This is actually preferred, in that it enables getting a “response curve”—how much do various spend levels move the needle on results—rather than a simple on/off assessment. The methods are quite mature and robust! This was a topic that was explored on a recent episode of the Digital Analytics Power Hour podcast.
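A sketch of how that response curve might be estimated (again, every number here is hypothetical): assign different spend levels to different geographic blocks, from "reduced" to "heavied up," then fit a diminishing-returns curve, such as results as a function of log spend:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical design: each geo block gets an assigned spend level, ranging
# from "reduced" to "heavied up," rather than a pure on/off split.
spend_levels = np.repeat([10_000, 25_000, 50_000, 100_000], 5)  # 20 geos

# Simulated results with diminishing returns: results grow with log(spend).
results = 40_000 + 15_000 * np.log(spend_levels) + rng.normal(0, 5_000, 20)

# Fit: Results = a + b * log(spend). The coefficient b traces out how much
# various spend levels "move the needle" -- the response curve.
X = np.column_stack([np.ones_like(spend_levels, dtype=float),
                     np.log(spend_levels)])
(a, b), *_ = np.linalg.lstsq(X, results, rcond=None)

# Marginal return at a given spend level: d(results)/d(spend) = b / spend.
for s in (10_000, 50_000, 100_000):
    print(f"marginal return at ${s:,}: {b / s:.2f} per incremental dollar")
```

The payoff of this design over on/off testing shows up in the last lines: the marginal return shrinks as spend grows, which is exactly the information needed to decide where the next dollar should go.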

Marketers have often shied away from this sort of approach under the mistaken belief that this is less “pure” than an A/B test when, in fact, social scientists have been putting these sorts of designs into practice for decades.

Brands that embrace ongoing experimentation, rather than chasing increasingly noisy and unavailable user-level "complete journeys," will come out ahead.

The obstacle is not technology. It's educating internal stakeholders and then overcoming resistance from the media agencies and martech vendors that don't necessarily want the impact of their efforts to be accurately quantified.

We’d love to continue this conversation.
