A Cranium-Expanding Machine Learning Discussion with Matt Gershoff

Learn more about our exciting new partnership with Conductrics!

Recently, the Test and Learn Community met with Conductrics co-founder Matt Gershoff to slice through a lot of hype and talk about the different components involved in machine learning.

Since this conversation has been one of our most-viewed advanced videos, we decided to summarize it here to give the testing community fast, lasting access to a conversation at the edge of tremendous industry change.

Machine learning made simple

To kick off the meeting, Matt addresses the media hype about machine learning head-on. He makes it simple. Machine learning is the overlap between computer science (the study of the programs that people write) and statistics (the discipline by which we make inferences from data).

What does machine learning do? It constructs computer systems that can automatically improve through experience. The output of a machine learning program is a program that executes based on data and observation, as opposed to one a person writes out by hand. So, at its simplest, machine learning is this process of generating and automating a program that’s executing on a task: it’s solving a problem that we want to solve through automation.

The Machine Learning Sweet Spot
Image courtesy of Matt Gershoff

Three types of machine learning

Machine learning doesn’t have to be complex. In fact, we want to use the most elegant solution possible to execute on our task. There are three basic approaches to machine learning, and people have already been using most of them.

ONE:
Unsupervised learning occurs when we examine raw data to infer some additional structure. It’s a definitional problem: we’re looking at the data to try to find the structure, and we’re not being told what to look for. An example of unsupervised learning is cluster analysis, or, using Matt’s illustration, attempting to “find the galaxies from the stars,” where each star is an individual data point (say, a customer) and the galaxies are meaningful relational connections (customers are more like one group than another).
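As a minimal sketch of cluster analysis, here is a tiny k-means routine that groups customer “stars” into “galaxies.” The data and feature names (visits, spend) are hypothetical illustrations, not anything from the discussion:

```python
# Minimal k-means sketch: find clusters ("galaxies") in unlabelled points ("stars").
import random

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # start from k random points
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        # Move each centroid to the mean of its assigned points.
        for i, members in enumerate(clusters):
            if members:
                centroids[i] = tuple(sum(dim) / len(members) for dim in zip(*members))
    return centroids, clusters

# Two obvious "galaxies": low spenders and high spenders, as (visits, spend).
customers = [(1, 10), (2, 12), (1, 11), (9, 95), (10, 100), (8, 90)]
centroids, clusters = kmeans(customers, k=2)
```

Note that nothing told the algorithm what “low spender” means; the structure emerges from the data alone.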

Cluster Analysis is like stars

TWO:
Supervised learning
occurs when we already have a true result for some set of data. We can think of the “supervisor” in this type of learning as a teacher who already knows the answer, and we’re the student trying to learn. We, the students, make a guess, and the teacher tells us if we’re right or wrong and how far off we are. Supervised learning can solve a classification problem (when we’re trying to assign a class or type), a regression problem (when we’re trying to find a score or a predicted numeric value), or a predictive targeting problem (when we’re trying to learn which experience to give to a particular user).
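A 1-nearest-neighbour classifier is perhaps the simplest sketch of supervised learning: the labelled examples play the role of the “teacher” who already knows the answers. The features and labels below are hypothetical:

```python
# Minimal supervised-learning sketch: 1-nearest-neighbour classification.
def predict(labelled, x):
    """Return the label of the labelled example closest to x."""
    nearest = min(labelled,
                  key=lambda ex: sum((a - b) ** 2 for a, b in zip(ex[0], x)))
    return nearest[1]

# (visits, spend) -> did the customer buy again? (the "teacher's" answers)
training = [((1, 10), "no"), ((2, 12), "no"), ((9, 95), "yes"), ((10, 100), "yes")]

print(predict(training, (8, 80)))  # a heavy spender resembles the "yes" group
```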

THREE:
Reinforcement learning
tries to learn a policy. Until recently, this was the “neglected child” of machine learning. We use reinforcement learning when we don’t have data upfront, so we must interact in order to learn. There’s no instruction, only reward, as in operant and classical conditioning, which teach a desired behavior. This type of learning encompasses problems like the multi-armed bandit: an automated approach to learning (as quickly as possible) which action to take based on the conversion rate for a particular action. The higher the conversion rate for a particular action, the more the “reinforced” action is delivered.

Machine learning isn’t magic, it’s a design decision

Marketers should choose approaches to machine learning that suit their needs, and sometimes that means trading accuracy for interpretability. Here’s why: marketing problems involve complexities that call for interpretable methods that can scale.

Since marketing is a complex, open system, we don’t have a deterministic model for how to do it the “best.” For one thing, external factors are constantly altering the payoff of our efforts. Given the shifting nature of reality, we don’t always know what context and targeting features make a difference when we’re making our marketing decisions. In a sense, this open system ensures that we’re always defining the marketing “game.” What’s more, as we’re defining the game, we’re also constantly attempting to optimize it. But we’re never really sure if our efforts make a difference. This complexity makes it unlikely that complex models (like deep learning) are going to be very effective for all marketing.

Further, complex models are difficult to interpret. But marketers need systems that are easy to interpret, because we need to create buy-in among stakeholders in order to generate trust, share insights, and comply with privacy and regulatory standards. The models we use need to be explainable and, in the best cases, simple: for a given input, you can expect this output. If you can understand an approach, then you can scale it, you can debug it, you can interpret it.

Interpretable methods

We can use a couple of interpretable methods within machine learning.

  1. Linear regression: With this mathematical function, we can interpret the relationship between our input data (for example, user data, say, the amount Tim purchased in the past) and the outcome we want to predict (say, the probability that Tim will buy something). The weight assigned to each input determines how much it alters our predicted value.
  2. Decision tree: A simple, compact approach that anyone (even accounting!) can understand, used to deliver an experience based on if/then rules. A decision tree keeps data in compliance and makes every decision identifiable by rule (therefore loggable and reportable). In addition to being loggable, decision trees are human-readable: we can readily gather insights from them, and we can trust them, because they’re compact and they logically map data to experience.
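Both methods above can be sketched in a few lines. The weights, rules, and feature names here are hypothetical illustrations; the point is that every prediction and decision is readable:

```python
# Sketch of the two interpretable methods: a linear model and an if/then tree.

def linear_score(past_purchases, visits):
    """Linear regression: each weight says how much that input moves the prediction."""
    intercept, w_purchases, w_visits = 0.05, 0.02, 0.01  # hypothetical fitted weights
    return intercept + w_purchases * past_purchases + w_visits * visits

def choose_experience(is_returning, cart_value):
    """Decision tree written as explicit if/then rules: every decision is loggable."""
    if is_returning:
        if cart_value > 100:
            return "loyalty-discount"
        return "cross-sell"
    return "welcome-offer"
```

For a given input you can expect a given output: a returning customer with a $150 cart always maps to the same rule, so the decision can be logged, reported, and explained to stakeholders.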

How do you use A/B testing to tell how good your machine learning model is?

Our metrics have no intrinsic meaning to machines, which only identify relationships that are conditioned on the data that is given to them. For this reason, machine learning is not a substitute for human marketing efforts–it needs to be a complement to our judgement. It isn’t a panacea, and it doesn’t even necessarily make our processes easier. Perhaps, though, machine learning does make our processes more effective. And we can and should test for that.

Machine learning is about prediction and estimation. It’s building a policy based on our hypothesis that our program is going to do better than random guessing or better than us writing the logic for ourselves. So the efficacy of a program (how much return we achieve), for either the machine learning or the person-written logic, should be tested and confirmed.

What you want to do is a split test, the same as you would for business rules. You can add an extra arm (for example, “all of the users selected under a targeting rule like machine learning”) to an A/B test. In this way, you can test for answers such as how much extra return do I get from using predictive targeting over just randomly selecting? or how much extra return do I get from using predictive targeting than if we just picked the single best option?
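As a hypothetical sketch, computing the extra return from each arm of such a test might look like this (the conversion counts are invented; in practice they come from randomly assigning users to arms):

```python
# Sketch: compare a machine-learned policy against baseline arms in a split test.
def conversion_rate(conversions, visitors):
    return conversions / visitors

# (conversions, visitors) per arm -- hypothetical split-test results.
arms = {
    "random":       (180, 4000),  # randomly selected experience
    "single-best":  (210, 4000),  # everyone gets the single best option
    "ml-targeting": (250, 4000),  # the predictive-targeting policy
}
rates = {name: conversion_rate(c, n) for name, (c, n) in arms.items()}

lift_vs_random = rates["ml-targeting"] - rates["random"]       # extra return vs random
lift_vs_best = rates["ml-targeting"] - rates["single-best"]    # extra return vs best single option
```

Those two lift numbers answer exactly the two questions above; a real analysis would also check whether the differences are statistically significant before acting on them.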

You want to make sure the extra return is worth the hassle (i.e., worth the complexity/cost involved in automating the task vs. the time and resources involved in manually writing the program). You can size the opportunity and determine the payoff you want to get, rather than just jumping into machine learning. The value of the analyst lies in doing this analysis to determine how to prioritize the work.

A/B Test the Machine Learning
Image courtesy of Matt Gershoff

How do you know when you’re ready to use machine learning?

Whether your business is ready for machine learning depends on the problem your organization is attempting to solve and on the level of sophistication within the organization. Are you an organization that is trying to improve its process long term? Are you aware of the needs and benefits of machine learning, or are you, essentially, objectifying the resource and virtue signalling your sophistication by leaping onto the machine learning bandwagon?

Jumping in without giving thought to the particular processes you’re attempting to improve is simply asking for disaster. Payoff does not happen right away, and figuring out how to scale an integration may depend on milestones specific to your organization. In other words, payoff will come over time as capacity builds and as an organization, culturally, learns how to think in terms of how machine learning can help improve processes.


Join the Conversation

Check out Kelly Wortham’s optimization-focused YouTube channel: Test & Learn Community.
