An Honest Layer Cake

Micaela Shaw
3 min read · May 7, 2020

How a (non-ballistic!) Probabilistic Impact Model helped my marketing team talk about cross-channel incrementality without an MTA.

We had a problem. Possibly the worst kind of problem: a data integrity problem.

It was as though every channel we played was a different board game. If we put a dollar into paid search, the dollars we reported coming out were in Monopoly money. If we put a dollar into paid social, the dollars coming out were in Life money. We were locked in walled gardens where currency couldn’t cross lines.

This was coming from a disciplined marketing team. We were committed to having hold-outs, even when it meant paying extra for PSAs. We looked at sales YoY comp by the hour, and had a seasonally adjusted run rate that contemplated holidays and school years. This wasn’t some Mickey Mouse business that took earnings at face value. The origin of those dollars, large and small, mattered.

And as a team, we were on a quest to be able to call our shot and take it.

However, we did not yet have a consistently clean perspective on whether to place a dollar right or left. The reality was that the discounted ROAS from our search program had different assumptions baked in than the discounted ROAS on display programs running through networks.

For months, we settled our tornadoes of doubt with tales of a magic bullet: a multi-touch attribution system. We spent hours researching and doing intro calls. We even had a group joke where, whenever someone mentioned getting an MTA, the group had to take a pretend shot.

But with Google’s announced phase-out of third-party cookies and the heightening of walls around Facebook, the MTA we had finally secured budget for in 2020 went out of business. Soon we were the ones getting retargeted with ads that read “The MTA is Dead.” So there we were, left to figure out the problem ourselves. No enchanted martech tool in realistic sight to solve it for us.

The solution, as it turned out, would be more about making the invisible visible than waving a magic technology wand over data.

My colleague, Andrew, introduced the concept for the model we’d build like this: “We are already making assumptions about performance that impact spend. Let’s just document those assumptions and see how they layer up.”

Once built, the model took three main inputs per month per channel (a rough sketch follows the list):

  • Attributed Sales: We have some perspective on what revenue each channel touches.
  • Incrementality: We have assumptions about how much of that revenue is truly incremental.
  • Untracked Revenue: We have assumptions on how much of that revenue we can’t track or attribute to the channel, whether it’s a direct mail piece being passed along or a TV ad that was on at the gym.
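
To make the list concrete, here is a minimal sketch of one month of inputs for one channel. The field names and numbers are my own illustration, not the exact schema of the model we built.

```python
from dataclasses import dataclass

@dataclass
class ChannelMonth:
    """One month of inputs for one channel. Field names are illustrative."""
    channel: str
    attributed_sales: float      # revenue the channel's own reporting claims
    incrementality: float        # assumed share of that revenue that is truly incremental (0-1)
    untracked_multiplier: float  # assumed lift for revenue we can't trace back to the channel

# A made-up example row
paid_search_may = ChannelMonth(
    channel="paid_search",
    attributed_sales=120_000.0,
    incrementality=0.65,
    untracked_multiplier=0.10,
)
```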

Here’s the punchline: Our confidence in that perspective varied by channel. Enter the probabilistic part of our little model…

Side note: When this first became part of our marketing team’s vocabulary, I couldn’t help but think of ballistic missiles every time I heard it. Anyone?!

For both incrementality and untracked revenue, we put in percent ranges: high, mid, and low. This allowed us to visualize our confidence in the results of each channel, and have a productive discussion not just around output, but also around the testing needed to tighten our confidence intervals by channel.
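
To show how the ranges layer up, here is a rough sketch that turns attributed sales plus (low, mid, high) assumptions for incrementality and untracked revenue into a (low, mid, high) estimate of incremental revenue. The multiply-and-add structure is my assumption about how to combine the layers, not a statement of our exact formula.

```python
def incremental_revenue_range(attributed_sales, incrementality_range, untracked_range):
    """Return (low, mid, high) incremental revenue estimates for one channel-month.

    incrementality_range and untracked_range are (low, mid, high) tuples of
    percentages expressed as decimals. The arithmetic here is an assumption
    used for illustration.
    """
    return tuple(
        attributed_sales * inc * (1 + untracked)
        for inc, untracked in zip(incrementality_range, untracked_range)
    )

low, mid, high = incremental_revenue_range(
    attributed_sales=120_000.0,
    incrementality_range=(0.50, 0.65, 0.80),
    untracked_range=(0.05, 0.10, 0.20),
)
print(f"Incremental revenue: {low:,.0f} / {mid:,.0f} / {high:,.0f}")
```

The spread between low and high was the visual cue for where a tighter hold-out test would buy the most confidence.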

This model was useful at two important levels. At the channel level, it increased our ability to communicate the marginal revenue return of each channel. At the marketing plan level, it helped us have a high-level conversation with our leadership and board about the percentage of sales that marketing was directly touching through performance marketing investment.
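
For that plan-level conversation, the rollup can be as simple as summing the mid estimates across channels and dividing by total sales. The numbers below are invented purely to show the shape of the calculation.

```python
# Roll channel-level mid estimates up to a plan-level share of sales.
# All figures here are invented for illustration.
channel_mid_estimates = {"paid_search": 85_800, "paid_social": 42_000, "display": 18_500}
total_sales = 600_000

marketing_touched_share = sum(channel_mid_estimates.values()) / total_sales
print(f"Performance marketing directly touching ~{marketing_touched_share:.0%} of sales")
```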

Nothing is perfect, but for our team, this was a transparent way of communicating our assumptions and creating visualizations that empowered the entire team to make educated decisions about spend trade-offs.

One honest layer cake of data to help our team market smarter.
