With more people consuming content on social media than ever before, it’s never been wiser for brands to work with influencers. Once you’ve taken the time to track down the perfect influencers for your industry, you have the potential to grow your target audience by thousands (or even millions).
But, like most worthwhile marketing ventures, hiring influencers can take a sizable chunk of the budget. Between navigating influencer networks, negotiating fair rates for photo and video, choosing your platform, and testing into new audiences, learning the ins and outs can be tricky.
Luckily, we’ve got the industry know-how to share some insights on how much influencers charge.
Here is everything you need to know about working with an influencer in 2021.
When marketers talk about measurement, they’re really talking about causation. Why, you may ask? Because at the root of it, marketers want to understand causal links between their advertising and their KPIs. They want to know whether the ads they run on any given channel actually cause people to take action and buy their product.
Causation is important because it helps us understand where to best spend our budget. It helps demonstrate value, and gives us the information we need to make decisions, like whether to run a channel or not. Today we take a deep dive into the world of causation, understanding its dynamics within the user journey and how it impacts attribution.
You may have heard the common phrase: correlation doesn’t imply causation. In the context of marketing, this reminds us not to treat correlations in our data as if they are causations. And yet, many of us do just that.
If you’re running any digital ad, you’re inevitably reliant on some form of rule-based attribution: last touch, first touch, and so on. All of these methods try to derive causation from correlation.
Rule-based attribution goes like this: If a user sees an ad on Channel X, and then goes on to convert, that’s sufficient justification for the ad manager to decide that the ad on Channel X caused the conversion.
This default method for attribution pervades every platform we’ve all worked on. It’s hard to imagine marketing without it. And yet, for all its popularity, it’s an idea that’s deeply flawed.
The Problem With Rule-Based Attribution
To see where rule-based attribution falls short, let’s consider an example.
In a game of soccer, one team scores a goal. Prior to the goal being scored, five different players from that team touch the ball.
So, which player caused the goal to happen? In marketing terms, the answer depends on the attribution model:
- A last-touch approach says only the last player to touch the ball caused the goal.
- A first-touch approach says only the first player to touch the ball caused the goal.
- A linear approach says that all players played an exactly equal part in causing the goal.
While some approaches may work in specific circumstances, none truly offer a full understanding of which player caused the goal.
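The three rule-based models above can be sketched in a few lines of code. This is a minimal illustration, not any platform’s actual implementation; the touchpoint path and channel names are invented.

```python
# A toy touchpoint path for one converting user, in chronological order.
# Channel names are purely illustrative.
path = ["social", "display", "email", "search", "display"]

def last_touch(path):
    """Give 100% of the credit to the final touchpoint."""
    return {path[-1]: 1.0}

def first_touch(path):
    """Give 100% of the credit to the first touchpoint."""
    return {path[0]: 1.0}

def linear(path):
    """Split credit equally across every touchpoint."""
    credit = {}
    share = 1.0 / len(path)
    for channel in path:
        credit[channel] = credit.get(channel, 0.0) + share
    return credit

print(last_touch(path))   # {'display': 1.0}
print(first_touch(path))  # {'social': 1.0}
print(linear(path))       # {'social': 0.2, 'display': 0.4, 'email': 0.2, 'search': 0.2}
```

Notice that all three functions only look at the *order* of touches; none of them asks whether any touch actually changed the outcome.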
To determine causality, we have to look deeper than the simple order of events.
How Can We Do Better?
So if the order of events doesn’t explain causation, what does?
Let’s continue our soccer example. Normally, people would say that the player touched the ball (first, last, whichever), so that means the player is responsible for the goal. But that’s the wrong way to think about it. Instead, we should be asking: “If the player didn’t touch the ball – regardless of whether first, last, or otherwise – would the goal have been scored?” That is a more accurate way of understanding attribution: the player caused the goal to be scored if the goal wouldn’t have been scored had that player not touched the ball.
This “counterfactual” idea — the idea that something would not have happened but for that player’s involvement — seems to fit better than a rule-based approach because you are able to specifically weigh the player’s contribution to the goal. The absence of that player means the goal wouldn’t have been scored. This information is more relevant to causation than who touched the ball first or last before the goal.
Transferring This Knowledge To Marketing
So how can we apply this approach to channel measurement for something like display ads?
Instead of asserting that a given display ad caused a user to convert, simply based on the fact that it was the last (or first) channel they interacted with before converting, we consider a different question: Would that buyer convert if they hadn’t interacted with our display ads?
Of course, we can’t rewind time and see what that user would’ve done if they hadn’t seen our display ads. We can, however, compare the buyer’s journey to otherwise identical user journeys that did not include display. If users who didn’t interact with display are just as likely to convert as those who did, then display probably isn’t causing buyers to convert (and vice versa!). This is how ‘data-driven’ attribution methods work.
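The comparison described above reduces to comparing conversion rates between exposed and unexposed groups. Here is a minimal sketch with made-up journey data; real data-driven models compare far more carefully matched journeys.

```python
# Hypothetical journey records: whether the user interacted with display,
# and whether they eventually converted. All numbers are invented.
journeys = [
    {"saw_display": True,  "converted": True},
    {"saw_display": True,  "converted": False},
    {"saw_display": True,  "converted": True},
    {"saw_display": False, "converted": True},
    {"saw_display": False, "converted": False},
    {"saw_display": False, "converted": False},
]

def conversion_rate(journeys, saw_display):
    """Conversion rate within the group that did (or didn't) see display."""
    group = [j for j in journeys if j["saw_display"] == saw_display]
    return sum(j["converted"] for j in group) / len(group)

exposed_rate = conversion_rate(journeys, True)     # 2 of 3 converted
unexposed_rate = conversion_rate(journeys, False)  # 1 of 3 converted

# If these two rates are similar, display probably isn't causing
# conversions; a large gap (absent delivery bias) suggests a causal effect.
print(exposed_rate, unexposed_rate)
```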
The Challenge Of Delivery Bias
This method of measuring causation works well, but it requires an assumption that pre-existing likelihoods don’t impact ad interaction. In other words, users with high pre-existing likelihoods of converting are no more or less likely to interact with display ads than users with low pre-existing likelihoods of converting.
But this presents a bit of a challenge. If we only show our display ads to users who are already likely to convert (perhaps because they are part of a retargeting audience of users who have recently browsed the site), then we have created something called delivery bias.
We can’t use the data-driven approach to determine causation, because our display ads are inherently biased toward users already likely to convert. Because of this, it’s not fair — nor accurate — to compare users who have and haven’t interacted with display ads.
Given how effective ad platforms are at finding users that are predisposed to convert, this is a real issue.
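To see how badly delivery bias can mislead the naive comparison, consider this small simulation. The ad here has, by construction, zero causal effect, yet it is served only to high-intent users (as a retargeting audience might be). All rates are invented for illustration.

```python
import random

random.seed(0)

# Simulate an ad platform that ONLY serves display ads to users who are
# already likely to convert. The ad itself has ZERO causal effect here:
# conversion is driven entirely by pre-existing intent.
users = []
for _ in range(100_000):
    high_intent = random.random() < 0.2        # 20% of users are high intent
    saw_display = high_intent                  # biased delivery: high intent only
    base_rate = 0.10 if high_intent else 0.01  # conversion from intent alone
    converted = random.random() < base_rate
    users.append((saw_display, converted))

def rate(users, saw):
    group = [converted for s, converted in users if s == saw]
    return sum(group) / len(group)

# The naive comparison suggests display "lifts" conversion roughly 10x,
# even though the ad changed nothing at all.
print(rate(users, True), rate(users, False))
```

The exposed group converts at about 10% and the unexposed group at about 1%, so a rule-based or naive data-driven read would credit display with a huge effect it never had.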
Overcoming Bias With Lift Testing
The only way around the problem of delivery bias is to stop channels from showing content exclusively to users predisposed to convert. Since it’s not necessary to remove all targeting to do this, we can simply run what are called lift tests.
A lift test splits users into a control group and an exposed group just before the ad is shown. Users in the control don’t see the ad or any future ads, while users in the exposed group continue to see them. The fact that the assignment happens just before a user sees the ad removes the potential for delivery bias — users in the target audience are no more or less likely to be in the exposed group.
With a lift test, you can measure how much more likely the exposed group was to convert over time. And best of all, this gives us what we’ve been looking for: a way to measure the causal effect of a channel on conversions.
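The mechanics of a lift test can be sketched as follows. The conversion rates and the ad’s true effect are invented for the sake of the example; in a real test, of course, you observe conversions rather than simulating them.

```python
import random

random.seed(42)

# Sketch of a lift test: just before each ad would be served, randomly
# assign the user to control (hold out the ad) or exposed (serve it).
# We pretend the ad's true causal effect is +2 points of conversion.
control_conversions = exposed_conversions = 0
control_n = exposed_n = 0
for _ in range(50_000):
    exposed = random.random() < 0.5           # randomized at serve time
    base = 0.05                               # organic conversion rate
    p = base + (0.02 if exposed else 0.0)     # ad's simulated causal effect
    converted = random.random() < p
    if exposed:
        exposed_n += 1
        exposed_conversions += converted
    else:
        control_n += 1
        control_conversions += converted

exposed_rate = exposed_conversions / exposed_n
control_rate = control_conversions / control_n
lift = (exposed_rate - control_rate) / control_rate
print(f"absolute lift: {exposed_rate - control_rate:.3f}, relative lift: {lift:.1%}")
```

Because assignment is random, the measured gap between the two groups recovers the ad’s causal effect (about 2 points here) rather than any pre-existing difference in intent.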
Given its use in academia, measuring everything from vaccine efficacy to economic interventions, it’s no surprise that lift testing is generally considered the gold standard of marketing attribution. Lift tests do take time and can be difficult to run, but ultimately, they offer marketers the best chance of accurately measuring causation.
As grating as the term “content is king” may be, one thing is for sure: It’s the only monarch effectively fostering goodwill and growing in popularity. Currently, 84% of B2C content marketers call their content marketing programs successful, with about 60% expecting to increase their content budgets this year in efforts to drive brand awareness, educate consumers and build credibility.
Good content marketing builds brand identity and consistency; successful content marketing engages the consumer, nurturing loyalty. Building goodwill this way is effective, with 70% of consumers saying that customized content shows that organizations want to forge genuine connections with them. Those connections bring more than just warm fuzzies: a loyal customer is worth 10 times the dollar value of their first purchase.
One of the ways brands can engage directly and personalize touchpoints effectively across platforms is with modular content. A modular approach breaks away from one-at-a-time asset creation, providing a flexible, scalable — and testable — way to stretch content across channels to meet more customers.
How Modular Content Maximizes Creative Assets
Modular content is remixable, like a set of Legos. You can create one large asset from many blocks, or you can pull out sections and individual pieces to reconfigure into entirely different scenarios. Along with being endlessly configurable, it’s also easily tested and refined.
However, just like Legos, your modular content components need a sturdy foundation if you plan to build them out. For example, let’s say you do a 30-second spot with dialogue, but you didn’t think about modularization during the planning stages. When the spot is complete, it will be impossible to slice up into multiple usable pieces for different channels, since each piece depends on the context of the larger narrative.
If you had planned ahead, you could have worked in sections specifically intended to stand on their own. That way, the segments at seconds 9–14 or seconds 22–27 (or any given section) could be plucked out for double-duty elsewhere. And the order of “content units” in such a video doesn’t have to be A–Z. It can be F+B+Y or nearly any variation that makes sense within the context of the whole story.
“Creating modular content is not simply an executional afterthought where we try to capture as much as possible from a production standpoint,” says Michelle Moscone, Head of Creative Operations at WITHIN, “but really the basis of our ideation process. The team really challenges itself creatively to ideate around visual vignettes and other aspects of the shoot to extrapolate as many scenarios as possible from the same set of variables.”
Consider this 30-second spot created by WITHIN for Anheuser-Busch.
Why would you run a marketing campaign that costs more than the profit it generates? As absurd as it might sound, this situation actually represents one that many marketers unknowingly find themselves in.
Let’s say you’re running all of the marketing for a single-product ecommerce store. The shop makes a $20 profit from each sale before taking into account any marketing spend. Now let’s say that someone pitched you an idea for generating more sales by spending $30 of marketing dollars per sale. Chances are you’d scoff at the idea, right?

However, a marketing campaign can appear profitable at an aggregate level — with a cost per sale below $20 — while some of the individual sales it generates cost the brand more than $20 each. That means that even when we’re profiting at an aggregate level, we can still be losing money. To understand why this is the case, we need to understand the idea of marginal metrics.