Mastering incrementality in ads: an introductory guide
Nov 8, 2024
This article was written by Anton Bugaev, with contributions from Juan Carlos Medina Serrano and Garret O'Connell. It was supported by the work of Bolt Marketing Technology and Performance Marketing Teams: Polina Shevchenko, Avishek Raha, Carlos Eduardo Trujillo Agostini, Nauris Bruvelis, and Anastassia Tsarenko. We also extend our gratitude to our partners at Google, Justas Ložinskas and Ritvars Reimanis.
Accurately measuring the impact of ads on business results is a complex challenge. Traditional attribution methods tend to measure user interactions with ads rather than their actual influence on behaviour. Moreover, these approaches are becoming less reliable due to evolving privacy regulations and tracking limitations. At Bolt, after extensive experimentation with ads, we've developed methods to reliably assess their real impact. In this article, we'll explore these approaches and their advantages and potential pitfalls.
The gold standard in testing (almost anything) is the Randomised Controlled Trial (RCT, often referred to as a "Conversion Lift" study in advertising). In an RCT, you divide your audience into a treatment group that receives the ads and a control group that doesn't. Randomly assigning users to these groups helps eliminate external factors, ensuring that any difference in results can be attributed to the ads themselves.
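The mechanics above can be sketched in a few lines of code. The following is a minimal, purely illustrative simulation (all user counts, conversion rates, and the assumed 1-percentage-point ad effect are hypothetical): users are randomly split 50/50, conversion rates are compared between the groups, and a two-proportion z-test checks that the observed lift is unlikely to be noise.

```python
import math
import random

random.seed(42)

# Hypothetical setup: 10,000 users, randomly split into treatment
# (sees ads) and control (does not).
users = list(range(10_000))
random.shuffle(users)
treatment = set(users[:5_000])

# Simulated outcomes (assumed rates, for illustration only):
# a 5% baseline conversion rate, with ads adding 1 percentage point.
def converted(user_id: int) -> bool:
    base = 0.05
    ad_effect = 0.01 if user_id in treatment else 0.0
    return random.random() < base + ad_effect

conv_t = sum(converted(u) for u in users if u in treatment)
conv_c = sum(converted(u) for u in users if u not in treatment)

rate_t = conv_t / 5_000
rate_c = conv_c / 5_000
lift = rate_t - rate_c  # the conversion lift attributable to the ads

# Two-proportion z-test: is the lift distinguishable from noise?
p_pool = (conv_t + conv_c) / 10_000
se = math.sqrt(p_pool * (1 - p_pool) * (2 / 5_000))
z = lift / se

print(f"treatment={rate_t:.3%}, control={rate_c:.3%}, "
      f"lift={lift:.3%}, z={z:.2f}")
```

Because assignment is random, the control group's conversion rate serves as a direct estimate of what the treatment group would have done without ads; the difference between the two rates is the incremental effect.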
To conduct an RCT, you'll typically need to partner with ad platforms like Google, Meta, or TikTok, especially when your goal is new user acquisition. In these cases, since your target audience isn't yet onboarded to your product, you don't have direct control over who sees the ads or how to randomise the exposure. These platforms offer Conversion Lift Studies, which enable random audience assignment and make setup easier by handling the technical aspects. However, relying on their systems means you depend on their infrastructure and updates, limiting your control over the experiment.
Here, we take a moment to express gratitude to our partners at Google, Meta, and TikTok for their support in our day-to-day work, including organising conversion lift studies.
RCTs still rely on tracking capabilities. While Conversion Lift studies provide a clearer picture of ad impact compared to simple attribution data, they still depend on device-level tracking. Ad platforms need to match the users who saw the ads with those who converted to your app, which requires user-level data.
This is why experiments on iOS have become less feasible since the introduction of Apple's App Tracking Transparency (ATT) framework, which limits access to such data. As privacy regulations tighten, we can expect even more constraints on running these experiments.
Only digital ads allow for randomisation. If youâre running offline ads, such as out-of-home billboards or TV commercials, you canât control which users are exposed to the ads and which are not.
Watch for Spillover Effects! Even if you only target the treatment group in your experiment, the control group can still be indirectly influenced. For example, if your offer is particularly compelling, people might talk about it, spreading awareness from the treatment group to the control group. A more specific case for Bolt: if your treatment group generates a surge in orders, you'll likely have more couriers displaying your brand's logo on the streets. This increased visibility could remind users in the control group about your service, unintentionally influencing their behaviour.
While both scenarios are great for business, they can skew your experiment. You might end up with a false negative: concluding that the ads didn't drive the expected results when, in fact, they did.
A particularly tricky situation arises when you run a large-scale awareness campaign designed to generate buzz throughout an entire city but measure its effectiveness using an A/B test. In such cases, the control group could be exposed to your brand indirectly, making it hard to gauge the true impact of your ads.
Brand Lift Studies are a variation of Conversion Lift studies designed to evaluate upper-funnel campaigns. Instead of tracking conversions, these studies survey users to gauge their awareness or perception of your brand. If your goal is to measure brand awareness or consideration rather than sales or sign-ups, this approach could be more suitable. However, you'll still need to account for potential spillover effects, as indirect exposure to your brand can influence the control group and affect your results.
Quasi-experiments: the universal but complex alternative
A quasi-experiment can be simplified as: "Let's run the ads and see what happens." Instead of splitting users into treatment and control groups, we make a prediction, known as a counterfactual, that estimates what would have happened if the ads hadn't run. By comparing the actual performance of your target metric to this counterfactual scenario, the difference between the two represents the incremental impact of the ads.
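To make the idea concrete, here is a deliberately simple sketch of a counterfactual comparison. All the numbers are hypothetical, and a plain linear trend fitted to pre-campaign data stands in for the far more careful counterfactual models a real analysis would use: we project the trend into the campaign period and treat the gap between actual and projected sign-ups as the incremental effect.

```python
# Hypothetical daily sign-up counts (illustration only).
pre_campaign = [100, 104, 99, 107, 110, 108, 113, 115]  # days 0..7, no ads
campaign = [125, 131, 128, 136]                         # days 8..11, ads live

# Fit a straight line y = a + b * t to the pre-campaign period
# using ordinary least squares.
n = len(pre_campaign)
t_mean = (n - 1) / 2
y_mean = sum(pre_campaign) / n
b = sum((t - t_mean) * (y - y_mean)
        for t, y in enumerate(pre_campaign)) \
    / sum((t - t_mean) ** 2 for t in range(n))
a = y_mean - b * t_mean

# Counterfactual: what the pre-campaign trend predicts for the
# campaign days, had the ads not run.
counterfactual = [a + b * (n + i) for i in range(len(campaign))]

# Incremental impact: actual sign-ups minus the counterfactual.
incremental = sum(actual - predicted
                  for actual, predicted in zip(campaign, counterfactual))

print(f"estimated incremental sign-ups: {incremental:.1f}")
# prints "estimated incremental sign-ups: 40.6"
```

The entire estimate hinges on the counterfactual being right: if the trend model misses seasonality, promotions, or market shifts, the "incremental" number absorbs that error, which is exactly why quasi-experiments demand so much more care than RCTs.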
Quasi-experiments have fewer technical dependencies. Typically, all you need is access to your target metric data, such as sign-ups, sales, or revenue: data that's usually already available in your company's backend systems. This makes quasi-experiments more flexible, as they don't require the extensive infrastructure or tracking dependencies other methods do.
Quasi-experiments can be used to measure a wide range of advertising types. Unlike Conversion Lift tests, which are limited to digital ads, quasi-experiments can evaluate not only digital campaigns but also TV and out-of-home banner ads. Additionally, they extend beyond advertising to assess the impact of various interventions, such as pricing strategies, discounts, and other marketing activities.
However, this universality comes at a cost. Quasi-experiments are significantly more complex to organise, requiring extensive preparation and careful consideration of underlying assumptions. Unlike Randomised Controlled Trials, which can be conducted at any time with confidence that randomisation will account for external effects, quasi-experiments place the responsibility on the analyst to construct a reliable and unbiased counterfactual model. And this is a whole new story to consider in the next chapter.
Summary
In this chapter, we explored the importance of incrementality experiments in advertising and examined both Conversion Lift experiments and quasi-experiments as key methods in an advertiser's toolkit. In the following sections, we'll delve deeper into these approaches. Stay tuned!
Join us!
Bolt is a place where you can grow professionally at lightning speed and create a real impact on a global scale.
Take a look at our careers page and browse through hundreds of open roles, each offering an exciting opportunity to contribute to making cities for people, not cars.
If you're ready to work in an exciting, dynamic, fast-paced industry and are not afraid of a challenge, we're waiting for you!