Mastering incrementality in ads: an introductory guide

Nov 8, 2024

This article was written by Anton Bugaev, with contributions from Juan Carlos Medina Serrano and Garret O’Connell. It was supported by the work of Bolt Marketing Technology and Performance Marketing Teams: Polina Shevchenko, Avishek Raha, Carlos Eduardo Trujillo Agostini, Nauris Bruvelis, and Anastassia Tsarenko. We also extend our gratitude to our partners at Google, Justas Ložinskas and Ritvars Reimanis.

Accurately measuring the impact of ads on business results is a complex challenge. Traditional attribution methods tend to measure user interactions with ads rather than their actual influence on behaviour. Moreover, these approaches are becoming less reliable due to evolving privacy regulations and tracking limitations. At Bolt, after extensive experimentation with ads, we’ve developed methods to reliably assess their real impact. In this article, we’ll explore these approaches and their advantages and potential pitfalls.

Conversion lift studies (randomised controlled trials)

The gold standard in testing (almost anything) is the Randomised Controlled Trial (RCT), often referred to as a "Conversion Lift" study in advertising. In an RCT, you divide your audience into a treatment group that receives the ads and a control group that doesn't. Randomly assigning users to these groups helps eliminate external factors, ensuring that any difference in results can be attributed to the ads themselves.
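To make the mechanics concrete, here's a minimal sketch of how lift is typically read out of an RCT once the groups have converted. The function name and the input numbers are our own illustrative assumptions, not part of any ad platform's API; platforms run far more sophisticated analyses, but the core comparison looks like this:

```python
import math

def conversion_lift(treat_conv, treat_n, ctrl_conv, ctrl_n):
    """Estimate incremental conversions and relative lift from an RCT."""
    p_t = treat_conv / treat_n  # conversion rate with ads
    p_c = ctrl_conv / ctrl_n    # conversion rate without ads
    # Scale the control rate up to the treatment group's size:
    # this is what the treatment group would have done without ads.
    expected = p_c * treat_n
    incremental = treat_conv - expected
    lift = (p_t - p_c) / p_c
    # Quick two-proportion z-test as a significance check.
    p_pool = (treat_conv + ctrl_conv) / (treat_n + ctrl_n)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / treat_n + 1 / ctrl_n))
    z = (p_t - p_c) / se
    return incremental, lift, z

# Hypothetical numbers: 100k users per group.
inc, lift, z = conversion_lift(treat_conv=1200, treat_n=100_000,
                               ctrl_conv=1000, ctrl_n=100_000)
```

Because assignment is random, the control group's rate is a valid stand-in for the treatment group's counterfactual, which is exactly what simple attribution data cannot give you.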

To conduct an RCT, you’ll typically need to partner with ad platforms like Google, Meta, or TikTok—especially when your goal is new user acquisition. In these cases, since your target audience isn’t yet onboarded to your product, you don’t have direct control over who sees the ads or how to randomise the exposure. These platforms offer Conversion Lift Studies, which enable random audience assignment and make setup easier by handling the technical aspects. However, relying on their systems means you depend on their infrastructure and updates, limiting your control over the experiment.

Here, we take a moment to express gratitude to our partners at Google, Meta, and TikTok for their support in our day-to-day work, including organising conversion lift studies.

RCTs still rely on tracking capabilities. While Conversion Lift studies provide a clearer picture of ad impact compared to simple attribution data, they still depend on device-level tracking. Ad platforms need to match the users who saw the ads with those who converted to your app, which requires user-level data.

This is why experiments on iOS have become less feasible since the introduction of Apple’s App Tracking Transparency (ATT) framework, which limits access to such data. As privacy regulations tighten, we can expect even more constraints on running these experiments.

Only digital ads allow for randomisation. If you’re running offline ads, such as out-of-home billboards or TV commercials, you can’t control which users are exposed to the ads and which are not.

Watch for Spillover Effects! Even if you only target the treatment group in your experiment, the control group can still be indirectly influenced. For example, if your offer is particularly compelling, people might talk about it, spreading awareness from the treatment group to the control group. A more specific case for Bolt: if your treatment group generates a surge in orders, you’ll likely have more couriers displaying your brand’s logo on the streets. This increased visibility could remind users in the control group about your service, unintentionally influencing their behaviour.

While both scenarios are great for business, they can skew your experiment. You might end up with a false negative—concluding that the ads didn’t drive the expected results when, in fact, they did.

A particularly tricky situation arises when you run a large-scale awareness campaign designed to generate buzz throughout an entire city but measure its effectiveness using an A/B test. In such cases, the control group could be exposed to your brand indirectly, making it hard to gauge the true impact of your ads.

Brand Lift Studies are a variation of Conversion Lift studies designed to evaluate upper-funnel campaigns. Instead of tracking conversions, these studies survey users to gauge their awareness or perception of your brand. If your goal is to measure brand awareness or consideration rather than sales or sign-ups, this approach could be more suitable. However, you’ll still need to account for potential spillover effects, as indirect exposure to your brand can influence the control group and affect your results.

Quasi-experiments: the universal but complex alternative

A quasi-experiment can be simplified as: "Let's run the ads and see what happens." Instead of splitting users into treatment and control groups, we make a prediction, known as a counterfactual, that estimates what would have happened if the ads hadn't run. By comparing the actual performance of your target metric to this counterfactual scenario, the difference between the two represents the incremental impact of the ads.
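The comparison itself is simple once you have a counterfactual. Here's a deliberately naive sketch that projects the pre-campaign daily average forward as the baseline; the numbers are invented, and in practice the counterfactual comes from a proper time-series or control-market model rather than a flat average:

```python
def incremental_impact(pre_period, campaign_period):
    """Naive counterfactual: project the pre-campaign daily average forward."""
    baseline_per_day = sum(pre_period) / len(pre_period)
    counterfactual = baseline_per_day * len(campaign_period)
    actual = sum(campaign_period)
    # Incremental impact = actual performance minus the counterfactual.
    return actual - counterfactual

pre = [100, 95, 105, 98, 102]      # daily sign-ups before the campaign (hypothetical)
during = [120, 118, 125, 122, 130] # daily sign-ups while the ads run (hypothetical)
extra_signups = incremental_impact(pre, during)
```

The hard part, as the rest of this section argues, is not this subtraction but producing a counterfactual you can trust: seasonality, trends, and concurrent marketing activity all have to be modelled, or the difference you measure isn't attributable to the ads.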

Quasi-experiments have fewer technical dependencies. Typically, all you need is access to your target metric data, such as sign-ups, sales, or revenue — data that's usually already available in your company's backend systems. This makes quasi-experiments more flexible, as they don't require the extensive infrastructure or tracking dependencies other methods do.

Quasi-experiments can be used to measure a wide range of advertising types. Unlike Conversion Lift tests, which are limited to digital ads, quasi-experiments can evaluate not only digital campaigns but also TV and out-of-home banner ads. Additionally, they extend beyond advertising to assess the impact of various interventions, such as pricing strategies, discounts, and other marketing activities.

However, this universality comes at a cost. Quasi-experiments are significantly more complex to organise, requiring extensive preparation and careful consideration of underlying assumptions. Unlike Randomised Controlled Trials, which can be conducted at any time with confidence that randomisation will account for external effects, quasi-experiments place the responsibility on the analyst to construct a reliable and unbiased counterfactual model. Building that model is a whole new story, which we'll take up in the next chapter.

Summary

In this chapter, we explored the importance of incrementality experiments in advertising and examined both Conversion Lift experiments and quasi-experiments as key methods in an advertiser’s toolkit. In the following sections, we’ll delve deeper into these approaches. Stay tuned!

Join us! 

Bolt is a place where you can grow professionally at lightning speed and create a real impact on a global scale.

Take a look at our careers page and browse through hundreds of open roles, each offering an exciting opportunity to contribute to making cities for people, not cars.

If you’re ready to work in an exciting, dynamic, fast-paced industry and are not afraid of a challenge, we’re waiting for you!
