
Published: Nov 17, 2022 · 6 min read

How to Maximize Your Ad Testing Budget with Indirect Incrementality Measurement

Bill Reynolds

Director, Advertising Strategy

Additional Contributors: James Connell, Inna Zeyger

Many marketers are still looking to unlock the full potential of their media mix. While direct incrementality measurement often takes center stage, indirect incrementality measurement offers unique value that brands can take advantage of to gain a fuller picture of their media performance. Read on to learn about direct vs. indirect incrementality, along with other valuable measurement and testing frameworks.

Direct vs. Indirect Incrementality

Direct and indirect incrementality measurements are two methodologies to quantify the impact and success of marketing campaigns. 

Direct measurement seeks to quantify a specific metric—like cost per acquisition or response rate—through a direct consumer survey. 

Indirect incrementality measurement assesses changes in tactics, regions, or audiences to find whether there is added lift in desired outcomes (web visits, conversions, revenue).

Both direct and indirect incrementality measurement yield beneficial analyses—but the difference in costs can be considerable: direct measurement often requires significant traffic and budget, putting it out of reach for mid-sized companies. Alternatively, indirect incrementality measurement offers valuable insights to complement—or even supplant—direct measurement.

Direct Measurement, Lift Studies, and Costs

Measuring direct incrementality with a brand lift study—through a platform like Facebook, Google, or Nielsen—gives marketers the ability to ask ad-exposed and non-exposed audiences specific survey questions to ascertain the effectiveness of branding efforts.

In a lift study, marketers might show ads to one audience and not another on YouTube, then ask people who saw the ads questions like “Do you recall seeing this ad?” or “Are you more/less likely to purchase this brand?”

Similarly, a search lift study can quantify how many more people searched for a brand after viewing an ad. 

While direct measurement can be beneficial for any type of business across any stage of the buyer’s journey, the decision to use this methodology ultimately depends on whether a company has the budget to access enough data for statistical significance. For instance, Google and Meta brand lift studies require a minimum spend level over a 7-, 14-, or 30-day period, while Nielsen and other measurement platforms charge $10,000 or more per study.
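To make the significance question concrete, here is a minimal back-of-envelope sketch, not any platform’s actual methodology; the baseline recall rate and target lift are illustrative assumptions. It estimates how many survey respondents each group (exposed and control) would need before a recall difference is likely to reach significance:

```python
from math import ceil, sqrt
from statistics import NormalDist

def respondents_per_group(baseline_rate, expected_lift, alpha=0.05, power=0.8):
    """Respondents needed per group to detect an absolute lift in, e.g., ad recall."""
    p1 = baseline_rate                  # control (non-exposed) recall rate
    p2 = baseline_rate + expected_lift  # exposed recall rate we hope to observe
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / expected_lift ** 2)

# Illustrative: 20% baseline ad recall, hoping to detect a 5-point lift
print(respondents_per_group(0.20, 0.05))  # ~1,094 respondents per group
```

Because the required sample grows with the square of one over the lift you want to detect, small expected lifts quickly translate into large audience and spend requirements.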

With indirect measurement, marketers might run ads in New York City and not in Chicago, then compare Google Analytics traffic and sales outcomes from the two geographies. While indirect measurements do not show irrefutable causation, they provide insightful correlation to help determine media mix, audience targeting, and creative direction.
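For illustration, here is a minimal sketch of that comparison, using invented traffic figures and hypothetical column names; in practice, the equivalent numbers would come from Google Analytics exports for each market:

```python
# Index each market against its own pre-campaign baseline, then compare the
# test market's change to the control market's change.
import pandas as pd

sessions = pd.DataFrame({
    "geo":    ["New York", "New York", "Chicago", "Chicago"],
    "period": ["pre", "campaign", "pre", "campaign"],
    "web_sessions": [120_000, 150_000, 80_000, 84_000],
    "conversions":  [2_400, 3_300, 1_600, 1_700],
})

pivot = sessions.pivot(index="geo", columns="period",
                       values=["web_sessions", "conversions"])

for metric in ["web_sessions", "conversions"]:
    change = pivot[metric]["campaign"] / pivot[metric]["pre"] - 1
    relative_lift = change["New York"] - change["Chicago"]
    print(f"{metric}: test {change['New York']:+.1%}, "
          f"control {change['Chicago']:+.1%}, "
          f"relative lift {relative_lift:+.1%}")
```

Reading the test against a control market, rather than against the test market’s own history alone, helps separate the campaign’s effect from seasonality and other background trends.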

Indirect Measurement via Inclusion/Exclusion Testing

Inclusion/exclusion testing can help answer numerous marketing questions.

Using an inclusion/exclusion test, marketers establish baseline traffic and conversion metrics, then make a change to specific audiences or geographic regions and compare the effects on leads, traffic, and conversions. 

For instance, marketers may wonder, “Should we even bother running branded keyword search ads, given that we already have such a strong organic presence?”

An inclusion/exclusion test can answer this question with the assistance of indirect measurement. In fact, it often shows that branded keyword search should remain active at all times, which in turn informs brand search budget allocation and the supporting role paid brand search plays in overall performance.
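One way such a test might be read out, sketched here with invented figures: compare total branded clicks (paid plus organic) during the exclusion window against the inclusion window to see how much volume organic search recovers on its own, and what the paid clicks that don’t come back actually cost.

```python
# All figures are illustrative: one period with paid brand ads on (inclusion)
# and one with them paused (exclusion), pulled from search and analytics reports.
inclusion = {"paid_clicks": 9_000, "organic_clicks": 22_000, "paid_cost": 4_500.0}
exclusion = {"paid_clicks": 0,     "organic_clicks": 26_500, "paid_cost": 0.0}

total_on  = inclusion["paid_clicks"] + inclusion["organic_clicks"]
total_off = exclusion["paid_clicks"] + exclusion["organic_clicks"]

incremental_clicks = total_on - total_off            # clicks organic did not recover
cannibalized = inclusion["paid_clicks"] - incremental_clicks
cost_per_incremental_click = inclusion["paid_cost"] / incremental_clicks

print(f"Incremental branded clicks from paid: {incremental_clicks}")
print(f"Clicks organic would likely have captured anyway: {cannibalized}")
print(f"Effective cost per incremental click: ${cost_per_incremental_click:.2f}")
```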

Indirect Measurement via Matchback Analysis

Matchback ROI analysis is another form of indirect measurement marketers can ask agency partners to provide. “Matchback” is a well-established term, originating in direct mail marketing—where marketers would cross-reference the list of customers who responded against the list of customers who received mailed advertisements to identify 1:1 conversions and calculate return on spend. 

In today’s digital world, data companies such as Oracle help partner agencies do the same type of analysis with their digital advertising. 

While any brand can use matchback ROI analysis to assess its digital footprint, it’s especially helpful for long-cycle industries, where people won’t likely respond to advertising until they have an explicit need.

Matchback can show that individuals were influenced by advertising, even if they didn’t leave behind a digital footprint by clicking on an ad or making an online purchase.

Vendors such as Oracle can only provide a match file for people identified as prospects in advance. Marketers (or their agency partners) therefore need to assess their customers in order to create predictive modeling data sets, then use those sets to build target audiences of prospects. Marketers share these target audiences with a vendor like Oracle, run their campaigns, and then assess which audience members converted and which received an ad impression.
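Once the vendor returns the match file, the final read-out is essentially a join between converters and ad recipients. Here is a minimal sketch, with hypothetical file names and column names standing in for whatever format the vendor actually delivers:

```python
import pandas as pd

# Hypothetical inputs: the vendor's impression match file and the brand's own orders
impressions = pd.read_csv("audience_impressions.csv",
                          parse_dates=["impression_date"])  # customer_id, impression_date
conversions = pd.read_csv("conversions.csv",
                          parse_dates=["order_date"])       # customer_id, order_date, revenue

# Keep only converters who received an impression before they ordered
matched = conversions.merge(impressions, on="customer_id", how="inner")
matched = matched[matched["order_date"] > matched["impression_date"]]

ad_spend = 50_000.0  # illustrative campaign cost
matched_revenue = matched["revenue"].sum()

print(f"Matched conversions: {matched['customer_id'].nunique()}")
print(f"Matched revenue: ${matched_revenue:,.0f}")
print(f"Matchback ROAS: {matched_revenue / ad_spend:.2f}")
```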


Maximizing a Testing Budget

Companies can cost-effectively extend their capabilities through an agency partnership, using lift studies, inclusion/exclusion testing, and matchback analysis to guide their strategic business decisions. Marketers can leverage partnerships to strengthen the reliability of their results by: 

  • Using only high-quality test data. “Garbage in, garbage out,” as the old saying goes. 
  • Casting a broad net with indirect inclusion/exclusion tests that might identify upper-funnel correlations. As we noted in our chapter about attribution and incrementality testing, a conversion or sale is the culmination of many preceding touchpoints—not necessarily the last ad seen. 
  • Creating a strong testing framework. A poorly designed study can generate misleading results. Incorporating best practices can be a tall order for midsized brands lacking a large staff or deep in-house expertise. Fortunately, agency partnerships can serve both direct and indirect measurement needs.  

The right partnerships can increase a brand’s in-house capabilities, helping to improve the outcomes of:

  • Determining which regions, audiences, or creative campaigns are the most impactful
  • Designing test parameters based on the specific questions you wish to answer
  • Gathering the highest quality data
  • Running custom analysis
  • Translating the results into actionable language
  • Implementing strategies that tangibly move the needle forward, no matter the landscape

As we lose cookie data touchpoints and focus on rising channels like OTT/CTV, having an agency partner with a depth of expertise, diverse skill set, and performance-driving strategy helps businesses adapt nimbly while improving cross-channel measurement. Take advantage of the new opportunities for improving and expanding your measurement within the shifting media mix landscape. This is the fourth in a series of five articles diving deep into measurement today—unpacking what works and what’s next. So, stay tuned for our closing article!

To dive deeper into next-gen ad testing, check out The Future of Measurement: Attribution, Incrementality, and What’s Next in Ad Testing.
