Over the last decade, the advertising and media research industry has made exceptional strides in understanding how advertising works. Despite these advancements, something big is missing: Our industry has no consensus on how advertising works or how to measure its effectiveness. The plethora of digital metrics documented in this book shows that all too clearly.

Ask the typical ad exec how advertising works and, most likely, she or he will cite a variation on AIDA (Awareness, Interest, Desire, and Action) and the metaphor of the purchase funnel, a model originally coined in 1898 by Elias St. Elmo Lewis of National Cash Register. The model assumes the consumer processes marketing information in a sequential and conscious manner, from Awareness to Action. Today, we know the consumer’s decision process does not have to be sequential, nor does it have to be conscious. Like it or not, AIDA remains the archetypal way the industry views how ads work.

We need an industry standard for how advertising works, one that is both openly verifiable and validated:

– By verifiable, I mean that at the macro level, such as a market category, there is an array of advertising and marketing inputs that can be shown to match a known set of sales and consumer response outputs.

– By validated, I mean that such a macro model must be intuitively understood, by aligning it at the micro level with individual consumer responses seen in studies such as neuroscience trials.

Identifying the connection between macro effects and micro responses is, arguably, marketing’s missing link.

To replace AIDA, or any other industry model of advertising effectiveness, we need both a macro and a micro approach. Without both, the missing link will persist and, consequently, doubt will linger about how advertising really works:

Quantifying macro effects will allow us to understand brand and category shifts. Econometrics can already do this to an extent, but such studies are only as good as the data fed into the models. They can be improved and enhanced by media and retail sales panels that track response at the level of the individual consumer. Yet even panels do not reveal an individual’s motivation for buying a particular brand on a specific occasion.

Probing individual responses via approaches such as large-scale ethnographic studies or neuroscience trials, alongside quantitative macro data, would allow us to understand why the shifts occurred when they did.

Fused media-exposure and retail panels, from organizations such as TRA and Dunnhumby, have highlighted previously unseen insights on price promotions. For example, in the Journal of Advertising Research, September 2012, they showed that simultaneous TV advertising and temporary price reductions drove significantly higher sales than either tactic alone.
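To make the macro idea concrete, here is a minimal sketch of how such a synergy could be tested econometrically, using a regression with an interaction term. All data, variable names (tv_grps, promo, sales), and coefficients are invented for illustration; this is not the TRA or Dunnhumby methodology, merely one plausible shape such an analysis could take.

```python
# A minimal sketch of testing an advertising x promotion interaction on
# invented weekly data. Variable names and numbers are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_weeks = 104

df = pd.DataFrame({
    "tv_grps": rng.uniform(0, 200, n_weeks),  # weekly TV pressure
    "promo": rng.integers(0, 2, n_weeks),     # temporary price cut on/off
})
# Synthetic sales: each tactic helps on its own, and together they help
# more (a positive interaction), plus noise.
df["sales"] = (
    1000
    + 2.0 * df["tv_grps"]
    + 300 * df["promo"]
    + 1.5 * df["tv_grps"] * df["promo"]
    + rng.normal(0, 100, n_weeks)
)

# 'tv_grps * promo' expands to both main effects plus their interaction;
# a significant positive tv_grps:promo coefficient is the "better
# together" signal reported in the JAR study.
model = smf.ols("sales ~ tv_grps * promo", data=df).fit()
print(model.summary().tables[1])
```

The tactic-alone effects appear as the main-effect coefficients, while the interaction coefficient is the statistical fingerprint of the two tactics working harder in combination than either does alone.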

While quantitative insights are undoubtedly powerful, and can be transformational in how we architect the MarComs solution, they do not tell us why that architecture is important. What exactly triggered the consumer’s mind to cause this behavioral change? And, equally important, was the consumer even aware of the change in their behavior?

At the individual motivational level, the industry is making inroads to decrypt this enigma via initiatives such as the ARF’s NeuroStandards Collaboration Project. But the importance of creating a validated and verified replacement for AIDA goes beyond just ‘getting at the truth’. It could fundamentally restructure how we view advertising and media research and the resources we put behind it.

If we truly understand how advertising works, at both the macro and micro levels, we may also appreciate why advertising works. With this insight we could reallocate our advertising and media research resources to best effect, with a far greater degree of certainty.

The USA spends $150 billion annually on advertising. With this level of ad spend, the US has an enviable array of media research resources. For many major media channels, we have two major suppliers: GfK MRI and Experian Simmons for print, Nielsen and comScore for digital, and Scarborough and The Media Audit for local media. Notwithstanding Nielsen’s proposed acquisition of Arbitron, we also have Arbitron and Nielsen in radio, as well as Triton Digital in online radio. In TV ratings we do have a dominant supplier, Nielsen, but even here there are alternatives such as Rentrak.

Paradoxically, if we had a new, evidence-based industry standard for determining how advertising and media work, rather than limiting the choice of suppliers, our new knowledge could expand the range of potential research partners. One could assess a research partner’s capacity to meet and exceed an array of known metrics that determine ad effectiveness for a particular market category. For media companies and media agencies, criteria such as audience reach could be superseded by advertising effectiveness metrics known to be related to reach; for example, some measure of ad awareness or recall. Media research organizations might be judged not only on service, expertise, and so on, but also on their ability to track, and even anticipate, advertising and media effectiveness.

A scenario that places ad effectiveness ahead of audience delivery for measuring media vehicles may not be that far away. For example, in the burgeoning phenomenon of social TV, Nielsen and Twitter have joined forces to launch the Nielsen Twitter TV Ratings (NTTR), slated for release in Q4 2013. This will take TV audience ratings data into an entirely new realm where, for the first time, we will have true social TV ratings measuring the number of people talking about the shows they watch in real time. These ratings will be based on the full, active universe of Twitter users, with each show’s ratings weighted by its demographics as tracked by Nielsen.
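The exact weighting scheme behind NTTR has not been published, so purely as a hypothetical illustration of what "weighted by its demographics" could mean, the sketch below re-weights raw Twitter author counts so that each age cell contributes in proportion to its share of the TV population. All numbers, cell definitions, and names are invented.

```python
# Purely illustrative: re-weight raw Twitter activity around a show so
# each demographic cell's contribution matches its share of the TV
# population, correcting Twitter's skew toward younger users. The real
# NTTR methodology is not public; every figure here is invented.
twitter_authors = {      # unique authors tweeting about the show
    "18-34": 120_000,
    "35-54": 60_000,
    "55+": 20_000,
}
population_share = {     # hypothetical Nielsen-style population shares
    "18-34": 0.30,
    "35-54": 0.35,
    "55+": 0.35,
}

total_authors = sum(twitter_authors.values())
for cell, authors in twitter_authors.items():
    observed_share = authors / total_authors
    # Weight chosen so the cell's weighted share equals its population share.
    weight = population_share[cell] / observed_share
    print(f"{cell}: raw share {observed_share:.0%} -> "
          f"weighted share {population_share[cell]:.0%} (weight {weight:.2f})")
```

The point of the exercise is simply that a heavily tweeting cell gets a weight below 1 and a quiet cell a weight above 1, so the resulting rating reflects the viewing population rather than Twitter's own demographic tilt.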

If Nielsen and Twitter can validate the advertising effectiveness of social TV ratings, it could detonate a revolution in how we value and buy TV airtime.

Whether social TV ratings fail or succeed, they are only the beginning of a sea change. I foresee many more media research services increasingly focusing on ad effectiveness rather than potential media exposure. Ad effectiveness metrics may transcend media audience metrics in a way that would have been unimaginable just ten years ago.

Rappaport, S. D. (2013). The Digital Metrics Field Guide.
