
Are Norms Still Normal?



In today’s “Age of Accelerations”, how do we keep pace with meaningful benchmarks?


Debating the usefulness of norms and benchmarks is one of those not-so-rare pleasures any Insight or research specialist will find themselves indulging in from time to time. It tends to be an area where agencies inevitably argue the corner for norms if they think they have them, and against if they don’t! Of course, those possessing huge databases will also find good arguments for why theirs are better, even if they are completely different from everyone else’s.

The “Dummies’ Guide to Debating Norms” might read something like this:


The arguments pro and contra remain as valid as when the practice of benchmarking first surfaced in the 1980s:

Pro: How do you tell if 30% top box is good or bad? I can’t use the data unless I understand what it means: What’s the context?

Aha! You need a benchmark! Then you’ll know if your idea performs better or worse than the competition and whether it’s likely to succeed or not.

Contra: Hang on a sec… meaningful benchmarks need to come from the same context as the product/service being tested. By the time your database has been filtered down to the exact same market, industry, channel, category, specific target group and so on, how many data points are left? Is that meaningful?

It doesn’t take long for even the biggest global volumetric testing agencies, with databases running into thousands of Purchase Intent and Liking scores, to struggle when asked to test in a particular market, or in a category that is emerging or genuinely new. What is a flavoured yoghurt’s true competitive set? Try asking your agency exactly which tests they are using for benchmarking: you’ll be lucky to get a straight answer.
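To see why the filtering problem bites so quickly, here’s a minimal sketch in Python (pandas). Everything in it is invented for illustration: the field names, markets, categories and the 5,000-test database are hypothetical, not drawn from any real agency’s norms.

```python
# A minimal sketch of the "filtering down" problem. All fields and records
# are randomly generated, hypothetical data - not a real norms database.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=1)
n = 5000  # pretend the agency holds 5,000 past concept tests

db = pd.DataFrame({
    "market":   rng.choice(["UK", "DE", "FR", "US", "BR"], n),
    "category": rng.choice(["yoghurt", "snacks", "beverages", "household"], n),
    "channel":  rng.choice(["grocery", "discount", "online"], n),
    "target":   rng.choice(["families", "millennials", "seniors"], n),
})

# Filter step by step down to the exact context of the product being tested.
subset = db
for field, value in [("market", "UK"), ("category", "yoghurt"),
                     ("channel", "online"), ("target", "millennials")]:
    subset = subset[subset[field] == value]
    print(f"after matching {field} = {value}: {len(subset)} tests left")

# Only a few dozen comparable tests typically survive out of 5,000 -
# arguably too few to call a "norm".
```

Each additional matching criterion divides the usable base again, which is why even a very large database can thin out to almost nothing for a specific test context.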

For benchmarks to be of any use at all, a whole raft of criteria needs to remain constant across all the surveys and measures in a database. Norms also need constant updating to remain relevant, and that poses a challenge: shoppers today don’t think or act like shoppers did twenty, ten or even two years ago. So how up to date are your benchmark tests?

Age of Accelerations:


That last point is perhaps worth examining more closely. Today’s speed of technology-driven change, and the consumer and shopper behavioural change that results, shouldn’t be underestimated. (If you’re in any doubt about the current rate of acceleration and its impact, read Thomas L. Friedman’s book “Thank You for Being Late: An Optimist's Guide to Thriving in the Age of Accelerations”.)

Friedman discusses three key areas where acceleration is posing fundamental, even existential, challenges to human society: technology (Moore’s Law), globalisation, and climate and ecological change. The implications for consumers and shoppers could be profound.

The Impact on Benchmarking


Many common benchmarks are based on prior analysis of what drives purchase intent (System 2 claimed behaviour), but these drivers are changing at a rate of knots, perhaps faster than we can build new databases. Who, five years ago, would have considered that perceived ecological impact (avoiding the use of plastic, for example) might be something we need a benchmark for when assessing packaging?

Another example of the impact Friedman describes is that technological change is driving increasing diversity in shoppers’ Paths to Purchase, coupled with fragmentation in product routes to market. Consumer and shopper behaviour is fragmenting, and shoppers are less and less likely to form a homogeneous group. These days, even treating ‘millennials’ as a single target group is likely to mislead.

When working with benchmarks, it’s vital that a test product be compared within a competitive set that closely represents the real choice set for shoppers, and in a similar context to the one in which the shopper would be choosing it. As we’ve just seen, however, this context is potentially fragmenting and changing too quickly for any norms to keep up. No more normal.

So how can we find a normative context to effectively benchmark against?

Best Foot Forward:


This doesn’t mean benchmarks are totally out, just that we need to create them within any given survey, so that they’re in the moment, with the right target group, and in the correct context(s). It’s more telling if the shopper prefers the new product to their existing one, or to other popular products, whatever its test PI score.

To do this, it becomes even more important to reflect the consumer and shopper’s competitive choice set and decision context in a way that replicates reality as closely as possible. This might mean upfront exploratory work to establish what those choice sets are, using virtual shelves to recreate the likely choice set in store, replicating online shopping platforms, or using other methods at our disposal to get closer to reality. The truth is, every project will likely merit a tailored approach.
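As a rough illustration of what such an in-survey benchmark might look like, here’s a minimal Python sketch. The products, shares and sample below are all invented; a real study would capture each respondent’s pick from a realistic, tailored choice set (a virtual shelf, for instance).

```python
# A minimal sketch of an in-survey benchmark, using invented choice data.
# Each respondent chose one product from a set containing the test product,
# the product they currently buy, and two popular competitors.
choices = (["test product"] * 46 + ["current product"] * 30 +
           ["competitor A"] * 14 + ["competitor B"] * 10)

n = len(choices)
for product in ["test product", "current product",
                "competitor A", "competitor B"]:
    share = choices.count(product) / n
    print(f"{product}: {share:.0%} preference share")

# The test product's share is read against comparators gathered in the same
# survey and the same context - no external norms database required.
```

Because the comparators sit inside the same survey, the result already carries its own context: a 46% share here means “chosen over the shopper’s current product and two popular rivals”, not just a number waiting for an external norm to explain it.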

Behavioural Science approaches, such as MindTrace, can add another layer of insight by reducing the reliance on interpreting claimed behaviour (such as PI) and instead understanding System 1 emotional responses. MindTrace is flexible in that it can be integrated into any competitive environment and context (e.g. virtual shelves) using the very technologies mentioned earlier!

Conclusion: Benchmarks remain useful - but good ones adapt with the times and are harder to find.


If you’re interested in talking to us about how we benchmark or to find out more about MindTrace and other behavioural science approaches please do get in touch on:


Sarah Banks

Follow us on Twitter and LinkedIn.
