Your Brand Tracking Data May Be a Mess

Thanks to our long history of revamping clients' brand trackers, we're sharing some telltale signs that you're suffering from poor brand tracking data.

At Material, we take over a lot of trackers from other research partners who weren’t meeting their clients’ needs. Clients typically come to us for one or both of two reasons:

  1. Better service, or
  2. More strategic analysis and insights.

Often we discover something else when we’ve had the opportunity to dig into their archived data sets, which brings us to reason number three: higher quality data.

Though not all of these new clients were aware of it at the time, they had been getting bad data. Now might be the time to audit your data integrity.

Do you have a brand tracking data quality problem?

There are a few telltale signs that your brand tracking data quality might be subpar:

  • Results move too little,
  • Results fluctuate too much,
  • Results don’t align with in-market performance,
  • The client’s brand wins on everything,
  • Scores are too flat across attributes, and/or
  • Results make no sense.

Tracking results move too little

You know your market is shifting, yet your tracking results are frustratingly stable. Why are you not picking up on the changes? Your study qualifications may be too broad (screening in past-year shoppers, for instance), meaning recent in-store changes haven’t reached most of your respondents.

Maybe you aspire to have your emerging niche product category adopted widely, so you set yourself up for the future by interviewing everyone rather than just current category users.

That means most people have only vague, general impressions of the category and the brands within it, which yields flat results over time, across brands, and across attributes.

Tracking results fluctuate too much

You know your category is pretty stable, so you don’t expect results to shift much; however, you have the tracker for peace of mind — that is, to ensure you don’t get blindsided by any changing market dynamics. But instead of peace, it brings you stress as results bounce inexplicably from wave to wave.

Why on earth is that happening?

The most common problem is insufficient control of sample composition. Likely culprits are fluctuating demographics or shifts in the types of devices respondents are taking the survey on.

The results you get from different sample sources will tend to differ from each other, usually within some reasonable bounds. If your tracker is not controlling the mix of sample sources from wave to wave (in addition to demographics and device type), then the fluctuations you are seeing may be random noise rather than real market movement.
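
As a rough illustration of what that control looks like in practice, here is a minimal Python sketch that tests whether sample composition has drifted across waves. The column names ('wave', 'sample_source', 'device', 'age_bracket') and the significance threshold are hypothetical, not a real tracker schema:

```python
import pandas as pd
from scipy.stats import chi2_contingency

def check_wave_balance(df: pd.DataFrame, wave_col: str,
                       mix_cols: list[str], alpha: float = 0.05) -> None:
    """Flag wave-over-wave shifts in sample composition.

    For each composition variable (e.g., sample source, device type,
    age bracket), run a chi-square test of independence between that
    variable and the wave identifier. A significant result suggests
    the mix has drifted and results may wobble for non-market reasons.
    """
    for col in mix_cols:
        crosstab = pd.crosstab(df[wave_col], df[col])
        chi2, p_value, _, _ = chi2_contingency(crosstab)
        status = "DRIFT" if p_value < alpha else "stable"
        print(f"{col:>15}: chi2={chi2:.1f}, p={p_value:.3f} -> {status}")

# Hypothetical usage; 'responses' is assumed to hold one row per complete:
# check_wave_balance(responses, "wave", ["sample_source", "device", "age_bracket"])
```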

Tracking results don’t align with in-market performance

This was a common problem at the height of the pandemic. Brand equities for most brands in most categories remained stable, yet sales for some brands skyrocketed as category penetration and frequency surged. In some cases, sales went up while equities declined (try explaining that to management!).

Careful analysis of the data uncovered that the sales increases for those brands lagged those of healthier brands, making it particularly important to examine the competitive weaknesses identified in the tracking.

This was also a time when pre- and post-test/control tracking was crucial to understanding the impact of store redesigns, adjusted service models, or heavy-up ad campaigns.

Sometimes the category itself was moving in ways that could amplify or drown out the impact of your actions, which is why a control path for comparison was essential.

The client’s brand wins on everything

You might wonder why this is listed as a problem. For one, as a category leader you can fall into complacency and lose sight of challenger brands nipping at your heels. For another, a clean sweep is often a measurement artifact.

If you ask people to rate brands they’ve heard of, but don’t require them to be familiar with those brands, then you’ll exaggerate the brand halo that makes you dominate lesser-known alternatives.

The proper place to capture the familiarity benefit is in your brand funnel, not in the perceptions battery.
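
A minimal sketch of that gating logic, assuming a hypothetical table with one row per respondent-brand pair and a 1–5 'familiarity' column (the scale and column names are illustrative):

```python
import pandas as pd

def perception_base(ratings: pd.DataFrame, min_familiarity: int = 4) -> pd.DataFrame:
    """Restrict the perceptions battery to respondents who know the brand.

    Assumes 'heard_of' (0/1) and 'familiarity' (1-5) columns. Mere
    awareness ('heard_of') keeps a respondent in the brand funnel,
    but only sufficiently familiar respondents contribute attribute
    ratings, so lesser-known brands aren't penalized by vague halos.
    """
    return ratings[ratings["familiarity"] >= min_familiarity]
```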

Scores are too flat across attributes

This is related to the last problem. If people are rating brands they don’t know well, they’ll tend to straight-line the attribute battery, giving the same rating to all attributes.

This tends to inflate the scores of category leaders, deflate the scores of smaller brands, and undermine the ability to statistically infer which attributes are most important in driving key business outcomes.

One way to mitigate this effect is to remove these respondents as poor-quality completes, but do it carefully: this solution may also introduce systematic bias against people who are familiar with fewer brands.

Remember, giving the same rating to every attribute of a brand you’ve merely heard of is not necessarily evidence of inattentive responding.
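
One careful way to operationalize this is to flag zero-variance attribute batteries only when the respondent also claims strong familiarity with the brand. A sketch under the same hypothetical schema as above:

```python
import pandas as pd

def flag_straightliners(ratings: pd.DataFrame, attr_cols: list[str],
                        min_familiarity: int = 4) -> pd.Series:
    """Flag likely inattentive straight-liners, carefully.

    A row is one respondent-brand pair; attr_cols hold the attribute
    ratings. Zero variance across attributes is treated as a quality
    flag only when the respondent also claims strong familiarity with
    the brand -- straight-lining an unfamiliar brand can be an honest
    'I only have a vague impression' answer.
    """
    no_variance = ratings[attr_cols].std(axis=1) == 0
    knows_brand = ratings["familiarity"] >= min_familiarity
    return no_variance & knows_brand
```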

Tracking results make no sense

You’re right to be deploying cleaning steps in your tracker. Survey research fraud is less lucrative than advertising fraud, but it’s still commonplace.

Survey bots, click farms, and professional respondents all may be polluting your results if you don’t identify and screen them out. Fraudsters are a particularly large problem in low-incidence studies and high-incentive ones.

You can deploy a variety of tactics to catch fraud, both real-time during field and post-hoc once data collection ends.

While AI methods are improving the accuracy and efficiency of quality control efforts, there’s still no substitute for a careful human review of open-end responses and general response patterns.
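
For a flavor of what those post-hoc checks can look like, here is an illustrative Python sketch with a few common heuristics. The column names ('duration_sec', 'ip', 'open_end') and thresholds are assumptions, and this is a starting point, not a complete QC program:

```python
import pandas as pd

def basic_fraud_flags(df: pd.DataFrame) -> pd.DataFrame:
    """Attach a few common post-hoc fraud flags to a completes file."""
    out = df.copy()
    # Speeders: finished in under a third of the median interview length.
    out["flag_speeder"] = out["duration_sec"] < out["duration_sec"].median() / 3
    # Duplicate IPs can indicate click farms or repeat completes.
    out["flag_dupe_ip"] = out["ip"].duplicated(keep=False)
    # Very short or empty open-ends often accompany bot traffic.
    out["flag_thin_oe"] = out["open_end"].str.len().fillna(0) < 5
    return out
```

Flagged cases should feed a human review rather than an automatic deletion, for the same reason noted above: blunt rules can sweep out legitimate respondents along with the fraudsters.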

Other common solutions to bad brand tracking data

Tracking studies are unique in the level of pressure they place on research execution and data quality. Unlike a one-time study, all waves need to be perfectly consistent on a host of design and execution decisions that could reasonably go either way in a study that isn’t focused on trends.

This is why we keep a study-specific tracker manual that documents every decision (what, when, and why it was made) so we can ensure consistency over time.

You probably already knew that any shift in tracking survey content or processes can undermine trendability. What you may not realize is that doggedly sticking to past processes can also hurt you.

You must introduce mobile surveys

The majority of trackers today were launched before mobile-optimized survey formats were available. For fear of upsetting the trendability of the data, many of these trackers have not upgraded to a mobile-optimized format.

This means that few (or sometimes no) respondents on mobile devices are being included, despite the fact that most surveys are now taken on mobile. So while the measurement has stayed consistent, the population surveyed has shifted, as people partial to mobile are now opting out of the study.

In other words, these trackers are no longer getting a representative view of their markets.

You must control for survey length

Partly due to the shift to mobile survey-taking, but also due to shrinking attention spans and proliferating entertainment options, respondents have become increasingly unwilling to take surveys over 15 minutes in length, and sample providers are loath to ask their panelists to take them.

This is happening in the same moment that long-standing trackers have become bloated through the addition of new questions over time.

Long surveys suffer from lower cooperation rates, lower completion rates, and poorer data quality. Make the hard decisions about what you really must track, then trim the rest.

Data quality is of utmost importance in tracking studies. If you suspect any of the above culprits may be to blame for your bad tracking data, it may be time to rethink your tracking study design — or your tracking partner.
