Your Brand Tracking Data May Be a Mess


This article was written by Hilary DeCamp, Chief Methodologist

 

A tracker is often the largest item in the research budget, so companies have appropriately high expectations about the value such studies must produce for the enterprise.  
    • At worst, trackers are too backward-looking – an ineffective post-mortem 
    • At best, they are a strategic spend – a regular touchpoint with the C-suite about forward-looking strategy 

 

At Material, we take over a lot of trackers for clients that aren’t getting a strong return on their substantial investment in monitoring the health and performance of their brands.  
When clients switch to us, they typically do so to get: 
    1. Better service and partnership 
    2. More strategic analysis that provides clearer direction for improving performance 

 

When we’ve had the opportunity to dig into their archived data sets, we often discover that they also needed higher-quality data. 
Though not all of our new clients were aware of it at the time, many of them had been getting bad data. Now might be the time to audit your data integrity. 

 

Do you have a brand tracking data quality problem? 

Perhaps your tracker has become bloated and inefficient, filled with obsolete questions that no longer provide relevant insight.  If so, survey fatigue may be sacrificing quality answers to your priority questions; strip out the metrics that are no longer actionable. 
There are a few other telltale signs that your brand tracking data quality might be subpar: 
    • Results move too little 
    • Results fluctuate too much 
    • Trends don’t align with in-market performance 
    • One brand wins on everything 
    • Scores are too flat across attributes 
    • Results make no sense 

 

Tracking results move too little 

You know your market is shifting, yet your tracking results are frustratingly stable. Why is the tracker not picking up on the changes? You might have defined your study qualifications too broadly. For example, if you screen in anyone who shopped the category in the past year, recent in-store changes won’t yet have had a chance to impact most respondents. 
Or maybe you aspire to have your emergent niche product category adopted widely, so you wanted to set yourself up for the future by interviewing everyone rather than just current category users. That means most respondents have only vague, general impressions of the category and the brands within it, which yields flat results over time and across brands and attributes. 

 

Tracking results fluctuate too much 

You know your category is pretty stable, so you don’t expect results to shift much; however, you have the tracker for peace of mind — that is, to ensure you don’t get blindsided by any changing market dynamics. But instead of peace, it brings you stress as results bounce inexplicably from wave to wave. 
 
Why on earth is that happening? 
The most common problem is insufficient control of sample composition. Likely culprits are fluctuating demographics or shifts in the type of device respondents are taking the survey on. 
Also, because results from different sample sources tend to differ from each other (usually within some reasonable bounds), a tracker that does not control the mix of sample sources from wave to wave (in addition to demographics and device type) may show fluctuations that are nothing more than random noise. 
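To make that audit concrete, here is a minimal sketch in Python (with hypothetical column names and an arbitrary 5-point threshold) that flags wave-over-wave drift in sample composition before anyone trends the results:

```python
import pandas as pd

def drift_flags(df: pd.DataFrame, wave_col: str = "wave",
                dims=("age_group", "device", "sample_source"),
                threshold: float = 0.05) -> list:
    """Flag categories whose share of completes shifts more than
    `threshold` (5 points by default) between consecutive waves."""
    flags = []
    for dim in dims:
        # rows = waves, columns = categories, values = share of that wave
        shares = pd.crosstab(df[wave_col], df[dim], normalize="index").sort_index()
        deltas = shares.diff().abs()        # wave-over-wave change in each share
        hits = deltas.stack()               # -> (wave, category) pairs
        for (wave, category), delta in hits[hits > threshold].items():
            flags.append((dim, category, wave, round(delta, 3)))
    return flags
```

Any flag this raises is a candidate explanation for a score movement that should be ruled out before anything in-market is invoked.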

 

Tracking results don’t align with in-market performance 

This was a common problem during the brunt of the pandemic. Brand equities for most brands in most categories remained stable; yet sales for some brands skyrocketed as category penetration and frequency surged.  In some cases, sales went up while equities declined — try explaining that to management! 
Careful analysis of the data uncovered that sales increases for weaker brands lagged those of healthier brands, making it particularly important to examine the competitive weaknesses identified in the tracking. 
This was also a time when pre/post test/control tracking was crucial to understanding the impact of store redesigns, adjusted service models, or heavy-up ad campaigns. Because the whole category was moving in ways that could amplify or drown out the impact of your brand’s individual actions, a control group was needed for comparison. 
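To make the logic concrete, here is a minimal sketch of the pre/post test/control arithmetic (a difference-in-differences read), with purely illustrative numbers standing in for, say, a key equity score around a store redesign:

```python
# Illustrative values only; "test" = markets with the redesign,
# "control" = matched markets without it.
pre_test, post_test = 0.42, 0.49      # equity score before / after, test markets
pre_ctrl, post_ctrl = 0.41, 0.45      # same score in control markets

lift_test = post_test - pre_test      # +0.07, but the category moved too
lift_ctrl = post_ctrl - pre_ctrl      # +0.04 of category-wide drift
net_impact = lift_test - lift_ctrl    # +0.03 attributable to the redesign

print(f"Net impact of redesign: {net_impact:+.2f}")
```

Without the control read, the redesign would have been credited with the full seven points of movement, most of which belonged to the category.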

 

One brand wins on everything 

If it’s your brand, you may wonder why this is listed as a problem! But as a category leader, you can fall into complacency and lose sight of challenger brands that are nipping at your heels. 
If you ask people to rate brands they’ve heard of, but don’t require them to be familiar with those brands, then you’ll exaggerate the brand halo that makes you dominate lesser-known alternatives. The proper place to capture the familiarity benefit is in your brand funnel, not in the perceptions battery. 
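One way to keep the two roles separate, sketched minimally below in Python with hypothetical fields: familiarity earns its credit in the funnel, while the perceptions battery is computed only among respondents familiar with each brand.

```python
import pandas as pd

def brand_read(df: pd.DataFrame, brand: str) -> dict:
    """df: one row per respondent x brand, with boolean funnel columns
    (aware, familiar) and an attribute rating column `attr_score`."""
    b = df[df["brand"] == brand]
    aware_rate = b["aware"].mean()                    # funnel: awareness
    fam_rate = b.loc[b["aware"], "familiar"].mean()   # familiarity among aware
    # perceptions battery: averaged only among the familiar, so a
    # lesser-known brand isn't buried by vague halo ratings
    perception = b.loc[b["familiar"], "attr_score"].mean()
    return {"aware": aware_rate,
            "familiar_given_aware": fam_rate,
            "perception_among_familiar": perception}
```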
If an exaggerated halo is not the explanation, it could be how you’re assigning brands to be rated or what measurement tool you’re using. 

 

Scores are too flat across attributes 

This is related to the last problem. If people are rating brands they don’t know well, they’ll tend to straight-line the attribute battery, giving the same rating to all attributes. 
This tends to inflate the scores of category leaders, deflate the scores of smaller brands, and undermine the ability to statistically infer which attributes are most important in driving key business outcomes. 
One way to mitigate this effect is to clean out these respondents for poor-quality data, but do it carefully: this solution may also introduce systematic bias against people who are familiar with fewer brands. 
Remember, giving the same rating to every attribute of a brand you’ve merely heard of is not necessarily evidence of inattentive responding. It’s better to have people rate only brands they’re familiar with. 
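If you do clean for straight-lining, a minimal sketch (hypothetical column layout) of a conservative version, flagging only respondents who give zero-variance answers on every brand they claim to know well, might look like this:

```python
import pandas as pd

ATTRIBUTES = [f"attr_{i}" for i in range(1, 11)]   # assumed 10-item battery

def straightliner_ids(ratings: pd.DataFrame) -> pd.Index:
    """ratings: one row per respondent x brand, with the attribute
    columns above, a boolean `familiar` flag, and a `respondent_id`."""
    familiar = ratings[ratings["familiar"]]
    # zero spread across the battery: the same answer to every item
    flat = familiar[ATTRIBUTES].nunique(axis=1) == 1
    # flag only respondents who are flat on *every* brand they know
    flat_share = flat.groupby(familiar["respondent_id"]).mean()
    return flat_share[flat_share == 1.0].index
```

Restricting the flag to familiar-brand ratings keeps the cleaning from punishing people who honestly know fewer brands.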

 

Tracking results make no sense 

You’re right to be deploying cleaning steps in your tracker. Survey research fraud is less lucrative than advertising fraud, but it’s still commonplace. 
Survey bots, click farms, and professional respondents all may be polluting your results if you don’t identify and screen them out. Fraudsters are a particularly large problem in low-incidence studies and high-incentive ones. 
You can deploy a variety of tactics to catch fraud, both in real time during field and post hoc once data collection ends. But for trackers, we strive to clean in real time, since you can’t go back into field to replace a moment in time; once that week is past, it’s gone forever. 
While AI methods are improving the accuracy and efficiency of quality control efforts, there’s still no substitute for a careful human review of open-ended responses and general response patterns. 
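As one illustration, a minimal in-field screening pass (hypothetical fields and thresholds; real programs layer many more checks, plus the human review noted above) might flag speeders and junk or duplicated verbatims as completes arrive:

```python
import re
import pandas as pd

MEDIAN_LOI_MIN = 12.0   # assumed median length of interview, in minutes

def respondent_flags(row: pd.Series) -> list:
    """Per-complete checks run as interviews arrive, so flagged
    completes can be replaced within the same wave."""
    flags = []
    if row["loi_minutes"] < MEDIAN_LOI_MIN / 3:
        flags.append("speeder")                    # implausibly fast complete
    text = str(row.get("open_end", "")).strip()
    if len(text) < 5 or not re.search(r"[aeiouAEIOU]", text):
        flags.append("junk_open_end")              # empty or gibberish verbatim
    return flags

def duplicated_verbatims(df: pd.DataFrame) -> pd.Series:
    """Word-for-word repeated open-ends across respondents,
    a common click-farm and bot tell."""
    norm = df["open_end"].str.lower().str.strip()
    return df.loc[norm.duplicated(keep=False) & norm.str.len().gt(10),
                  "respondent_id"]
```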

 

Other common solutions to bad brand tracking data 

Tracking studies are unique in the level of pressure they place on research execution and data quality. Unlike a one-time study, every wave must be perfectly consistent on a host of design and execution decisions that could reasonably go either way in a standalone study, but that must be held constant once the goal is reading trends. 
This is why we keep a study-specific tracker manual that documents every decision (what, when, and why it was made) so we can ensure consistency over time. 
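There is no single required format; as a minimal sketch, one entry in such a decision log (fields and values purely illustrative, not Material's actual template) might look like:

```python
decision_log_entry = {
    "decision":   "Cap mobile completes at 60% of each wave",
    "what":       "Soft quota on device type, applied at the sample provider",
    "when":       "Wave 7 (Q2)",
    "why":        "Mobile share had drifted upward and was moving brand scores",
    "applies_to": "All subsequent waves, until revisited",
}
```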
 

You must control for survey length 

Partly due to the shift to mobile survey-taking, but also due to shrinking attention spans and proliferating entertainment options, respondents have become increasingly unwilling to take surveys longer than 15 minutes, and sample providers are loath to ask their respondents to take them. 
This is happening in the same moment that long-standing trackers have become bloated through the addition of new questions over time. 
Long surveys receive lower cooperation rates, lower completion rates, and participants who tend to provide lower quality responses. Make the hard decisions about what you really must track, then trim the rest. 
Data quality is of utmost importance in tracking studies. If you suspect any of the above culprits may be to blame for your bad tracking data, it may be time to rethink your tracking study design — or your tracking partner.