Finished Your Global Segmentation? Watch Out for These Red Flags

Look out for these three pitfalls after a global segmentation; catching them early can mean the difference between segments that are statistically sound and segments that aren't.

A poorly defined global segmentation can create significant implementation and strategy problems. When Material reviews segmentation solution options with our clients, we reassure them that we’re presenting only statistically sound alternatives. This allows them to focus on evaluation criteria such as face validity (do the groups feel real?), comprehensiveness (are any groups missing?), and actionability (do they drive different business actions?).

That said, it's important for every client to take a critical look at their segmentation findings, since not all solutions actually are statistically valid. Because segmentation typically relies on sophisticated analytics applied to survey responses or first-party behavioral data, it can be difficult for a team to assess the validity of the segments it receives from a research partner or internal data scientist.

What are some signs your segmentation data might not be statistically sound? Here are three red flags to look out for as you evaluate new segments.

Red Flag #1: There is a group in your global segmentation that is high or low on everything

Do you have one segment that is high on everything and perhaps another that is low on everything? Or does your high-engagement group balloon in high-rater countries like Mexico and India but shrink in middle-rater countries like Japan? Either sign may indicate that the algorithm grouped people based on how they use rating scales rather than what they are trying to communicate with their ratings.

This is not to say that the presence of high- and low-engagement groups is a bad thing. Most categories have groups like these, but it's important to be sure the groups emerged from high-quality measurement and not from scale usage differences. A useful sanity check: a genuinely low-engagement person should still score high on measures such as price sensitivity and indifference, not low across the board.

Even if it is not a global segmentation study, we strongly recommend segmenting based on metrics that reduce or eliminate scale usage differences, which are a catastrophe across countries but also problematic within countries.

Examples of good segmentation inputs are bipolar scales (where people choose between opposing statements rather than indicating how much they dis/agree with one of those statements) and partial or complete rankings (depending on the list length).
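
If only conventional monopolar rating scales are available, one common mitigation (a generic technique, not necessarily the exact method Material uses) is to standardize each respondent's ratings within person before clustering, so that differences in overall scale use are removed and clusters form on the pattern of answers. A minimal sketch with simulated data and hypothetical column names:

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

# Simulated stand-in for real survey data: 500 respondents rating
# 10 items on a 1-7 scale (column names are hypothetical).
rng = np.random.default_rng(0)
ratings = pd.DataFrame(
    rng.integers(1, 8, size=(500, 10)),
    columns=[f"q{i}" for i in range(1, 11)],
)

# Within-respondent z-scoring removes each person's average rating
# level and spread, so straight-lining and country-level scale-use
# habits no longer drive the clusters.
row_means = ratings.mean(axis=1)
row_stds = ratings.std(axis=1).replace(0, 1)  # guard against flat-liners
ipsatized = ratings.sub(row_means, axis=0).div(row_stds, axis=0)

segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(ipsatized)
```

Bipolar or ranking inputs avoid the problem at the source; ipsatizing like this is a fallback when the questionnaire has already been fielded.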

We do not recommend using MaxDiff scores as inputs, since those scores are typically estimated with hierarchical models that borrow information from the very respondents you hypothesize may belong to a different group, shrinking individual scores toward the overall mean and blurring exactly the differences you are trying to find!

Red Flag #2: The typing tool for classifying future respondents into segments is really long and complicated

Overly lengthy typing tools (the set of questions you need to ask to predict what segment someone belongs to) drive up future research costs and deter the organization from deploying the segmentation in all relevant research settings.

If you can't squeeze your typing tool into your brand tracker, how are you supposed to judge the effectiveness of marketing activities that are presumably geared toward some segments more than others?

Clients that have received really lengthy typing tools tend to request a shorter, and necessarily less accurate, typing tool for some use cases. That trade-off carries real risk:

  • If 1-in-4 respondents get misclassified in your quantitative research, you are stacking this modeling error on top of your statistical sampling error, running the risk of drawing erroneous conclusions and making bad business decisions (the sketch after this list quantifies the attenuation).
  • It’s especially dangerous to use a weak typing tool when recruiting for qualitative research designed to illuminate segments, since the cost per interview is high in terms of both investment and risk. If you rely heavily on a misclassified person to enhance your understanding of the segment and build personas, then all your strategies will be built on shaky ground.
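
To see why misclassification is so corrosive, here is an illustrative calculation (the numbers are made up): with a 75% hit rate, a true 40-point gap between two segments on some metric shrinks to a 20-point gap in the data you actually observe.

```python
import numpy as np

# Two equal-sized segments whose true purchase intent differs: 60% vs. 20%.
true_shares = np.array([0.5, 0.5])
true_intent = np.array([0.60, 0.20])

# Rows: true segment; columns: predicted segment. A 75% hit rate means
# 1-in-4 respondents land in the wrong column.
confusion = np.array([[0.75, 0.25],
                      [0.25, 0.75]])

# Mass in each (true, predicted) cell, then the intent you would
# observe within each *predicted* segment.
cell_mass = confusion * true_shares[:, None]
observed = (cell_mass * true_intent[:, None]).sum(axis=0) / cell_mass.sum(axis=0)
print(observed)  # [0.50, 0.30]: the true 40-point gap is halved
```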

Furthermore, if it takes that many questions to identify segment membership accurately, it suggests the segment definitions are more complex than your understanding of them.

We follow Occam's razor, seeking an elegant solution that defines segments by just the key differentiating traits. That lets us generate typing tools that are efficient, highly accurate, and consistent with how you think about the segments.
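
As a generic illustration of that philosophy (a sketch on simulated data, not Material's proprietary method): pick the handful of most differentiating questions, fit a compact classifier, and judge it by its holdout hit rate.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score

# Stand-in data: answers to 40 candidate typing questions (X) and the
# segment labels assigned by the full segmentation solution (y).
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 40))
y = rng.integers(0, 4, size=1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Keep only the 8 most differentiating questions, then fit a shallow
# tree -- the short "typing tool" that future surveys would carry.
typing_tool = make_pipeline(
    SelectKBest(f_classif, k=8),
    DecisionTreeClassifier(max_depth=4, random_state=0),
)
typing_tool.fit(X_train, y_train)

# The holdout hit rate is the modeling error you will stack on top of
# sampling error every time the tool is deployed.
print("hit rate:", accuracy_score(y_test, typing_tool.predict(X_test)))
```

On real data you would trade off the number of questions kept against the hit rate until the tool is short enough to field everywhere and accurate enough to trust.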

Red Flag #3: You’re having trouble understanding one of the groups in your global segmentation

This might be a problem with the framework itself or simply with the quality of the report writing and presentation. Signs that you have a problem with the framework include:

  • A group that scores roughly average on everything. Some people refer to this as the “muddy middle,” and while an “average” group is arguably real, it’s not very actionable, so grouping respondents into a different segmentation framework might serve you better (a quick diagnostic is sketched after this list).
  • A big group that feels like a mish-mash of a couple of distinct groups that share some common traits but would be better broken apart, since they differ amongst themselves on something that would suggest different strategic actions. For example, a segment defined by a preference for high-protein diets might contain both bodybuilders and people trying to lose weight, which is unlikely to yield a single coherent target or strategy.
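
For the first sign, a quick diagnostic (made-up numbers, hypothetical trait names): express each segment's profile as z-scores against the total sample and flag any segment that barely deviates on anything.

```python
import pandas as pd

# Each row is a segment; each column is a segmentation input expressed
# as a z-score against the total sample (all values are illustrative).
profile = pd.DataFrame(
    {
        "price_sensitivity": [1.1, -0.9, 0.1],
        "quality_focus": [-0.8, 1.0, 0.0],
        "brand_loyalty": [0.7, -1.2, -0.1],
    },
    index=["Savers", "Enthusiasts", "Segment 3"],
)

# Mean absolute deviation from the grand mean: a score near zero marks
# a candidate "muddy middle" segment.
distinctiveness = profile.abs().mean(axis=1)
print(distinctiveness.sort_values())  # Segment 3 sits near zero
```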

Too often, market segmentations are a case of “buyer beware.” Once you receive the results, it can feel like you have to make do with what you’re given, but that’s not the case.

If you don’t like your segmentation, send it back!

And if you don’t trust your partner to solve the problems on their own, contact us. It would not be the first time we’ve been hired to re-analyze the data from someone else’s segmentation study.