Trust in the Age of AI: The Changing Dynamics of Reputation in the AI Space


By Jenessa Hunter, SVP, Portfolio Lead.

In the last two decades we’ve seen technology drive sweeping changes in how we shop, connect and consume media. We’re used to seeing tech disrupt the way we do things. But what we’re seeing with the generative AI boom is different. The shift isn’t just technological; it’s existential. Where previous disruptions altered the way we do things, AI is disrupting how we think of ourselves – specifically the role of human intelligence, talent and labor in both the workplace and in the world.
For brands building and marketing AI products, this makes navigating trust and reputation particularly difficult. The unique nature of AI’s disruption invites questions about both its utility and the narratives around the technology and those creating it. To mitigate the reputational fallout of these questions, tech companies need ways to monitor their credibility in the market and build trust with consumers – using methods that are as innovative as their products.

 

 

What makes AI disruption unique?

There are three factors that make AI disruption unique.
1.    Tech previously disrupted convenience, but AI disrupts meaning.
Traditional technology disrupted logistics and lifestyle, and redefined convenience, but generative AI has disrupted skill, identity, creativity, career paths and, to an extent, truth. AI triggers deeper anxieties and ethical debates than other technologies.
The reputational stakes for AI brands are especially intense because the technology isn’t just changing our lifestyle – it’s challenging our human self-perception.

 

2.    The core “product” of AI is invisible, leaving space for speculation and ambiguity.
There are some visible components of AI products; on the surface, we can interact with the design of a generative AI interface and we can (to an extent, with effort) judge the quality of its output.
That said, most of what AI actually is remains invisible: data provenance, model training and the safety guardrails that live inside an AI “black box.” And because users – and even some experts – cannot easily see how it operates, AI reputation is shaped more readily by media narratives, expert commentary and imagined risks.

 

3.    Trust in AI must be re-earned daily.
Constantly influenced by model updates, emerging competitors, new use cases and a lot of hype (both good and bad), reputation is less linear in the AI category than in most others. Constant change requires constant redefinition and ongoing re-earning of trust.

 

 

Re-earning Trust: Integrating Daily Experience and Long-Term Values

Reputation in this context has two facets: little “r” reputation and Big “R” Reputation. The former is a measure of daily interactions; the latter relates to longer-term, “big picture” issues. In much the same way that brands strengthen customer relationships by aligning in-the-moment and over-time experiences, reputational growth and resilience require brands to meet both the daily and long-term metrics of trust.

 

Little “r” reputation: The metrics of daily trust
Little “r” reputation is built on daily experiences. It asks if a brand is reliable and useful. Does it provide transparency, clarity and utility? Are there easy-to-understand user controls and safety guardrails? This aspect of reputation is built from direct interaction. Falling short here may earn a brand the reputation of being unhelpful, unsafe or even unusable.
Little “r” reputation answers the question, “Can I trust you right now with this task?”

 

Big “R” Reputation: The metrics of values-based trust
Big “R” Reputation is a measure of big-picture credibility and trust; it considers alignment with values. Who is the brand in the world? What are its intentions and its vision of the future, and what sort of governance and transparency keep it in check? Where little “r” reputation is defined by daily experience, Big “R” is shaped by brand narratives. Falling short here can get a company labeled greedy or untrustworthy.
Big “R” Reputation answers the question, “Who are you in the world and what do you stand for?”

 

Most industries rely more heavily on building either little “r” or Big “R” reputation. Generative AI providers are among the rare companies where both layers of reputation must be fully engaged with all the time. This is because different audiences and stakeholders are looking at different reputational indicators to decide if they can trust generative AI.
Developers, for example, may trust an AI product while remaining wary of the corporate values of its creators – skewing towards little “r” and away from Big “R.” Policymakers may be more attuned to intentions, while lacking the product and technical expertise to trust the little “r” utility. And consumers tend to sit in the middle, uncertain about both the utility and the big picture impact of generative AI.

 

 

The Credibility Gap: Assessing Reputational Risk

At Material, we’ve developed a way to measure the alignment between daily experiences and big-picture values. We call it the Credibility Gap. The Credibility Gap is not just an abstract idea; it’s a quantifiable risk assessment, and it’s actionable.
Here are some sample scenarios.
[Chart: three sample Credibility Gap scenarios, A through C]

In scenarios A and B, daily experiences (little “r”) and values (Big “R”) are out of alignment, casting doubt on the credibility of the brand. But in Scenario C, brand values are aligned with the experiences they deliver; they’ve earned trust by being what they say they are and delivering what they promise.
Because generative AI leans so heavily on both daily experiences and big-picture values, the Credibility Gap is particularly useful for diagnosing shortfalls in trust between generative AI brands and their audiences. And, again, the Credibility Gap is actionable; discovering misalignment between types of trust is just the first step in building a roadmap for creating and maintaining a consistent and positive reputation.
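As a purely hypothetical sketch of what “quantifiable” could mean here (the scores, segments and thresholds below are illustrative assumptions, not Material’s actual methodology), a credibility gap might be expressed as the difference between audience ratings of daily experience and of values:

```python
# Hypothetical sketch: a "Credibility Gap" as the difference between
# little-"r" (daily experience) and Big-"R" (values) trust scores.
# The 0-100 scale, sample ratings and +/-10 threshold are illustrative only.

from statistics import mean

def credibility_gap(little_r_scores, big_r_scores):
    """Return (little_r, big_r, gap), where inputs are 0-100 audience ratings."""
    little_r = mean(little_r_scores)
    big_r = mean(big_r_scores)
    return little_r, big_r, little_r - big_r

# Example: an audience rates the daily product experience above the brand's values
little_r, big_r, gap = credibility_gap(
    little_r_scores=[78, 82, 75],  # e.g. reliability, clarity, guardrails
    big_r_scores=[55, 60, 58],     # e.g. mission, governance, transparency
)

if gap > 10:
    diagnosis = "experience outpaces perceived values: invest in Big-R narrative"
elif gap < -10:
    diagnosis = "values outpace experience: invest in little-r product trust"
else:
    diagnosis = "aligned: maintain consistency"
```

A positive gap resembles the second roadmap scenario below (actions outpacing perceived values); a negative gap resembles the first.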
When audiences are more skeptical of daily experiences, generative AI brands might develop a roadmap that addresses:
  • Improving in-product clarity
  • Shoring up visible guardrails
  • Expanding developer documentation
  • Speeding up issue response

 

When a generative AI company’s actions outpace or diverge from its perceived values and mission, its roadmap might address:
  • Communicating safety investments
  • Increasing the visibility of governance
  • Highlighting their mission and vision
  • Publishing transparency reports

 

In either scenario, the point is to close the gap between the experiences the brand delivers and the narrative that describes its intentions, mission and values.

 

 

Closing the Credibility Gap with Material

Material is uniquely positioned to help tech companies in the AI space identify misalignments in their credibility and rebuild trust. Our approach is grounded in a behavioral science lens that anchors our work in core truths about human behavior. We also have deep experience serving the tech sector and building alignment with its diverse, demanding audiences and stakeholders.
To learn how we can help you uncover gaps in the way your brand is perceived and help you level up your approach to building and maintaining trust, reach out today. Let’s start the conversation.