Beyond the Better Tool Fallacy: What Are Users Actually Trying to Solve with AI?

By James Lanyon, EVP, Strategy + Innovation; Portfolio Lead, Technology


There is a near-universal assumption at the heart of the current AI race: give people a more powerful tool and they’ll do more with it. It’s the logic of the industrial revolution applied to the knowledge economy. Lower the cost of production and supply will rise.
The data tells a more complicated — and more human — story.
Stanford’s Human-Centered AI Institute surveyed 1,500 workers across more than 100 occupations and found that attitudes toward AI are more practical than ideological. People are open to automation for repetitive work, but they want to be in the room for anything requiring judgment. A Wharton/Slack analysis put a finer point on it: 76% of workers feel a sense of urgency to become AI experts, but only 33% actually use it in daily tasks.
This is not a technology challenge. And while some may argue this is a knowledge challenge (i.e., people just don’t know what AI might do for them), the reality is that it’s a behavioral challenge, with enormous implications for any company building AI products and trying to measure whether those products are creating real value.
Meanwhile, Gartner reports that fewer than one in three CEOs are satisfied with returns on their AI investment. Pew Research finds that half of American consumers are more concerned than excited. C-suite patience, which was already wearing thin, is running out.


The Say-Do Gap – and Why It Keeps Winning

In the mid-1990s, Clayton Christensen introduced the “Jobs to Be Done” framework to explain why customers so often confound market research. The insight was deceptively simple. People don’t buy products; they “hire” products to perform a job. And what they say they’ll hire and what they actually hire are often different things.
Consider mobile devices. The mobile phone was sold as a better way to make calls. The job consumers actually hired it to do — one no survey could have predicted — was to replace the camera, the newspaper, the map, the wallet and the waiting room. Apple didn’t win the smartphone market by building a superior telephone. It won by recognizing that the devices people carried everywhere could be hired for whatever jobs were close at hand.
The same pattern is playing out in AI. Surveys show users expressing enthusiasm for advanced generative capabilities: more powerful image synthesis, more sophisticated writing and more complex analysis. But usage data from Notion, GitHub, Canva and others confirms the gap between aspiration and behavior. The most-used AI features are rarely the most powerful ones. Instead, they tend to be the ones that eliminate the smallest, most immediate frictions: auto-complete, grammar correction, background removal and template suggestions. Users say they want a magic wand; their behavior reveals they want a better eraser or an editor. That gap is narrowing at the margins: power users who lean into the more capable, agentic features report meaningfully higher productivity gains. But for the median user, AI remains less about conjuring something new and more about getting out of your own way faster.


Three Jobs Users Are Actually Hiring AI to Do

Apply Jobs to Be Done with genuine rigor and the picture sharpens quickly: what users actually want from AI becomes clear.


1. Ensure Competence
The knowledge worker who subscribes to an AI writing assistant says the job of AI is “help me write better.” The actual job, more often than not, is “don’t let me look stupid.” The executive using AI for presentation design isn’t hiring the tool to be creative; they’re hiring it to guarantee a minimum threshold of professional competence. Users are not seeking novelty. They are seeking social insurance, and that reframes the entire value proposition of AI: away from increasingly high-gloss outputs, toward a trusted, personalized utility. The highest-value feature is not a generator; it is a launch pad that transforms an intimidating blank slate into a workable first draft.


2. Enable Agency
Research on flow states establishes that humans enter peak engagement when a task slightly exceeds their perceived skill level. If the task is too easy, we disengage. If it’s too hard, we freeze. Through that lens, many AI tools solve for the wrong variable. By reducing the difficulty of a task to near zero, they eliminate the productive tension that makes work feel meaningful. Users sense this. It’s why adoption curves for generative AI show initial enthusiasm followed by quiet disengagement. The output is impressive, but it doesn’t feel like the user’s product. The real challenge isn’t to make the task easier. It’s to reduce the barrier to starting while preserving the sense of agency that sustains engagement.

 
3. Encourage Participation
The third job, belonging, is the most undervalued. Across collaborative workspaces, creative platforms and educational settings, users consistently describe a desire not merely to produce output but to participate in a shared activity. Consider the developer who uses AI to contribute to an open-source project they’d otherwise find too intimidating, or the employee who uses it to draft a proposal they’d otherwise never submit. In each case, the AI is not generating value by producing output. It is generating value by lowering the social and psychological barrier to participation. Users are hiring it not as a replacement for their voice but as the confidence to use it.

 
 

The Strategic Implication: AI as a Flywheel, Not a Factory

The prevailing metric for AI value is volume: tokens processed, images generated, tasks automated. This is analogous to measuring a social network’s success by the number of accounts created rather than the quality of interactions sustained.


It is a supply-side metric applied to a demand-side problem.

 
The tools that will prove most durable aren’t those with the most advanced models. They’re the ones most deeply embedded in the user’s existing context: their documents, their history and their workflows. Context doesn’t merely improve the quality of AI output; it transforms the user’s relationship to the output. When an AI suggestion draws on a user’s own prior work and patterns, it stops feeling like a foreign imposition and starts feeling like an extension of the user’s thinking.
Knowing this, we believe the companies that define the next era of AI won’t be those that generate the most output. They’ll be the ones that understand a deceptively simple truth: users are not looking for a generator. They are looking for a bridge from intention to action, from passive consumption to active participation, from I wish I could to I just did.
That is the metric that matters. That is the job to be done. And any platform with the strategic clarity to measure it will most likely be the one to break the Nash equilibrium that currently has every major AI player locked in the same capability arms race, running faster on a track that leads to the same place.


The Opportunity

The AI companies that break out of that pattern will not do it by outspending on compute or out-releasing on cadence. They will do it by knowing something their competitors don’t — specifically, what users will actually do, not just what they say they want.
Behavioral insight is that edge. Integrating qualitative behavioral research into the strategic mix alongside the quantitative methods most tech companies already use creates a richer picture of where adoption is forming, where it’s stalling, and, most importantly, what jobs users will be hiring AI to do — before those jobs are legible in any usage dashboard. Survey data tells you what people intend. Behavioral research tells you what they’ll do. The combination of the two is where real forecasting lives.
This isn’t a replacement for the technical roadmap. It’s the input that makes the roadmap point somewhere worth going.
If you’re looking to add behavioral insights to your quantitative research and upgrade your strategic approach to AI, Material can help. Reach out and let’s start the conversation.