Growth
March 24, 2026

88% of Companies Use AI. Only 5% Create Real Value. Here's What They Do First.

Michel Gagnon
Co-Founder, Stun and Awe

Nate B. Jones recently made the point that effective AI usage isn't about mastering prompting. It's about context. If the context is a mess, you're not helping yourself.

He was talking about how you feed information to a model. I'd extend that principle further.

A client of mine was scaling an impact learning program. Over the years, their tech stack had grown into a big ball of mud. Manual data manipulation was everywhere, their IT budget was ridiculously high for what they actually needed, and the team had a habit of changing program elements without warning, which kept the underlying data in constant flux.

They wanted to add AI on top of that.

I told them that adding AI to a messy context generates one thing consistently: crap.

So we did the unglamorous work first. We cleaned up the tech stack and saved them almost $100K in the process. We simplified their processes, moved to platforms that integrated cleanly, set guardrails to prevent ad-hoc program changes, and cleaned up their CRM. Only once that foundation was solid did AI become a conversation worth having.

BCG's 2025 research found that 60% of companies generate no material value from AI despite significant investment. In 2025 alone, 42% abandoned most of their AI initiatives. McKinsey found that 88% of organizations use AI in at least one function, but only 39% report any impact on EBIT. The post-mortems all say the same thing: the failure wasn't the technology. It was what the technology was operating on top of.

AI is an amplifier. Give it a clean, aligned system and it makes you faster. Give it a mess, and it makes the mess bigger, faster, and more expensive.

Why do most AI implementations fail to deliver results?

McKinsey's 2025 State of AI survey found that 88% of organizations now use AI in at least one function. But only 39% say AI has had any impact on EBIT at all, and two-thirds are still stuck in pilot or proof-of-concept stage with no clear path to scaling. That's not a tools gap. Every company at your scale has access to the same ChatGPT, the same Copilot, the same library of AI agents. The tools are commoditized. The gap is organizational.

BCG's 2025 research is even more stark: 60% of companies generate no material value from AI despite significant investment, and only 5% have created substantial value at scale. In 2025 alone, 42% of companies abandoned most of their AI initiatives, up from 17% the year before. The barriers they cite aren't technology. They're organizational: unclear ownership, misaligned incentives, and weak feedback loops.

Sound familiar? That's because these aren't AI problems. They're the same execution problems that existed before anyone said "AI" in a board meeting. They were there before the last agile transformation. They were there before the CRM rollout nobody finished configuring. Organizational dysfunction has remarkable job security.

How does AI make organizational dysfunction worse?

Here's a client story that made this concrete for me. A 70-person B2B services firm spent three months building an AI-powered sales system. The technical work was solid. But the system was producing garbage at scale: hundreds of outreach emails going out daily, almost no replies.

The problem wasn't the AI. The problem was that their sales team had no shared definition of what a qualified lead looked like. Every rep had their own interpretation. When they fed that ambiguity into the AI system, it obediently automated it and multiplied it across their entire target market.

We stopped the system. We spent four weeks doing the human work: building a shared ICP definition, clarifying qualification criteria, agreeing on what "a good sales conversation" looked like and who should have it. Then we rebuilt the AI layer on top of that foundation.

Conversion rates improved significantly. Not because the AI got better. Because what the AI was amplifying finally made sense.

What does poor team alignment look like before an AI rollout?

Most teams don't recognize dysfunction because it's been normalized. It's just how we work here.

Watch for these: strategy that lives in the founder's head and hasn't been written down in two years, decisions driven by whoever speaks loudest in the room (the HiPPO: the highest-paid person's opinion), accountability that's everyone's responsibility and therefore no one's, and meetings where people say yes and then do nothing different.

Most companies call this "how we operate around here," which is technically accurate and explains nothing.

Gallup's 2025 State of the Global Workplace report found that only 21% of employees globally are engaged at work, matching the lowest point recorded during COVID lockdowns. The price tag on that disengagement: $438 billion in lost productivity globally in 2024 alone. When you build an AI layer on top of that, you're not solving a productivity problem. You're automating it.

AI-powered meeting summaries on a team that never commits to anything just produces faster documentation of non-decisions. An AI prospecting tool deployed without a shared customer vocabulary sends more emails into the void. The dysfunction doesn't disappear. It scales.

Technically speaking, the implementation is working exactly as designed.

What's the right sequence for AI implementation in a growing company?

Every company I've seen generate real, sustained results from AI did the same thing first: they fixed the human system before they touched the technology.

That means getting strategy out of the founder's head and into shared language the whole team can use. It means deciding on the one metric that matters most this quarter, not the twelve that matter equally. It means having the hard conversation about ownership, who makes which decisions, and what happens when things break down. It means building the communication habits and accountability structures that let the team operate without the founder in every room.

Once that foundation exists, AI stops being a liability and starts being a multiplier. Prospecting tools produce qualified leads because the qualification criteria are clear and agreed upon. AI-generated content sounds like the brand because the brand voice has been documented. Meeting tools surface real decisions because real decisions are actually being made.

The sequence isn't optional. Human system first. AI layer second. That's the order.

And to be clear: this isn't about doing the same work faster. The companies doing genuinely interesting things with AI right now (discovering formulations no food scientist would have tried, modeling what previously required years of lab work, spotting patterns that were always in the data but never within reach) all built a foundation solid enough to actually aim at something new. The foundation work isn't the destination. It's the entry fee.

How do you know if your team is ready for AI implementation?

Here's a test you can run this afternoon. Ask your five most senior people, separately, these three questions:

What is the single most important thing we are working on this quarter? Who owns it? What does success look like by the end of the quarter?

If you get five aligned answers, you're ready to go deeper on AI implementation. If you get five different answers, or five politely vague ones, you've just found your real implementation roadmap. And it doesn't start with tools.

That's not a failure. It's a starting point. Most companies at 50 to 150 people are running on strategic clarity that was designed for 20 people and hasn't been updated since.

How to start fixing your foundation before you invest in AI tools

Run that diagnostic with your senior team. Three questions, five people, fifteen minutes each. Compare the answers.

The gaps you find are where to start. Fix those first, then build your AI layer on top of something solid.

I know it's counterintuitive, but AI can wait. Clarity can't.

Want a structured way to run that diagnostic? I run a 90-minute team alignment session designed exactly for this moment, before the AI investment, not after. No slides, no theory. Just the questions your leadership team needs to answer together.

Book a call here.

Or if you'd rather read more first, my newsletter covers one real implementation story per week: what worked, what didn't, and why. Subscribe here.

Sources & Further Reading

McKinsey & Company. "The State of AI in 2025." McKinsey Global Survey, 2025.

BCG. "The Widening AI Value Gap." Boston Consulting Group, 2025.

Gallup. "State of the Global Workplace: 2025 Report." Gallup Inc., 2025.

Mollick, Ethan. Co-Intelligence: Living and Working with AI. Portfolio/Penguin, 2024.

McKinsey Organizational Health Index. McKinsey & Company.

Stanford Human-Centered AI Institute. "Artificial Intelligence Index Report 2025." Stanford University, 2025.
