Define the Outcome, Not the Tool
And why 95% of AI projects start in the wrong place

Welcome to The Logical Box!
For leaders who want AI to help, not add more work.
Hey,
If this is your first time here, welcome! The Logical Box is a weekly newsletter for owners and leaders who want AI to reduce real work, not add new work. Each issue focuses on one idea: see where work breaks down, fix the clarity first, then add AI where it actually helps.
If your business only works because you are in the middle of everything, this newsletter helps you build systems so it does not have to.
Now, on to this week.
The question nobody asks
Picture the scene. A leadership team gathers to discuss AI adoption. They have budget. They have urgency. Someone pulls up a comparison chart of platforms.
The conversation goes straight to features. Integrations. Pricing tiers. Which tools the competitors are using.
But one question rarely gets asked first: what outcome are we actually trying to create?
Not which tool. Not which vendor. What measurable result would tell us this worked?
That question tends to create silence. Because most teams have researched the platforms, read the comparison articles, watched the demos. But they have not written down what success looks like before the tool shows up.
This is where January's theme lands. We talked about why AI is not the starting point. We covered the clarity gap and why AI feels like extra work. We explored the wrong way to adopt AI.
But all of those lead here. To the moment of decision.
And the decision is not which tool to choose. It is what outcome to define.
95% of AI projects miss this step
PwC's 2026 AI predictions report captures the problem in one sentence: crowdsourcing AI efforts can create impressive adoption numbers, but it seldom produces meaningful business outcomes.
That pattern keeps repeating in the research.
McKinsey's 2025 State of AI survey found that only 1% of companies describe themselves as mature in AI deployment, meaning AI is fully integrated into workflows and drives substantial business outcomes. The rest are still experimenting.
And the MIT study we referenced last week makes the picture even sharper: 95% of generative AI pilots fail to deliver measurable impact on the bottom line. The projects that succeed are not the ones with the biggest budgets or the best tools. They are the ones that started by defining the outcome.
The pattern is simple. Organizations that begin with a clear, measurable business objective succeed at roughly twice the rate of those that start by selecting tools.
How tool-first decisions play out
This is what tool-first thinking looks like inside a company.
The leadership team decides AI is a priority. They allocate budget. They start evaluating platforms. Sales demos happen. Pilots launch.
Then something quiet happens. Each pilot defines success differently. Marketing wants faster content. Operations wants fewer errors. Customer success wants better ticket routing.
Nobody aligned on a shared definition of what good looks like at the business level. Nobody asked: what outcome matters most to this company right now?
The result is what researchers call "pilot purgatory." Projects show enough promise to continue, but never enough impact to scale.
According to S&P Global's 2025 survey, 42% of companies abandoned most of their AI initiatives this year. That is up from 17% just one year earlier.
The tools are not the problem. The work underneath was never defined clearly enough for AI to help.
Three things that break without a clear outcome
When a project starts with the tool instead of the outcome, three things tend to break.
First, accountability disappears. Nobody owns the result because nobody agreed on what the result should be. Teams track usage metrics instead of business impact.
Second, iteration dies. Without a clear target, there is no way to know if a change helped. Teams make adjustments based on opinion, not evidence.
Third, adoption stalls. People stop using AI when it does not make their work noticeably better. And it cannot make work noticeably better if nobody defined what better looks like.
McKinsey's research found that organizations seeing real financial returns from AI are more than twice as likely to have redesigned end-to-end workflows before selecting their models.
That redesign starts with the outcome.
Start here instead
Outcome-first thinking is not complicated. It just requires discipline.
Before you evaluate any tool, answer one question: what specific business result would justify this investment?
Not "improve efficiency." Not "save time." Not "leverage AI."
Something measurable. Something you would be proud to report in a quarterly review.
For customer onboarding, that might be: 80% of new customers complete setup within 48 hours without support intervention.
For proposal creation, that might be: reduce average proposal cycle from 14 days to 5 days while maintaining a 30% close rate.
For reporting, that might be: deliver weekly executive summaries by 9am Monday without manual compilation.
Notice what these outcomes do not mention: any particular tool. They describe what success looks like. The tool decision comes later, after the outcome is clear.
One question to ask this week
Here is one thing you can do in the next few days.
Pick the AI project that gets the most attention at your company right now. It might be a pilot. It might be something already deployed.
Write down the answer to this question: what measurable business outcome will tell us this worked?
If you cannot answer it clearly, that is the first problem to solve. Not which tool to buy. Not which features to enable. The outcome.
Once the outcome is clear, every other decision gets simpler. You know what to measure. You know when to adjust. You know whether to keep going or stop.
Looking ahead to February
This week closes out January's theme of work clarity before AI. We started with why AI is not the starting point. We covered why AI often creates more confusion than clarity. We explored why most adoption paths fail.
And now we end where every strong system starts. With the outcome.
February's theme shifts to ownership and defining what done looks like. Because once the outcome is clear, the next question is: who owns it? And how do they know when the work is finished?
If you want a place to build this step by step, AI Clarity Hub is where I teach the process live. Monthly sessions, tools, and a group of leaders working through the same challenges.
But whether you join or not, the principle stays the same.
Fix the work. Then add AI.
Build your skills here!
AI Clarity Hub is a private space for owners and leaders who want AI to reduce real work, not add new work.
Inside the hub, we work through exactly what I described today: finding the clarity gaps, building simple systems, and only then adding AI where it actually helps. Members get access to live training sessions, ready-to-use templates, and a library of AI assistants built for real business workflows.

Thanks for reading,
Andrew Keener
Founder of Keen Alliance & Your Guide at The Logical Box
If you know anyone else who would enjoy The Logical Box, please share the link!