Nobody Needed That
Play all you want — but in a professional setting, most AI builds are solving problems that never existed.
We’re all experiencing it. Everyone’s pushing AI, build something, do something, show something. The pressure is real. Your leadership wants an AI story. Your team wants to feel ahead of the curve. There are Slack channels full of “look what I just built.”
But I keep coming back to the same question: are we actually solving real problems, or are we just creating solutions to things that never really bothered us?
Let me be clear upfront: play is a path to learning. Exploration builds intuition, and intuition compounds. So don’t stop f*cking around with AI just for the sake of it. Seriously, don’t. Tinkering is how you develop the instincts to know what AI can and can’t do, and that’s worth something. But there’s a difference between play that builds skills and play that gets mistaken for work.
That’s where I think a lot of teams are getting confused right now.
The Slack Problem
I watch people build things and then share them in Slack. There’s a quick rush of reactions, a couple of “this is cool” replies, and then... nothing. The thing lives for that person, in that moment. Sometimes I look at what was built and genuinely ask myself: what did you actually solve with that? And I’m sure for them it felt like a solution. But I also wonder, how much time did it take to build, tune, and get to the point where it was actually doing what it needed to do? And is it still running? Is anyone else using it? Does it matter if it stops?
This isn’t a dig at curiosity. It’s a pattern I keep noticing where the act of building has become its own reward, disconnected from whether the thing built actually delivers ongoing value.
MIT’s NANDA initiative found that 95% of generative AI pilots at companies fail to achieve measurable P&L impact. BCG puts the number of companies struggling to scale AI value at 74%. Those aren’t technical failures, those are problem-selection failures. People building the wrong thing, for the wrong reasons, with no clear success metric going in.
So How Do We Get Smarter About This?
I think we can be more intentional. Not rigid… just honest. Before building something in a professional context, I’d run it through three questions:
Is the problem actually real?
Not “could AI help here?” but “does this hurt right now?” A real problem has a clear before-state that’s worse than the after-state. You don’t need to convince people it’s a problem — they already feel it. If you find yourself explaining why something is worth solving, that’s usually a signal that it isn’t painful enough yet.
Will it stick?
How often does this come up? How many people hit this problem? A tool one person uses twice a month that took two weeks to tune isn’t leverage, it’s a hobby project wearing a work hat. The problems worth solving professionally are the recurring, shared ones. The friction that compounds over time.
Does the math, math?
This is the one most people skip. Time to build + time to maintain + ongoing prompting overhead, versus time saved × frequency × number of people affected. Your gut answer is usually too optimistic. Factor in the edge cases, the “it stopped working” moments, the re-prompting when the model updates. If the math still works, build it. If it doesn’t, maybe it’s a Zone 1 project.
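That build-versus-benefit check can be sketched as a quick back-of-envelope calculation. This is a minimal sketch with hypothetical numbers and a made-up helper name (`build_roi`); plug in your own estimates.

```python
# Back-of-envelope check: does the math, math?
# Costs: build time + (maintenance + prompting overhead) over the horizon.
# Benefit: hours saved per use x uses per month x people x horizon.
# All figures below are hypothetical placeholders.

def build_roi(build_hours, maintain_hours_per_month, overhead_hours_per_month,
              saved_hours_per_use, uses_per_month, people, horizon_months=3):
    """Net hours saved over the horizon; positive means the math closes."""
    cost = build_hours + (maintain_hours_per_month + overhead_hours_per_month) * horizon_months
    benefit = saved_hours_per_use * uses_per_month * people * horizon_months
    return benefit - cost

# Example: a two-week build (80h), light upkeep, saving ~2.7h per use,
# used weekly by 5 people, judged over 3 months.
net = build_roi(build_hours=80, maintain_hours_per_month=4,
                overhead_hours_per_month=2, saved_hours_per_use=2.7,
                uses_per_month=4, people=5, horizon_months=3)
print(net)  # positive -> build it; negative -> it's a Zone 1 project
```

Notice how quickly the answer flips if only one person uses it twice a month: the benefit term collapses while the build cost stays fixed, which is exactly why gut answers tend to be too optimistic.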
Two Zones, Not One
Here’s a simple way I’ve started thinking about this:
Zone 1 is play. Personal, low stakes, no shared dependency, short time commitment. This is where you build the thing because you’re curious, because you want to learn how something works, because it might come in handy later. No justification required. Just don’t confuse it for Zone 2.
Zone 2 is professional build. Shared use or huge individual impact, recurring problem, someone owns it, there’s a success metric, and it’s still running in three months. This one requires the three questions above. Not as a bureaucratic gate, just as an honest check on whether you’re solving something real.
Most of what gets shared in Slack is Zone 1 work getting Zone 2 recognition. And that’s fine, as long as everyone’s clear on what’s actually going on.
What Strategic AI Actually Looks Like
The teams getting real leverage from AI aren’t the ones with the most demos. They’re the ones who picked one specific, painful, recurring problem, defined what success looked like before they started, and built something narrow that actually works. Then expanded from there.
The pattern that doesn’t work: “AI copilot for everything.” The pattern that does: “this specific thing takes 3 hours a week per person and we’re going to get it to 20 minutes.” That’s boring to talk about. It’s not Slack-worthy. But it’s what actually moves the needle.
The goal isn’t to have an AI story. It’s to have results.
So play hard in Zone 1, seriously, keep going, it compounds. But when you step into Zone 2, earn it. Run the three questions. Make sure the problem is real, that it’ll stick, and that the math closes. If it passes, build it properly. If it doesn’t, at least now you know what it is.
Sources
MIT NANDA Initiative — The GenAI Divide: State of AI in Business 2025 (95% of AI pilots fail to achieve measurable P&L impact)
→ Fortune coverage
BCG — AI Adoption in 2024: 74% of Companies Struggle to Achieve and Scale Value
→ BCG press release