A few weeks ago, I had the pleasure of being a guest on the Leadership in Flux podcast, hosted by Sanjiv Augustine at LitheSpeed. We covered a lot of ground in that 24-minute conversation, but one of the threads I keep coming back to is this: AI is giving engineers the gift of time. And most organizations are squandering it. If you haven’t listened to the episode yet, you can find it here.
Now that AI is handling more and more of the coding work, what should your engineers and product teams be doing with the time that frees up? There is real value in redirecting that engineering time rather than firing your experienced engineers. The answer, I'd argue, is product discovery. Real, rigorous, customer-focused product discovery. And most teams are not doing nearly enough of it.
What AI Is Actually Doing to Development Velocity
Let’s start with what’s actually happening. AI coding tools like GitHub Copilot, Cursor, Claude Code, and others are genuinely accelerating how fast developers can write code. We’re not talking about marginal gains. Teams that have adopted these tools well are reporting substantial reductions in the time it takes to go from concept to working software. That said, many of my colleagues report spending far more time doing peer reviews and finding issues with AI-generated code, so some of the time savings have shifted there.
In most organizations, there are still net savings, sometimes significant ones. This is good news. But it creates an interesting problem that most organizations haven’t fully grappled with: if your team can build things faster, the cost of building the wrong thing has gotten relatively higher, not lower. You can now ship a misguided feature in two weeks that would have taken two months before. The speed amplifies both your wins and your mistakes.
This is precisely why I feel the freed-up time needs to flow into discovery, not just more delivery.
What Most Teams Are Doing With That Time Savings
Here’s what I observe happening in most organizations when AI starts accelerating delivery: they ship more features. The backlog gets shorter. The roadmap gets more ambitious. Stakeholders get excited about how much the team is accomplishing. Everyone declares victory.
And then, six months later, the metrics don’t move. The features shipped, but customers didn’t change their behavior in the ways the team hoped. The product didn’t meaningfully improve. And nobody quite understands why, because by any measure the team was incredibly productive.
The problem isn’t execution. The problem is that the team was solving the wrong problems, or solving the right problems in the wrong way, because nobody did the discovery work to truly understand what customers needed before the building started.
More speed in delivery without more rigor in discovery is just a faster way to build the wrong thing.
The Case for Outcome-Based Roadmaps
This is where I want to introduce a concept that I think should be table stakes for every product team right now: the outcome-based roadmap. I absolutely love Marty Cagan’s product books and he pushes heavily for outcome-based roadmaps in his latest book, Transformed: A Guide to Product Operating Model Success. (I’ve posted a quick summary of the book here.)
Most roadmaps are feature lists. They look something like:
- Q1 — build X feature
- Q2 — build Y feature
- Q3 — build Z feature
The team knows what they’re building. What they often don’t know is why or, more specifically, what customer or business outcome each feature is supposed to drive, and how they’ll know if it worked.
An outcome-based roadmap flips this around. Instead of committing to specific features, you commit to specific outcomes:
- “Reduce the time it takes new users to complete their onboarding by 40%.”
- “Increase the percentage of users who return within 7 days of signup from 30% to 50%.”
- “Reduce the number of support tickets related to new-feature onboarding by 15%.”
The features become hypotheses about how to achieve those outcomes, not commitments in themselves. And here’s what that shift unlocks: it gives the team permission, nay, it gives them an obligation, to do the discovery work needed to figure out the best way to achieve the outcome, rather than just building what was put on the roadmap six months ago.
When you give a team an outcome to hit rather than a feature to ship, you’ve given them a problem to solve. And solving problems well requires discovery.
What Good Discovery Actually Looks Like
Discovery is one of those words that gets used a lot in product circles and understood very differently by different people. So let me be specific about what I mean.
Good product discovery involves regularly and rigorously engaging with the people who use your product to understand their actual experience, their real pain points, and the outcomes they’re trying to achieve. It means running experiments before you build, not after. It means testing your assumptions cheaply and quickly with prototypes, conversations, and data, before committing your engineering team to weeks of work.
In practice, this looks like customer interviews scheduled every single week, not just when something is about to launch. Usability testing on prototypes before a single line of production code is written. Quantitative analysis that helps you understand where users are dropping off and why. Hypothesis-driven experiments that let you test whether your proposed solution will actually drive the outcome you’re targeting. It doesn’t matter which discovery framework you use (Lean Startup, Design Thinking, Google Design Sprints, or nothing so formal); they all involve the activities above, and they all build better empathy and understanding of where your product needs to go.
Why Product Discovery Might Be Even More Important Than Before
Before AI, good businesses used discovery to save a lot of time during the development process. Development took a long time, so you wanted to be darn sure you were building the right thing. Now that AI makes development fast, why not just build willy-nilly and see what sticks with our customers? My answer is that we’d still be wasting time, and we wouldn’t be building the empathy needed to connect with our customers, understand their problems, and solve them in the best way possible. In short, we’d be wasting just as much time; we just wouldn’t be wasting it on coding anymore.
This discovery work takes time. It takes dedicated attention from product managers, designers, and engineers too. And most teams don’t do nearly enough of it. Not because they don’t believe in it, but because there’s never enough time. The delivery work always crowds it out.
AI is changing that equation. If your team now has, say, 20% more capacity because AI is handling a meaningful chunk of the implementation work, that 20% should be flowing into discovery, not into shipping more features faster.
Giving Teams Problems, Not Solutions
There’s a related shift that I think is equally important, and it’s a cultural one: moving from a model where leadership hands teams solutions to implement, to a model where leadership hands teams problems to solve.
The old model, common in organizations where product decisions are made at the top and handed down, treats engineers and product managers as implementers at best and overhead at worst. Their job is to build what they’re told, on time and on budget. Discovery, to the extent it happens at all, is a pre-sales or strategy function, not a team function.
The new model recognizes that the people closest to the technology and closest to the customer are often best positioned to figure out the right solution. Leadership’s job is to define the problem clearly, articulate the outcome they’re trying to drive, and then give the team the space and resources to figure out how to get there.
This is harder than it sounds. It requires leaders to hold the “what” and “why” tightly while loosening their grip on the “how.” It requires trust. And it requires building teams that have the skills and the habits to do discovery well. It requires roadmaps and planning processes that create space for discovery rather than crowding it out with feature commitments.
But the payoff is enormous. Teams that own their problems, not just their tasks, are more engaged, more creative, and, most importantly, more likely to build things that actually work.
What This Means Practically for Your Team
If you’re a product leader, an engineering manager, or a team lead trying to figure out how to make the most of the AI moment, here’s what I’d suggest:
Start by auditing where your team’s time is actually going. How much time is spent in delivery (writing code, testing, deploying) versus discovery (understanding customers, testing hypotheses, exploring solutions)? If your discovery time is less than 20% of the team’s total capacity, you have room to grow there, and AI may be giving you the opportunity to do it.
Next, look at your roadmap. Is it a feature list or an outcome list? If it’s a feature list, try rewriting the next quarter’s worth of commitments as outcomes and see what that does to the team’s conversations. You may find that some features suddenly look less important, and some gaps in your understanding suddenly look more urgent.
Finally, think about how work gets into your team’s hands. Are you handing them solutions or problems? Are the discovery questions being asked before the building starts, or after? The answers to those questions will tell you a lot about whether you’re set up to take advantage of what AI is making possible.
The teams that will win in the next few years aren’t the ones that ship the most features. They’re the ones that figure out the right things to build and then build them exceptionally well. AI is a powerful accelerant for the second part. But only you can do the first.
I talked about some of these ideas on the Leadership in Flux podcast, Episode 4, which is available now. And if you want to go deeper on product discovery, my book Agile Discovery & Delivery: A Survival Guide for New Software Engineers & Tech Entrepreneurs covers a lot of this ground.