Something remarkable happened in 2025. Coding agents — tools like Claude Code, Cursor, and GitHub Copilot — made it possible for a single developer to produce work that would have taken a team weeks. The delivery bottleneck, for many organisations, effectively vanished.
And yet the data keeps getting worse. MIT reported that 95% of generative AI pilots failed to deliver measurable value. S&P Global found that 42% of companies abandoned most of their AI initiatives, up from 17% the year before. The failure rate for AI projects is now roughly double that of traditional IT projects.
How is that possible? The tools are better than ever. The models are more capable. The infrastructure is cheaper. So why are the results getting worse?
The bottleneck moved
For years, the constraint on analytics and AI delivery was speed. Building a dashboard took weeks. Training a model took months. Getting data from source systems was a project in itself.
Coding agents removed that constraint. But they exposed a deeper one: most organisations don’t have clarity on what’s worth building.
When it took six weeks to build a dashboard, a bad call cost six weeks of wasted effort. Now that it takes six hours, each mistake costs less, but mistakes happen far more often. Organisations are producing more outputs, faster, with less certainty about whether any of them matter.
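A toy calculation makes the trap concrete. The figures below are illustrative assumptions, not from any survey: if the share of builds that never influence a decision stays constant, faster delivery does not reduce total wasted effort at all — only improving decision clarity does.

```python
HOURS_PER_QUARTER = 480  # assumed: one developer, roughly twelve weeks


def wasted_hours(build_hours: float, low_value_share: float) -> float:
    """Hours per quarter spent on builds that never influence a decision.

    Capacity is fixed, so the number of builds scales inversely with
    build time; waste ends up depending only on the low-value share.
    """
    builds = HOURS_PER_QUARTER / build_hours
    return builds * low_value_share * build_hours


# Before coding agents: 240-hour dashboards, 60% never used (assumed).
before = wasted_hours(240, 0.6)  # 288 wasted hours

# After coding agents: 6-hour dashboards, same 60% never used.
after = wasted_hours(6, 0.6)     # still 288 wasted hours

# The only lever that moves the number is the low-value share itself.
with_clarity = wasted_hours(6, 0.2)  # 96 wasted hours
```

Speed changes how quickly the waste accumulates per build, not how much accumulates overall; the low-value share is the lever that workflow redesign and decision clarity are meant to pull.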
Dashboards don’t change decisions
Here’s a question that most data teams struggle to answer: for each dashboard you maintain, what specific business decision does it enable?
Studies suggest that only around one in five employees actually use the BI tools their company has deployed. Up to 90% of dashboards are abandoned within six months. The problem isn’t the technology. It’s that the dashboards were built without a clear link to a decision that has value at risk.
If a metric can’t influence a choice or a process, it’s just a cost.
Workflow redesign is the #1 predictor of success
McKinsey’s 2025 State of AI survey tested 25 organisational attributes against EBIT impact from AI. The single strongest predictor? Whether the organisation had fundamentally redesigned its workflows before selecting tools.
Not the sophistication of the model. Not the size of the data estate. Not the technology budget. Workflow redesign.
Organisations that redesigned workflows first were roughly three times more likely to see real financial impact from their AI investments.
What this means in practice
Before your data team builds the next dashboard, model, or data product, three questions need clear answers:
- What decision does this enable? If nobody can name the decision-maker and the choice they face, the product doesn’t have a reason to exist.
- What’s the value at risk? Every decision has a monetary consequence. If you can’t estimate it, you can’t prioritise the work.
- What does the decision-maker need to know? Not “what data do we have”, but “what questions need answering to make this decision well?”
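The three questions above amount to a lightweight intake check that a data team can run before any build. A minimal sketch of that idea, with field names and the `DecisionBrief` structure being illustrative rather than anything prescribed here:

```python
from dataclasses import dataclass, field


@dataclass
class DecisionBrief:
    """Intake record a data team fills in before building anything."""
    product: str          # the proposed dashboard, model, or data product
    decision_maker: str   # who makes the call
    decision: str         # the choice they face
    value_at_risk: float  # estimated monetary consequence of the decision
    questions: list[str] = field(default_factory=list)  # what they need to know

    def is_buildable(self) -> bool:
        """All three questions must have concrete answers."""
        return bool(self.decision_maker and self.decision
                    and self.value_at_risk > 0 and self.questions)


def prioritise(briefs: list[DecisionBrief]) -> list[DecisionBrief]:
    """Keep only briefs that pass the check, highest value at risk first."""
    return sorted((b for b in briefs if b.is_buildable()),
                  key=lambda b: b.value_at_risk, reverse=True)
```

A brief with no named decision-maker, no estimated value at risk, or no questions to answer simply never enters the queue, which is the point: the filter happens before any code is written, not after the dashboard is abandoned.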
Starting from these questions changes everything downstream. Requirements become specific. Scope becomes manageable. The data team knows exactly what to build, and the business can see exactly why it matters.
The speed trap
Coding agents are extraordinary tools. But speed without direction is just expensive motion. The organisations that will get the most value from this new era of AI-accelerated delivery are the ones that invest in decision clarity before they start building.
The bottleneck was never the code. It was always the thinking.