Most Australian enterprises and scaleups I've talked to in 2026 are stuck at the same place: they've run an AI proof of concept, it technically worked, and nothing has made it to production. The explanation is rarely model quality. It's almost always the practice layer above the model: who owns governance, how teams are enabled, and whether strategy connects to the P&L at all.
I spent the better part of this year talking to practitioners across Australian financial services, government, and scaleup delivery teams. The pattern holds everywhere: the model choice accounts for maybe 8% of the variance in outcome. The other 92% is practice layer — can your team reason about when to delegate, how to structure a hand-off, what observability tells you when an agent is confused, and how to write a rollback plan when it goes sideways.
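What "observability tells you when an agent is confused" looks like in practice is a written-down, checkable definition rather than a vibe. Here's a minimal sketch of that idea; the telemetry fields, thresholds, and names are all illustrative assumptions, not a standard from any team I spoke with:

```python
from dataclasses import dataclass

@dataclass
class AgentTelemetry:
    retries: int                 # consecutive failed tool calls
    repeated_actions: int        # identical actions in the recent window
    steps_without_progress: int  # steps since the last observable state change

def is_confused(t: AgentTelemetry,
                max_retries: int = 3,
                max_repeats: int = 2,
                max_stalled_steps: int = 5) -> bool:
    """An explicit confusion definition: any signal crossing its
    threshold flags the agent for review instead of letting it churn."""
    return (t.retries > max_retries
            or t.repeated_actions > max_repeats
            or t.steps_without_progress > max_stalled_steps)
```

The point isn't the thresholds; it's that "confused" is defined once, in code, so the rollback plan can key off it rather than off someone's intuition at 2am.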
"The model layer is commoditising. The practice layer above it is what Australian enterprises will pay for through 2026."
None of the teams I spoke with call themselves minimalists about AI. But the ones shipping in production are disciplined about a specific thing: they treat the practice layer as load-bearing infrastructure, not as a wrapper around the API calls. That distinction is the whole game.
What follows isn't a manifesto. It's a structured inventory of the three failure modes I've watched recur across every stalled deployment — and the specific practice decisions that separate the teams who shipped from the teams who presented another PowerPoint deck about their AI roadmap.
01. The failure modes aren't about the model
The first failure mode is architecture friction. Teams build a spike, see that it works, then spend six months retrofitting it for production, only to discover that their agentic patterns don't compose with the existing data layer or compliance gates. The PoC looked clean because it ignored the constraints the production system can't. The second is ownership collapse: nobody owns the agent, so everybody does, and when it drifts there's no single throat to choke and no rollback authority. The third is enablement theatre: the framework exists, but the squad doesn't know how to use it, so they burn out and quietly revert to manual workflows. All three are failures of practice, not of models.
The teams who got to production did something structurally different from the start: they designed the governance surface before they picked the model. They wrote down who owns what decision, what a confused agent looks like, and what the human-in-the-loop trigger is — before the first line of production code. That's not compliance overhead. That's the practice layer doing its job.
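Writing the governance surface down before the first line of production code can be as simple as a small, version-controlled artifact. The sketch below is hypothetical; every field name and value is my own illustration, not a recorded practice from any of the teams interviewed:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GovernanceSurface:
    """One accountable answer per question, decided before model selection."""
    decision_owner: str       # single owner of the agent's decisions, not a committee
    rollback_authority: str   # who can pull the agent out of production, unilaterally
    confusion_definition: str # the written-down, observable signal of a confused agent
    hitl_trigger: str         # condition that routes the decision to a human

# Illustrative example for a hypothetical financial-services deployment
SURFACE = GovernanceSurface(
    decision_owner="credit-decisioning squad lead",
    rollback_authority="platform on-call engineer",
    confusion_definition="3+ consecutive tool-call failures",
    hitl_trigger="any action touching a customer account",
)
```

Freezing the dataclass is the design choice doing the work: the surface can only change through a reviewed commit, which is exactly the kind of load-bearing treatment the shipping teams gave it.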