
Most AI investment goes into build. Most AI value comes from operate. The disconnect between the two is one of the largest sources of disappointment in mid-market AI programmes: the system ships, the consultants leave, and within six months the value is leaking away because nobody has been monitoring drift, refining prompts, or extending the system as the workflow changes around it.
Why operate is harder than it looks
AI systems are not set-and-forget software. They have moving parts that change with or without your attention.
- Models change. Vendors release new versions, deprecate old ones, and adjust behaviour underneath you.
- Data changes. The patterns the system was tuned against shift over time. Customer language evolves. Document templates get updated. Categories diverge.
- Workflows change. The team adjusts how they work. New systems get added. Edge cases that were rare become common.
- Vendor pricing changes. Costs that made the business case three months ago do not necessarily make it today.
- Regulatory expectations change. APRA, ACQSC, ASIC, and equivalent bodies are publishing AI guidance at pace. What was compliant yesterday may fall short of the guidance published today.
Each of these can erode the value of an AI initiative quietly. A system that is not actively cared for in production trends towards underperformance, not steady state.
What managed services should actually cover
We have seen managed AI services described as everything from glorified support contracts to full operational ownership. Useful managed engagements have a clear scope. Ours typically cover six things.
1. Monitoring and drift detection
Live observability of accuracy, latency, refusal rate, escalation rate, and cost per call. Drift alerts when behaviour shifts. Periodic re-evaluation against the curated harness so we know whether the system is still doing what it was meant to do.
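To make the drift check concrete, here is a minimal sketch in Python. Everything in it is illustrative rather than prescriptive: the metric names, baselines, and tolerances are hypothetical, and in a real engagement they come from the evaluation baseline agreed at go-live.

```python
from dataclasses import dataclass

@dataclass
class MetricWindow:
    name: str
    baseline: float   # value agreed at go-live
    tolerance: float  # acceptable relative drift, e.g. 0.10 = 10%
    current: float    # rolling average from live telemetry

def drift_alerts(windows: list[MetricWindow]) -> list[str]:
    """Flag any metric that has drifted beyond its agreed tolerance."""
    alerts = []
    for w in windows:
        drift = abs(w.current - w.baseline) / w.baseline
        if drift > w.tolerance:
            alerts.append(f"{w.name}: {w.current:.3f} vs baseline {w.baseline:.3f} ({drift:.0%} drift)")
    return alerts

# Hypothetical numbers for illustration only.
watch = [
    MetricWindow("accuracy", baseline=0.92, tolerance=0.05, current=0.86),
    MetricWindow("escalation_rate", baseline=0.08, tolerance=0.25, current=0.09),
    MetricWindow("cost_per_call_aud", baseline=0.04, tolerance=0.20, current=0.06),
]
for alert in drift_alerts(watch):
    print("DRIFT:", alert)
```

The design point is that drift is measured against an agreed baseline, not against intuition: an alert fires when a metric moves past a tolerance the operational owner has signed off on.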
2. Prompt and model maintenance
Prompts are not stable artefacts. They get refined as we learn what fails, what users override, and what edge cases emerge. Model versions change, sometimes intentionally (we move to a better model), sometimes externally (the vendor deprecates a version). Either way, every change runs through the evaluation harness before it goes live.
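As a sketch of what that gate looks like, assuming a curated set of cases and a scoring function (stubbed out here as the `candidate` lambda), the deployment decision reduces to a pass rate against the harness:

```python
from typing import Callable

Case = dict  # e.g. {"input": ..., "expected": ...}

def passes_harness(
    run_case: Callable[[Case], bool],
    cases: list[Case],
    min_pass_rate: float = 0.95,
) -> bool:
    """Run every curated case against the candidate; gate on the pass rate."""
    passed = sum(1 for c in cases if run_case(c))
    rate = passed / len(cases)
    print(f"harness: {passed}/{len(cases)} passed ({rate:.0%})")
    return rate >= min_pass_rate

# Stub: a real run_case calls the candidate model and scores its output.
cases = [{"input": "refund request", "expected": "refunds"} for _ in range(20)]
candidate = lambda c: True  # pretend the candidate classifies correctly

if passes_harness(candidate, cases):
    print("clear to deploy")
else:
    print("blocked: candidate regresses against the harness")
```

The 95% threshold here is a placeholder; in practice the bar is usually set relative to the current production version, so a candidate cannot ship below the standard it replaces.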
3. Cost and capacity management
Model API costs are usage-based and vendor-determined. Without active management, costs can drift in ways that erode the business case. Managed services include rate limit configuration, caching, model selection optimisation (use the cheaper model where it works, the expensive one where it does not), and reporting so finance has visibility.
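A minimal sketch of what model selection and caching look like in code, assuming a complexity score is already available per request; the model names, costs, and threshold are all hypothetical:

```python
from functools import lru_cache

# Illustrative per-call costs in AUD; real pricing is vendor-determined and changes.
COST = {"small": 0.002, "large": 0.03}

def pick_model(task_complexity: float) -> str:
    """Use the cheaper model where it works, the expensive one where it does not."""
    return "small" if task_complexity < 0.7 else "large"

@lru_cache(maxsize=10_000)
def answer(question: str, complexity: float) -> str:
    """Cache repeated questions so identical calls cost nothing the second time."""
    model = pick_model(complexity)
    # A real implementation calls the vendor API here; this stub records the choice.
    print(f"calling {model} (est. ${COST[model]:.3f})")
    return f"[{model}] response to: {question}"

answer("What are your opening hours?", 0.2)     # routed to the small model
answer("Summarise this 40-page contract", 0.9)  # routed to the large model
answer("What are your opening hours?", 0.2)     # cache hit: no API call, no cost
```

Caching identical requests is often the cheapest optimisation available: the second occurrence of a common question costs nothing.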
4. Incident response
When the system misbehaves (and it will), there is a defined response: rollback, root cause, fix, evaluation, redeploy. Integrated with your existing incident framework so AI incidents do not sit in a parallel process. Documentation suitable for audit and regulatory review.
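One way to structure that response, sketched in Python with hypothetical version names: roll back to the last known-good configuration first, then capture a record that investigation and re-evaluation fill in afterwards.

```python
import json
from datetime import datetime, timezone

def handle_incident(current: str, last_known_good: str, description: str) -> dict:
    """Roll back first, investigate second; keep a record suitable for audit."""
    record = {
        "detected_at": datetime.now(timezone.utc).isoformat(),
        "rolled_back_from": current,
        "rolled_back_to": last_known_good,
        "description": description,
        "root_cause": None,     # filled in after investigation
        "re_evaluated": False,  # set True once the fix clears the harness
    }
    # A real implementation would redeploy last_known_good here.
    print(json.dumps(record, indent=2))
    return record

handle_incident(
    current="prompt-v14",
    last_known_good="prompt-v13",
    description="Spike in refusal rate after vendor model update",
)
```

Root cause and harness re-evaluation land in the same artefact, which is what makes the record usable for audit and regulatory review later.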
5. Adjacent extension
The AI system should grow with the operation. New depots come online. New product lines are added. New customer segments are onboarded. The managed service includes the work to extend the system to those adjacent areas without rebuilding from scratch.
6. Governance and reporting cadence
Quarterly review with the operational owner. Periodic reports to risk, audit, and the board. Updates against regulatory guidance changes. The system stays defensible across all the surfaces that matter.
How to scope managed services so they are worth the spend
There are three rules that separate managed AI services that earn their keep from ones that turn into expensive standing arrangements.
Tie the engagement to outcomes, not effort
Hours-based managed contracts are easy to write but hard to defend. Outcome-based contracts (accuracy maintained above X, cost per call kept below Y, incident response within Z, quarterly extensions delivered) give you something to measure. The work supports the outcome rather than substituting for it.
Keep the operational owner in the loop
The managed service is in service of the operation, not in parallel with it. The operational owner (head of customer service, COO, head of credit, whoever) runs a quarterly review with us, sees the metrics, and signs off on changes. If the operational owner does not feel ownership, the service has drifted into a vendor relationship rather than a partnership.
Plan for the end of the engagement
We always document the system, the prompts, the evaluation harness, the runbook, and the operating model so the client can take the work in-house at any time. A managed service that creates dependence is a managed service that will eventually create resentment. The right test is whether you could end the engagement next quarter and still operate the system. If the answer is no, we have done the wrong work.
Where managed services pay back
The clearest payback is in environments where the AI system is critical, the in-house capability is thin, or the workload changes fast enough that maintaining the system in-house would distract the team from higher-value work. For most mid-market Australian businesses, that means the first one to three AI systems run on a managed engagement, and progressively more capability moves in-house as the team builds depth.
AI in production is not a one-time spend. The operate phase is where the value is realised, and where it leaks. Plan for it accordingly.
Related service
Managed AI Services
Want to apply this thinking to your operation? Our Managed AI Services engagement is the structured next step.
Learn about Managed AI Services

