
We have audited a lot of AI training programmes for Australian businesses in the last two years. Most are generic, theoretical, and forgotten within a fortnight of the workshop. The ones that change behaviour share a structure, and it does not look like the typical training course. It looks more like a working session with the right people in the room and the right artefact at the end.
Why most AI training does not stick
Generic AI training is delivered by a trainer who does not know your business, using examples that are not your work, with no follow-up beyond a feedback form. The team leaves with a notebook of ideas, no working environment, no validated prompts, and no integration with their actual workflow. Two weeks later, the work has resumed exactly as before.
The pattern is the same whether the training is run internally, by a generic AI training vendor, or by the AI lab of a global consultancy. The deliverable is awareness. The expectation is that awareness translates to use. It rarely does.
What AI training that sticks looks like
The training programmes we have seen change behaviour share three characteristics.
1. Built around your workflows, not generic examples
Before the workshop, we audit the actual work the team does: which documents they read, which emails they write, which reports they produce, which decisions they make. The training material uses real examples from their week. Generic examples are skipped. The team is not learning AI in the abstract. They are learning AI on the work that is on their desk.
2. Pairs use with judgement
Hands-on use is necessary. It is also not sufficient. Without the judgement layer (when does AI help, when does it not, how do I know if the output is wrong, what do I do about hallucinations, what should never go into a model), the team will use the tools but use them badly. The judgement layer is where the senior people in the room earn their keep.
3. Ends with a working artefact
The most powerful intervention we have seen is to end the training with each participant having a small working tool: a prompt template they will use tomorrow, a custom GPT or Claude project tuned to their work, or an automated workflow that fires when they need it. The artefact lives on. The training does not.
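To make "working artefact" concrete, the sketch below shows roughly what the simplest version might look like: a reusable prompt template a participant can run against a document from their own desk. It is illustrative only; the template wording, the model name, and the file name are placeholders, and it assumes the OpenAI Python SDK with an API key already set in the environment.

```python
# Hypothetical sketch of a take-home artefact: a reusable prompt template
# wrapped so a participant can run it against a real document tomorrow.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SUMMARY_TEMPLATE = """You are assisting a {role} at an Australian business.
Summarise the report below in five bullet points for a time-poor manager.
Flag anything that looks like a decision the manager must make this week.
Do not invent figures; if a number is unclear, say so.

Report:
{report_text}
"""

def summarise_report(report_text: str, role: str = "operations manager") -> str:
    """Fill the template with the participant's own document and return the summary."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "user", "content": SUMMARY_TEMPLATE.format(role=role, report_text=report_text)}
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("weekly_report.txt") as f:  # a real document from the participant's week
        print(summarise_report(f.read()))
```

The point is not the code. It is that the participant leaves with something runnable against their own work, not a slide deck.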
Different audiences, different training
There is no universal AI training. The right design depends on who is in the room.
Executive briefings
Two-hour sessions for the C-suite or board. The objective is informed decision-making, not personal use. The content covers the current state of the technology, what it does and does not do well, the regulatory landscape, what is realistic in the next twelve to twenty-four months, and what questions to ask of teams proposing AI work. We treat these as discussion sessions, not lectures.
Functional team workshops
Half-day or full-day sessions for an operational team (customer service, marketing, HR, finance, legal). Built around their workflows. Hands-on with the actual tools. End with each participant having a working asset. We do these in cohorts of eight to twelve. Larger groups dilute the hands-on component.
Technical team enablement
Multi-day programmes for software, data, and product teams that will be building AI features. Different content: model APIs, prompt engineering, evaluation harnesses, structured outputs, observability, deployment patterns. Pairs with a real project so the team learns by shipping rather than by drill.
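For the technical cohort, two of those topics (evaluation harnesses and structured outputs) are worth a concrete illustration. The sketch below is a minimal example, not our curriculum: it assumes the OpenAI Python SDK, an illustrative model name, and made-up test cases, and shows the basic shape of a harness that asks for JSON output and scores it against expected labels rather than eyeballing the result.

```python
# Minimal sketch of an evaluation harness over structured outputs.
# Assumptions: OpenAI Python SDK, OPENAI_API_KEY set, illustrative model and test cases.
import json
from openai import OpenAI

client = OpenAI()

# Tiny, made-up test set: an input message and the category we expect back.
TEST_CASES = [
    {"text": "Please cancel my subscription effective immediately.", "expected": "cancellation"},
    {"text": "The invoice for March charged me twice.", "expected": "billing"},
]

PROMPT_PREFIX = (
    "Classify the customer message into one of: billing, cancellation, other. "
    'Respond with JSON only, in the form {"category": "<label>"}.'
)

def classify(text: str) -> dict:
    """Ask the model for a JSON classification of one message."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        response_format={"type": "json_object"},  # request structured JSON output
        messages=[{"role": "user", "content": PROMPT_PREFIX + "\n\nMessage: " + text}],
    )
    return json.loads(response.choices[0].message.content)

def run_eval() -> None:
    """Score every test case and print a simple pass rate."""
    passed = 0
    for case in TEST_CASES:
        result = classify(case["text"])
        ok = result.get("category") == case["expected"]
        passed += ok
        print(f"{'PASS' if ok else 'FAIL'}: {case['text'][:40]!r} -> {result}")
    print(f"{passed}/{len(TEST_CASES)} passed")

if __name__ == "__main__":
    run_eval()
```

In the real programme the test cases come from the team's own data and the harness grows with the project; the habit of scoring outputs before shipping them is the lesson.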
Champions network
A cross-functional group of people identified by their teams as the AI lead for that function. Quarterly cadence. Shares what is working, what is not, what is being tried. Becomes the connective tissue that keeps the organisation aligned without a heavyweight central programme. Higher leverage than mass training.
What we do not include
Generic prompt frameworks unrelated to the team's work. Long history-of-AI sections. Vague 'thinking about AI' content that does not produce a behaviour change. Vendor-led training where the objective is platform commitment rather than capability.
Training is one of the easiest places to spend money badly in the AI space. Most of the spend never lands. The fix is not to spend more. It is to spend on training that is built around your work, pairs use with judgement, and ends with something the team can keep using.
Where to start
If you have not done structured AI training in your organisation yet, start with a single functional team workshop in a department where AI is already creating informal value (people are already using ChatGPT for parts of their work). Make it real, hands-on, and built around their workflow. Run it well, and the rest of the organisation will ask for the same. Run it badly, and the conversation goes cold for a year.
Related service
Training and Workshops
Want to apply this thinking to your operation? Our training and workshops engagement is the structured next step.

