How to use this template
Open a blank Google Doc, copy in the five fields below, fill each one in for your business, and have whoever signs commercial agreements sign it off. The whole thing should fit on one page when printed; if it spills onto a second page, you've written too much.
The five fields are the floor, not the ceiling. They're the minimum a business should have committed to writing under the federal Voluntary AI Safety Standard and the Privacy Act 1988. Industries with heavier regulation (healthcare, finance, government services) need more on top — but the five fields go first; they answer questions your auditor will ask, in language your staff actually read.
The five fields
Field 1 — Sanctioned tools
List the AI tools your team is allowed to use for work, with the plan tier. Three is a good target; six is the upper bound before you have tool sprawl. Be specific: "ChatGPT Team plan", not "ChatGPT". Include any vertical-specific tools (CRM AI, ad-creative AI, design AI) your team actually uses, and the customer-data status of each.
Why this matters: staff use AI tools whether or not you've sanctioned them. Naming the sanctioned set converts shadow IT into governed IT. Plans matter because free tiers usually train on your inputs; paid plans usually don't — but verify it in the current ToS for your specific tool.
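If you want the sanctioned set to be checkable rather than just readable, it can live in a tiny structured register. This is an illustrative sketch only — the tool names, plans and data-status labels below are placeholders, not recommendations:

```python
# Illustrative sanctioned-tool register. Entries are placeholders; replace
# with your own tools, plan tiers, and verified customer-data status.
SANCTIONED_TOOLS = [
    {"tool": "ChatGPT", "plan": "Team", "customer_data": "never"},
    {"tool": "ExampleCRM AI", "plan": "Business",
     "customer_data": "allowed - no-training clause verified in ToS"},
]

def is_sanctioned(tool: str, plan: str) -> bool:
    """True only on an exact tool + plan match.

    Plan tiers matter: 'ChatGPT' on the Free plan is not the same
    tool, for policy purposes, as 'ChatGPT' on the Team plan.
    """
    return any(t["tool"] == tool and t["plan"] == plan
               for t in SANCTIONED_TOOLS)
```

The exact-match rule is the point: a free tier of a sanctioned product stays unsanctioned until someone adds it deliberately.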
Field 2 — Prohibited use
Three short bullets. Examples that apply to almost every Australian business:
- No customer personal information into a public model (this includes ChatGPT's free tier — including free-tier GPT-4o with chat history retention turned on).
- No financial information, legal advice, or medical advice without human review.
- No AI-generated content that misrepresents who wrote it when a customer or regulator might reasonably need to know it was AI-generated (e.g. testimonials, expert opinions, news articles).
Why this matters: the Privacy Act 1988 governs the first two; the Australian Consumer Law (section 18 — misleading or deceptive conduct) governs the third. Naming them in policy makes it easy for staff to ask "is this on the list?" rather than relying on judgement calls they don't have the legal training to make.
Field 3 — Customer-data rule
One sentence. Either: "No customer data into any AI tool, ever" (strictest; appropriate for businesses with sensitive data). Or: "Customer data may only be processed via [sanctioned tool X], which is configured for [data residency / no-training / data-deletion-on-cancellation]." (looser; requires you to verify the configuration claims).
If you go with the looser version, name the verification frequency: "reviewed quarterly by [role]". AI vendor terms change; the verification is the only thing standing between your policy and a regulator-visible breach.
Field 4 — Named human reviewer
Identify one role (not a person — a role title) responsible for human review of AI-generated outputs before they go to customers. Marketing manager, head of operations, whichever role fits. They don't have to review every output — they sign off on the categories that need review (anything regulated, anything making a substantive claim about your business, anything in a customer-facing channel for the first time).
Why this matters: the Voluntary AI Safety Standard expects a documented human-review step. Naming the role in your policy makes that step traceable and auditable, which is what an actual audit (or ACL inquiry, or VAISS gap analysis) will look at.
Field 5 — Review cadence + version
Final field, two lines:
- Reviewed on: [date]. Next review: [+6 months from now].
- Version: 1.0. Owner: [role]. Sign-off: [name + role].
Why this matters: a policy without a review date is dead the moment a vendor changes their ToS, or a new tool launches, or the regulation evolves. The 6-month rhythm is generous enough not to be a burden, frequent enough to catch drift.
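If the review date lives in a script or register rather than only in the document, the +6-month arithmetic is small but fiddly (month lengths vary). A minimal sketch, assuming you track the last-reviewed date; the function name and default are illustrative:

```python
from datetime import date
import calendar

def next_review(last_review: date, months: int = 6) -> date:
    """Add `months` to a date, clamping the day to the target month's length.

    E.g. a review on 29 Aug rolls to 28 Feb (non-leap year), not an
    invalid 29 Feb.
    """
    total = last_review.month - 1 + months
    year = last_review.year + total // 12
    month = total % 12 + 1
    day = min(last_review.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)
```

A register can then flag any policy where `date.today() > next_review(last_review)` as overdue.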
What this template DOESN'T cover (and what to do about it)
The five fields are deliberately the floor. Three things they don't address that you may need to write more about:
- Sector-specific obligations (APRA prudential standards for finance, the Therapeutic Goods Act for healthcare, the Real Estate Institute's practice rules). Add a sector-specific addendum, one page max.
- Incident response — what staff do if they accidentally paste customer data into a public model. Three lines: stop, notify [role], document. Doesn't need its own document at SMB scale.
- AI procurement — how a new AI tool gets approved into Field 1. One sentence: "new tools require [role]'s sign-off after a 14-day evaluation period; criteria are in the AI Tool Buyer's Checklist" (link to our AU AI Tool Buyer's Checklist).
The Friday workshop
Thirty minutes. The person who signs commercial agreements, plus two staff who use AI tools daily. Walk through the five fields, fill each in live, and sign off at the end. The version that ships on a Friday and gets reviewed in six months beats the version that's still being revised by a committee a year later.
Companion reading: our AI Strategy service hub describes how we run this engagement at mid-market scope, and the AdCreative.ai review + the AU AI Tool Buyer's Checklist are the tool-side companions.