Two regulatory frames matter most for Sydney businesses running AI training: one federal, one state. Both will shape which tools your team can use with customer data, and how you document their use.
At the federal level, Australia's AI Ethics Principles — eight voluntary principles developed by CSIRO's Data61 with the Department of Industry, Science and Resources and released on 7 November 2019 — are the reference point most Australian regulators and enterprise procurement teams fall back on. The eight principles cover human, societal and environmental wellbeing; human-centred values; fairness; privacy protection and security; reliability and safety; transparency and explainability; contestability; and accountability.

The Privacy Act 1988, as amended by the Privacy and Other Legislation Amendment Act 2024 (Royal Assent 10 December 2024), adds enforceable requirements on automated decision-making, with new transparency obligations under APP 1.7 commencing on 10 December 2026. And the ACCC's AI transparency statement confirms that the Australian Consumer Law's prohibitions on misleading conduct apply to AI systems: a chatbot hallucination that misrepresents product or service information can breach the ACL, where false or misleading representations carry maximum corporate penalties of the greater of A$50 million, three times the benefit obtained, or 30 per cent of adjusted turnover per contravention.
At the NSW state level, the NSW Artificial Intelligence Assessment Framework (originally released in 2022, modernised in 2024) is the more concrete operational obligation. When NSW launched the framework, it became the first government in the world to mandate an assurance framework for AI systems. It requires NSW agencies (and, increasingly, their suppliers and contracted partners) to complete a 16-question self-assessment across four sections — Instructions, Assessment, Deep Dive, and Post-Assessment Actions — with high- and critical-risk systems reviewed by the NSW AI Review Committee. If your Sydney business sells to or partners with the NSW government, the AIAF will appear in your contract clauses. Even if it doesn't, it's the benchmark most Sydney enterprises are converging on for internal AI governance.
Our AI training explicitly covers both frames. Every workshop ends with a short governance module: what data you can and can't put into public LLMs, how to document AI-assisted decisions in a way the AIAF would accept, and how to flag high-risk use cases up the chain before they become procurement problems. For regulated teams — financial services, health, legal, government suppliers — we run an extended governance-first variant of the program.