AI training in Canberra sits at the heart of Australia's federal AI governance stack: no other Australian city's workforce faces tighter or more specific AI regulatory requirements.
The DTA's Policy for the responsible use of AI in government (version 2.0), effective 15 December 2025, is the operational framework. It is built around three pillars (enable and prepare, engage responsibly, evolve and integrate) and introduces mandatory requirements for every agency:

- identify Accountable Officials and notify the DTA within 90 days of the policy taking effect;
- publish a public transparency statement within six months;
- develop a strategic approach to adopting AI;
- establish governance to operationalise responsible use;
- ensure designated accountability for each AI use case; and
- undertake risk-based actions at the use-case level.

The DTA has also launched the Australian Government AI Assurance Framework (piloted September–November 2024, now in operational use), which includes an AI Impact Assessment tool.
The APS AI Plan 2025, released 12 November 2025, layers further requirements on top: a Chief AI Officer in every agency, mandatory foundational AI literacy training for all APS staff, the sovereign GovAI Platform and GovAI Chat running inside Australian Government infrastructure, and a new AI Review Committee providing cross-disciplinary scrutiny of sensitive and high-risk AI deployments.
Federally, Australia's AI Ethics Principles apply across government. The Privacy Act 1988, as amended by the Privacy and Other Legislation Amendment Act 2024, introduces APP 1.7 transparency obligations for automated decision-making, commencing 10 December 2026. The ACCC's AI transparency statement confirms that the Australian Consumer Law applies to AI outputs.
Every Mindiam AI training engagement in Canberra closes with an extended governance module: which data can and cannot go into public LLMs versus GovAI Chat under the DTA policy, how to document AI-assisted decisions to APS AI Plan standards, how Accountable Officials should maintain their AI-use register, and how to escalate high-risk use cases through the AI Review Committee process. For Defence and ASD-adjacent teams we add a layer on security-classification handling.