AI automation in Melbourne runs under two regulatory frames — federal and Victorian — and both matter even for entirely private-sector work. They shape what data your automations can touch, how decisions get documented, and what you're liable for if something fails.
Federally, the relevant rules are Australia's AI Ethics Principles (eight voluntary principles — human/societal wellbeing, human-centred values, fairness, privacy and security, reliability and safety, transparency, contestability, accountability) and the Privacy Act 1988 as amended by the Privacy and Other Legislation Amendment Act 2024. The 2024 amendment is particularly relevant to automation: new APP 1.7 transparency obligations for automated decision-making commence on 10 December 2026. If your automation touches customer decisions — loan approvals, eligibility, triage, dynamic pricing — APP 1.7 will require you to disclose in your privacy policy how and where those decisions are made.
The ACCC's AI transparency statement confirms Australian Consumer Law prohibitions on misleading or deceptive conduct apply regardless of whether the misleading output came from a human or an AI. A chatbot that 'hallucinates' product information can breach the ACL, with penalties up to A$50 million per contravention. That's the single most common failure mode we see in production automations — and the main reason our builds include explicit guardrails, fallbacks, and human-escalation paths for any customer-facing interaction.
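To make the guardrail-and-escalation pattern concrete, here is a minimal sketch of how a customer-facing reply can be checked against an authoritative source before release. Every name in it (`generate_reply` aside, which is omitted entirely, the catalogue `KNOWN_PRODUCTS`, `escalate_to_human`, `guarded_reply`) is a hypothetical illustration, not part of any particular framework or of Mindiam's actual build:

```python
# Sketch: release a chatbot draft only if every product it names exists in the
# authoritative catalogue; otherwise log the event and hand off to a human.
# All identifiers here are illustrative assumptions.
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("chatbot.guardrail")

KNOWN_PRODUCTS = {"starter-plan", "pro-plan"}  # source of truth, not the model
FALLBACK = "I'm not certain about that. Let me connect you with a team member."

def escalate_to_human(question: str) -> str:
    # Log the escalation so the failure is auditable later.
    log.info("Escalated to human: %s", question)
    return FALLBACK

def guarded_reply(question: str, draft_reply: str) -> str:
    """Block any draft that mentions a product not in the catalogue."""
    mentioned = set(re.findall(r"[a-z]+-plan", draft_reply))
    if mentioned - KNOWN_PRODUCTS:  # a hallucinated product name
        return escalate_to_human(question)
    return draft_reply

print(guarded_reply("Do you sell a mega-plan?", "Yes, our mega-plan costs $10."))
```

The design point is that the guardrail validates against data the business controls, so a fluent but wrong answer never reaches the customer; it becomes a logged handoff instead.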
At the Victorian state level, the relevant frames are the Administrative Guideline for the Safe and Responsible Use of Generative AI in the Victorian Public Sector (plus accompanying guidance) and Victoria's commitment to the National Framework for the Assurance of AI in Government, signed in June 2024. The Victorian Government Solicitor's Office also maintains its Prompt Action framework, which provides legal guardrails for public-sector AI use. If your Melbourne business sells to or partners with the Victorian government, these requirements flow down through procurement clauses. Even when they don't, Victorian enterprises increasingly adopt them as their internal AI-automation governance baseline.
Every Mindiam automation build includes governance documentation: what the automation does, what data it processes, what decisions it makes autonomously versus hands off to humans, how failures are logged, and how it would be described under the Victorian guideline and the National AI Assurance Framework. This isn't bureaucracy for its own sake — it's the only way to deploy automations that will survive a privacy audit, a procurement review, or an ACCC complaint.
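The documentation items listed above can also live as a machine-readable record, which makes them easy to surface during an audit or procurement review. The schema below is an illustrative assumption, not a prescribed Mindiam, Victorian, or National Framework format; only the field names track the items named in the paragraph:

```python
# Sketch: a machine-readable governance record for one automation.
# The schema is a hypothetical illustration of the documentation items above.
from dataclasses import dataclass, asdict
import json

@dataclass
class GovernanceRecord:
    name: str
    purpose: str                     # what the automation does
    data_processed: list[str]        # what data it touches
    autonomous_decisions: list[str]  # made without a human in the loop
    human_handoffs: list[str]        # always escalated to a person
    failure_logging: str             # how and where failures are recorded

record = GovernanceRecord(
    name="invoice-triage",
    purpose="Route inbound invoices to the right approver",
    data_processed=["supplier name", "ABN", "invoice amount"],
    autonomous_decisions=["routing for amounts under $1,000"],
    human_handoffs=["amounts over $1,000", "unrecognised suppliers"],
    failure_logging="structured JSON entries in the audit log",
)
print(json.dumps(asdict(record), indent=2))
```

Keeping the record as structured data rather than a loose document means it can be versioned alongside the automation and exported on demand when a reviewer asks what the system decides on its own.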