AI automation in Sydney operates under two regulatory frameworks, one federal and one state. Both matter even for private-sector work, because they shape procurement, liability and internal governance.
Federally, the relevant rules are Australia's AI Ethics Principles (eight voluntary principles: human, societal and environmental wellbeing; human-centred values; fairness; privacy protection and security; reliability and safety; transparency and explainability; contestability; and accountability) and the Privacy Act 1988 as amended by the Privacy and Other Legislation Amendment Act 2024. The 2024 amendment is particularly relevant to automation work: new APP 1.7 transparency obligations for automated decision-making commence on 10 December 2026, requiring organisations to disclose in their privacy policies how and where automated systems make decisions that affect individuals. If your automation touches customer decisions (loan approvals, eligibility checks, triage, pricing), APP 1.7 will apply.
The ACCC's AI transparency statement confirms that Australian Consumer Law prohibitions on misleading or deceptive conduct apply regardless of whether the misleading output came from a human or an AI. A chatbot that 'hallucinates' product information can breach the ACL, with penalties up to A$50 million per contravention. This is the single most common failure mode we see in production automations — and the main reason our builds include explicit guardrails, fallbacks, and human-escalation paths for any customer-facing interaction.
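The guardrail-and-escalation pattern described above can be sketched in a few lines. This is an illustrative example only, not Mindiam's actual implementation; the names (`BotReply`, `guarded_reply`, `CONFIDENCE_FLOOR`) and the grounding check are our assumptions about what such a gate might look like.

```python
from dataclasses import dataclass

# Illustrative sketch of a customer-facing guardrail: only release claims
# that were grounded in approved product data; everything else falls back
# to a human handoff rather than risking a misleading (ACL-relevant) reply.

CONFIDENCE_FLOOR = 0.75  # assumed threshold; tune per deployment

@dataclass
class BotReply:
    text: str
    confidence: float  # model's self-reported confidence score
    grounded: bool     # was the claim matched against an approved source?

def guarded_reply(reply: BotReply) -> str:
    """Return the bot's text only if it is grounded and confident;
    otherwise escalate to a human agent."""
    if not reply.grounded or reply.confidence < CONFIDENCE_FLOOR:
        return "I'll connect you with a team member who can confirm that."
    return reply.text

print(guarded_reply(BotReply("Ships in 2 business days.", 0.92, grounded=True)))
print(guarded_reply(BotReply("It has a lifetime warranty!", 0.95, grounded=False)))
```

The design choice is that ungrounded output never reaches the customer verbatim: the bot degrades to a handoff message instead of guessing, which is the fallback behaviour the paragraph above refers to.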
At the NSW state level, the NSW AI Assessment Framework (originally released in 2022 and modernised in 2024, a world-first mandatory AI assurance framework in government) applies to any automated system NSW government agencies procure or contract. If your Sydney business sells into the NSW public sector, AIAF compliance is now written into procurement clauses. Completing the AIAF self-assessment, 16 questions across Instructions, Assessment, Deep Dive and Post-Assessment Actions, is also becoming the de facto baseline Sydney enterprises use for internal AI governance, even when they don't sell to government.
Every Mindiam automation build includes governance documentation: what the automation does, what data it processes, what decisions it makes autonomously vs hands off to humans, how failures are logged, and how it would be described under the AIAF. This is not bureaucracy for its own sake — it's the only way to deploy automations that will survive a privacy audit, a procurement review, or an ACCC complaint.
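The governance record described above can be captured as a simple structured document. This is a hypothetical sketch: the field names are our illustration of the items listed in the paragraph, not an official AIAF schema or Mindiam's actual template.

```python
from dataclasses import dataclass

# Hypothetical governance record mirroring the documentation items above:
# what the automation does, data processed, autonomous vs human decisions,
# failure logging, and AIAF mapping notes.

@dataclass
class AutomationGovernanceRecord:
    name: str
    purpose: str                        # what the automation does
    data_processed: list[str]           # categories of data it touches
    autonomous_decisions: list[str]     # decisions made without human review
    human_handoff_triggers: list[str]   # conditions that escalate to a person
    failure_logging: str                # where and how failures are recorded
    aiaf_notes: str = ""                # how it would be described under the AIAF

# Example entry (invented automation, for illustration only)
invoice_bot = AutomationGovernanceRecord(
    name="invoice-triage",
    purpose="Classifies inbound supplier invoices and routes exceptions",
    data_processed=["supplier ABN", "invoice amounts", "contact emails"],
    autonomous_decisions=["routing to approval queues"],
    human_handoff_triggers=["amount over threshold", "unrecognised supplier"],
    failure_logging="structured log entry per failed classification",
)
```

Keeping the record in a structured, machine-readable form means it can be reviewed alongside the code and produced on demand for a privacy audit or procurement review.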