Data and Digital Ministers have agreed on a set of nationally consistent approaches to the safe and ethical use of AI in government projects and programs.
Commonwealth, state and territory governments agreed to and released the National framework for assurance of AI in government after meeting in Darwin, according to a joint statement from the Data and Digital Ministers Meeting (DDMM).
“Today we’ve agreed across all levels of government that the rights, wellbeing, and interests of people should be put first whenever a jurisdiction considers using AI in policy and service delivery,” DDMM chair and Minister for the Australian Public Service, Senator Katy Gallagher, said.
The set of guidelines, best practices and standards is based on the federal government’s eight AI ethics principles, which are promoted across both the public and private sectors.
The principles include: ‘human, societal and environmental wellbeing’, ‘human-centred values’, ‘fairness’, ‘privacy protection and security’, ‘reliability and safety’, ‘transparency and explainability’, ‘contestability’ and ‘accountability’.
NSW Minister for Customer Service and Digital Government Jihad Dib said the framework would give each jurisdiction the flexibility to meet its unique needs while setting consistent expectations for the oversight of AI and for people’s experience of government.
“It is important to have national consistency on something as significant as AI, and this builds on the work NSW is doing to guide the responsible and ethical use of AI within government,” he said.
In March this year, Western Australia became the second jurisdiction to establish AI-specific risk assessments for public sector projects.
The national framework stops short of mandating that other jurisdictions implement the same assessment and review regimes, but it does encourage governments to consider similar auditing processes in its list of possible oversight mechanisms.
“Governments should also consider oversight mechanisms for high-risk settings, including but not limited to external or internal review bodies, advisory bodies or AI risk committees, to provide consistent, expert advice and recommendations,” it states.
The guidelines also encourage governments to assess AI use cases through impact assessments.
“Governments should assess the likely impacts of an AI use case on people, communities, societal and environmental wellbeing to determine if benefits outweigh risks and manage said impacts appropriately.”