Mandatory AI and automated decision risk reviews to land in Queensland


Queensland public sector projects will soon be subject to internal assessments and external reviews designed for evaluating and mitigating risks specific to their use of artificial intelligence (AI) and automated decision-making (ADM).

The state’s whole-of-government ICT oversight and enablement body told iTnews that it is on the verge of releasing the mandatory framework, which it first revealed it was drafting during a presentation in March.

Queensland government chief customer and digital officer Chris McLaren said the customer and digital group – known as QGCDG – is in the final stages of developing an AI governance policy and a supporting AI risk assessment framework covering use cases across the Queensland government.

“QGCDG is also in the early stages of exploring possible adoption of the new ISO42001 AI Management System Standard and other relevant industry standards within Queensland government that will allow the identification and management of AI risk across its lifecycle of use,” McLaren said.

Last week, all federal, state and territory data and digital ministers agreed to a national framework that recommends, but does not mandate, assurance scoring and controls for government projects through internal assessments and external reviews, contingent on whether the projects meet high or low financial and risk thresholds, respectively.

To date, NSW and WA are the only jurisdictions to have introduced such a regime, with NSW’s covering only projects that use AI and WA’s also covering projects that use ADM.

Although not every detail has been finalised, QGCDG’s AI risk assessment framework will cover both AI and ADM, McLaren said.

“The AI risk assessment framework would be applied to AI and ADM within large-scale projects.” 

He added that the new framework and policy will be incorporated into existing mandatory regimes, which include the ‘ICT Investment review’ and ‘portfolio, program and project assurance framework’. 

“Any new risks identified with the new AI risk assessment framework in a project or program, will be managed under existing risk and assurance processes.”

Although the AI risk assessment framework will add more AI-specific checks and balances to the broad regimes, QGCDG has already supported departments and agencies in planning and deploying projects that use AI, McLaren said. 

“The current assurance process includes consideration of risk factors, including those associated with AI and ADM.” 

“Where AI and ADM elements are identified, the specialist data and artificial intelligence team are consulted and involved in advice regarding the identification, evaluation and mitigation of these risks.

“The data and AI team have technical and ethical specialists who provide subject matter expertise and advice in managing AI risk. 

“Where necessary, AI and ADM risk are discussed with the Digital Economy Leaders Sub-Group (the overarching governance body for digital investments).”

This means that although the Sunshine State has been quick to adopt AI projects, those projects have still faced scrutiny.

The projects include one at the Department of Agriculture and Fisheries, which already uses drones trained on classification algorithms to alert sugarcane growers to weeds.

The Queensland public sector’s assistive chatbot QChat is another example. The platform has already been deployed to several generative AI use cases in customer enablement, RegTech, productivity improvement and cyber security, McLaren told iTnews’ 2023 State of IT report.

Highlighting the whole-of-government group’s focus on maximising interoperability and maintaining oversight, QChat was developed, tested and rolled out gradually “within a few departments” before access was expanded, QGCDG executive director of data and AI Nathan Bines said.

Moreover, all AI projects are already registered with QGCDG, and government bodies use an assurance profiling tool [pdf] to determine whether a project requires only an internal assessment or also an external review.

The detail that the new policy will confirm is the level of assurance checks that government AI and ADM projects will be subject to in future.


