ASIC asks for specific laws around AI misuse

Tests limits of existing laws against AI-facilitated breaches.

Australia’s financial watchdog is using existing laws to rein in alleged AI misuse, but says reforms are needed to better regulate emerging technologies. 

Australian Securities and Investments Commission (ASIC) chair Joe Longo yesterday outlined gaps in legislation for preventing and responding to harms caused by machine learning and automated decision-making.

He said ASIC was still working to hold companies accountable under current laws.

“ASIC is already pursuing an action in which AI-related issues arise,” he said.

“We’re willing to test the regulatory parameters where they’re unclear or where corporations seek to exploit perceived gaps.”

Longo said even though “a divide exists between our current regulatory environment and the ideal [one]…businesses and individuals who develop and use AI are already subject to various Australian laws.”

These include broad, “principle-based” laws like “current directors’ obligations under the Corporations Act [that] aren’t specific duties” and “laws such as those relating to privacy, online safety, corporations, intellectual property and anti-discrimination, which apply to all sectors of the economy.”

Longo said that because harms caused by “‘opaque’ AI systems” are harder to detect than traditional white-collar crime, regulations tailored to crimes committed through algorithms or AI would be more effective at preventing them. 

“Even if the current laws are sufficient to punish bad actions, their ability to prevent the harm might not be,” Longo said.

If an AI facilitated insider trading or market manipulation, for example, ASIC could enforce penalties within the existing regulatory framework, but AI-specific laws would be more likely to prevent and deter the violation, he said.  

“What if a provider lacks adequate governance or supervision of an AI investment manager? 

“When, as a system, it learns to manipulate the market by hitting stop losses, causing market drops and volatility… when there’s a lack of detection systems… yes, our regulations around responsible outsourcing may apply – but have they prevented the harm? 

“Or a provider might use the AI system to carry out some other agenda, like seeking to only support related party product, or share offerings in giving some preference based on historic data.”

"There’s a need for transparency and oversight to prevent unfair practices – accidental or intended. But can our current regulatory framework ensure that happens? I’m not so sure.

“Does it prevent blind reliance on AI risk models without human oversight that can lead to underestimating risks? Does it prevent failure to consider emerging risks that the models may not have encountered during training?”

Longo said that laws to protect consumers against AI-facilitated harms would have to address the current lack of transparency about when AI is used, inadvertent bias, and the difficulty of appealing an automated decision and of establishing who is liable for damages.

“It isn’t fanciful to imagine that credit providers using AI systems to identify ‘better’ credit risks could (potentially) unfairly discriminate against vulnerable consumers. 

“In such a case, will that person struggling have recourse for appeal? Will they even know that AI was being used? And if they do, who’s to blame? Is it the developers? The company? 

“And how would the company even go about determining whether the decision was made because of algorithmic bias, as opposed to a calculus based on broader data sets than human modelling?”
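As an illustration only – not drawn from Longo’s speech – one common starting point for the bias question he raises is a simple comparison of approval rates across customer groups. The groups, figures and 80 percent threshold in the sketch below are hypothetical:

```python
# Minimal sketch (illustrative, hypothetical data): comparing an automated
# credit model's approval rates across groups as a first screen for bias.
from collections import defaultdict

# Hypothetical decision log from an automated credit model.
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

# Count decisions and approvals per group.
totals, approvals = defaultdict(int), defaultdict(int)
for record in decisions:
    totals[record["group"]] += 1
    approvals[record["group"]] += record["approved"]  # True counts as 1

rates = {g: approvals[g] / totals[g] for g in totals}
print("approval rates:", rates)

# A common screening heuristic (the "four-fifths rule"): flag for human
# review if the lowest group's approval rate is below 80% of the highest.
ratio = min(rates.values()) / max(rates.values())
print(f"disparate impact ratio: {ratio:.2f}",
      "-> refer for human review" if ratio < 0.8 else "-> within heuristic")
```

A disparity flagged this way would not by itself prove algorithmic bias, but it marks where the human review – and the liability questions Longo poses – would have to begin.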

The government’s recent response to the review of the Privacy Act agreed “in principle” to enshrine “a right to request meaningful information about how automated decisions are made.”

The European Union’s (EU) General Data Protection Regulation goes much further: under Article 22, individuals have the right not to be “subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her.”

Longo said that one solution developers and policymakers had suggested was building “AI constitutions” into decision-making models and red-teaming them to test whether they could still be made to violate the preset rules.

“In response to these various challenges, some may suggest solutions such as red-teaming, or ‘AI constitutions’ – the suggestion that AI can be better understood if it has an in-built constitution which it must follow,” Longo said.

“But even these have been shown to be vulnerable, with one team of researchers breaking through the control measures of several AI models simply by adding random characters on the end of their requests.”

Another safeguard that Longo said had been floated was mandating “AI risk assessments,” which is a measure NSW has enforced on government agencies since 2022.

“But even here, questions like those I’ve already asked need to be considered to ensure the risk assessment is actually effective in preventing harm,” Longo said.
