Suncorp Group is revising its governance and risk management around AI in response to the rise of generative AI and an internal ambition to “do more” with artificial intelligence technology generally.
Executive manager of enterprise and strategic risk Adam Spencer told the Gartner Data & Analytics Summit in Sydney that the insurer had a history with “elements of AI”, and governance and risk settings that had so far been fit-for-purpose.
“The types of AI we’ve done have included machine learning, and RPA [robotic process automation] to automate a lot of our operational processes,” Spencer said.
“We’ve been on a long journey of using AI, and our existing governance frameworks and risk management have been fit-for-purpose.
“But more recently, especially with GenAI and I suppose an increased ambition to do more with AI, governance and risk management has to uplift alongside that to not only ensure we do AI safely and responsibly but also … to make sure we can do it at speed.”
Spencer suggested that GenAI use cases have so far been internally focused, but that this is laying important groundwork for other applications of the technology down the track.
“There’s a lot of opportunity [with] … lower inherent risk GenAI use cases,” he said.
“It’s tempting to do the customer-facing stuff but there’s so much more opportunity in the internal or efficiency-based [applications of the technology].
“I know people will tell you the big prize is revenue and all the big ambitious GenAI [uses] but if we start with the lower inherent risk stuff, we’ll get the benefits and we’ll learn how to govern and risk-manage GenAI, which will set us up to go to that next stage.”
The work to upgrade existing governance and risk management structures for AI uses the federal government's AI ethics principles as a framework to guide internal conversations and risk-mapping exercises.
“We’re operationalising those AI ethics principles,” Spencer said.
“We’d already adopted Australia’s AI ethics principles as our data principles if you like; there’s eight of them and they have very good coverage of all the things we should consider when we do AI, and we’ve taken that as our framework for our AI governance and risk management.
“I think what we’ve done, which is probably taking it that next step, is [we’re] turning what were principles into commitments for a start, but also turning those principles into risks, because each one of those principles is effectively a risk if you don’t do it.
“So, every time we do something with AI, we say, ‘What’s the risk of not being reliable and safe?’ and then come up with controls that mitigate those risks.”
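A minimal sketch of the kind of register this exercise could produce is shown below, assuming each ethics principle is restated as a risk with associated controls. The principle names come from Australia's AI ethics principles; the specific risks, controls, use-case name and structure are illustrative assumptions, not Suncorp's actual framework.

```python
# Illustrative sketch only: one way to represent a principle-to-risk-to-control
# mapping of the kind Spencer describes. Risks and controls here are
# hypothetical examples, not Suncorp's actual risk register.
from dataclasses import dataclass, field


@dataclass
class PrincipleMapping:
    principle: str                # one of the eight AI ethics principles
    risk_if_breached: str         # the principle restated as a risk
    controls: list[str] = field(default_factory=list)  # mitigating controls


ASSESSMENT_TEMPLATE = [
    PrincipleMapping(
        principle="Reliability and safety",
        risk_if_breached="Model outputs are inaccurate or unsafe in operation",
        controls=["Pre-release accuracy testing", "Ongoing output monitoring"],
    ),
    PrincipleMapping(
        principle="Contestability",
        risk_if_breached="Customers cannot challenge an AI-driven outcome",
        controls=["Manual review path", "Human escalation channel"],
    ),
    # ... remaining principles (fairness, transparency, accountability, etc.)
]


def assess_use_case(name: str) -> None:
    """Walk a new AI use case through each principle, mirroring the
    'what's the risk of not being reliable and safe?' question above."""
    print(f"Risk assessment for: {name}")
    for item in ASSESSMENT_TEMPLATE:
        print(f"- Principle: {item.principle}")
        print(f"  Risk: {item.risk_if_breached}")
        print(f"  Controls: {', '.join(item.controls)}")


if __name__ == "__main__":
    # Hypothetical lower-risk, internally focused use case
    assess_use_case("Internal claims-summarisation assistant")
```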
Spencer said reliability, safety and accuracy were key risks the insurer hoped to address in its initial work. Contestability options and fairness were also important considerations, he noted.
“I think contestability has been a really interesting discussion because it’s good to automate things … but it’s important especially for our customers that there’s … a path to contest the AI or computer-based outcome, and there’s a manual process [where] you can talk to a person.”
Spencer presented during an IBM session at the summit. IBM indicated it had participated in the risk and governance revision efforts.
Suncorp and IBM have a history of working together on AI use cases built on the vendor's Watson technologies.