ASIC is urging financial services and credit licensees to ensure their governance practices keep pace with their accelerating adoption of artificial intelligence.
The commission says the call comes as its first state-of-the-market review of AI use and adoption by 23 licensees* found governance had the potential to lag AI adoption, despite current AI use being relatively cautious.
ASIC Chair Joe Longo says updating governance frameworks for the planned use of AI is crucial if licensees are to meet the future challenges posed by the technology.
“Our review shows AI use by the licensees has to date focussed predominantly on supporting human decisions and improving efficiencies. However, the volume of AI use is accelerating rapidly, with around 60% of licensees intending to ramp up AI usage, which could change the way AI impacts consumers,” he says.
ASIC’s findings reveal nearly half of licensees did not have policies in place that considered consumer fairness or bias, and even fewer had policies governing the disclosure of AI use to consumers.
“It is clear that work needs to be done—and quickly—to ensure governance is adequate for the potential surge in consumer-facing AI,” Longo says.
He adds that AI could bring significant benefits, but without governance processes keeping pace, “significant risks” could emerge.
“When it comes to balancing innovation with the responsible, safe and ethical use of AI, there is the potential for a governance gap – one that risks widening if AI adoption outpaces governance in response to competitive pressures.”
He notes that without appropriate governance “…we risk seeing misinformation, unintended discrimination or bias, manipulation of consumer sentiment and data security and privacy failures, all of which has the potential to cause consumer harm and damage to market confidence.”
Longo says licensees must consider their existing obligations and duties when deploying AI, rather than simply waiting for AI-specific laws and regulations to be introduced.
“Existing consumer protection provisions, director duties and licensee obligations put the onus on institutions to ensure they have appropriate governance frameworks and compliance measures in place to deal with the use of new technologies. This includes proper and ongoing due diligence to mitigate third-party AI supplier risk.”
He says ASIC wants to see licensees harness the potential for AI in a safe and responsible manner—one that benefits consumers and financial markets.
“This can only happen if adequate governance arrangements are in place before AI is deployed.”
The commission adds that understanding and responding to financial firms’ use of AI is a key focus for it.
It will continue to monitor how licensees use AI “…as it has the potential to significantly impact not just consumer outcomes, but the safety and integrity of the financial system. Where there is misconduct, ASIC will take enforcement action if appropriate and where necessary.”
*As background, ASIC says it reviewed AI use across 23 licensees in the retail banking, credit, general and life insurance, and financial advice sectors, where AI interacted with or impacted consumers.
During 2024, it analysed information about 624 AI use cases that were in use or being developed as at December 2023, and met with 12 of the 23 licensees to understand their approach to AI and how they were considering and addressing the associated consumer risks.