Australian superannuation funds are actively seeking opportunities to leverage emerging AI technologies for the benefit of members, including enhanced member segmentation, innovative investment opportunities, and efficiencies in existing processes. But with the transformative benefits enabled by AI comes increased risk to members and investments.
While there is currently no single law or regulation in Australia which holistically governs the use of AI, existing obligations arising under sector-specific and cross-economy legislation and regulations will apply. Potential plans by the Australian Government for an AI Act mandating guardrails for high-risk AI technologies would complement, rather than replace, the existing legislative and regulatory framework.
ASIC-regulated organisations recently received a timely reminder of the need for robust AI governance arrangements with the release of Report 798 Beware the gap: Governance arrangements in the face of AI innovation. The Report found that some Australian financial services (AFS) and credit licensees in the banking, credit, insurance and financial advice sectors, while predominantly applying a cautious approach to the use of AI, were slow to uplift their governance and risk frameworks, leaving them poorly positioned to manage the associated challenges and risks.
Recent developments in Australia
As AI technology continues to reshape the superannuation industry, the conversation around its regulation in Australia is gaining momentum, with the Government actively consulting stakeholders in pursuit of its commitment to an environment for the safe and responsible development and deployment of AI. This has culminated in the recent publication of a proposals paper (Proposals Paper) seeking feedback on the introduction of Mandatory Guardrails for AI in High-Risk Settings (Mandatory Guardrails), alongside the release of a Voluntary AI Safety Standard (Voluntary Standard).
In the Proposals Paper, the Australian Government sets out its approach to regulating the development and deployment of AI in high-risk settings through the proposed Mandatory Guardrails. The regulatory options to operationalise the guardrails comprise:
- a domain-specific approach: adapting existing regulatory frameworks;
- a framework approach: introducing framework legislation with associated amendments to existing laws; or
- a whole-of-economy approach: introducing a new cross-economy Australian AI Act.
The Voluntary Standard, although currently non-enforceable, largely mirrors the ten guardrails outlined in the proposed Mandatory Guardrails. It offers practical guidance on the expected controls, policies and processes that will support safe and responsible AI across all risk settings.
Prior to the introduction of the proposed Mandatory Guardrails and the Voluntary Standard, the Australian Government operationalised Australia’s AI Ethics Principles (Ethics Principles), designed to promote safety, security and reliability in AI applications, across its AI policy initiatives. The Ethics Principles sought to protect individuals and support responsible innovation by providing values-based guidance to help organisations implement ethical standards and reinforce public trust. While the adoption of the Ethics Principles can help improve safe and responsible practices, the Australian Government determined through consultation that voluntary compliance with the Ethics Principles is no longer enough in high-risk settings, leading to the proposed introduction of the Mandatory Guardrails.
In parallel, while there is currently no dedicated legislation regulating the use of AI, the Australian Government has reinforced the need for robust legislative measures through the Privacy and Other Legislation Amendment Bill 2024. The Bill introduces key provisions for enhancing transparency and accountability by requiring entities to include information about automated decision-making (including decisions driven by AI and machine learning) in privacy policies when such processing activities could reasonably be expected to significantly affect the rights or interests of an individual. Such measures aim to help individuals better understand how their personal information is used and for what purposes in an AI setting. By building on the foundational intent of the Privacy Act, the Bill seeks to reinforce individuals’ control over their personal information in an increasingly fast-paced digital environment.
The intersection between AI and privacy has been further reinforced through the Office of the Australian Information Commissioner’s release of two new guides addressing privacy obligations when using commercially available AI products and when developing generative AI models. The guides clearly articulate how Australia’s privacy laws apply to artificial intelligence and set out the privacy regulator’s expectations of what good AI governance looks like from a privacy perspective.
The frameworks endorsed by the Australian Government make clear that responsible AI transcends mere regulatory compliance. At its core, it involves a human-centred approach: a commitment to transparency for individuals potentially impacted by AI, building resilience within the business, and striking a balance between innovation and ethical responsibility.
The importance of a human-centred approach
Arguably, the superannuation industry’s core purpose is to support the financial well-being and dignity of members into their retirement. For AI to truly help unlock this purpose, the creation of holistic value for the industry must be central to AI adoption. Funds must not only seize the promised gains in efficiency and productivity to deliver members timely, accurate, and personalised experiences, but also ensure that the amplified risks associated with the latest innovations in AI are understood.
For the superannuation industry, human-centricity is essential to this value creation and should be treated as more than just a concept. It offers a practical way for superannuation funds to manage the risks, impacts, and opportunities for members and broader stakeholders across society by integrating consideration of known harms to people into broader AI governance and risk management. As articulated by ASIC, the Voluntary Standard provides practical guidance toward enabling a human-centred approach, preparing organisations for the continued adoption of AI.
Adopting a human-centred approach ensures that trust, social licence, and harm prevention are foundational to harnessing AI’s potential for productivity and efficiency gains. Deep parallels can and should be drawn with the superannuation industry’s ongoing efforts in response to Environmental, Social, and Governance considerations.
What does good AI governance look like?
A safe and responsible AI framework can be built on foundational pillars that help the superannuation industry move towards better, more responsible AI governance. Such pillars may comprise:
- Leadership and Accountability: Effective AI governance requires that senior leadership sets the direction and defines core values for AI deployment.
- Principles, Policies, and Standards: Comprehensive policies, processes, and guidelines are designed, socialised, and operationalised to align with legal and regulatory compliance requirements, together with non-binding guidance such as the Voluntary Standard.
- Risk Management Processes: Risk management protocols are uplifted to identify evolving risks relating to the development and deployment of AI, with mitigation strategies to support the safe and responsible use of AI.
- Data Governance Framework: Data governance plays a pivotal role in ensuring data quality, security and ethical use throughout the AI lifecycle.
- Cross-Functional Collaboration: Diverse expertise is integrated to address the complex challenges associated with AI, enabling human control or intervention in AI systems to achieve meaningful human oversight across the AI lifecycle.
- Supporting Infrastructure: Data attributes, technology platforms, and related capabilities such as cyber security are evaluated to ensure resilience and robustness.
- Culture, Training and Awareness: An organisational culture committed to ethics, transparency, and continuous improvement is key to creating an environment conducive to the sustainable adoption of AI innovations.
- Continuous Evaluation and Improvements: Regular assessment ensures that AI governance measures remain compliant with evolving regulatory requirements and best practice guidance.
In addition, for superannuation funds guided by a member-centric approach to delivering results, incorporating governance measures that align with human-centred AI principles offers a clear pathway to unlocking the deep value promised by AI.