While artificial intelligence (AI) promises faster and smarter decision making, the Actuaries Institute and the Australian Human Rights Commission (AHRC) worry about potential discrimination and highlight the need to prevent this.
To address the issue, they created a Guidance Resource designed to help insurers and actuaries comply with federal anti-discrimination legislation when AI is used in pricing or underwriting insurance products.
The guidance was developed after a 2021 report by the AHRC that looked at the human rights impacts of new and emerging technologies, including AI-informed decision making.
The Actuaries Institute strongly supported the report’s recommendation to develop a set of guidelines for government and non-government organisations on complying with federal anti-discrimination laws when AI is used in decision making. It approached the AHRC with an offer to collaborate, and together they developed these guidelines.
Actuary Chris Dolman highlighted that the Guidance Resource lists strategies insurers can apply to the data used by AI systems, to address algorithmic bias and avoid discriminatory outcomes.
Dolman led the Institute’s contribution to the preparation of the Guidance Resource as a representative of the Data Science Practice Committee.
These strategies include rigorous design, regular testing and monitoring of AI systems. The guidance also provides several practical tips for insurers to help minimise the risks of a successful discrimination claim arising from the use of AI for pricing risk.
Dolman said: “In the insurance context, AI may be used in a wide range of different ways, including in relation to pricing, underwriting, marketing, customer service, including claims management, or internal operations.”
He added: “This Guidance Resource focuses on the use of AI in pricing and underwriting decisions, as these decisions are already likely to use AI and by their nature will have a financial impact which may be significant for an individual. Such decisions may also be more likely to give rise to discrimination complaints from customers. However, many of the general principles outlined may also apply to the use of AI-informed decision making in other contexts.”
In a survey of Actuaries Institute members conducted this year, at least 70% of respondents indicated a need for further guidance on compliance as the use of AI widens.
Elayne Grace, Chief Executive of the Actuaries Institute, said there was an urgent need for guidance to assist actuaries in the exercise of their professional duties, noting this Resource should also provide comfort to consumers that their rights were being protected.
“Australia’s anti-discrimination laws are long standing, but there is limited guidance and case law available to practitioners,” Grace said. “The complexity arising from differing anti-discrimination legislation in Australia at the federal, State and Territory levels compounds the challenges facing actuaries, and may reflect an opportunity for reform.”
She also noted that several intersecting megatrends – the explosive growth of ‘big data’; the increased use and power of artificial intelligence and algorithmic decision-making; and growing, changing consumer awareness and expectations about what is ‘fair’ – made the lack of guidance more problematic for actuaries.
Grace said: “This collaboration demonstrates the complex nature of the issues facing society, and the need for a multi-disciplinary approach, particularly where data and technology are used to shape the provision of fundamental services such as insurance.”