Guide on How to Implement Responsible AI

Impact assessments, data curation, fairness measures, pilot studies and organisational training are among the simple but effective practices described in a new report to help Australian businesses develop robust and responsible artificial intelligence (AI).

Released today by Australia’s National AI Centre, which is coordinated by CSIRO, the report explores practical steps for implementing the Australian Government’s eight AI Ethics Principles.

The report, ‘Implementing Australia’s AI Ethics Principles: A selection of Responsible AI practices and resources’, was developed by Gradient Institute.

It comes as the recent Australian Responsible AI Index found that although 82 per cent of businesses believed they were practising AI responsibly, fewer than 24 per cent had actual measures in place to ensure alignment with responsible AI practices.

National AI Centre Director Stela Solar said that although businesses acknowledge the commercial opportunities of AI, many do not know how to responsibly navigate the fast-paced environment and meet customer expectations.

“This report is designed to help businesses build responsible AI approaches in their organisation,” said Ms Solar.

“We hear from businesses that their ability to innovate with AI is directly correlated with their ability to earn trust from the communities they serve.

“This is also reflected in Australia’s AI Ecosystem Momentum report, which found that trust, privacy, security, data quality and skills were the top blockers to AI projects.

“AI systems that are developed without appropriate checks and balances can have unintended consequences that can significantly damage company reputation and customer loyalty,” she said.

Bill Simpson-Young, CEO of Gradient Institute, said he hoped the report would encourage more businesses to start the journey towards responsible AI practices.

"Even though Responsible AI practices, resources and standards will keep evolving at a fast pace, this should not distract organisations from implementing practices that are known to be effective today,” Mr Simpson-Young said.
 
“For example, when an AI system is engaging with people, informing users of an AI’s operation builds trust and empowers them to make informed decisions. Transparency for impacted individuals could be as simple as informing the user when they are interacting with an AI system.
 
“While it is broadly accepted that fairness is important, what constitutes fair outcomes or fair treatment is open to interpretation and highly contextual. What constitutes a fair outcome can depend on the harms and benefits of the system and how impactful they are.

“It is the role of the system owner to consult relevant affected parties, domain and legal experts, and system stakeholders to determine how to contextualise fairness to their specific AI use case. The report helps organisations address these challenges,” he said.
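To make the transparency example above concrete, here is a minimal illustrative sketch in Python (not taken from the report; every name in it is hypothetical) of a chat interface that discloses the AI’s involvement before anything else:

    # Illustrative sketch only: disclosing AI involvement at the start of a chat.
    # All names are hypothetical; the report does not prescribe this code.

    AI_DISCLOSURE = (
        "You are chatting with an automated AI assistant. "
        "Type 'human' at any time to reach a person."
    )

    def start_chat_session(send_message):
        """Disclose the AI's involvement before any other output is shown."""
        send_message(AI_DISCLOSURE)

    def handle_user_turn(user_text, ai_reply, send_message):
        """Route a turn: honour the escape hatch to a human, otherwise reply."""
        if user_text.strip().lower() == "human":
            send_message("Transferring you to a human agent.")
        else:
            send_message(ai_reply)

    # Example usage, with print standing in for a real messaging channel
    start_chat_session(print)
    handle_user_turn("What are your opening hours?", "We open at 9am.", print)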
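Likewise, as one hypothetical illustration of contextualising a fairness measure (again a sketch, not drawn from the report), a team might begin by quantifying the gap in outcomes between groups, such as the demographic parity difference, and agree a review threshold with stakeholders:

    # Illustrative sketch: demographic parity difference between two groups.
    # The data, group labels and 0.1 threshold below are hypothetical;
    # the appropriate fairness notion is context-dependent, as the report notes.

    def selection_rate(decisions):
        """Fraction of positive decisions (1 = e.g. loan approved)."""
        return sum(decisions) / len(decisions)

    def demographic_parity_difference(decisions_a, decisions_b):
        """Absolute gap in selection rates between groups A and B."""
        return abs(selection_rate(decisions_a) - selection_rate(decisions_b))

    group_a = [1, 1, 0, 1, 0, 1]  # selection rate 4/6, about 0.67
    group_b = [0, 1, 0, 0, 1, 0]  # selection rate 2/6, about 0.33

    gap = demographic_parity_difference(group_a, group_b)
    print(f"Demographic parity difference: {gap:.2f}")  # prints 0.33

    if gap > 0.1:  # hypothetical threshold agreed with stakeholders
        print("Gap exceeds threshold; review with affected parties and experts.")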

The report is the first major publication developed through the National AI Centre’s recently announced Responsible AI Network, a pioneering cross-ecosystem collaboration aimed at uplifting responsible AI practice and sovereign capability across Australia.

It is being released under a Creative Commons licence by the National AI Centre and Gradient Institute, inviting organisations to share their experiences and develop practices for implementing AI responsibly.

The National AI Centre was established to further develop Australia's AI and digital ecosystem. It is coordinated by CSIRO, Australia’s National Science Agency.

Join the Responsible AI Network to access more expert tools and guidance from the National AI Centre and its expert partners.

Download the report, ‘Implementing Australia’s AI Ethics Principles: A selection of Responsible AI practices and resources’.