Victorian agency breaches privacy through use of ChatGPT
The Office of the Victorian Information Commissioner (OVIC) has found that the Victorian Department of Families, Fairness and Housing failed to take reasonable steps to ensure the accuracy of personal information and to protect personal information from unauthorised disclosure.
OVIC has published an investigation report into the use of ChatGPT by a child protection worker at the Department of Families, Fairness and Housing (DFFH). In this case, the worker used ChatGPT, a generative artificial intelligence (GenAI) tool, when drafting a Protection Application Report (PA Report), a report submitted to the Children’s Court to inform decisions about whether a child requires protection.
The investigation found that:
- The content generated by ChatGPT and then used by the child protection worker when drafting the PA Report contained inaccurate personal information that downplayed the risks to the child in the case.
- The child protection worker entered a significant amount of personal and sensitive information into ChatGPT, including names and information about risk assessments relating to the child. By doing so, they disclosed this information to OpenAI, an overseas company, and released it outside the control of DFFH.
Deputy Commissioner Penny Eastman found that the controls DFFH had in place were insufficient to manage the risks associated with the use of GenAI tools in a child protection context. She concluded that DFFH contravened Information Privacy Principles (IPPs) 3.1 and 4.1 by failing to take reasonable steps to ensure the accuracy of personal information and to protect personal information from unauthorised disclosure.
DFFH accepted the findings of the investigation report and is now required to implement the remedial actions it contains.
The Deputy Commissioner has also issued a compliance notice to DFFH to ensure it complies with IPPs 3.1 and 4.1. The notice outlines six specific actions, including a requirement that DFFH block the use of ChatGPT and other similar tools by child protection workers.
Victorian Information Commissioner Sean Morrison said “This case demonstrates that while some uses of GenAI may be beneficial, there are currently circumstances where the privacy risks involved are simply too great, such as where highly sensitive information is involved.
“I therefore encourage all organisations to assess the risks involved in their employees’ use of GenAI across their different functions and activities. In line with their obligations under the IPPs, organisations must put in place appropriate controls to mitigate these risks.”
The full investigation report is available to view here.