The NSW Reconstruction Authority has confirmed that a data breach in March 2025 may have exposed the personal and health information of up to 3,000 people connected to its flood recovery program.
According to the authority, a third-party contractor inadvertently uploaded names, contact details, and sensitive personal data to ChatGPT, the AI chatbot developed by OpenAI, while using it for work-related purposes.
Officials said the incident did not appear to be malicious but acknowledged the seriousness of uploading confidential information to a public AI model.
“This appears to be an accidental exposure rather than a deliberate attack,” said Dr Aaron Snoswell, Senior Research Fellow in AI Accountability at Queensland University of Technology. “But once data enters an AI model, tracing or removing it becomes extremely difficult.”
Can Uploaded Data Be Retrieved by Other ChatGPT Users? Experts Weigh In
While there is no evidence yet that the exposed data has been accessed or misused, experts caution that the risk cannot be dismissed.
AI systems like ChatGPT rely on vast datasets to improve performance. Unless users opt out, their input may be stored and in some cases — used for future model training.
“If personal information was inadvertently included in training data, there’s a small chance it could influence the system,” explained Dr M.A.P. Chamikara, a senior research scientist at CSIRO’s Data61.
FCRF Launches CCLP Program to Train India’s Next Generation of Cyber Law Practitioners
He warned that “prompt injection” attacks where users manipulate AI prompts to extract hidden or unintended information could potentially expose fragments of private data.
However, Dr Snoswell clarified that ChatGPT does not store information like a traditional database:
“It’s more like a statistical soup of approximations. Even if asked directly, the system might generate something that looks like the data — but it may not be accurate or real.”
This phenomenon, known as AI “sycophancy,” makes models eager to please users by fabricating plausible but false responses, increasing confusion about what is authentic.
Broader Concerns: AI Privacy, Legal Obligations, and Data Governance
The NSW incident has reopened a critical question how much personal information entered into generative AI platforms remains private?
Australia has witnessed several high-profile data breaches in recent years, including the Optus and Medibank cyber incidents that exposed millions of citizens’ data. However, experts warn that AI-driven breaches are fundamentally different because they blur the line between data misuse and data learning.
“When you upload to AI platforms, it’s like posting in a public forum,” said Dr Chamikara. “People should never treat these services like private tools.”
He advised users to always assume that anything entered into a chatbot could be stored or reused, urging companies to implement strict AI use policies and avoid feeding confidential data into generative systems.
Australia’s current data protection laws, such as the Privacy Act 1988, do not yet provide comprehensive guidance on AI model governance — leaving room for ambiguity over liability and the right to erasure.
Government Response and Steps for Affected Individuals
The NSW Reconstruction Authority said it will contact affected individuals and is working with Cyber Security NSW to monitor the internet and dark web for signs of the leaked data.
At this stage, the authority stated there is no evidence that the data has been accessed by a third party.
The agency ID Support NSW has also been activated to provide personalized guidance to affected individuals.
Experts have urged anyone impacted to take the following precautions:
- Change all relevant passwords and enable two-factor authentication.
- Monitor bank accounts and credit statements for suspicious activity.
- Avoid clicking links in unsolicited emails or texts.
- Report scams immediately to Cyber Security NSW or the Australian Cyber Security Centre (ACSC).
“Those steps go a long way toward staying safe,” said Dr Chamikara.
For users concerned that their information might have been fed into ChatGPT, OpenAI provides a personal data removal tool through its Privacy Center, though its effectiveness is limited by jurisdiction.
“OpenAI may remove the data, but Australia’s legal framework doesn’t yet require deletion like in the EU,” noted Dr Snoswell. “Our laws still lag behind the pace of AI development.”