Upholding privacy principles in the era of AI-powered medicine.

Technology has done wonders for healthcare, and artificial intelligence (AI) is no exception. However, the advantages of AI also come with complex challenges for data privacy and governance. 

The recent OAIC ruling on I-MED’s use of patient imaging data with Harrison.ai provides a textbook example of the value of good governance. The benefit of an AI model capable of diagnosing patients from their radiology images is clear. It is equally clear that AI models can leak sensitive patient data, even when that data has been anonymised. 

Health organisations must find the balance between technological progress and their duty to safeguard sensitive information. 

Background of the I-MED and Harrison.ai case

In September 2024, media reports revealed that Australia’s largest diagnostic imaging provider had shared patient data without patients’ knowledge or consent. 

Specifically, I‑MED Radiology Network had contracted Harrison.ai to develop a diagnostic AI model. To train the model, I-MED shared a large number of patient chest X-rays, CT images and related reports (later disclosed as fewer than 30 million studies) with Harrison.ai.

I‑MED claimed the data was “anonymised” or “de‑identified,” while Harrison.ai distanced itself from responsibility, stating consent and privacy oversight were I‑MED’s obligation.

The OAIC (Office of the Australian Information Commissioner) opened preliminary inquiries to assess whether this transfer breached the Australian Privacy Principles (APPs). In particular, it was concerned with the secondary use of personal or sensitive health information.

On 1 August 2025, the OAIC concluded its preliminary inquiries, finding that the disclosure did not amount to a breach warranting regulatory action.

What are the governance risks of using Personally Identifiable Information and Protected Health Information to train AI?

The reason this case drew attention from the OAIC was the concern that I-MED had breached Australia’s privacy laws. There were several concerns here, which we will look at in more detail. 

Health information is sensitive

Under the Privacy Act 1988, health information is classified as sensitive personal information. This affords Protected Health Information (PHI) and Personally Identifiable Information (PII) higher protection under the Australian Privacy Principles (APPs). 

This case related to APP 6, which outlines how and when PHI and PII may be used or disclosed for a secondary purpose. Specifically, that secondary purpose must be directly related to the main reason the data was initially collected. 

In this case, I-MED disclosed PII and PHI for the secondary purpose of training a diagnostic tool. 

Consent vs reasonable expectation

When disclosing PII and PHI, best practice is to have the person’s consent. Without express consent, APP 6 considers whether the purpose for sharing this information would be within the scope of what individuals would reasonably expect. 

The question was whether a reasonable person, properly informed, would expect their data to be used for AI model training.

Re‑identification risk

Even de‑identified health data can be re‑identified. The risk is especially high in high-dimensional AI contexts, where datasets contain many variables that can be correlated with other data sets to infer the original information, in this case the identity of the patient. Individuals could therefore be re-identified from nominally de-identified data, leading to privacy violations. 

The OAIC has warned that de‑identification is context dependent, which means it can be challenging to achieve with sufficient rigour.
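
To illustrate how this correlation works, the sketch below runs a simple k-anonymity style check over synthetic data. The field names (age_band, sex, postcode) are hypothetical quasi-identifiers, not drawn from the I-MED case, and the pandas library is assumed. Any combination of values held by only one record is a candidate for linkage with an external dataset.

  # Minimal illustration of re-identification risk on synthetic data.
  # age_band, sex and postcode are hypothetical quasi-identifiers.
  import pandas as pd

  records = pd.DataFrame({
      "age_band": ["30-39", "30-39", "70-79", "70-79", "70-79"],
      "sex":      ["F",     "F",     "M",     "M",     "F"],
      "postcode": ["2000",  "2000",  "2611",  "2611",  "0872"],
  })

  # k-anonymity check: how many records share each combination of values?
  group_sizes = records.groupby(["age_band", "sex", "postcode"]).size()

  # A combination held by a single person can potentially be matched against
  # an external dataset (e.g. a public register) to re-identify that person.
  unique_rows = group_sizes[group_sizes == 1]
  print(f"{len(unique_rows)} of {len(records)} records are unique on these fields")

Running this reports that one of the five synthetic records is unique on just three fields; real imaging metadata has far more dimensions, which is why the risk grows with high-dimensional data.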

Community trust

Even when the disclosure of PHI is deemed lawful, there are risks. It can erode patient trust and damage the provider’s reputation. In this case, the lack of transparency led to a sensational news story that generated negative public sentiment and regulatory scrutiny.

What was the rationale for the OAIC’s ruling on the I-MED case?

On 1 August 2025, Privacy Commissioner Carly Kind announced the closure of preliminary inquiries, concluding that no regulatory action was necessary at this stage.

There were several circumstances that led to this decision.

De‑identification

The OAIC found that the data I-MED shared was sufficiently de‑identified. I-MED had used a process endorsed by the National Institute of Standards and Technology (NIST) and guided by a formal de‑identification policy. This meant that individuals were no longer reasonably identifiable from the shared data. As such, the data no longer met the definition of ‘personal information’ under the Privacy Act.

Scale of disclosure and proactive corrections

Between 2020 and 2022, I‑MED shared fewer than 30 million patient studies with Harrison.ai. Only a very small number of these records inadvertently included personal identifiers, and I-MED proactively identified and rectified those records. This indicated that I-MED took reasonable steps to protect patient data. 

Governance recognised

The Commissioner described this case as “a case study of good privacy practice,” noting the significance of privacy governance and planning at the outset of adopting new technologies. 

Not an endorsement

The OAIC emphasised this was not an endorsement of I‑MED’s broader compliance. They noted that further investigations remain possible if concerns arise.

Steps organisations can take to manage privacy in AI training 

If you are currently training AI models, or using PHI/PII with AI, you may be concerned about your own data governance. 
Noted below are some steps you can take to manage this risk.  

1. Implement robust de‑identification governance

Develop and document a Data De‑identification Policy, aligned with frameworks such as those published by NIST or CSIRO. This should be overseen by designated privacy or data protection officers.

Use technical controls to remove identifiers, metadata, embedded text, and any linkage fields.
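
As a minimal sketch of what such a technical control might look like for imaging data, the snippet below uses the open-source pydicom library to blank common identifying fields and strip private tags from a DICOM file. The tag list is illustrative only, not a complete de-identification profile, and it does not address text burned into the pixel data itself.

  # Sketch only: blanks a few common identifiers in a DICOM file.
  # The tag list is illustrative, not a complete de-identification profile.
  import pydicom

  IDENTIFYING_TAGS = [
      "PatientName", "PatientID", "PatientBirthDate", "PatientAddress",
      "OtherPatientIDs", "ReferringPhysicianName", "InstitutionName",
      "AccessionNumber",
  ]

  def strip_identifiers(in_path: str, out_path: str) -> None:
      ds = pydicom.dcmread(in_path)
      for keyword in IDENTIFYING_TAGS:
          if keyword in ds:
              setattr(ds, keyword, "")   # blank the standard identifier
      ds.remove_private_tags()           # drop vendor-specific private tags
      ds.save_as(out_path)

  # strip_identifiers("study_raw.dcm", "study_deidentified.dcm")

In practice, a script like this sits alongside, not in place of, a documented de-identification pipeline with human review.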

2. Conduct risk assessments & validate your procedures

Assess the probability that PHI/PII could be re‑identified. AI pipelines carry a significant risk of data being re‑ingested or combined across channels in ways that restore identifiability.

It is also good practice to periodically audit sample datasets. This can help identify errant exposures and mirrors I-MED’s proactive error correction. 
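
As a simple illustration of such an audit (the regex patterns below are hypothetical and no substitute for a vetted PII-detection tool), the sketch samples report text and flags records containing patterns that look like residual identifiers, such as dates of birth or long identity numbers.

  # Sketch of a periodic audit over a sample of "de-identified" report text.
  # The two patterns are illustrative; a real audit would use a vetted
  # PII-detection tool plus clinical review.
  import random
  import re

  SUSPECT_PATTERNS = {
      "date_like":   re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),  # e.g. 03/07/1962
      "long_number": re.compile(r"\b\d{9,11}\b"),                 # e.g. ID-style numbers
  }

  def audit_sample(reports, sample_size=100):
      """Return (report_index, pattern_name) pairs for sampled reports that look risky."""
      indices = random.sample(range(len(reports)), min(sample_size, len(reports)))
      findings = []
      for i in indices:
          for name, pattern in SUSPECT_PATTERNS.items():
              if pattern.search(reports[i]):
                  findings.append((i, name))
      return findings

  # audit_sample(["CT chest, no acute findings.", "DOB 03/07/1962, ..."]) would
  # flag the second report for manual review.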

3. Use contractual safeguards

When sharing data with third-party AI developers, contractual obligations offer another layer of protection. These should:

  • Lay out any limitations on how the data can be used.
  • Prohibit re-identification of private data.
  • Ensure data is deleted where errors are identified (as occurred with Harrison.ai and I‑MED).

4. Ensure transparency & patient communication

Even if de‑identification is robust, it is better to be upfront and transparent. Inform your patients that you may use their (anonymised) data for research and AI training. You can do this through privacy notices or consent forms.

If it’s feasible, adopt an opt-in or opt-out approach to gain consent. This is a practice I‑MED has adopted in the wake of this inquiry.

5. Integrate AI privacy into your broader data governance

Treat AI development as a high‑risk data activity that should come under privacy-by-design governance principles. Minimise the data used in development and testing, define a specific purpose for the model, and maintain clear oversight of development practices to ensure adherence to relevant regulations.

6. Monitor evolving guidance

Keep up to date with the OAIC’s published guidance documents. Two current documents explicitly reference this case study as a practice benchmark: Guidance on developing and training generative AI models, and Guidance on privacy and the use of commercially available AI products.

What lessons should health organisations take from this case?

The I‑MED / Harrison.ai case illustrates both the legal grey areas and the opportunity for organisations in Australia to responsibly harness AI in healthcare. 

Although this case did not result in regulatory action, it should serve as a warning: using PHI without proper consent can raise legal, reputational and trust issues. This should come as no surprise given the sensitive nature of health information, and significant privacy risks remain even with de-identified data. Take this opportunity to reflect on your privacy controls and data governance. 

Adopt robust de‑identification frameworks, conduct re‑identification assessments, use strong contractual controls, ensure patient transparency, and treat AI training as a high‑privacy-risk process with privacy-by-design and governance throughout. 

 

If you need help implementing any of these governance controls, or have questions about this case, please reach out to RSM’s cyber security and privacy team.
