Manufacturing is a high-risk industry, with health and safety exposure for both workers and company leaders.

In 2022-23, the manufacturing industry accounted for 10% of all serious workers' compensation claims in Australia. In 2024, there were 65 criminal Work Health and Safety (WHS) cases in the manufacturing industry, 21% of all cases and the second highest of any industry sector.

Generative Artificial Intelligence (AI) can help manufacturers identify hazards earlier, learn from past events, and build safer workplaces. This can boost workplace productivity, reduce a company's risk profile and lead to lower insurance premiums.

Manufacturers and safety professionals can apply AI in many ways across the sector. This article looks at how generative AI can improve safety analytics, focusing on foundational applications of large language models (LLMs) for extracting insights from archives and siloed systems.

We will also consider how to prepare for more advanced AI, including regulatory and compliance issues.

From archives to insights

One of the biggest challenges in safety management is learning from the past. Manufacturers often have many years of incident reports, inspection records, and safety documents. Many of these are still on paper or spread across different systems.

Near-miss events can be leading indicators of serious incidents. They happen more often, but they are less detailed and easier to miss. Generative AI, particularly language models, can help analyse this historical data while protecting privacy and confidentiality by following responsible AI principles.
 

Modern optical character recognition (OCR) tools can convert scanned reports into machine-readable text, including previously inaccessible handwritten documents, forms and tabular reports.
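For teams digitising their own archives, the OCR step itself is handled by off-the-shelf tools, but the raw output usually still needs cleaning before it can be indexed or fed to a language model. A minimal Python sketch follows; the commented-out pytesseract call is an assumption about your tooling, not a requirement, and the cleanup rules are illustrative:

```python
import re

# Hypothetical OCR step (assumes the pytesseract library and the Tesseract
# engine are installed; shown for illustration only):
#   import pytesseract
#   from PIL import Image
#   raw_text = pytesseract.image_to_string(Image.open("incident_report.png"))

def clean_ocr_text(raw_text: str) -> str:
    """Tidy raw OCR output: rejoin words hyphenated across line breaks
    and collapse stray whitespace so the text is ready for indexing."""
    text = re.sub(r"-\n\s*", "", raw_text)   # rejoin "haz-\nard" -> "hazard"
    text = re.sub(r"[ \t]+", " ", text)      # collapse runs of spaces/tabs
    text = re.sub(r"\n{3,}", "\n\n", text)   # trim excessive blank lines
    return text.strip()

raw = "Worker slipped near the haz-\nard   zone.\n\n\n\nNo injury reported."
print(clean_ocr_text(raw))
```

Small normalisation steps like these make downstream searching, categorisation and de-duplication noticeably more reliable.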

Safety data is scattered across organisations, from maintenance logs and safety databases to email communications and network drives. Generative AI can synthesise data from multiple systems stored in various formats, bringing together data silos and creating consistency.

Language models can be used to summarise and categorise the digitised documents and data. They can extract meaning to identify common themes (e.g. causes or locations of incidents), pinpoint points of failure and support root cause analysis.
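For technically minded readers, one practical pattern is to constrain the model to a fixed theme taxonomy and a structured output format, so results from thousands of reports can be aggregated consistently. The sketch below builds such a prompt; the theme list and wording are illustrative assumptions, and the actual LLM call is deliberately omitted:

```python
import json

# Illustrative theme taxonomy -- in practice this would come from your
# safety team, not from this sketch.
THEMES = ["slips and trips", "machine guarding", "manual handling",
          "chemical exposure", "vehicle movement"]

def build_categorisation_prompt(report_text: str) -> str:
    """Build a prompt asking an LLM to tag an incident report with themes
    from a fixed taxonomy and return structured JSON, so outputs are
    consistent and easy to aggregate."""
    return (
        "You are assisting a manufacturing safety analyst.\n"
        f"Allowed themes: {json.dumps(THEMES)}\n"
        "Read the incident report below and respond with JSON only, "
        'in the form {"themes": [...], "summary": "..."}. '
        "Use only the allowed themes.\n\n"
        f"Incident report:\n{report_text}"
    )

prompt = build_categorisation_prompt("Operator caught glove in unguarded press.")
print(prompt)
```

Fixing the taxonomy and output format up front is what turns free-text model answers into data you can count, chart and audit.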

Safety professionals play an important role in keeping the AI in check, verifying the information it produces and tuning the process. Training should help them interpret AI outputs effectively, while encouraging critical thinking and collaboration with employees.

Reporting and dashboard tools can turn your newly organised data into clear visuals, making it easier to spot patterns and identify points of failure and root causes. Safety officers can use these insights to focus on the areas with the highest risks.
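Even before investing in dashboard software, a quick frequency view can surface the dominant themes. A minimal Python sketch, using illustrative theme labels:

```python
from collections import Counter

# Toy categorised incidents -- theme labels are illustrative.
themes = ["slips and trips", "machine guarding", "slips and trips",
          "manual handling", "slips and trips", "machine guarding"]

def text_bar_chart(labels):
    """Render theme frequencies as a simple text bar chart so the
    highest-risk areas stand out at a glance."""
    counts = Counter(labels)
    width = max(len(label) for label in counts)
    return "\n".join(f"{label.ljust(width)} | {'#' * n} {n}"
                     for label, n in counts.most_common())

print(text_bar_chart(themes))
```

The same counts feed directly into whatever charting or BI tool your organisation already uses.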

Advanced analytics can be used to examine clusters of incidents with common attributes across different sites, correlating near-miss events and incidents to derive leading indicators.
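As a simple illustration of this kind of clustering, events can be grouped by site and hazard and tallied by severity; clusters rich in near-misses become candidate leading indicators for that site. The record format below is an illustrative assumption, not a real schema:

```python
from collections import Counter

# Toy records -- field names and values are illustrative.
events = [
    {"site": "Plant A", "hazard": "forklift", "severity": "near-miss"},
    {"site": "Plant A", "hazard": "forklift", "severity": "near-miss"},
    {"site": "Plant A", "hazard": "forklift", "severity": "injury"},
    {"site": "Plant B", "hazard": "chemical", "severity": "near-miss"},
]

def cluster_events(events):
    """Count events per (site, hazard) cluster, split by severity, so
    clusters with many near-misses can be flagged as leading indicators."""
    counts = Counter((e["site"], e["hazard"], e["severity"]) for e in events)
    clusters = {}
    for (site, hazard, severity), n in counts.items():
        clusters.setdefault((site, hazard), {})[severity] = n
    return clusters

for key, tally in cluster_events(events).items():
    print(key, tally)
```

A cluster such as Plant A's forklift events, with repeated near-misses preceding an injury, is exactly the pattern a safety officer would want flagged early.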

Modern organisations now manage many types of data, from web logs to sensor feeds. With generative AI becoming more important, a wider range of data, including business documents, audio, and video, becomes essential, especially for safety. However, these newer types of data have often been poorly managed in the past.

When organisations try to use this data for AI, they often face the classic problem of “garbage in, garbage out.” Poor data quality leads to poor results. If your AI strategy doesn’t include strong data governance, it’s time to reconsider. Effective data governance should deliver real business value and be led by the business to address key needs, such as safety.

Generative AI turns unstructured and historical safety information into useful insights, helping manufacturers create safety programs based on data and evidence. Lessons from past incidents no longer need to be lost in filing cabinets.

Navigating the regulatory and compliance landscape

As manufacturers adopt generative AI tools and autonomous agents for safety, it is important to follow Australia's rules and regulations.

The legal landscape is evolving. Regulators want to promote safety innovation, but they also worry about using AI responsibly and aim to avoid new risks and ethical problems.

The Australian Government is taking a measured approach to regulating AI, while continuing to consult with industry. It is considering legislation only where necessary to prevent specific harms from AI.

The Productivity Commission has warned the government against taking a heavy-handed approach. For the most part, the government relies on existing laws about privacy, human rights, and intellectual property before creating new regulations.

In Australia, privacy laws apply whenever personal information is collected. Medical reports and training records, for example, must be handled in line with the Australian Privacy Principles (secure storage, use only for the purpose collected). Note that disclosing personal information to commercially available generative AI platforms may be a breach of privacy.

Using enterprise AI within your company's network helps keep sensitive data safe, but it still needs to be used responsibly. If the AI handles incident reports with personal details (e.g. names with injury descriptions), ensure you have consent or remove the personal information. Enterprise AI together with privacy-preserving guardrails can remove personal information from inputs and detect it in outputs before showing results.
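As a rough illustration of such a guardrail, obvious identifiers can be scrubbed before text leaves your environment. The regex patterns below are a simplified sketch, not a substitute for a dedicated PII-detection tool, and real systems also need human review:

```python
import re

# Illustrative patterns for obvious identifiers; real deployments would
# use purpose-built PII detection rather than these simplified regexes.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[PHONE]": re.compile(r"\b(?:\+?61|0)[\d ]{8,11}\b"),
}

def redact(text: str, known_names=()):
    """Replace emails, phone numbers and any listed employee names
    with placeholder tokens before text is sent to an AI service."""
    for token, pattern in PATTERNS.items():
        text = pattern.sub(token, text)
    for name in known_names:  # names are hard to catch with regex alone
        text = re.sub(re.escape(name), "[NAME]", text, flags=re.IGNORECASE)
    return text

print(redact("Jane Citizen (jane@example.com, 0412 345 678) reported a cut.",
             known_names=["Jane Citizen"]))
```

Redacting at the point of input means the model never sees the identifiers, which is far safer than trying to filter them out of its answers afterwards.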

In 2024 the Australian Government released the Voluntary AI Safety Standard. The standard outlines 10 guardrails organisations can use to ensure the ethical use of AI. Best-practice guardrails include accountability, effective human oversight and intervention, risk management and transparency.

Leading organisations are choosing to adopt the Voluntary AI Safety Standard, as indicated by the latest Responsible AI Index 2025 sponsored by the National AI Centre (NAIC).

Manufacturers using AI should consider adopting guardrails from the voluntary standard now while monitoring legal developments. Early adoption can help them stay ahead of regulatory changes while demonstrating leadership.

Unlocking business benefits

By reducing operational risks, companies can lower insurance premiums and cut costs. The benefits go beyond the bottom line: safer workplaces lead to fewer disruptions, less absenteeism, and higher productivity. They also strengthen your organisation's reputation as an employer of choice, fostering greater trust with customers and regulators, ultimately building a strong foundation for long-term success.

 

To learn how AI can support safety and innovation in your manufacturing operations, contact your local RSM office today.
