By Ben Snyman, Cofounder and CEO, AuditSoft
With 30+ years of experience in risk management and a legal background (B.Comm, LLB, MBA), Ben is a recognized authority in OHS compliance, committed to advancing health and safety and pioneering industry-leading solutions that strengthen due diligence and risk mitigation. Connect with Ben on LinkedIn.
Artificial intelligence is no longer a future concept. It’s already embedded in how many professionals write, analyse, and make decisions. Naturally, this raises an important question for health and safety:
What role, if any, should AI play in COR and OHS auditing?
It’s a question we take seriously at AuditSoft, because in health and safety, accuracy, consistency, and accountability matter more than speed alone.
Last year, we asked a simple question on LinkedIn:
Can auditors use AI tools in COR and ISO 45001 audit report writing, in the absence of official guidelines?
The response was telling. While not definitive, the results reflected a growing consensus: AI can add value when it supports professional judgement, but not when it replaces it. That distinction matters.
The use of AI in COR and OHS auditing accelerated rapidly through 2025, and AI-assisted content now appears in a growing share of audit-related work. Ignoring its use is no longer realistic.
AI is a powerful productivity tool. At the same time, if it is used to fabricate, exaggerate, or replace evidence or professional judgement, it risks undermining trust in the auditing process and the profession itself.
Some have called for banning AI or attempting to police its use through detection tools. In practice, this approach is not feasible. It is now widely acknowledged that AI-generated content cannot be reliably detected. OpenAI has publicly stated that it is “impossible to reliably detect all AI-written text,” and academic research from Stanford University has shown that AI detection tools are “not particularly reliable.”
For this reason, the focus should not be on whether AI is used, but on how it is used.
In health and safety, AI should never be the decision-maker.
Auditors, reviewers, and safety professionals are accountable for their assessments. They understand context, nuance, site conditions, and regulatory intent in ways no model can fully replicate. Any use of AI that obscures authorship, judgement, or traceability risks eroding trust in the audit process.
At the same time, it’s increasingly difficult to justify manual, repetitive work that adds little value, consumes resources, and introduces avoidable inconsistency.
This is where we believe AI can play a responsible role.
AI will be a meaningful part of our R&D work in 2026, but to be clear: we are not interested in “AI auditors” or black-box automation. We are interested in tools that support professional judgement, preserve traceability, and reduce repetitive, low-value work.
To formalize this stance, we’ve published a Policy for Ethical AI-Assisted COR & OHS Auditing, designed for certification bodies, safety associations, and auditors.
The policy establishes clear requirements for how AI may and may not be used: AI may assist with tasks like document review, data organization, or drafting support, but auditors remain fully responsible for evidence interpretation, scoring, conclusions, and recommendations.
You can download the full policy here: Policy for Ethical AI-Assisted COR & OHS Auditing
The absence of formal regulatory guidance on AI does not mean inaction is the safest option. It means intentional design, clear boundaries, and human oversight are essential.
Our direction is consistent:
AI is not a shortcut to better safety outcomes. But when applied carefully, it can remove friction and give professionals more time to focus on what truly matters: informed decisions that protect people.
In 2026, we’ll begin introducing AI enhancements grounded in these principles, and we’ll share more as that work progresses.