Auditing AI systems attracts considerable attention, both from the organizations utilizing AI and from policymakers.
Auditing procedures help to identify the risks that AI systems pose and support the development of appropriate mitigation controls.
But what is actually meant by the term AI auditing?
Is it internal auditing, or an independent examination by a third party external to the entity, conducted to provide assurance as to the validity, compliance and appropriate security of an AI system?
Or does it go further than that, looking more holistically at the governance an organization applies to its deployed AI systems?
IT practitioners may claim that nothing much has changed. Aligning technology with an organization’s mission, objectives and risk appetite is a long-standing discipline, and, citing alignment frameworks such as ISACA’s CGEIT and COBIT along with the Treadway Commission’s recommendations, they argue that AI is simply another emerging technology to be accounted for.
As such, AI auditing provides the very governance system for AI that organizations, regulators and any other interested parties can draw upon.
Standardized, structured auditing processes are well capable of being adapted to today’s AI environment and to the many new regulations, including the EU AI Act.
Despite the American National Artificial Intelligence Initiative Act of 2020 having been revoked under the current administration, federal AI law is being overtaken by various emerging state laws, such as those in Colorado, Illinois and California.
The regulation of artificial intelligence continues unabated, even with the divergence between the paths taken by the US and Europe; Congress is currently reviewing the June 2025 paper “Regulating Artificial Intelligence: US and International Approaches and Considerations”, which may yet lead down another route.
The reality is that AI auditing needs a diverse approach, because AI intersects with software development, product testing, ethical considerations, model security, data compromise, and the continuously evolving attack capabilities of malicious actors.
In other words, the term AI audit has become somewhat ambiguous, making it difficult for organizations to understand what their AI audit will actually entail.
Non-compliance carries a significant potential impact for organizations falling under the EU’s AI Act, a prime example of why the ability to undertake internal AI audits, and thereby identify compliance gaps, demands urgent prioritization.
However, how regulators will deliver the enforcement mechanisms that ensure entities are indeed compliant with these laws remains unclear at this juncture.
How, then, can organizations best prepare themselves for an AI audit, when a broad view is taken of the domains on which such audits will focus, especially in the absence of international standards, with ISO’s forthcoming 27090, “Cyber Security Guidance for Addressing Security Threats to AI”, remaining a draft?
The issue is that auditing always requires a pre-defined baseline against which the subject matter is evaluated.
But in the case of AI, that baseline may cover technical specifications, legal requirements and ethical principles, as well as corporate governance relating to the use of AI.
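To make the baseline idea concrete, the short Python sketch below represents an audit baseline as a mapping from domains to required evidence, and reports where the evidence supplied falls short. It is purely illustrative: the domains, criteria and evidence shown are hypothetical examples, not drawn from the EU AI Act, ISO 27090 or any other standard.

# Minimal illustrative sketch: a baseline-driven gap check.
# All domains, criteria and evidence here are hypothetical examples,
# not taken from the EU AI Act, ISO 27090 or any other standard.

BASELINE = {
    "technical": ["model documentation", "test coverage report"],
    "legal": ["risk classification", "data-processing records"],
    "ethics": ["bias assessment", "human-oversight procedure"],
    "governance": ["AI policy", "board-level accountability"],
}

def find_gaps(evidence: dict[str, set[str]]) -> dict[str, list[str]]:
    """Compare supplied evidence against the baseline and return,
    per domain, the criteria for which no evidence was provided."""
    gaps: dict[str, list[str]] = {}
    for domain, criteria in BASELINE.items():
        supplied = evidence.get(domain, set())
        missing = [c for c in criteria if c not in supplied]
        if missing:
            gaps[domain] = missing
    return gaps

if __name__ == "__main__":
    # Example run: the auditee evidences only some criteria.
    collected = {
        "technical": {"model documentation"},
        "governance": {"AI policy", "board-level accountability"},
    }
    for domain, missing in find_gaps(collected).items():
        print(f"{domain}: missing {', '.join(missing)}")

Whatever form the baseline takes, the principle is the same: an audit can only be as rigorous as the baseline against which it evaluates.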
At present, then, there remains a gap between the principles and the practices within AI auditing. That gap can only be addressed by taking a holistic view and engaging in the audit process with a multi-disciplinary approach, combining differing methodologies so that they mutually reinforce each other to meet current and emerging regulatory compliance requirements.
For more information on how to prepare for your AI audit, contact us: