
Deloitte AI Institute

AI and audit

An auditor’s mindset in an AI-driven world

An auditor’s view on AI governance and risk management within organizations

AI is an emerging field that is transforming business operations and how organizations achieve their mission, strategy, and objectives.

As this transformation occurs and AI scales within the enterprise, organizations should adapt their governance, oversight structures, and processes to promote trust and transparency in AI models. How organizations use the underlying data and resulting outputs can be a game changer, but the reality is that AI models are only as good as the data that feeds them. If that data is flawed or changes over time, even subtly, outputs can shift. AI models create predictions, classifications, or new data based on observable inputs and outputs rather than pre-programmed rules; in other words, AI models infer the rules or decision weighting they apply to data. This distinction means that the use of AI models can lead to unintended outcomes, such as biased or inaccurate decisions and recommendations.

In response, organizations should adapt their capabilities by testing, interpreting, and monitoring both AI models and data to verify that deployed models operate as intended.
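
To make this concrete, the sketch below illustrates one simple form of data monitoring. It is a minimal illustration in Python, assuming SciPy is available; the feature names, data, and p-value threshold are all hypothetical. It compares the distribution of each input feature in production against its training-time reference using a two-sample Kolmogorov-Smirnov test and flags features whose distributions appear to have shifted.

  import numpy as np
  from scipy.stats import ks_2samp

  # Illustrative threshold: treat a feature as drifted when the two-sample
  # Kolmogorov-Smirnov test rejects "same distribution" at p < 0.01.
  P_VALUE_THRESHOLD = 0.01

  def detect_input_drift(reference, production, feature_names):
      """Return the features whose production distribution has shifted
      away from the reference (training-time) distribution."""
      drifted = []
      for i, name in enumerate(feature_names):
          _, p_value = ks_2samp(reference[:, i], production[:, i])
          if p_value < P_VALUE_THRESHOLD:
              drifted.append(name)
      return drifted

  # Hypothetical usage: rows are observations, columns are features.
  rng = np.random.default_rng(0)
  reference = rng.normal(0.0, 1.0, size=(5000, 2))
  production = np.column_stack([
      rng.normal(0.0, 1.0, 5000),  # stable feature
      rng.normal(0.3, 1.0, 5000),  # subtly shifted feature
  ])
  print(detect_input_drift(reference, production, ["income", "utilization"]))

A flagged feature is a trigger for human review and re-testing rather than an automatic verdict; interpreting why a distribution moved is where the governance and oversight structures described below come in.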

How auditors can help: An independent mindset

One of an auditor’s core missions is to help enhance trust and transparency by providing assurance on a variety of subject matters, from financial statements to regulatory compliance. The foundation for auditors to deliver assurance includes evaluating the governance, risks, and processes relevant to the selected subject matter. An auditor’s independent mindset and focus on risk assessment are core concepts for evaluating the oversight and effectiveness of AI models.

Like many other stakeholders, auditors are adapting their tried-and-tested approach to an AI-enabled world. Example considerations include, but are not limited to:

  • Are an organization's AI objectives consistent with its mission and strategy?
  • How does an organization assess the risk and impact of its AI applications, as well as effects from related core areas (e.g., data management, cybersecurity)?
  • Do an organization's structure and controls provide an effective framework for governance and oversight of its AI model population?
  • How does an organization design and implement an effective testing regime for its AI models?
  • How does an organization interpret the results of its testing regime and respond to findings or exceptions?

Three lines of AI defense

First line: Management (process/model owners) has the primary responsibility to own and manage risks associated with development and day-to-day operational activities. Management should have a baseline understanding of risks in AI applications and where they manifest themselves in the specific models and data relevant to the organization’s use cases.

Second line: Risk management provides oversight in the form of frameworks, policies, procedures, methodologies, and tools. The second-line function should have a deep understanding of the AI-specific risks and related controls and mitigation.

Third line: Internal audit assesses the first-line and second-line functions and reports on their design and operating effectiveness to the board and audit committee. In assessing the first-line functions, internal audit should assess whether AI development and monitoring adhere to the organization’s policies, best practices for model development, and relevant regulations.

An auditor’s role

Auditors currently work with each line of defense, senior leadership, and those charged with governance (e.g., the board of directors) to assess the organization’s control environment. As organizations adopt and expand their use of AI, auditors can play a key role in helping organizations identify and address AI-specific risks.

In addition to considering the impact of AI applications within the enterprise, organizations should consider external stakeholders, including regulators and investors.

Because they are themselves regulated and routinely help organizations with compliance, auditors have extensive compliance experience and maintain a close dialogue with a variety of regulators. Additionally, auditors play a critical role in bolstering investor confidence and trust by helping to address investors’ expectations for transparency. An auditor’s experience with regulators, understanding of regulatory intent and investor expectations, and independent mindset can provide valuable insight to organizations as they respond to increased public concern and scrutiny regarding the transparency and functionality of AI applications.

Auditors are responding to AI’s increasing impact on the business environment. A key aspect of their response is adapting existing capabilities and developing new ones to work with organizations to promote trust and transparency in their use of AI, which can support increased and accelerated adoption of AI in pursuit of an organization’s strategic objectives. People, process, and technology are key elements for both auditors and organizations in supporting trust and transparency in the use of AI models.

People – Organizations should leverage existing skill sets in risk management, controls, and model risk management, then augment those functions with AI specialists (e.g., model owners, data scientists, and developers).

Process – Organizations should adapt their governance model to incorporate leading practices, relevant frameworks, and regulations that address AI-specific considerations. When considering risks, organizations should take a proactive stance on preparedness, mitigating risks before they materialize rather than reacting after the fact. Adopting an appropriate framework, such as Trustworthy AI™, is important in addressing AI-specific risk considerations like fairness, bias, and explainability.

Technology – Organizations should use cutting-edge tools, including AI and data science platforms, to facilitate controlled processes for model development, deployment, and ongoing monitoring of performance.

MLOps is an important part of the foundation organizations need in order to integrate supporting tools and functionality that build trust and transparency into their AI applications. Omnia’s Trustworthy AI Module showcases how such tools and functionality can support testing for bias, resilience and reliability, transparency, and other aspects of the Deloitte Trustworthy AI framework.
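
As a simple illustration of what one such bias test can look like, the sketch below computes a demographic parity gap: the difference in positive-outcome rates between two groups. This is a minimal sketch in plain NumPy, not a depiction of the Omnia module’s actual implementation, and the predictions and group labels are hypothetical.

  import numpy as np

  def demographic_parity_gap(predictions, group, group_a="A", group_b="B"):
      """Absolute difference in positive-outcome rates between two groups.
      A value near 0 suggests parity; larger gaps warrant investigation."""
      rate_a = predictions[group == group_a].mean()
      rate_b = predictions[group == group_b].mean()
      return abs(rate_a - rate_b)

  # Hypothetical example: binary model decisions and a protected attribute.
  preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
  group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])
  # Group A's positive rate is 0.60 and group B's is 0.40, so the gap is 0.20.
  print(f"Demographic parity gap: {demographic_parity_gap(preds, group):.2f}")

Demographic parity is only one of several fairness definitions (equalized odds and predictive parity are common alternatives), and which definition is appropriate depends on the use case; the broader point is that such metrics can be computed and monitored continuously as part of a controlled testing regime.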

