Chief audit executives expect audit coverage of artificial intelligence-related risks to grow as organisations race to adopt the technology, according to a Gartner study.
The survey, which polled 102 chief audit executives, found that rapid growth and adoption of generative AI (GenAI) has resulted in a scramble to provide audit coverage over potential risks arising from use of the technology.
The study reveals that the six risks with the greatest potential increases in audit coverage are strategic change management; diversity, equity and inclusion; organisational culture; AI-enabled cyberthreats; AI control failures; and unreliable outputs from AI models.
"As organisations increase their use of new AI technology, many internal auditors are looking to expand their coverage in this area," says Thomas Teravainen, research specialist with the Gartner for Legal, Risk & Compliance Leaders practice.
"There is a range of AI-related risks that organisations face, from control failures and unreliable outputs to advanced cyberthreats," Teravainen points out. "Half of the top six risks with the greatest increase in audit coverage are AI-related."
[Chart: Greatest Potential Increases in Audit Coverage. Source: Gartner (March 2024)]
Large confidence gaps for AI risks
"Perhaps the most striking finding from this data is the degree to which internal auditors lack confidence in their ability to provide effective oversight of AI risks," says Teravainen. "No more than 11% of respondents who rated one of the three top AI-related risks as very important considered themselves very confident in providing assurance over it."
Publicly available GenAI applications, and those built in-house, create new and heightened risks to data and information security, privacy, IP protection and copyright, as well as to the trust and reliability of outputs.
Many enterprise GenAI initiatives sit in customer-facing business units, and the proliferation of GenAI makes increasing coverage of unreliable AI outputs, including biased or inaccurate information and hallucinations, a priority to protect the organisation from reputational damage or potential legal action.
"With such a broad array of potential risks coming from all over the business, it’s easy to understand why auditors aren’t confident about their ability to apply assurance," says Teravainen.
"However, with CEOs and CFOs rating AI as the technology that will most significantly impact their organisations in the next three years, continued gaps in confidence will undermine CAEs’ ability to meet stakeholder expectations."