Organisations in Southeast Asia cited security vulnerabilities, including cyber or hacking risks, as a top concern associated with the use of artificial intelligence.
These are the findings of a recent Deloitte study, which polled nearly 900 senior leaders across 13 Asia Pacific geographies, including six in Southeast Asia (SEA) – namely Indonesia, Malaysia, the Philippines, Singapore, Thailand, and Vietnam.
The report, titled AI at a crossroads: Building trust as the path to scale, reveals critical insights for C-suite leaders on how they can develop effective AI governance amidst accelerating adoption and growing risk management challenges.
According to the study, other top concerns for Southeast Asian organisations include those pertaining to privacy, such as breaches of confidential or personal data and the invasion of privacy through pervasive surveillance.
With AI investments in the Asia Pacific region alone expected to reach US$110 billion by 2028, Deloitte says robust governance frameworks are needed to enable businesses to adopt AI more effectively, build customer trust, and create paths to value and scale.
"Effective AI governance is not just a compliance issue; it is essential for unlocking the full potential of AI technologies," says Dr. Elea Wurth, lead partner, Trustworthy AI Strategy, Risk & Transactions at Deloitte Asia Pacific and Australia.
"Our findings reveal that organisations with robust governance frameworks are not only better equipped to manage risks but also experience greater trust in their AI outputs, increased operational efficiency and ultimately greater value and scale."
Deloitte says developing trustworthy AI solutions is essential for senior leaders to successfully navigate the risks of rapid AI adoption and fully embrace and integrate this transformative technology.
The survey also reveals that across Asia Pacific, organisations with mature AI governance frameworks report a 28% increase in staff using AI solutions and have deployed AI in three additional areas of the business. These organisations also achieve nearly 5 percentage points higher revenue growth than those with less established governance.
Key recommendations from the report include:
- Prioritise AI governance to realise returns from AI: Continuous evaluation of AI governance is required across the organisation’s policies, principles, procedures, and controls. This includes monitoring changing regulations for specific locations and industries to remain at the forefront of AI governance standards.
- Understand and leverage the broader AI supply chain: Organisations need to understand their own use of AI as well as interactions with the broader ‘AI supply chain’ − including developers, deployers, regulators, platform providers, end users, and customers − and perform regular audits throughout the AI solution lifecycle.
- Build risk managers, not risk avoiders: Developing employees’ skills and capabilities helps organisations better identify, assess, and manage potential risks, so that issues are prevented or mitigated rather than risk being avoided altogether.
- Communicate and ensure AI transformation readiness across the business: Organisations should be transparent about their long-term AI strategy, the associated benefits and risks, and provide training for teams on using AI models while reskilling those whose roles may be affected by AI.