Agentic AI, while a relatively new paradigm, is widely seen as having the potential to significantly improve operating efficiency and enhance decision-making in financial markets.
According to a report by S&P Global, organisations should watch this technological advancement closely as they seek to grow their capabilities in digital assets and private credit.
But as with most technological breakthroughs, agentic AI does not come without its challenges.
S&P enumerates the following expected hurdles for agentic AI in finance:
Financial stability risks: AI agents can amplify systemic risks through their capacity, if not correctly managed, to increase the complexity and opacity of workflows, and through their ability to execute transactions at high speed. Agentic AI systems interacting at scale may multiply both the speed of execution and the spread of contagion during periods of high volatility, disinformation, cyberattacks or market turmoil.
Regulatory concerns: For use cases involving trading and investment advice, new products and systems can potentially affect financial stability, including certain risk management and transaction optimisation systems. These would most likely fall under the high-risk category of the EU AI Act and may face close scrutiny from other regulators seeking to ensure investor protection, transparency in decision-making and market integrity. Reporting requirements for counterparty credit risk exposures may need to become real-time rather than daily or weekly.
AI governance challenges: The autonomous nature of agentic AI systems in handling confidential information, and their potential to make mistakes, take unethical decisions and cause harm, such as through malicious "hacker" AI agents, poses considerable accountability and liability risks for market participants. AI governance is crucial to mitigating these risks through conscious design and oversight. We expect financial players to be cautious when scaling agentic AI, as they are ultimately liable for their agents' misbehaviour.