APAC organisations expect agentic AI to significantly disrupt their business models in the coming months.
This is especially true of the banking sector, where agentic AI's impact on bank productivity and efficiency is likely to ‘exceed expectations’ over the coming three to five years.
In a press release from Moneythor, chief product officer Vivek Seetharaman says agentic AI will help banks realise the true potential of what the company calls ‘Deep Banking’: personalised, individualised experiences that anticipate customer requirements and can extend beyond the domain of traditional banking services.
However, he warns that adopting the technology carries risks, ranging from data integrity (mis-classified data, such as an account overdraft misrepresented as a balance) to a heightened risk of security breaches (resulting from AI agents autonomously interacting with multiple proprietary and third-party data services).
Seetharaman highlights the following principles as crucial to realising agentic AI’s full potential while containing its risks:
- The establishment of guardrails around specific language and contexts that could signal risk or potential inconsistencies. Importantly, these guardrails should remain appropriate and applicable irrespective of the underlying LLM – by their very nature, LLM-specific rules can become entirely ineffective after a model update or a change of LLM (an inevitability for banks running multiple systems or absorbing acquisitions).
- Fully documented governance procedures for the treatment and use of all data, including third-party data. These should be kept relevant and up to date for every jurisdiction in which the bank operates (or aspires to operate).
- Crucially, the incorporation of human judgement and control at regular intervals throughout the process.