As generative AI continues to advance at a breakneck pace, organisations need to act now to formulate an enterprise-wide strategy for AI trust, risk and security management (AI TRiSM), Gartner said recently.
There is a pressing need for a new class of AI TRiSM tools to manage data and process flows between users and the companies that host generative AI foundation models, the advisory firm noted.
There are currently no off-the-shelf tools on the market that give users systematic privacy assurances or effective content filtering of their engagements with these models, such as filters that screen out factual errors, hallucinations, copyrighted materials or confidential information, said Avivah Litan, VP Analyst at Gartner.
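To make that gap concrete, the sketch below shows roughly what a rudimentary output filter might look like. It is an illustration only, not a tool Gartner describes: the BLOCKED_TERMS list and redact_confidential helper are hypothetical, and simple string matching cannot detect factual errors, hallucinations or copyrighted passages, which is precisely the analyst's point about missing off-the-shelf tooling.

```python
import re

# Hypothetical list of confidential markers an organisation might maintain.
# Real deployments would need classifiers and data-loss-prevention tooling,
# not simple term lists.
BLOCKED_TERMS = ["Project Falcon", "internal use only", "acme-secret-key"]

def redact_confidential(model_output: str) -> str:
    """Replace known confidential markers in model output with a placeholder.

    A minimal sketch: it handles only exact known strings, illustrating
    how far this is from systematic content filtering.
    """
    redacted = model_output
    for term in BLOCKED_TERMS:
        redacted = re.sub(re.escape(term), "[REDACTED]", redacted, flags=re.IGNORECASE)
    return redacted

print(redact_confidential("Summary of Project Falcon: internal use only."))
# -> Summary of [REDACTED]: [REDACTED].
```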
AI developers must urgently work with policymakers, including new regulatory authorities that may emerge, to establish policies and practices for generative AI oversight and risk management, she added.
In addition, enterprise leaders need to know that there are two general approaches to leveraging ChatGPT and similar applications, Gartner said.
Out-of-the-box model usage leverages these services as-is, with no direct customisation, while a prompt engineering approach uses tools to create, tune and evaluate prompt inputs and outputs, the firm observed.
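The distinction can be sketched in code. In the hypothetical snippet below, call_model is a stand-in for whatever hosted API an organisation uses (it is not any provider's real SDK): the out-of-the-box path sends the user's text as-is, while the prompt engineering path wraps it in a curated template that can be versioned, tuned and evaluated over time.

```python
def call_model(prompt: str) -> str:
    """Stand-in for a hosted generative AI API call (hypothetical).

    A real implementation would use the provider's SDK; a canned reply
    keeps this sketch self-contained.
    """
    return f"[model reply to {len(prompt)} chars of prompt]"

# Approach 1: out-of-the-box usage -- the user's text goes to the service as-is.
def ask_out_of_the_box(user_input: str) -> str:
    return call_model(user_input)

# Approach 2: prompt engineering -- inputs are wrapped in a maintained template
# that the organisation can tune and evaluate over time.
SUMMARY_TEMPLATE = (
    "You are a careful assistant. Summarise the text below in three bullet "
    "points and flag any claims you cannot verify.\n\nText:\n{text}"
)

def ask_with_engineered_prompt(user_input: str) -> str:
    return call_model(SUMMARY_TEMPLATE.format(text=user_input))
```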
For out-of-the-box usage, organisations must implement manual reviews of all model output to detect incorrect, misinformed or biased results, Litan said.
Enterprises need to establish a governance and compliance framework for their uses of these solutions, including clear policies that prohibit employees from asking questions that expose sensitive organisational or personal data, she advised.
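Such a policy can be partially enforced before a prompt ever leaves the organisation. The sketch below is a minimal illustration, assuming a hypothetical check_prompt gate in front of the model; the SENSITIVE_PATTERNS are examples only, and a production policy engine would combine patterns like these with data-loss-prevention classifiers tuned to the organisation.

```python
import re

# Illustrative patterns only; real policy enforcement would be far broader.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal marker": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any policy violations found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

violations = check_prompt("CONFIDENTIAL: email jane.doe@example.com the Q3 numbers")
if violations:
    print("Prompt blocked by policy:", ", ".join(violations))
```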
In addition, organisations should monitor unsanctioned uses of ChatGPT and similar solutions with existing security controls and dashboards to catch policy violations, she added.
For example, firewalls can block enterprise user access, security information and event management systems can monitor event logs for violations, and secure web gateways can monitor disallowed API calls, Litan said.
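As a hedged illustration of the last point, the snippet below scans gateway-style logs for calls to disallowed generative AI hosts. The log format, field names and host list are assumptions for the sketch, not the output of any particular secure web gateway; a real deployment would adapt the parsing to the gateway's actual export format.

```python
# Hosts an organisation might disallow; adjust to local policy.
DISALLOWED_HOSTS = {"api.openai.com", "chat.openai.com"}

# Assumed key=value log format for illustration.
SAMPLE_LOG = """\
2023-05-02T10:14:03Z user=alice dest=api.openai.com path=/v1/chat/completions
2023-05-02T10:15:11Z user=bob dest=intranet.example.com path=/wiki/home
"""

def find_violations(log_text: str) -> list[str]:
    """Flag log lines whose destination host is on the disallowed list."""
    hits = []
    for line in log_text.splitlines():
        fields = dict(part.split("=", 1) for part in line.split()[1:])
        if fields.get("dest") in DISALLOWED_HOSTS:
            hits.append(line)
    return hits

for violation in find_violations(SAMPLE_LOG):
    print("Policy violation:", violation)
```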
For prompt engineering usage, all of these risk mitigation measures apply, she noted.
Steps should be taken to protect internal and other sensitive data used to engineer prompts on third-party infrastructure, she advised, adding that enterprises need to create and store engineered prompts as immutable assets.
These assets can represent vetted engineered prompts that can be safely used, she pointed out.
They can also represent a corpus of fine-tuned and highly developed prompts that can be more easily reused, shared or sold, she added.
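One way to realise immutable prompt assets in practice is content-addressed storage, sketched below. The PromptAssetStore class is a hypothetical design, not one prescribed by Gartner: keying each prompt by the SHA-256 hash of its content means any edit produces a new identifier, so a vetted prompt can never be silently altered in place.

```python
import hashlib

class PromptAssetStore:
    """Content-addressed store for vetted engineered prompts (illustrative)."""

    def __init__(self) -> None:
        self._assets: dict[str, dict] = {}

    def add(self, prompt: str, metadata: dict) -> str:
        """Store a prompt under the hash of its content and return the hash."""
        digest = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
        # setdefault ensures an existing asset is never overwritten in place.
        self._assets.setdefault(digest, {"prompt": prompt, "metadata": metadata})
        return digest

    def get(self, digest: str) -> dict:
        """Retrieve a stored prompt asset by its content hash."""
        return self._assets[digest]

store = PromptAssetStore()
asset_id = store.add(
    "Summarise the attached contract and list all termination clauses.",
    {"owner": "legal-team", "vetted": True},
)
print(asset_id[:12], store.get(asset_id)["metadata"])
```

Because the identifier is derived from the prompt text itself, the same scheme also supports the reuse and sharing Litan describes: two teams holding the same hash can be certain they hold the same vetted prompt.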