While AI has an estimated US$15.7 trillion in economic potential, an immature understanding of ethical AI practices could hinder its realisation, PwC said recently.
According to PwC, about 250 respondents participated in its diagnostic survey, conducted in May and June 2019, which aimed to assess organisations’ understanding and application of responsible and ethical AI practices.
Highlights from the survey results:
- Only 25% of respondents said they would prioritise considering the ethical implications of an AI solution before implementing it.
- One in five (20%) have clearly defined processes for identifying risks associated with AI. More than 60% rely on developers, informal processes, or have no documented procedures.
- Ethical AI frameworks or considerations existed, but enforcement was not consistent.
- 56% said they would find it difficult to articulate the cause if their organisation’s AI did something wrong.
- More than half of respondents have not formalised their approach to assessing AI for bias, citing a lack of knowledge and tools and a reliance on ad hoc evaluations.
- 39% of respondents with AI applied at scale were only “somewhat” sure they would know how to stop their AI if it went wrong.
There is a clear need for those in the C-suite to review current and future AI practices within their organisation, asking questions not just to tackle potential risks, but also to determine whether adequate strategy, controls and processes are in place, said Anand Rao, Global AI Leader, PwC US.
AI decisions are not unlike those made by humans, he noted.
“In each case, you need to be able to explain your choices, and understand the associated costs and impacts,” Rao observed. “That’s not just about technology solutions for bias detection, correction, explanation and building safe and secure systems. It necessitates a new level of holistic leadership that considers the ethical and responsible dimensions of technology’s impact on business, starting on day one.”