To harness technology's full potential, leaders must understand how it works. With that understanding comes the ability to identify issues and potential problems.
By knowing the risks that come with a particular advancement, leaders can anticipate what might happen and prepare sound solutions to limit any harm to the organisation.
Generative artificial intelligence tools are demonstrating incredible potential, but addressing the concerns surrounding them will prove useful in the long run.
1. Bias In, Bias Out. One of the critical issues with generative AI lies in its tendency to reproduce biases present in its training data. Rather than mitigating those biases, these tools often magnify or perpetuate them, raising questions about the accuracy of their applications and, in turn, much bigger ethical problems (a minimal illustration of this effect follows the list below).
2. The Black Box Problem. Another significant hurdle in embracing generative AI is the lack of transparency in its decision-making. Because their internal reasoning is often uninterpretable, these systems struggle to explain their decisions, especially when errors occur on critical matters. It is worth noting that this is a broader problem with AI systems, not just generative tools.
3. High Cost to Train and Maintain. Training generative AI models, such as the large language models (LLMs) behind ChatGPT, is extremely expensive, with costs often reaching millions of dollars due to the computational power and infrastructure required.
4. Mindless Parroting. Despite their advanced capabilities, generative AI models are constrained by the data and patterns they were trained on. This limitation results in outputs that may not encompass the breadth of human knowledge or address diverse scenarios.
5. Alignment with Human Values. Unlike humans, generative AI lacks the capacity to weigh the consequences of its actions against human values. It is important to recognise that deepfakes could be employed for harmful purposes, such as spreading false information during a public health crisis, highlighting the need for frameworks that ensure these systems operate within ethical boundaries.
6. Power Hungry. The environmental impact of generative AI cannot be overlooked. With processing units consuming substantial power, running a model like ChatGPT has been estimated to use as much electricity as 33,000 U.S. households, and a single query can be 10 to 100 times more power hungry than sending one email.
7. Hallucinations. Generative AI models have been known to fabricate statements or images when faced with gaps in their data, raising concerns about the reliability of their outputs and the potential consequences.
8. Copyright and IP Infringement. The ethical use of data becomes paramount when considering that several generative AI tools appropriate copyrighted work without consent, credit, or compensation, infringing upon the rights of artists and creators.
9. Static Information. Because a model's knowledge is fixed at the time of training, keeping generative AI models up to date requires substantial computational resources and time, presenting a formidable technical challenge. Some models, however, are designed for incremental updates, offering a potential solution to this complex issue.
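To make the "bias in, bias out" point from the first item concrete, here is a minimal sketch in Python. The data and the frequency-based "generator" are hypothetical toys, not any real model or dataset; the point is only that a system which reproduces the statistics of its training data will also reproduce the skews in that data.

```python
# A toy illustration of "bias in, bias out" (hypothetical data, not a real model):
# a generator that simply returns the most frequent completion seen in training
# will echo whatever skew the training data contains.
from collections import Counter

# Deliberately skewed "training corpus": 80% of the nurse examples are paired
# with "she", 80% of the engineer examples with "he".
training_examples = (
    [("nurse", "she")] * 8 + [("nurse", "he")] * 2 +
    [("engineer", "he")] * 8 + [("engineer", "she")] * 2
)

# "Training": count how often each completion follows each prompt.
counts = {}
for prompt, completion in training_examples:
    counts.setdefault(prompt, Counter())[completion] += 1

def generate(prompt: str) -> str:
    """Return the completion most often seen for this prompt in training."""
    return counts[prompt].most_common(1)[0][0]

# "Generation": the skew in the data comes straight back out.
print(generate("nurse"))     # prints "she"
print(generate("engineer"))  # prints "he"
```

Real generative models are vastly more complex, but the underlying dynamic is the same: skewed inputs yield skewed outputs unless the data or the model is deliberately corrected.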