In November 2023 the UK Prime Minister hosted the AI Safety Summit[1], arguably convening the world’s most influential community to consider the threats and opportunities posed by generative AI. Discussions ranged from the prospect of substantial productivity gains to an existential threat to human employment, and it was clear that, whatever your attitude towards generative AI, it will have profound implications. As a keen amateur observer, I find it difficult to make sense of this future landscape. Can we draw lessons from past technological advances as a guide to the future of generative AI for our business?
We have all witnessed technological revolutions, which in their early stages are typically accompanied by grand claims. So far, the truth is that while technology has substantially improved efficiency, people retain a critical directing role. While at times it may seem we are slaves to technology, we have learned to adapt and harness it to enhance our professional and personal lives. People remain vital, applying judgement and exercising control.
The potential of generative AI is immense. It can be deployed to boost efficiency through automation and optimisation, enhance predictive capabilities by running simulations and modelling scenarios, and unleash creativity. As an industry, we must seek to exploit this potential.
The insurance industry’s struggle to fully master the fundamentals of its data makes an ideal case for generative AI. We have access to vast amounts of data but have not used it to the fullest extent within our businesses. One of the biggest issues is the lack of data consistency, often at the most basic level. We ask our brightest talent to spend much of their time manipulating, copying and pasting data. Rather than hoping the industry could agree on a standard, could generative AI be deployed instead to interpret data and conform it to individual business needs? What would take days or weeks could then take minutes.
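To make the idea concrete, here is a minimal sketch of what such conformance might look like. The call_llm helper, the field names and the sample submission are all illustrative assumptions rather than any specific vendor’s API; the point is the pattern of prompting for a fixed schema and rejecting anything that strays from it.

```python
import json

# Hypothetical helper: in practice this would wrap whichever model endpoint
# the business has approved. The name and signature are assumptions made
# for this sketch, not a real library call.
def call_llm(prompt: str) -> str:
    # Stubbed with a canned response so the sketch runs end to end.
    return json.dumps({
        "insured_name": "Acme Marine Ltd",
        "inception_date": "2024-01-01",
        "sum_insured_gbp": 5000000,
    })

# The target schema the business wants every submission conformed to
# (illustrative field names).
TARGET_FIELDS = ["insured_name", "inception_date", "sum_insured_gbp"]

def conform_record(raw_text: str) -> dict:
    """Ask the model to map an inconsistent free-text record onto our schema."""
    prompt = (
        "Extract the following fields from the submission below and return "
        f"strict JSON with exactly these keys: {TARGET_FIELDS}.\n"
        "Use ISO 8601 dates and whole-number GBP amounts.\n\n"
        f"Submission:\n{raw_text}"
    )
    record = json.loads(call_llm(prompt))
    # Fail loudly if the model strays from the schema, rather than
    # silently accepting malformed output.
    if set(record) != set(TARGET_FIELDS):
        raise ValueError(f"Unexpected keys: {sorted(record)}")
    return record

if __name__ == "__main__":
    raw = "Assured: Acme Marine Ltd. Incepting 1 Jan 24. SI GBP 5m."
    print(conform_record(raw))
```

The interesting design choice is the strict-JSON contract: the model is asked for a fixed shape, and anything else is treated as a failure rather than quietly patched up.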
Having that capability would be invaluable, but how much confidence can we place in data produced in this manner? Experience has shown that generative AI can produce biased output and fabricated information, commonly referred to as “hallucinations”. This risk becomes even more critical when generative AI is deployed for data analysis, interpretation and prediction. While the causes of these “hallucinations” are not fully understood, they likely stem from limitations in the training data, errors in how that data was encoded, or flaws in the training process itself. Another critical limitation is that predictive performance is likely to be weak in highly complex, evolving real-world environments[2] for which such machines have no reference data.
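One practical response to that risk is to keep deterministic checks between the model and the book of record, so that suspect output is routed to a person rather than accepted unchecked. Continuing the earlier sketch, the field names and thresholds below are again illustrative assumptions, not industry standards.

```python
from datetime import date

def validate_record(record: dict) -> list[str]:
    """Deterministic checks applied before a conformed record is trusted.

    Returns a list of issues; an empty list means the record passes,
    anything else is routed to a human reviewer.
    """
    issues = []
    # Sum insured must be a positive whole number within a plausible ceiling.
    si = record.get("sum_insured_gbp")
    if not isinstance(si, int) or not (0 < si <= 1_000_000_000):
        issues.append(f"implausible sum insured: {si!r}")
    # Inception date must parse and not sit years away from today.
    try:
        inception = date.fromisoformat(record.get("inception_date", ""))
        if abs((inception - date.today()).days) > 3 * 365:
            issues.append(f"inception date far from today: {inception}")
    except ValueError:
        issues.append(f"unparseable date: {record.get('inception_date')!r}")
    return issues

record = {"insured_name": "Acme Marine Ltd",
          "inception_date": "2024-01-01",
          "sum_insured_gbp": 5_000_000}
problems = validate_record(record)
print("route to human review" if problems else "accept", problems)
```

The checks themselves are mundane; the design point is that the model’s output is treated as a draft to be verified, never as a final answer.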
The question remains: will we ever be confident enough to deploy generative AI without expert oversight? Experience shows us that technology, rather than reducing employment, creates greater demand for new skills. My personal view is that generative AI will follow a similar trajectory and that its true value lies in augmenting our existing knowledge and critical thinking. Harnessing this technology requires us to expand our AI knowledge, understand model limitations and exercise judgement, taking corrective action where necessary. As we navigate this technological revolution, those who can deploy generative AI to enrich their expertise and experience will have the winning hand.
[1] https://www.gov.uk/government/topical-events/ai-safety-summit-2023
[2] Harvard Business Review, “Why AI Failed to Live Up to Its Potential During the Pandemic”