GlobalData Financial Services
Mon, May 19, 2025, 10:00 AM
The growth of AI is awakening interest in insurance policies that cover liabilities arising from dysfunctional AI outputs, according to a GlobalData poll. As AI adoption advances across industries, demand for cover for AI-related liabilities will naturally expand, driving product development.
A GlobalData poll conducted on Life Insurance International in Q1/Q2 2024 explored the appetite among business executives for several insurance products. While personal cyber insurance was the most desired product, attracting 61.3% of responses, cover for cryptocurrencies (44%) and for liability due to dysfunctional AI output (40%) also garnered significant interest. Demand for coverage of erroneous AI output is expected to increase further as enterprises race to implement AI and automation, embedding them into their core operations.
Lured by potential time efficiencies and cost savings, enterprises across all industries are increasingly incorporating AI into their strategies. While recent advances have improved AI’s overall performance and boosted interest in its adoption, the technology remains imperfect. Flaws include algorithmic bias, privacy breaches, ‘hallucinations’ (information fabricated by AI tools), and erroneous outputs, which can have unintended consequences for businesses, such as financial losses and reputational damage.
Cover options for dysfunctional AI outputs remain limited, but the number of providers offering such products is bound to rise as demand increases. Lloyd’s of London is the latest provider to debut an insurance product for this purpose. Lloyd’s product is specifically designed to cover losses arising from AI chatbots and is offered through the startup Armilla. The policy is intended to cover the cost of court claims against a business should a customer or a third party suffer harm because of an AI tool’s underperformance. A mistake by the AI tool alone would not trigger a payout, however; it would need to be established that the tool had performed below initial expectations.
Examples of AI underperformance across several industries have appeared in the media, and these could potentially have been covered by an AI insurance policy. At the start of 2025, Virgin Money had to offer a public apology after its chatbot reprimanded a customer for using the word ‘Virgin’ when asking for details on how to merge two of the bank’s accounts. In 2024, a tribunal ordered Air Canada to honour the discount that its chatbot had incorrectly offered to a traveller, as well as to pay the legal expenses he had incurred. That same year, courier business DPD was forced to disable part of its bot after it swore at a customer and branded the business the worst delivery company in the world.