Artificial Intelligence and benefits fraud: A double-edged sword

AI has numerous applications – and implications – for group benefits providers and their clients

Artificial Intelligence has already begun to transform the way many of us live and work. Many industries will undergo significant changes as AI technologies become more pervasive, and the insurance industry is no exception. AI has numerous applications – and implications – for group benefits providers and their clients.

Before we dive in, let’s start with a definition. AI is a machine’s ability to perform the cognitive functions we normally associate with the human mind, including perceiving, reasoning, learning, problem solving, and exercising creativity.

Many of us interact every day with AI applications such as Siri, Alexa, and Google Assistant. These applications can take information in the form of audio or text, interpret the information, and either provide a response or complete an action.

This is all done through a combination of pre-programmed scripts and machine learning algorithms. Machine learning is a form of AI that allows software to become more accurate at predicting outcomes without being explicitly programmed to do so.

One application of machine learning in our industry is fraud detection: training models on historical claims data so they can recognize patterns associated with known fraud and flag new claims that fit them.
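To make that concrete, here’s a minimal sketch of the idea in Python. The feature names, data, and choice of model are all invented for illustration; real carrier systems are considerably more sophisticated.

```python
# Minimal sketch of supervised fraud scoring: train on historical
# claims labelled legitimate (0) or fraudulent (1), then score new
# claims against those known patterns. Features and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [claim_amount, claims_this_month, days_since_last_claim]
historical_claims = np.array([
    [120.0, 1, 90],
    [ 80.0, 2, 45],
    [950.0, 9,  1],   # previously confirmed as fraud
    [875.0, 8,  2],   # previously confirmed as fraud
    [ 60.0, 1, 30],
])
labels = np.array([0, 0, 1, 1, 0])

model = LogisticRegression(max_iter=1000)
model.fit(historical_claims, labels)

# Score an incoming claim: probability it resembles known fraud.
new_claim = np.array([[900.0, 7, 3]])
print(f"Fraud probability: {model.predict_proba(new_claim)[0, 1]:.2f}")
```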

The Canadian Life and Health Insurance Association recently established a cooperative data-pooling initiative. Each month, anonymized claims from member carriers are pooled together and run through an AI tool to generate alerts for potentially fraudulent activities. Analyzing a pooled data set containing millions of records can surface situations and patterns that no single carrier could detect on its own.

Think about the ‘impossible day’ scenario: one carrier may only see a few claims from a particular health-care provider on a particular day. But combining data across the industry reveals when that provider has actually billed for 20 hours of services in a single day.
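A toy version of that check is easy to express once the data is pooled. The column names, records, and 16-hour threshold below are invented for illustration:

```python
# Toy illustration of the "impossible day" check on pooled claims.
import pandas as pd

pooled_claims = pd.DataFrame({
    "provider_id":    ["P1", "P1", "P1", "P2"],
    "carrier":        ["A",  "B",  "C",  "A"],
    "service_date":   ["2023-06-01"] * 4,
    "billed_minutes": [480, 420, 360, 45],
})

# Any single carrier sees only its own slice of this activity; pooling
# lets us total each provider's billed time per day across carriers.
daily_hours = (
    pooled_claims
    .groupby(["provider_id", "service_date"])["billed_minutes"]
    .sum()
    .div(60)
)

# Flag totals that exceed a plausible working day (here, 16 hours).
print(daily_hours[daily_hours > 16])  # P1 billed 21 hours on one day
```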

We’ve had AI capabilities for a few years now – so what’s new? The biggest difference is that the newest AI systems are capable of deep learning, a subset of machine learning. A deep learning system can take in raw data, derive the analysis by itself, learn more independently, and apply what it learns.

Credit card companies use deep learning as part of their fraud-detection programs. Within seconds they can evaluate more than 500 data elements to determine whether a transaction is suspicious, looking at things like payment method, time, location, item purchased, and amount spent to identify any deviations from the norm.
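The details of those systems are proprietary, but the underlying pattern – learn what normal behaviour looks like, then score deviations – can be sketched with a tiny neural network. Everything below is invented for illustration, and a single small hidden layer stands in for the much deeper models used in production:

```python
# Toy sketch of neural-network fraud scoring on card transactions.
# Real systems evaluate 500+ data elements (payment method, time,
# location, item purchased, amount, ...); here we invent three.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Normalized features: [amount, hour_of_day, distance_from_home]
normal = rng.normal(loc=[0.2, 0.5, 0.1], scale=0.05, size=(300, 3))
fraud = rng.normal(loc=[0.9, 0.1, 0.9], scale=0.05, size=(300, 3))

X = np.vstack([normal, fraud])
y = np.array([0] * 300 + [1] * 300)

# The network learns the boundary between the two behaviours
# directly from the raw features.
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, y)

# Score an incoming transaction in milliseconds.
suspicion = model.predict_proba([[0.85, 0.15, 0.8]])[0, 1]
print(f"Suspicion score: {suspicion:.2f}")
```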

Another application of deep learning is generative AI, which identifies patterns and structures in its training data and uses them to create new, original content in a range of formats, including text, audio, images, and video. It can even create synthetic data – data that doesn’t already exist – such as simulated scenarios for training autonomous vehicles, or even music and art.
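Deep generative models learn far richer structure than this, but the core idea behind synthetic data – new records sampled from patterns learned on real ones – can be shown with the simplest parametric stand-in. The fields and data below are invented:

```python
# Simplest illustration of synthetic data: learn the statistical
# pattern of real records, then sample new ones that follow the same
# pattern without copying any real row. Deep generative models do
# this with learned neural networks rather than a fitted Gaussian.
import numpy as np

rng = np.random.default_rng(1)

# "Real" records: [age, annual_claims_cost], with a built-in correlation.
age = rng.uniform(25, 60, size=200)
real = np.column_stack([age, 40 * age + rng.normal(0, 100, size=200)])

# Learn the pattern (mean and covariance), then generate new records.
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)
synthetic = rng.multivariate_normal(mean, cov, size=5)

print(synthetic)  # plausible new records, none of them real
```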

ChatGPT is probably the best-known example of generative AI. Introduced in November 2022, the app became the fastest application on record to reach 100 million active users, doing so in just two months.

The opportunities to apply deep learning and generative AI to augment our capabilities are exciting and attractive. But there is another side to AI. The same technologies that can be used to detect and control fraud can also be used to perpetrate it.

For example, generative AI can be used to clone your voice from only a few minutes of recorded audio. That has huge implications for privacy and data protection, particularly for any financial services provider that uses your voice to authenticate access to services.

From a benefits plan perspective, it wouldn’t be difficult to use a generative AI application such as ChatGPT to create a fake business with a fake balance sheet and fake employees. This fake business could then apply for coverage with a group insurer. Once coverage is in place, how much harder would it be to generate fabricated claims for the fictitious employees of this bogus company?

So, while new technologies evolve and bring exciting changes that make our lives easier, they also come with risks. As insurers, we will need to exercise great care as we look to leverage these new AI capabilities. At the same time, we will need to adapt and keep pace with these changes to identify and counter new risks.

Jon Sider is the director of group health and dental claims at Equitable.