The rapid emergence and development of Artificial Intelligence (“AI”) systems has become so prevalent a subject that it dominates discussion across the sectors that make up the commercial value chain.
Before the introduction of AI, numerous technological tools enabled people and industries across the board to carry out tasks with greater efficacy and efficiency, but rarely has a technology been able to simulate and improve iterative human learning and problem-solving as effectively as AI. This presents businesses across the value chain with opportunities to increase productivity and efficiency, which in turn could significantly improve their profitability.
Despite the benefits associated with the many use cases for AI systems, those benefits cannot cure the shortcomings of the human condition: AI can equally benefit threat actors whose operations are founded on illicit activity. In this respect, the emergence, production and use of “deep fake” techniques and technologies presents a unique challenge for the insurance industry. These techniques have arrived at a time when the industry is undergoing a fundamental shift in the claims process, with insurers increasingly using digital self-service platforms for the submission and processing of claims. It is for this reason that insurance companies must rely on pragmatism and innovation to navigate the uncharted territory of deep fake models and algorithms, as discussed in greater detail below.
What is a deep fake?
The concept of a deep fake within the context of AI primarily emanates from a function of generative AI commonly referred to as “deep learning”. In simple terms, deep learning is a sub-category of machine learning in which computers are trained to process complex data sets in order to recognise patterns in images, text, sound and other related data. “Deep fakes”, in turn, are images, videos or audio that have been generated or manipulated with generative AI tools in order to depict fake people or events.
In practical terms, threat actors use deep learning models, most notably the technique known as a generative adversarial network (GAN), to train two separate AI algorithms, each with its own relatively simple task: one algorithm (the generator) is trained to produce the most convincing fake output possible (such as an image, video or audio clip), while the other (the discriminator) is simultaneously trained to distinguish authentic output from fake output. Trained together, these competing algorithms produce a system whose fake output human beings can rarely identify as such.
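For readers who want to see the adversarial mechanism in concrete terms, the following is a minimal, illustrative sketch of a GAN training loop in Python using PyTorch. It learns to mimic a simple one-dimensional data distribution rather than produce images; the network sizes, learning rates and "data" are arbitrary assumptions chosen for brevity, not a depiction of any real deep fake system.

```python
# Minimal GAN sketch: a generator learns to mimic a simple
# one-dimensional data distribution while a discriminator
# learns to tell real samples from generated ones.
# All sizes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "authentic" samples
    noise = torch.randn(64, 8)
    fake = generator(noise)                 # "deep fake" samples

    # Discriminator step: label real samples 1, fake samples 0.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

As training progresses, each network's improvement forces the other to improve, which is precisely why the resulting fakes become so difficult for humans to detect.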
How do deep fake scams affect the insurance industry?
Within the context of the insurance industry, threat actors make use of AI deep fake techniques to digitally augment/manipulate images and documentation for purposes of misleading claim handlers during the claim assessment and validation process. The consequence of the rising prevalence of deep fake scams in the insurance industry is that insurance companies are exposed to significant risks, which include, amongst others:
- financial losses – this arises from the payment of claims in which digital augmentation/manipulation has been utilised to circumvent the claim assessment and validation process; and
- regulatory risk and reputational harm – this arises from multi-layered fraud schemes in which threat actors use AI not only to manipulate imagery, video or audio, but also to impersonate customers and gain access to their personal information in order to perpetrate further crimes and/or fraud.
How can insurance companies combat the emergence of deep fake scams?
The emergence of deep fake scams poses a significant challenge for insurance companies, as it is becoming increasingly difficult for claim handlers to carry out a fundamental aspect of their function: determining when a claim should be paid and when it should not.
Insurance companies are, however, not left without meaningful solutions for curbing deep fake scams, as the very technology used by threat actors can be turned against them. This, however, cannot be achieved without a dynamic and robust policy environment that enables claim handlers to carry out their functions with greater efficacy.
In this regard, insurance companies can rely on the following methods (amongst others) to combat the increasing prevalence of deep fake scams:
- deploying systems designed to detect and identify digitally augmented imagery, audio or video content, assisting claim handlers during the claim validation process. This can act as a first line of defence in the claim assessment process (a simple illustration of one such forensic check appears after this list);
- updating the policy environment to encompass risk rating systems for claims and additional validation procedures that better equip claim handlers to identify fraudulent claims (a sketch of a rule-based risk score also follows below); and
- providing frequent fraud detection training to claim handlers to ensure that the detection and monitoring systems deployed by a given business remain effective.
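As a concrete illustration of the first method, the sketch below applies error level analysis (ELA), a long-standing image-forensics technique that highlights regions of a JPEG that recompress differently after editing. It is a simplified, assumed workflow rather than a production deep fake detector; real systems typically combine signals like this with trained neural classifiers. The file name and review threshold are hypothetical.

```python
# Error level analysis (ELA) sketch: re-save a JPEG at a known
# quality and measure how much each region changes. Edited or
# spliced regions often recompress differently from the rest.
# The file name and threshold below are hypothetical.
import io
from PIL import Image, ImageChops

def ela_score(path: str, quality: int = 90) -> int:
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)   # controlled re-save
    buffer.seek(0)
    resaved = Image.open(buffer)
    diff = ImageChops.difference(original, resaved)  # per-pixel error levels
    extrema = diff.getextrema()                      # (min, max) per channel
    return max(channel_max for _, channel_max in extrema)

if __name__ == "__main__":
    score = ela_score("claim_photo.jpg")             # hypothetical claim image
    # A high maximum error level is a flag for human review,
    # not proof of manipulation on its own.
    print("ELA score:", score, "-> review" if score > 40 else "-> pass")
```

Outputs like this are best treated as triage signals that route suspicious media to a human investigator, rather than as automated grounds for repudiating a claim.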
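The second method, risk rating, can be as simple as a rule-based score that routes higher-risk claims to additional validation. The sketch below is a hypothetical illustration only; the factors, weights and routing threshold are assumptions that each insurer's fraud team would need to set and calibrate for itself.

```python
# Hypothetical rule-based risk score for routing claims.
# Factors, weights and the routing threshold are illustrative
# assumptions, not any actual insurer's rating model.
from dataclasses import dataclass

@dataclass
class Claim:
    amount: float
    days_since_policy_start: int
    media_flagged_by_detector: bool   # e.g. output of a forensic/ELA check
    prior_claims_last_year: int

def risk_score(claim: Claim) -> int:
    score = 0
    if claim.amount > 100_000:
        score += 3                    # large payouts attract scrutiny
    if claim.days_since_policy_start < 30:
        score += 2                    # very new policies carry higher risk
    if claim.media_flagged_by_detector:
        score += 4                    # forensic flag on submitted media
    score += min(claim.prior_claims_last_year, 3)
    return score

claim = Claim(150_000, 12, True, 1)
# Route to enhanced validation above an assumed threshold.
print("enhanced validation" if risk_score(claim) >= 5 else "standard process")
```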
The methods referred to above represent a first step for insurers to mitigate the risks occasioned by deep fake scams, but the technological development curve of modern AI systems will continue to pose unprecedented challenges to the insurance industry. This is only exacerbated by the rapid digitalisation of the modern consumer marketplace, which will necessitate a dynamic, forward-thinking approach to the introduction of systems and innovations within the realm of insurance fraud monitoring and detection.