Frequently Asked Questions


    Since issuing our first AI policy for an anti-fraud model in 2018, Munich Re has written aiSure™ insurance solutions for AI companies across a diverse range of industries.

    We are confident that we can provide significant risk transfer capacity for any industry use case where the accuracy and reliability of AI performance are critical to financial results. Please reach out to us to discuss your particular use case and requirements.

    We define artificial intelligence (“AI”) broadly as any form of statistical machine learning method based on data. This definition includes machine learning methods, deep learning models, reinforcement learning models, ensemble models, and others. In fact, aiSure™ is model-agnostic, so any type of model, including GenAI, is insurable. The quality of the model and its performance stability determine the premium, enabling us to provide a wide variety of insurance options for AI and machine learning firms. For more information on our model-agnostic risk management and assessment approach, click here.
    Yes. We offer tailored liability coverage that protects against risks specifically inherent to GenAI models, such as hallucinations and copyright infringement. For more information on how we insure GenAI models, click here.

    Munich Re follows a proven risk assessment process for ensuring performance reliability in AI systems through insurance:

    • The first step is to evaluate the model development pipelines and identify potential risk scenarios (unrepresentative training data, data drift, updating and monitoring processes, etc.). A thorough, quantitative analysis of data inputs and outputs is essential for gaining an understanding of AI liability and performance.
    • The second step is to derive a valid risk estimator. Based on historical performance data, we can estimate the future likelihood of model underperformance. Please refer to our whitepaper.
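
    As a simplified, purely illustrative sketch of the second step (not our actual pricing methodology), the probability of underperformance can be estimated from historical error counts, with the expected loss driving an indicative premium. All names and figures below are hypothetical.

        # Illustrative sketch: estimating underperformance risk from historical
        # performance data. All numbers are hypothetical; real pricing involves
        # a far richer analysis of the model and its development pipeline.

        observed_errors = 12        # model errors observed in a historical window
        observed_cases = 10_000     # predictions made in the same window
        loss_per_error = 5_000.0    # hypothetical average loss per error (USD)

        # Point estimate of the per-prediction error rate
        error_rate = observed_errors / observed_cases

        # Expected annual loss for a client making 50,000 predictions per year
        annual_volume = 50_000
        expected_loss = error_rate * annual_volume * loss_per_error

        # Indicative premium with a hypothetical loading for uncertainty and costs
        loading = 1.4
        indicative_premium = expected_loss * loading

        print(f"Estimated error rate: {error_rate:.4%}")            # 0.1200%
        print(f"Expected annual loss: ${expected_loss:,.0f}")       # $300,000
        print(f"Indicative premium:   ${indicative_premium:,.0f}")  # $420,000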

    We are actively contributing to statistical and machine learning research in uncertainty quantification, which is critical to protecting AI systems with insurance. We have also developed methods that enable us to automatically price the model performance risk of AI in real time. Please refer to our research papers [An In-Depth Examination of Risk Assessment in Multi-Class Classification Algorithms and Distribution-free risk assessment of regression-based machine learning algorithms] and follow our LinkedIn feeds for future updates on our research on how to insure AI products and technologies.
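    The regression paper cited above concerns distribution-free risk assessment. The sketch below shows the basic split-conformal recipe that such distribution-free methods build on, using an arbitrary regressor and synthetic data of our own choosing; it illustrates the general idea only and is not the paper's code.

        # Split conformal prediction: distribution-free prediction intervals
        # around any regression model. Illustrative sketch only.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        X = rng.uniform(-3, 3, size=(2000, 1))
        y = np.sin(X[:, 0]) + rng.normal(0, 0.2, size=2000)

        # Split into a proper training set and a calibration set
        X_train, y_train = X[:1000], y[:1000]
        X_cal, y_cal = X[1000:], y[1000:]

        model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

        # Calibration: absolute residuals on held-out data
        residuals = np.abs(y_cal - model.predict(X_cal))

        # This residual quantile yields a finite-sample coverage guarantee
        alpha = 0.1  # target 90% coverage
        n = len(residuals)
        q = np.quantile(residuals, np.ceil((n + 1) * (1 - alpha)) / n)

        # Prediction interval for a new point: [prediction - q, prediction + q]
        x_new = np.array([[1.5]])
        pred = model.predict(x_new)[0]
        print(f"90% prediction interval: [{pred - q:.3f}, {pred + q:.3f}]")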

    The performance of AI relates to the uncertainty of error. No model can ever be perfect, even with access to all information, the best data science team available, and state-of-the-art governance policies and processes in place. No matter how good the performance of an AI model is, there will be instances where it makes mistakes.

    For example, a GenAI model might hallucinate and provide factually wrong answers to user queries. A financial fraud detection model might fail to detect costly fraud events. A predictive maintenance model might fail to recognise an equipment breakdown event.
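    To make this concrete with hypothetical numbers: even a model that is right 99.9% of the time produces a steady stream of errors at scale.

        # Hypothetical illustration: error volume at scale for an accurate model
        accuracy = 0.999                  # 99.9% of predictions are correct
        predictions_per_year = 1_000_000  # annual prediction volume

        expected_errors = (1 - accuracy) * predictions_per_year
        print(f"Expected errors per year: {expected_errors:,.0f}")  # 1,000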

    An AI model is built to make inferences under conditions of uncertainty, which means that even if you design it to the most exacting standards, it will make errors. Indeed, in the famous formulation of UCLA Professor John Villasenor, “The laws of statistics ensure that – even if AI does the right thing nearly all the time – there will be instances where it fails.” In all such cases, AI and GenAI models make mistakes, which can cause financial losses and create liabilities, such as:

    • Property damage and bodily injury
    • Compliance fines and penalties
    • Privacy violations
    • Data leaks
    • Intellectual property infringement
    • Pure financial losses
    • Discrimination

    AI insurance is designed to mitigate the financial impact of AI model error uncertainty. The more you use and depend on AI, the more costly the losses can be. 

    Insurance can help AI providers increase their customers’ trust in the performance of the AI model and reduce sales cycle time. For businesses integrating AI technologies, AI insurance accelerates confident AI adoption by providing a financial safety net.

    Among the benefits of insurance for companies developing AI solutions is its ability to alleviate customer concerns about AI reliability. Potential clients may be skeptical about the real-world performance of AI technology. Thus, AI providers often struggle to address potential clients’ concerns regarding the uncertainty of AI predictions, leading to lengthy proof-of-concept deployments or extended due diligence processes for each client.

    Munich Re’s aiSure™, which covers AI system failures, instils confidence and trust in the solution. This enhanced reliability not only boosts customer satisfaction and retention but also strengthens the provider’s reputation and brand image. Ultimately, offering an AI system performance guarantee with clear, well-defined and significant compensation for model errors and their consequences can significantly shorten sales cycles by eliminating the need for extended proof-of-concept phases – why perform lengthy POCs when the outcome is insured? Guaranteeing the performance of AI puts AI users at ease, knowing that potential financial losses from relying on the AI’s performance are covered. With aiSure™, AI providers can inspire trust in their solutions.

    Companies strive to leverage AI in order to automate tasks, optimise efficiency and boost productivity. Despite these benefits, companies face challenges in transitioning AI initiatives from innovation labs to operations. Executive management is rightfully concerned about the financial risks associated with entrusting operations to AI models, including the potential for accruing materially significant compliance fines under new regulatory frameworks. Transferring the risk of AI underperformance to an insurer provides peace of mind to company boards and investors, assuring them that model performance issues will not lead to financial events that could impact stock performance or pose reputational risks, while supporting regulatory compliance for AI technologies.

    Traditional insurance policies are generally made to cover traditional perils. AI, in its different forms (from random forests to Generative AI), with its different uses (from chatbots to medical instruments) and its different risks (from prediction errors to discrimination), will test the limits of traditional insurance in the years to come.

    At Munich Re, we are aware of the limitations of traditional policies. This is why we offer tailored insurance solutions with flexible limits and payouts, specifically designed for AI risks. We cover all types of damages required, enable different payout triggers, emphasise low coverage thresholds, and require no legal liability element. Our AI insurance solutions therefore address an insurance gap and create legal certainty and peace of mind for AI users in a constantly changing environment.

    Copyright lawsuits typically involve the unauthorized use of copyrighted images or text as training material. If an AI plagiarizes content, the creator, developer, or user of the AI could be sued for copyright infringement, as any of them could be held responsible for the AI's actions.

    Yes, there are a few reasons that GenAI models can create “substantially similar” images to their training data:

    1. Training data diversity: Generative AI models, especially those used for image generation, are trained on vast datasets that can include billions of images and texts. If many images in the model training dataset share similar features, the model may learn these common patterns and produce similar outputs.
    2. Model overfitting: For large models with billions of parameters and iterative training steps, it is possible that the model memorizes specific details of the training data, generating outputs that closely resemble some training data.
    3. Detailed prompts: When using text-to-image models, users can create highly detailed prompts that specify particular styles or compositions. If these prompts are similar to descriptions of images (such as artworks) used in model training, the generated images may resemble those images. The prompt similarity might not be intentional on the user’s part and may simply be the product of chance.
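
    As a toy illustration of why “substantially similar” outputs are hard to rule out, a perceptual hash can flag a generated image that is visually close to a known image. This sketch assumes the Pillow and imagehash packages and hypothetical file names; legal “substantial similarity” is, of course, a far richer question than visual similarity.

        # Minimal sketch: flagging near-duplicate images with a perceptual hash.
        # Assumes `pip install pillow imagehash`; file names are hypothetical.
        from PIL import Image
        import imagehash

        generated = imagehash.phash(Image.open("generated_output.png"))
        reference = imagehash.phash(Image.open("training_artwork.png"))

        # Hamming distance between hashes: small distance = visually similar
        distance = generated - reference
        THRESHOLD = 8  # hypothetical cut-off; tune for your use case

        if distance <= THRESHOLD:
            print(f"Warning: output is visually close to a known image (distance {distance})")
        else:
            print(f"No near-duplicate detected (distance {distance})")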

    If an AI-generated image, audio, or text is found by the courts to be “substantially similar” to an original work, the creator of the AI-generated content can be found to have infringed on someone’s copyright, leading to statutory damages of up to $30,000 per infringement, in addition to high legal defense costs and additional damages.

    GenAI providers could be held liable for IP infringement in two ways:

    1. Copyright infringement in training data: AI providers can be (and are being) sued for utilizing copyrighted artists’ work to train their Generative AI models.
    2. Secondary liability due to copyright infringement of the GenAI’s output: If GenAI providers know that their GenAI models provide IP-infringing content, profit from it and do nothing to stop it, they could be assigned secondary liability by the courts.

    If the image created is substantially similar to an existing, protected image, and the image is subsequently published, the owner of the original image could sue the user in court for IP infringement, resulting in significant statutory damages.

    As a wide variety and quantity of content on the internet has been scraped for use as training data for GenAI models, one must assume that many protected images are part of a GenAI model’s training dataset. It is an impossible task for a GenAI user to check whether an image created using GenAI is similar to a protected one that is part of the training data.

    Yes. Munich Re’s aiSure™ IP Liability policy protects both GenAI providers and GenAI users from lawsuits for IP infringement by “substantially similar” GenAI output, and covers legal costs that stem from an alleged IP infringement by the created image.

    We are happy to talk to you about the potential risks, your existing coverage and how we can help.

    Yes, there have been lawsuits related to AI bias and discrimination, particularly in hiring practices and lending. One notable case is the EEOC’s September 2023 settlement with a tutoring provider. The company used AI in hiring decisions, which led to discriminatory practices against certain groups. As part of the settlement, the company agreed to pay $365,000.

    AI-generated content can result in defamation lawsuits if it produces false statements that harm someone’s reputation.

    AI itself cannot be held liable. However, both the developers and the users of the AI could face legal consequences if the AI creates and disseminates fake news or misleading information. In understanding liability in AI product development, it is critical to note that who ultimately bears the liability is not clearly established in case law and remains an area of legal uncertainty.

    The importance of insurance for AI technology providers derives from the ability to prove the quality of your AI model and assure your customers that their AI tool will perform as expected. With aiSure™ - Contractual Liabilities, you can guarantee the performance of your AI tool, and if the AI fails to deliver as promised, we back your performance guarantee and compensate your customers for the losses incurred.

    For example, aiSure™ allows you to guarantee that your fraud detection model will catch all fraudulent transactions. If your AI fails to catch a fraud event, we provide a payout amounting to the losses incurred. This insurance-backed performance guarantee increases trust in your AI solution and shortens sales cycles, while our strong balance sheet carries the underperformance risk.
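
    Stylized settlement logic for such a guarantee might look as follows; all transactions and amounts are hypothetical, and actual policy terms define what counts as a covered loss.

        # Stylized settlement logic for an insurance-backed performance
        # guarantee on a fraud detection model. All data is hypothetical.

        # Each transaction: (model_flagged_fraud, actually_fraud, loss_if_missed_usd)
        transactions = [
            (True,  True,  12_000),  # caught fraud: no payout needed
            (False, True,   8_500),  # missed fraud: covered loss
            (False, False,      0),  # legitimate transaction
            (False, True,  21_000),  # missed fraud: covered loss
        ]

        # Payout = losses from fraud events the model failed to flag (false negatives)
        payout = sum(loss for flagged, fraud, loss in transactions
                     if fraud and not flagged)

        print(f"Guarantee payout: ${payout:,}")  # $29,500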

    Even with the best AI governance process in place, you cannot adopt AI without residual performance risk and, depending on the use case, residual discrimination, IP infringement, data reconstruction, and other risks.

    With aiSure™ - Own Damages, we enable corporates adopting AI by insuring the performance of your own AI (e.g. self-built, purchased, or fine-tuned), supporting you in implementing AI solutions for your critical operational tasks, such as in manufacturing or agriculture.

    Take the case of an automotive manufacturer turning to AI for the final quality control check before distributing cars to its sales locations. aiSure™ enables the manufacturer to use AI in quality control without bearing the financial losses that might come with performance risk. Insuring the performance of the AI model protects the manufacturer against distributing sub-par cars should the error rate of the AI drift beyond the desired threshold.
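
    A minimal sketch of the kind of monitoring that sits behind such a trigger, assuming a hypothetical contractual error-rate threshold and a stream of audited inspection outcomes:

        # Minimal sketch: tracking a quality-control model's rolling error rate
        # against a hypothetical contractual threshold. Data is simulated.
        import random
        from collections import deque

        CONTRACT_THRESHOLD = 0.02   # hypothetical insured error-rate threshold (2%)
        WINDOW = 500                # rolling window of audited inspections

        recent = deque(maxlen=WINDOW)

        def record_inspection(model_correct: bool) -> bool:
            """Record one audited outcome; return True if the rolling
            error rate breaches the contractual threshold."""
            recent.append(0 if model_correct else 1)
            if len(recent) < WINDOW:
                return False
            return sum(recent) / WINDOW > CONTRACT_THRESHOLD

        # Simulate audited outcomes with a true error rate of ~3%
        random.seed(0)
        breaches = sum(record_inspection(random.random() > 0.03)
                       for _ in range(2000))
        print(f"Inspections where the rolling error rate exceeded threshold: {breaches}")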

    When your models underperform, you know that the financial downside is covered by us. Our aiSure™ insurance solution enables worry-free implementation of AI models for vital parts of your operations. For more information on best practices for insuring AI systems and applications, and on the role of insurance in complementing an overall risk mitigation and governance process, download the whitepaper.

    With aiSure™ - General Liability, you can protect yourself against damages and financial losses arising from lawsuits alleging that AI-made decisions were biased and discriminated against protected groups, or alleging other liabilities arising from AI use and creation.

    For example, aiSure™ protects you against lawsuits for alleged discrimination against protected groups when you use black-box AI to screen job applications or prioritize patient intake in a healthcare setting. This insurance solution promotes the equitable and responsible use of AI and shields you from expensive and far-reaching lawsuits alleging disparate impact discrimination.
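
    One common screening statistic in US disparate-impact analysis is the “four-fifths rule”, which compares selection rates across groups. The sketch below uses hypothetical numbers; a real fairness assessment goes well beyond this single ratio.

        # Minimal sketch: the "four-fifths rule" screen for disparate impact
        # in selection decisions. All numbers are hypothetical.

        # Selection outcomes per group: (selected, total applicants)
        groups = {
            "group_a": (48, 100),   # 48% selection rate
            "group_b": (30, 100),   # 30% selection rate
        }

        rates = {g: sel / total for g, (sel, total) in groups.items()}
        highest = max(rates.values())

        for group, rate in rates.items():
            ratio = rate / highest
            flag = "potential adverse impact" if ratio < 0.8 else "within guideline"
            print(f"{group}: rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")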

    Contact the team

    Michael von Gablenz
    Head of Insure AI
    Palo Alto

    Susana Latorre Bojanini
    Market Lead Europe & Middle East