For several years, experts have touted artificial intelligence and the closely related field of machine learning as the latest solutions for the complex needs of the insurance industry. While AI promises a wide range of applications, it also raises a host of concerns. Some of these fears—for example, the displacement of human workers—are common to many innovations. Others, such as the perpetuation of discrimination via biased training data sets, are unique to this technology.
The debate among the general public, insurance professionals, and now regulators continues as AI moves into ever more areas. So how can we as an industry thoughtfully address these concerns and develop best practices to maximize benefits and minimize harm?
AI in the Insurance Industry
AI-powered applications for our industry generally take one of two forms. First, they offer the ability to shift a multitude of mundane interactions in the insurance ecosystem from people to automation. The appeal here is obvious. An increasingly competitive—and therefore expensive—labor market no longer constrains the ability to provide 24/7 access to expert help. Customer service chatbots are the most widely known example of this use. These applications vary in sophistication. Some are little more than automated FAQs and price comparison tools. Other, more complex algorithms vet sales prospects and even handle routine claims.
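To make that spectrum concrete, here is a minimal Python sketch of its simplest end: an automated FAQ that matches a customer's question to canned answers by word overlap. The questions, answers, and matching rule are illustrative assumptions, not a production design.

```python
# A toy "automated FAQ" chatbot: match a customer question to canned
# answers by word overlap. Questions and answers are placeholders.
import string

FAQS = {
    "How do I file a claim?": "Log in to the portal and choose 'File a claim'.",
    "When is my premium due?": "Premiums are due on the first of each month.",
    "How do I update my address?": "You can change it under 'My profile'.",
}

def tokens(text: str) -> set[str]:
    """Lowercase words with surrounding punctuation stripped."""
    return {w.strip(string.punctuation) for w in text.lower().split()}

def answer(question: str) -> str:
    """Return the canned answer whose question best overlaps the input."""
    q_words = tokens(question)
    best = max(FAQS, key=lambda q: len(q_words & tokens(q)))
    return FAQS[best]

print(answer("when is the premium due"))  # -> the premium due-date answer
```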
The second group of applications relies on AI to manage and analyze the flood of data now available to inform decision-making. Notably, many of these solutions promise to avoid human cognitive biases by replacing intuition with data-driven insights. Examples range from resume scanners and video interview analysis tools to telematics and algorithms that offer highly granular, real-time risk assessment to guide underwriting, pricing, and loss control decisions.
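As a rough illustration of this second category, the Python sketch below trains a model on synthetic telematics-style features to produce per-policyholder claim probabilities. The feature names, data, and model choice are all hypothetical assumptions for demonstration.

```python
# A minimal sketch of data-driven risk scoring on synthetic telematics
# features, using scikit-learn. All field names are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical per-driver telematics features.
X = np.column_stack([
    rng.normal(30, 10, n),   # avg_speed_mph
    rng.poisson(2, n),       # hard_brakes_per_100mi
    rng.uniform(0, 1, n),    # night_driving_fraction
])
# Synthetic labels: claim filed (1) or not (0), loosely tied to behavior.
risk = 0.05 * X[:, 0] + 0.5 * X[:, 1] + 2.0 * X[:, 2]
y = (risk + rng.normal(0, 1, n) > np.median(risk)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Per-policyholder claim probabilities can then inform pricing tiers.
print(model.predict_proba(X_test[:5])[:, 1])
```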
Looking for more information on the current uses of artificial intelligence applications in the insurance industry?
Mike Thomas shares 25 AI Insurance Examples to Know.
The Concerns About AI
As mentioned previously, concerns about the displacement of human workers by technology are nothing new. In the past, however, displaced workers could retrain to work with the new technologies and move into more lucrative knowledge-based careers. AI now targets those knowledge workers directly. While AI will likely create new jobs, the nature of the tasks being automated means that the educational investment needed to secure these jobs is much higher. This, in turn, amplifies the disparity of opportunity between traditionally advantaged and disadvantaged populations. Additionally, as Harry J. Holzer recently pointed out, “In general, automation also shifts compensation from workers to business owners, who enjoy higher profits with less need for labor.”
De-Personalization
A second concern is the loss of the “human touch” as automation becomes more deeply enmeshed in insurance processes. Insureds surely value the accessibility that automation provides for “routine” claims. When faced with major claims, however, such as those arising from traumatic CAT events, the human ability to understand the emotions at work and respond appropriately vastly improves the customer experience. While artificial empathy does exist, its effectiveness is still being evaluated. Additionally, AI systems may be more constrained in their choice of responses than their human counterparts, who can recognize when it is appropriate to seek novel solutions to complex problems.
Transparency
Perhaps the greatest concern is a perceived (or actual) lack of transparency in AI systems. Beyond overt biases built into an AI system’s decision-making process, those affected by its choices often have a limited understanding of the criteria used to make them. As machine learning advances, even application developers may not fully understand how “evolved” algorithms operate. Add to this the fact that humans often make mistakes about which factors lead to the best decisions, and the gap between organic and synthetic thought patterns could expand into a chasm. Without transparency, individuals negatively impacted by AI choices may feel less able to challenge them, or may even be unaware of the need to do so.
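Transparency is partly a tooling problem, and techniques exist to probe even opaque models. As one hedged example, the Python sketch below uses scikit-learn's permutation importance to estimate how much each input factor drives a model's decisions; the data and factor names are invented for illustration.

```python
# A minimal sketch of one transparency technique: permutation importance,
# which measures how much each input factor influences a model's output.
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(1_000, 3))  # three hypothetical rating factors
y = (X[:, 0] + 0.2 * X[:, 1] + rng.normal(size=1_000) > 0).astype(int)

model = LogisticRegression().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=1)

for name, score in zip(["factor_a", "factor_b", "factor_c"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")  # larger score = more influence
```

Surfacing a readout like this, in plain language, is one way an insurer could help affected individuals understand, and if necessary challenge, an automated decision.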
Developing Best Practices
Do these concerns mean that the insurance industry should abandon the potential of AI? Of course not! It does mean, however, that we must work with all stakeholders to develop best practices for using these powerful tools. In a recent article on EEOC concerns about the use of AI for hiring, Kevin D. Finlay listed a number of best practices that scale to the larger discussion of the effective and ethical use of artificial intelligence:
Know Your Data
As insurance professionals, we appreciate the impact that incomplete or inaccurate data has on effective decision-making. Finlay points out that the same vetting of information, in particular of the data sets used to train machine learning systems, is a must. This vetting must be repeated each time the system undergoes a significant update.
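As a rough sketch of what such vetting might look like in practice, the Python function below (using pandas) flags heavy missingness, duplicate records, and label imbalance in a training set. The column names and thresholds are assumptions; real audits would go much deeper.

```python
# A minimal sketch of pre-training data vetting with pandas.
# Thresholds and column names are illustrative assumptions.
import pandas as pd

def vet_training_data(df: pd.DataFrame, label_col: str) -> list[str]:
    """Return a list of data-quality warnings for a training set."""
    warnings = []

    # Incomplete data: flag columns with heavy missingness.
    missing = df.isna().mean()
    for col in missing[missing > 0.05].index:
        warnings.append(f"{col}: {missing[col]:.0%} missing values")

    # Duplicate records can silently over-weight some policyholders.
    if (dups := df.duplicated().sum()) > 0:
        warnings.append(f"{dups} duplicate rows")

    # Severe label imbalance can skew what the model learns.
    counts = df[label_col].value_counts(normalize=True)
    if counts.min() < 0.10:
        warnings.append(f"label imbalance: {counts.to_dict()}")

    return warnings

# Example: a small frame with a missing value in the "age" column.
df = pd.DataFrame({"age": [30, None, 45, 50], "claim": [0, 0, 0, 1]})
print(vet_training_data(df, label_col="claim"))
```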
Be Transparent
Organizations using AI and similar automated systems for communication, customer service, and especially decision-making need to disclose which systems are in use and how their decisions benefit internal and external customers. Company representatives need to work with developers to ensure that they understand, and can effectively communicate to others, how their automated processes work.
Check for Bias
Automated systems are the products of human minds, so the conscious or unconscious biases of those creators can manifest in them. Conducting an internal audit to identify discriminatory outcomes is a great starting place, but an impartial review by external auditors increases credibility. Additionally, keep in mind that machine learning systems evolve over time. Consequently, organizations need to repeat audits periodically.
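One common starting point for such an audit is the “four-fifths rule” used in employment-law analysis: the favorable-outcome rate for any group should be at least 80% of the most favored group's rate. The Python sketch below applies that test to illustrative approval data; the groups, data, and threshold are assumptions.

```python
# A minimal sketch of an internal bias audit using the "four-fifths rule."
import pandas as pd

def disparate_impact_audit(df: pd.DataFrame,
                           group_col: str,
                           outcome_col: str,
                           threshold: float = 0.8) -> pd.Series:
    """Flag groups whose favorable-outcome rate falls below the threshold
    relative to the most favored group."""
    rates = df.groupby(group_col)[outcome_col].mean()
    ratios = rates / rates.max()
    return ratios[ratios < threshold]

# Illustrative data: model-approved applications by (hypothetical) group.
data = pd.DataFrame({
    "group":    ["A"] * 100 + ["B"] * 100,
    "approved": [1] * 60 + [0] * 40 + [1] * 40 + [0] * 60,
})
print(disparate_impact_audit(data, "group", "approved"))
# Group B's 40% approval rate is 0.67 of group A's 60% rate -> flagged.
```

Because machine learning systems evolve, re-running a check like this on each retraining cycle is what makes the periodic audits described above practical.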
Implement Human Oversight
AI is not intended to replace human interaction or decision-making. Companies utilizing AI systems need to identify key decision-making points and assign qualified individuals to monitor the processes used by automated systems and the results generated. This oversight can ensure that AI is functioning as anticipated and enable swift intervention when needed.
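Here is a minimal Python sketch of one such checkpoint: decisions the model is confident about are finalized automatically, while the rest are escalated to a human adjuster. The threshold, fields, and routing rule are illustrative assumptions.

```python
# A minimal sketch of a human-in-the-loop checkpoint: low-confidence
# automated decisions are routed to a qualified reviewer.
from dataclasses import dataclass

@dataclass
class ClaimDecision:
    claim_id: str
    approve: bool
    confidence: float  # model's probability for its own decision

REVIEW_THRESHOLD = 0.90  # illustrative assumption

def route_decision(decision: ClaimDecision) -> str:
    """Auto-finalize only high-confidence decisions; escalate the rest."""
    if decision.confidence >= REVIEW_THRESHOLD:
        verdict = "approved" if decision.approve else "denied"
        return f"claim {decision.claim_id}: auto-{verdict}"
    return f"claim {decision.claim_id}: sent to human adjuster for review"

print(route_decision(ClaimDecision("C-1001", approve=True, confidence=0.97)))
print(route_decision(ClaimDecision("C-1002", approve=False, confidence=0.62)))
```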
Provide Robust Vendor Oversight
For the time being, the development of artificial intelligence and machine learning systems remains the domain of experts. This means that businesses are more likely to use applications created specifically for them by vendors (second-party applications) or purchased from commercial vendors (third-party applications). Careful vetting of vendors before and after the sale can identify issues. Additionally, vendor agreements should include provisions attesting to the fairness and integrity of the tool.
The Responsible Use of AI
There’s an old expression, “When all you have is a hammer, everything looks like a nail.” It’s easy to get caught up in the hype and promise of new technologies. But as insurance professionals, we must remember that artificial intelligence is one tool among many at our disposal. We need to combine our understanding of its benefits with a healthy respect for the potential for damage if it is misused. In the end, nothing can replace human intelligence, empathy, and good judgment in meeting and exceeding our clients’ needs and expectations.
Find out how ReSource Pro Compliance harnesses the power of technology to help insurance agencies and producers manage their licensing and regulatory compliance needs by visiting our compliance page.