Michael Blätter, Senior IT Architect at Munich Re, discusses the concept of artificial intelligence and offers a realistic look at digital progress.
Michael, you sometimes seem a little uncomfortable about the topic of artificial intelligence.
Yes, that always happens when I talk about the sheer pace of digital progress. Fifteen years ago, I would probably have laughed out loud if someone had told me that, within a few years, I would need the GPS on my smartphone when jogging in the woods. We often seriously underestimate how quickly an exotic-sounding technology can become perfectly mainstream these days.
What is artificial intelligence exactly?
There is no universally accepted definition; even the concept of intelligence itself is difficult to define. Right now, a great many things are being filed under AI, not least because it is such a fashionable topic. Many technical solutions in robotics and Industry 4.0 “merely” automate production and work processes, and are not truly AI.
Scientists have traditionally differentiated between strong AI, applied AI, and cognitive simulation, where strong AI means machines that are capable of really understanding and thinking, and whose intellectual capabilities cannot be definitively distinguished from those of a human being. Such categories do not play a significant role in practice, however, and are increasingly seen as purely academic. But AI’s moral and ethical aspects and the question of how to appropriately manage its risks, issues which have been addressed by Stanford graduate Sam Harris, are exceedingly relevant.
Let’s concentrate on the business sector: Where and how have AI-based solutions become established?
Expert systems are a significant field of application, for example algorithmic securities-trading systems or medical diagnostics. Researchers are also currently investigating AI’s potential in the legal system. Why should routine, open-and-shut cases with clear legal precedents continue to be tried before highly qualified and expensive judges, when resources for handling the really complex suits are scarce?
Everything that falls under the broad category of health will also be equipped with the most up-to-date artificial intelligence. This includes everything from “wearables”, to diagnostics, to individualised therapy.
And then there is the broad field of smart and autonomous machines, such as self-learning robots in industrial production or the first wave of home care robots. Then comes mobility, i.e. self-driving vehicles that can transport people or goods on land or on water. Or the Internet of Things (IoT), which includes smart homes, smart cities and industrial robotics.
A further aspect of artificial intelligence is digitally automated perception and pattern recognition, i.e. the digitised processing of information from thermal, tactile or acoustic sensors, or via cameras and microphones.
Cognitive simulation includes machine processing of spoken language (natural language processing, or NLP), which is widely used on the consumer market, for example in smartphone assistants such as Siri, Cortana or Google Now. In the call centres of the future, we will be talking mostly to chatbots, but they will be a far cry from the taped announcements and robotic voices of today.
How do you assess the risks of artificial intelligence?
Algorithms are pushing us towards a herd mentality that is potentially dangerous. Digital business models tend to create monopolies: just consider Google, Facebook and Amazon in their respective core businesses. What will happen in the future if there is only a single expert opinion on many questions, instead of a diversity of opinions? A technical monoculture of artificial intelligence systems, mixed with strong political or economic interests, is something that I do worry about.
Another aspect is the ability of AI systems to learn by themselves. We cannot fully predict how such machines will react in the future. And time is also a factor: How much time will we have to change a faulty algorithm? What will be the time window for making human corrections? That is a systemic risk that cannot be easily dismissed.
... and what about the opportunities?
I personally see great opportunities. Take what Erik Brynjolfsson and Andrew McAfee said in their prize-winning book “The Second Machine Age – Work, Progress, and Prosperity in a Time of Brilliant Technologies”. The authors describe how we are at a crossroads – at the beginning of an upheaval as great as the Industrial Revolution. New technologies are not only exponential and digital but also combinable, and their usefulness has only just begun to unfold.
Artificial intelligence systems will be very beneficial for humanity. Such systems make decisions rationally, without getting excited, stressed or tired. This means they make fewer mistakes, cause fewer accidents, and are safer than systems operated by humans.
Brynjolfsson and McAfee again: “The technologies we are creating provide vastly more power to change the world, but with that power comes greater responsibility. [...] But in the long run, the real questions will go beyond economic growth. As more and more work is done by machines, people can spend more time on other activities. [...] As we have fewer constraints on what we can do, it is then inevitable that our values will matter more than ever.”
What does AI have to do with the insurance industry?
It is important for the insurance industry to understand that it must be a part of this digital progress – in every respect. The adjustments to technological progress will happen within the companies themselves. We have already experienced this with big data and the findings from data analytics. Will we still have traditional underwriters in ten years, or just “smart” algorithms? And yet the digitisation of our traditional core business, and the application of AI to underwriting and claims management, are merely minor aspects.
For me, the product side is much more interesting. International insurance companies are insuring technologies and services that are based on artificial intelligence, and we have to understand what implications this will have. For example, self-driving cars raise the issue of whether vehicle-holder liability is still an appropriate model. Or what are the liability issues, today and in the future, when a medical diagnosis based on AI systems proves to be wrong?
And risk is not the only relevant aspect. The use of AI instruments, in Industry 4.0 and in cyber tools, allows us to implement holistic risk management solutions for our clients. This will lead us away from the purely reactive financial compensation of damages, and towards loss prevention for our clients, as well as offering them significantly improved services when a loss does occur. There is enormous business potential waiting to be tapped in the decades of loss and risk management experience that the insurance industry has accumulated.