By Christian Prokopp on 2022-06-20
Insurance works because it shares costs in the face of uncertainty. What happens when Tesla removes uncertainty and distributes cost seemingly more fairly? First partially and eventually wholly? Will insurance fail, doing more harm than good?
An Insurance Journal article on Tesla's insurance program talks about how Tesla collects a safety score,
"a score that captures how safely the policyholder drives by monitoring forward collision warnings per 1,000 miles, hard braking, aggressive turning, unsafe following time and forced Autopilot disengagement [...]". In simple terms, Tesla collects data to predict how likely you are to have an accident and adjusts your rate accordingly. This is not a new concept. You may have heard of the black boxes some insurers ask policyholders to install for the same purpose. It seems fair. But a leading AI company like Tesla could ultimately take this to another level.
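To make the idea concrete, here is a minimal sketch of how a telemetry-based safety score might be computed. The factor names come from the quote above; the weights, normalisation, and formula are purely illustrative assumptions, not Tesla's actual model.

```python
# Hypothetical safety score from driving telemetry.
# Factors follow the article's quote; weights are invented for illustration.

def safety_score(miles, collision_warnings, hard_brakes, aggressive_turns,
                 unsafe_follow_events, autopilot_disengagements):
    """Return a 0-100 score; higher means safer driving."""
    per_1000 = 1000.0 / max(miles, 1)
    # Normalise each raw event count to events per 1,000 miles.
    rates = {
        "collision_warnings": collision_warnings * per_1000,
        "hard_braking": hard_brakes * per_1000,
        "aggressive_turning": aggressive_turns * per_1000,
        "unsafe_following": unsafe_follow_events * per_1000,
        "autopilot_disengagement": autopilot_disengagements * per_1000,
    }
    # Illustrative penalty weights per event per 1,000 miles.
    weights = {
        "collision_warnings": 2.0,
        "hard_braking": 1.0,
        "aggressive_turning": 0.5,
        "unsafe_following": 1.5,
        "autopilot_disengagement": 3.0,
    }
    penalty = sum(weights[k] * rates[k] for k in rates)
    return max(0.0, 100.0 - penalty)
```

A clean 1,000-mile month scores 100; each monitored event shaves points off, and the insurer maps the score to a rate tier.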
Tesla could improve forecasts with more data and models covering location, traffic and weather. It could then personalise the predictions further with increasingly advanced machine learning algorithms, enhancing them with cues about emotions from car cameras and your social media. Eventually, it could create digital twins, individual models per customer, that forecast with high accuracy who will have an accident and when. A bit of an uncomfortable thought, but we happily share our shopping and viewing behaviour today. That seemed invasive and awkward a decade ago, but no more.
What will the outcome be? Will your car or insurer deny you service because you are likely to have an accident on an individual trip or within a defined period? Or will it raise your premium by thousands of percent in anticipation of a crash?
Imagine personal car insurance that adjusts monthly, driven by your digital twin and highly accurate forecasting. Say it costs $10 for a few months because no incidents are forecast, which makes it a great way to sell the policy. Nice, right? Now imagine it jumps to $10,000 in a subsequent month because your insurer predicts an incident costing $6,000 to $10,000. Why have insurance at all?
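The scenario above is just expected-loss pricing taken to its limit. A minimal sketch, assuming a hypothetical per-customer incident probability, an expected claim cost, and an invented loading factor for expenses and profit:

```python
# Expected-loss ("actuarially fair") pricing with a per-customer forecast.
# Probabilities, costs, and the loading factor are illustrative assumptions.

def monthly_premium(p_incident, expected_cost, loading=1.25):
    """Premium = forecast expected loss, scaled by a loading factor."""
    return p_incident * expected_cost * loading

# Quiet month: a near-zero forecast yields a token premium.
quiet = monthly_premium(0.001, 8000)    # $10.00
# Predicted-crash month: as the forecast approaches certainty, the
# premium converges on the claim itself, and risk pooling collapses.
crash = monthly_premium(1.0, 8000)      # $10,000.00
```

The better the forecast, the less the premium spreads cost across customers and across time: each policyholder simply prepays their own predicted claim.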
While this exaggerates the development, it shows a trend towards a problematic outcome. Other lines of insurance, like health or home cover, may follow similarly. How would insurance products have to adjust to deal with this? The problem is not theoretical, and the development is underway. Insurance markets with the flexibility to do so already push 'risky' groups and individuals into higher-rate plans or exclude them from coverage entirely.
You could argue it encourages better behaviour, like safer driving. But what if you are poor and live in a riskier area, or have a disability or impairment? Is it fair for you to pay more? How much predictability is desirable to encourage better behaviour and fairer pricing? At what point does it negate insurance's purpose and become a social problem or discrimination?
Christian Prokopp, PhD, is an experienced data and AI advisor and founder who has worked with Cloud Computing, Data and AI for decades, from hands-on engineering in startups to senior executive positions in global corporations. You can contact him at email@example.com for inquiries.