An Insurance Journal article on Tesla's insurance program talks about how Tesla collects a safety score,
"a score that captures how safely the policyholder drives by monitoring forward collision warnings per 1,000 miles, hard braking, aggressive turning, unsafe following time and forced Autopilot disengagement [...]".In simple terms, Tesla is collecting data to predict how likely you are to have an accident to adjust your rate accordingly. This is not a new concept. You may have heard of the black boxes some insurances ask policyholders to install for the same purpose. It is seemingly fair. But a leading AI company like Tesla could ultimately take this to another level.
Tesla could improve its forecasts with more data and models covering location, traffic and weather. It could then personalise the predictions further with increasingly advanced machine learning algorithms, enhancing them with emotional cues from car cameras and your social media. Eventually, it could build digital twins: individual models, one per customer, that forecast with high accuracy who will have an accident and when. An uncomfortable thought, but today we happily share our shopping and viewing behaviour, something that seemed invasive and awkward a decade ago.
What will the outcome be? Will your car or insurer deny you service because you are likely to have an accident on an individual trip or within a defined period? Or will it raise your premium by thousands of per cent in anticipation of a crash?
Imagine a personal car insurance policy, adjusted monthly, that employs your digital twin and highly accurate forecasting. Say it is offered at $10 for a few months because no incidents are forecast; a great way to sell the insurance. Nice, right? Now imagine it jumps to $10,000 in a subsequent month because your insurer predicts an incident costing $6,000 to $10,000. Why have insurance at all?
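The pricing logic behind that scenario can be sketched in a few lines. This is a purely illustrative toy model (the function name, the flat base fee and all the numbers are hypothetical, not Tesla's or any insurer's actual method): once a premium is set from an individual's predicted loss rather than a pooled one, a confident forecast drives the premium toward the loss itself, and pooling risk no longer buys anything.

```python
def monthly_premium(p_incident: float, expected_cost: float,
                    base_fee: float = 10.0) -> float:
    """Fully individualised premium: flat base fee plus the
    policyholder's own expected loss for the month."""
    return base_fee + p_incident * expected_cost

# A month with no incident forecast: premium stays at the base fee.
low_risk = monthly_premium(p_incident=0.0, expected_cost=8000.0)
print(low_risk)   # 10.0

# A month where the model confidently predicts an incident in the
# $6,000-$10,000 range: the premium approaches the loss itself.
high_risk = monthly_premium(p_incident=0.95, expected_cost=8000.0)
print(high_risk)  # 7610.0
```

At that point the customer is effectively prepaying their own crash, which is the core of the argument: perfect individual prediction dissolves the risk pool that makes insurance worthwhile.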
While this exaggerates the development, it illustrates a trend towards a problematic outcome, and other lines such as health or home insurance may follow. How would insurance products have to adjust to deal with this? The problem is not theoretical, and the development is underway: insurance markets that have the flexibility already push 'risky' groups and individuals into higher-rate plans or exclude them from coverage entirely.
You could argue this encourages better behaviour, such as safer driving. But what if you are poor, live in a riskier area, or have a disability or impairment? Is it fair for you to pay more? How much predictability is desirable to encourage better behaviour and fairer pricing? And at what point does it negate insurance's purpose and become a social problem, or outright discrimination?