Comment by jasoncartwright 2 days ago
If it's autonomous or self-driving, then why is the person in the car paying for the insurance? Surely if it's Tesla making the decisions, they need the insurance?
Or at some point subscribing to a service may be easier than owning the damn thing.
It already doesn't make sense to own a car for me. It's cheaper to just call an Uber.
If you take off the conspiracy hat, you will see that there are many advantages to not owning a product. Such as that the vendor's incentives are better aligned with yours. For example, if the thing breaks, it is in __their__ best interest to fix it (or to not let it break in the first place). This also has positive implications for sustainability.
All Tesla vehicles require the person behind the steering wheel to supervise the operations of the vehicle and avoid accidents at all times.
Also, even if a system is fully automated, that doesn’t necessarily legally isolate the person who owns it or sets it in motion from liability. Vehicle law would generally need to be updated to change this.
But that might be considered a legal trick. Suppose that, when you pay for a taxi, the standard conditions of carriage made it your responsibility to supervise the vehicle's operation and alert the driver so as to avoid accidents. Would the taxi driver and taxi company be able to escape liability through that formalism? Probably not. The fact that Tesla makes you sign something does not automatically make the signed document valid and enforceable.
It may be that it is; but then, if you are required to be watchful at all times, and able to take over from the autonomous vehicle at all times, then the autonomy doesn't really help you all that much, does it?
They say they will, but until relevant laws are updated, this is mostly contractual and not a change to legal liability. It is similar to how an insurance company takes responsibility for the way you operate your car.
If your local legal system does not absolve you from liability when operating an autonomous vehicle, you can still be sued, and Mercedes has no say in this… even though they could reimburse you.
No. They don’t. It was vaporware made to fool people including you. You could never actually order it and it’s canceled now in favor of an L2 system.
Because that's the law of the land currently.
The product you buy is called "FSD Supervised". It clearly states you're liable and must supervise the system.
I don't think there's a law that would allow Tesla (or anyone else) to sell a passenger car with an unsupervised system.
If you take Waymo or Tesla Robotaxi in Austin, you are not liable for accidents, Google or Tesla is.
That's because they operate under limited state laws that allow them to provide such a service, but the law doesn't allow selling such cars to people.
That's changing. Quite likely this year we will have a federal law that will allow selling cars with fully unsupervised self-driving, in which case the insurance/liability will obviously land on the maker of the system, not the person present in the car.
> Quite likely this year we will have a federal law that will allow selling cars with fully unsupervised self-driving, in which case the insurance/liability will obviously land on the maker of the system, not the person present in the car.
You raise an important point here. Is it economically feasible for system makers to bear the responsibility of self-driving car accidents? It seems impossible, unless the cars are much more expensive to cover the potential future costs. I'm very curious how Waymo insures their cars today. I assume they have a bespoke insurance contract negotiated with a major insurer. Also, do we know the initial cost of each Waymo car (to say nothing of ongoing costs from compute/mapping/etc.)? It must be very high (2x?) given all of the special navigation equipment that is added to each car.

> If the car that did a hit-and-run was operated autonomously the insurance of the maker of that car should pay
Why? That's not their fault. If a car hits and runs my uninsured bicycle, the manufacturer isn't liable. (My personal umbrella or other insurance, on the other hand, may cover it.)
They're describing a situation of liability, not mere damage. If your bicycle is hit, you didn't do anything wrong.
If you run into someone on your bike and are at fault then you generally would be liable.
They're talking about the hypothetical where you're on your bike, which was sold as an autonomous bike whose manufacturer's software fully drives it, and it runs into someone and is at fault.
Tacking "Supervised" on the end of "Full Self Driving" is just contradictory. Perhaps if it was "Partial Self Driving" then it wouldn't be so confusing.
That is redundant, and it doesn't make the other any less contradictory.
You can sell autonomous vehicles to consumers all day long. There's no US federal law prohibiting that, as long as they're compliant with FMVSS as all consumer vehicles are required to be.
> Quite likely this year we will have a federal law that will allow selling cars with fully unsupervised self-driving, in which case the insurance/liability will obviously land on the maker of the system, not the person present in the car.
This is news to me. This context seems important to understanding Tesla's decision to stop selling FSD. If they're on the hook for insurance, then they will need to dynamically adjust what they charge to reflect insurance costs.
I see. So the Tesla product they are selling insurance around isn't "Full Self-Driving" or "Autonomous" like the page says.
My current FSD usage is 90% over ~2000 miles (since v14.x). Besides driving everywhere, everyday with FSD, I have driven 4 hours garage to hotel valet without intervention. It is absolutely "Full Self-Driving" and "Autonomous".
FSD isn't perfect, but it is everyday amazing and useful.
> My current FSD usage is 90% over ~2000 miles
I'd guess my Subaru's lane-keeping utilisation is in the same ballpark. (By miles, not minutes. And yes, I'm safer when it and I are watching the road than when I'm watching the road alone.)
Yet they're still relying on you to cover it with your insurance. Again, clearly not autonomous.
Without LIDAR and/or additional sensors, Tesla will never be able to provide "real" FSD, no matter how wonderful their software controlling the car is.
Also, self driving is a feature of a vehicle someone owns; I don't understand how that should exempt anyone from insuring their property.
Waymo and others are providing a taxi service where the driver is not a human. You don't pay insurance when you ride Uber or Bolt or any other regular taxi service.
> Also, self driving is a feature of a vehicle someone owns; I don't understand how that should exempt anyone from insuring their property.
Well practically speaking, there’s nothing stopping anyone from voluntarily assuming liability for arbitrary things. If Tesla assumes the liability for my car, then even if I still require my “own” insurance for legal purposes, the marginal cost of covering the remaining risk is going to be close to zero.
They literally just (in the last few days) started unsupervised robotaxis in Austin.
They are as self-driving as a car can be.
This is different than the one where they had a human supervisor in the passenger seat (which they still do elsewhere).
And different than the one where they didn't have a human supervisor but did have a follow car.
Now they have a few robotaxis that are self driving.
If your minor child breaks something, or your pet bites someone, you are liable.
This analogy may be more apt than Tesla would like to admit, but from a liability perspective it makes sense.
You could in turn try to sue Tesla for defective FSD, but the now-clearly-advertised "(supervised)" caveat, plus the lengthy agreement you clicked through, plus lots of lawyers, makes you unlikely to win.
Can a third party reprogram my dog or child at any moment? Or even take over and control them?
Seems like the role of the human operator in the age of AI is to be the entity they can throw in jail if the machine fails (e.g. driver, pilot)
> Surely if it's Tesla making the decisions, they need the insurance?
Why surely? Turning on cruise control doesn't absolve motorists of their insurance requirement.
And the premise is false. While Tesla does "not maintain as much insurance coverage as many other companies do," there are "policies that [they] do have" [1]. (What it insures is a separate question.)
[1] https://www.sec.gov/ix?doc=/Archives/edgar/data/0001318605/0...
Cruise control is hardly relevant to a discussion of liability for autonomous vehicle operation.
Risk gets passed along until someone accepts it, usually an insurance company or the operator. If the risk were accepted and paid for by Tesla, the cost would simply be passed down to consumers. All consumers, including those who want to accept the risk themselves. In particular, if you have a fleet of cars it can be cheaper to accept the risk and only pay for mandatory insurance, because not all of your cars are going to crash at the same time, and even if they did, not all in the worst way possible. This is how insurance works: by pooling lots of risk, a long-run loss becomes highly improbable.
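To make the pooling point concrete, here's a minimal Monte Carlo sketch; the claim probability and claim size are invented illustrative numbers, not actuarial data. It shows why a large fleet's worst-case per-car loss converges toward the expected value, which is what makes self-insuring a big fleet workable:

```python
import random

# Toy model: each car independently has a small chance of one claim per year.
# Expected per-car loss is CLAIM_PROB * MEAN_CLAIM = $600/year (assumed numbers).
CLAIM_PROB = 0.03      # assumed annual claim probability per car
MEAN_CLAIM = 20_000    # assumed average claim size, in dollars

def worst_per_car_year(fleet_size: int, years: int = 200) -> float:
    """Simulate independent years and return the worst per-car loss observed."""
    worst = 0.0
    for _ in range(years):
        total = sum(
            random.expovariate(1 / MEAN_CLAIM)   # random claim severity
            for _ in range(fleet_size)
            if random.random() < CLAIM_PROB      # did this car have a claim?
        )
        worst = max(worst, total / fleet_size)
    return worst

for size in (10, 1_000, 50_000):
    print(f"fleet of {size:>6,}: worst simulated per-car year ≈ ${worst_per_car_year(size):,.0f}")
```

With these made-up numbers, a ten-car fleet can see a year several times worse than the $600 expectation, while a 50,000-car fleet barely deviates from it.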
I think there is an even bigger insurance problem to worry about: if autonomous vehicles become common and are a lot safer than manually driven vehicles, insurance rates for human-driven cars could wind up exploding as the risk pool becomes much smaller and statistically riskier. We could go from paying $200/month to $2000/month if robotaxis start dominating cities.
> if autonomous vehicles become common and are a lot safer than manually driven vehicles, insurance rates for human-driven cars could wind up exploding as the risk pool becomes much smaller and statistically riskier.
The assumption there is that the remaining human drivers would be the higher risk ones, but why would that be the case?
One of the primary movers of high risk driving is that someone goes to the bar, has too many drinks, then needs both themselves and their car to get home. Autonomous vehicles can obviously improve this by getting them home in their car without them driving it, but if they do, the risk profile of the remaining human drivers improves. At worst they're less likely to be hit by a drunk driver, at best the drunk drivers are the early adopters of autonomous vehicles and opt themselves out of the human drivers pool.
Drunk driving isn't the primary mover of high risk driving. Rather you have:
1. People who can't afford self driving cars (now the insurance industry has a good proxy for income that they couldn't tap into before)
2. Enthusiasts who like driving their cars (cruisers, racers, Hellcat revving, people who like doing donuts, etc...)
3. Older people who don't trust technology.
None of those are good risk pools to be in. Also, if self driving cars go mainstream, they are bound to include the safest drivers overnight, so whatever accidents/crashes happen afterwards are covered by a much smaller and "active" risk pool. Oh, and those self driving cars are expensive:
* If you hit one and are at fault, you might pay out $100-200k, but most states only require $25k-50k of coverage... so you either need more coverage or should expect to pay more per incident.
* Self driving cars have a lot of sensors/recorders. While this could work to your advantage (proving that you aren't at fault), it often isn't (they have evidence that you were at fault). Whereas before fault might have been much more hazy (both at fault, or both no fault).
The biggest factor comes if self driving cars really are much safer than human drivers. They will basically disappear from the insurance market, or somehow be covered by product liability instead of insurance...and the remaining drivers will be in a pool of the remaining accidents that they will have to cover on their own.
Classic car insurance is dirt cheap, even for daily driven stuff. Removing people who don't want to drive and don't care to not suck at it hugely improves the risk pool.
If there's only a small minority of human drivers, people like you will have bigger fish to screech about, and there will be substantially less political will to perpetuate the system; it'll probably go away in favor of a far simpler and cheaper "post up a bond" type thing, and much of the expensive machinery for grading drivers will be dismantled.
> Drunk driving isn't the primary mover of high risk driving.
It kind of is. They're responsible for something like 30% of traffic fatalities despite being a far smaller percentage of drivers.
> People who can't afford self driving cars (now the insurance industry has a good proxy for income that they couldn't tap into before)
https://pubmed.ncbi.nlm.nih.gov/30172108/
But also, wouldn't they already have this by using the vehicle model and year?
> Enthusiasts who like driving their cars (cruisers, racers, Hellcat revving, people who like doing donuts, etc...)
Again something that seems like it would already be accounted for by vehicle model.
> Older people who don't trust technology.
How sure are we that the people who don't trust technology are older? And again, the insurance company already knows your age.
> Also, if self driving cars go mainstream, they are bound to include the safest drivers overnight
Are they? They're more likely to include the people who spend the most time in cars, which is another higher risk pool, because it allows those people to spend the time on a phone/laptop instead of driving the car, which is worth more to people the more time they spend doing it and so justifies the cost of a newer vehicle more easily.
> Oh, and those self driving cars are expensive
Isn't that more of a problem for the self-driving pool? Also, isn't most of the cost that the sensors aren't as common and they'd end up costing less as a result of volume production anyway?
> Self driving cars have a lot of sensors/recorders. While this could work to your advantage (proving that you aren't at fault), it often isn't (they have evidence that you were at fault). Whereas before fault might have been much more hazy (both at fault, or both no fault).
Which is only a problem for the worse drivers who are actually at fault, which makes them more likely to move into the self-driving car pool.
> The biggest factor comes if self driving cars really are much safer than human drivers.
The biggest factor is which drivers switch to self-driving cars. If half of human drivers switched to self-driving cars but they were chosen completely at random then the insurance rates for the remaining drivers would be essentially unaffected. How safe they are is only relevant insofar as it affects your chances of getting into a collision with another vehicle, and if they're safer then it would make that chance go down to have more of them on the road.
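A back-of-the-envelope sketch of that selection argument (the crash probabilities and pool sizes are invented purely for illustration): the remaining human pool's average risk depends on who switches, not on how many.

```python
# 100 hypothetical drivers: 80 low-risk, 20 high-risk (annual crash probability).
drivers = [0.01] * 80 + [0.10] * 20

def avg(pool):
    return sum(pool) / len(pool)

print(f"everyone still driving:    {avg(drivers):.3f}")                    # 0.028

# Half switch to self-driving at random: the remaining mix (and hence the
# pool's average risk) is unchanged.
print(f"random half switches:      {avg([0.01] * 40 + [0.10] * 10):.3f}")  # 0.028

# The high-risk drivers switch first (e.g. would-be drunk drivers):
print(f"high-risk drivers switch:  {avg([0.01] * 80):.3f}")                # 0.010

# The low-risk drivers switch first (e.g. commuters who'd rather not drive):
print(f"low-risk drivers switch:   {avg([0.01] * 20 + [0.10] * 20):.3f}")  # 0.055
```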
Haha, yes, today already sucks badly in many US markets. Imagine what will happen when the only people driving cars manually are "enthusiasts".
I'm guessing that other developed countries don't need 6-7 figure injury coverage.
That's probably the future; Mercedes currently does do this in limited form:
https://www.roadandtrack.com/news/a39481699/what-happens-if-...
Not "currently," "used to": https://www.theverge.com/transportation/860935/mercedes-driv...
It was way too limited to be useful to anyone.
Because the operator is liable? Tesla as a company isn't driving the car, it's a ML model running on something like HW4 on bare metal in the car itself. Would that make the silicon die legally liable?
Sounds like it's neither self-driving, nor autonomous, if I'm on the hook if it goes wrong.
Yeah, Tesla gets to blame the “driver”, and has a history of releasing partial and carefully curated subsets of data from crashes to try to shift as much blame onto the driver as possible.
And the system is designed to set up drivers for failure.
An HCI challenge with mostly autonomous systems is that operators lose their awareness of the system, and when things go wrong you can easily get worse outcomes than if the system was fully manual with an engaged operator.
This is a well known challenge in the nuclear energy sector and airline industry (Air France 447) - how do you keep operators fully engaged even though they almost never need to intervene, because otherwise they’re likely to be missing critical context and make wrong decisions. These days you could probably argue the same is true of software engineers reviewing LLM code that’s often - but not always - correct.
> has a history of releasing partial and carefully curated subsets of data from crashes to try to shift as much blame onto the driver as possible
Really? That's crazy.
It's neither self-driving, nor autonomous, and eventually not even a car (as Tesla slowly exits the car business). It will be "insurance" on speculation-as-a-service, as Tesla skyrockets to a $20T market cap. Tesla will successfully transition from a small-revenue company to a pre-revenue company: https://www.youtube.com/watch?v=SYJdKW-UnFQ
The last few years of Tesla "growth" show how this transition is unfolding. S and X production is shut down; just a few more models left to shut down.
Especially since they can push regressions over the air, and you could be lulled into a sense of safety and robustness that isn't there; then bam, you pay the costs of the regressions, not Tesla.
Who’s the “operator” of an “autonomous” car? If I sit in it and it drives me around, how am I an “operator”?
The point is if the liability is always exclusively with the human driver then any system in that car is at best a "driver assist". Claims that "it drives itself" or "it's autonomous" are just varying degrees of lying. I call it a partial lie rather than a partial truth because the result more often than not is that the customer is tricked into thinking the system is more capable than it is, and because that outcome is more dangerous than the opposite.
Any car has varying degrees of autonomy, even the ones with no assists (it will safely self-drive you all the way to the accident site, as they say). But the car is either driven by the human with the system's help, or is driven by the system with or without the human's help.
A car can't have 2 drivers. The only real one is the one the law holds responsible.
> If it's autonomous or self-driving, then why is the person in the car paying for the insurance? Surely if it's Tesla making the decisions, they need the insurance?
Suppose ACME Corporation produces millions of self-driving cars and then goes out of business because the CEO was embezzling. They no longer exist. But the cars do. They work fine. Who insures them? The person who wants to keep operating them.
Which is the same as it is now. It's your car so you pay to insure it.
I mean, think about it. If you buy an autonomous car, would the manufacturer have to keep paying to insure it forever, as long as you can keep it on the road? The only real options for making the manufacturer carry the insurance are that the answer is no, and they turn off your car after e.g. 10 years, which is quite objectionable; or the answer is "yes", but then you have to pay a "subscription fee" to the manufacturer which is really the insurance premium, which is also quite objectionable because you're then locked into the OEM instead of having a competitive insurance market.
I like your thesis, but what about this: all this self driving debate is nonsense if you require Tesla to pay all damages plus additional damages, "because you were hit by a robot!". That should make sure Tesla improves the system, and that it operates above human safety levels. Then one can forget about legislation and Tesla can do its job.
So to circle back to your thesis: when the car is operating autonomously, the manufacturer is responsible. If it goes broke then what? Then the owner will need to insure the car privately. So Tesla insurance might have to continue to operate (and be profitable).
The question this raises is if Tesla should sell any self-driving cars at all, or instead it should just drive them itself.
> That should make sure Tesla improves the system, and that it operates above human safety levels.
There are two problems with this.
The first is that insurance covers things that weren't really anyone's fault, or that it's not clear whose fault it was. For example, the most direct and preventable cause of many car crashes is poorly designed intersections, but then the city exempts itself from liability and people still expect someone to pay so it falls to insurance. There isn't really much the OEM can do about the poorly designed intersection or the improperly banked curve or snowy roads etc.
The second is that you would then need to front-load a vehicle-lifetime's worth of car insurance into the purchase price of the car, which significantly raises the cost to the consumer over paying as you go because of the time value of money. It also compounds the cost of insurance, because if the price of the car includes the cost of insurance and then the car gets totaled, the insurance would have to pay out the now-higher cost of the car.
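To put rough numbers on the time-value point, a quick sketch; the premium, vehicle lifetime, and discount rate are all assumed figures:

```python
# Front-loading a lifetime of premiums into the sticker price vs. paying
# the same premiums year by year, discounted at an assumed rate of return.
ANNUAL_PREMIUM = 1_500   # assumed yearly premium, in dollars
YEARS = 15               # assumed vehicle lifetime
DISCOUNT = 0.05          # assumed annual discount rate (opportunity cost)

front_loaded = ANNUAL_PREMIUM * YEARS
pay_as_you_go = sum(ANNUAL_PREMIUM / (1 + DISCOUNT) ** t for t in range(YEARS))

print(f"front-loaded at purchase:     ${front_loaded:,.0f}")   # $22,500
print(f"pay-as-you-go, present value: ${pay_as_you_go:,.0f}")  # ≈ $16,348
```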
> The question this raises is if Tesla should sell any self-driving cars at all, or instead it should just drive them itself.
This is precisely the argument for not doing it that way. Why should we want the destruction of ownership in favor of pushing everyone to a subscription service? What happens to poor people who could have had a used car, but now all the older cars go to the crusher because it allows the OEMs to sustain artificial scarcity for the service?
Not all insurance claims are based on the choices of the driver.
It’s because you bought it. Don’t buy it if you don’t want to insure.
Yep, you bought it, you own it, you choose to operate it on the public roads. Therefore your liability.
Well, it's the risk, the combination of driver and vehicle.
It's why young drivers pay more for insurance.
The coder and sensor manufacturers need the insurance for wrongful death lawsuits.
And Musk, for removing lidar, so it keeps jumping across high-speed traffic at shadows because the visual cameras can't see true depth.
99% of the people on this website are coders and know how even one small typo can cause random fails, yet you trust them to make you an alpha/beta tester at high speed?
That is the case everywhere. It is common when buying a product for the contract to include who has liability for various things. The price often changes by a lot depending on who has liability.
Cars are traditionally sold with the customer having liability. Nothing stops a car maker (or even an individual dealer) from selling cars today while taking on all the insurance liability, in any country I know of. They don't, for what I hope are obvious reasons (bad drivers would be sure to buy those cars, since it is a better deal for them and in turn a worse deal for good drivers), but they could.
Self driving is currently sold with the customer bearing liability because that is how it has always been done. I doubt it will change, but only because I doubt there will ever be enough advantage to make it worth it for someone else to take on the liability; I could be wrong.
It isn't fully autonomous yet. For any future system sold as level 5 (or level 4?), I agree with your contention -- the manufacturer of the level 5 autonomous system is the one who bears primary liability and therefore should insure. "FSD" isn't even level 3.
(Though, there is still an element of owner/operator maintenance for level 4/5 vehicles -- e.g., if the owner fails to replace tires below 4/32", continues to operate the vehicle, and it causes an injury, that is partially the owner/operator's fault.)
Wouldn't that requirement completely kill any chance of a L5 system being profitable? If company X is making tons of self-driving cars, and now has to pay insurance for every single one, that's a mountain of cash. They'd go broke immediately.
I realize it would suck to be blamed for something the car did when you weren't driving it, but I'm not sure how else it could be financially feasible.
The way it works in states like California currently is that the permit holder has to post an insurance bond that accidents and judgements are taken out against. It's a fixed overhead.
Generally speaking, liability for a thing falls on the owner/operator. That person can sue the manufacturer to recover the damages if they want. At some point, I expect it to become somewhat routine for insurers to pay out, then sue the manufacturer to recover.