Self Driving Car Insurance
(lemonade.com)
142 points by KellyCriterion 2 days ago
One thing that was unclear to me from the stats cited on the website is whether the quoted 52% reduction in crashes is when FSD is in use, or overall. This matters because people are much more likely to use FSD in situations where driving is easier. So, if the reduction is just during those times, I'm not even sure that would be better than a human driver.
As an example, let's say most people use FSD on straight US Interstate driving, which is very easy. That could artificially make FSD seem safer than it really is.
My prior on this is supervised FSD ought to be safer, so the 52% number kind of surprised me, however it's computed. I would have expected more like a 90-95% reduction in accidents.
I think this might be right, but it does two interesting things:
1) it lets Lemonade reward you for taking safer driving routes (or living in a safer area to drive, whatever that means)
2) it (for better or worse) encourages drivers to use it more. This will improve Tesla's training data but also might negatively impact the FSD safety record (an interesting experiment!)
> ...but also might negatively impact the fsd safety record (an interesting experiment!)
As a father of kids in a neighborhood with a lot of Teslas, how do I opt out of this experiment?
The insurance industry is a commercial prediction market.
It is often an indicator of true honesty, providing there is no government intervention. Governments intervene in insurance/risk markets when they do not like the truth.
I tried to arrange insurance for an obese western expatriate several years ago in an Asian country, and the (western) insurance company wrote a letter back saying the client was morbidly obese and statistically likely to die within 10 years, and they should lose x weight before they could consider having insurance.
> providing there is no government intervention.
You mean like forcing people to buy it and then shaping what products can and can't be offered with a spiderweb of complex rules?
> quite skeptical of Tesla's reliability claims
I'm sceptical of Robotaxi/Cybercab. I'm less sceptical that FSD, supervised, is safer than fully-manual control.
Where I live isn't particularly challenging to drive (rural Washington), but I'm constantly disengaging FSD for doing silly and dangerous things.
Most notably, my driveway meets the road at a blind Y intersection, and my Model 3 just blasts out into the road even though you cannot see cross traffic.
FSD stresses me out. It's like I'm monitoring a teenager with their learner's permit. I can probably count the number of trips where I haven't had to take over on one hand.
I use it for 90% of my driving in Austin and it’s incredible
Having handed over control of my vehicles to FSD many times, I’ve yet to come away from the experience feeling that my vehicle was operating in a safer regime for the general public than within my own control.
Keeping one to two car lengths of stopping distance likely means over a 50% reduction in at-fault damages.
You can get this with just a fairly dumb radar cruise control system, though.
> I'm less sceptical that FSD, supervised, is safer than fully-manual control.
I'm very skeptical that the average human driver properly supervises FSD or any other "full" self driving system.
Supervised FSD — automating 99.9% of driving and expecting drivers to be fully alert for the other 0.1% — appears to go against everything we know about human attention.
Do you drive a HW4? I’m 90% FSD on my total car miles
They don’t bet money on just “I’m quite skeptical because I hate the man”, but on actual data provided by the company.
That’s the difference.
The skepticism and hate is based on observing decades of shameless dishonesty, which is itself a form of data provided by the company: https://motherfrunker.ca/fsd/
Still doesn’t change my point: as of today, being skeptical based on outdated data or historical series is just nonsense. I mean, insurance quotes work in a totally different way.
> betting actual money on those claims
Insurance companies can let marketing influence rates to some degree, with programs that tend to be tacked on after the initial rate is set. This self driving car program sounds an awful lot like safe driver programs like GEICO Clean Driving Record, State Farm Good Driver Discount, and Progressive Safe Driver, Progressive Snapshot, and Allstate Drivewise. The risk assessment seems to be less thorough than the general underwriting process, and to fall within some sort of risk margin, so to me it seems gimmicky and not a true innovation at this point.
If it's autonomous or self-driving, then why is the person in the car paying for the insurance? Surely if it's Tesla making the decisions, they need the insurance?
Generally speaking, liability for a thing falls on the owner/operator. That person can sue the manufacturer to recover the damages if they want. At some point, I expect it to become somewhat routine for insurers to pay out, then sue the manufacturer to recover.
Or at some point subscribing to a service may be easier than owning the damn thing.
All Tesla vehicles require the person behind the steering wheel to supervise the operations of the vehicle and avoid accidents at all times.
Also, even if a system is fully automated, that doesn’t necessarily legally isolate the person who owns it or set it into motion from liability. Vehicle law would generally need to be updated to change this.
Because that's the law of the land currently.
The product you buy is called "FSD Supervised". It clearly states you're liable and must supervise the system.
I don't think there's law that would allow Tesla (or anyone else) to sell a passenger car with unsupervised system.
If you take Waymo or Tesla Robotaxi in Austin, you are not liable for accidents, Google or Tesla is.
That's because they operate on limited state laws that allow them to provide such service but the law doesn't allow selling such cars to people.
That's changing. Quite likely this year we will have federal law that will allow selling cars with fully unsupervised self-driving, in which case the insurance/liability will obviously land on the maker of the system, not person present in the car.
> Quite likely this year we will have federal law that will allow selling cars with fully unsupervised self-driving, in which case the insurance/liability will obviously land on the maker of the system, not person present in the car.
You raise an important point here. Is it economically feasible for system makers to bear the responsibility of self-driving car accidents? It seems impossible, unless the cars are much more expensive to cover the potential future costs. I'm very curious how Waymo insures their cars today. I assume they have a bespoke insurance contract negotiated with a major insurer. Also, do we know the initial cost of each Waymo car (to say nothing of ongoing costs from compute/mapping/etc.)? It must be very high (2x?) given all of the special navigation equipment that is added to each car.
Tacking "Supervised" on the end of "Full Self Driving" is just contradictory. Perhaps if it was "Partial Self Driving" then it wouldn't be so confusing.
You can sell autonomous vehicles to consumers all day long. There's no US federal law prohibiting that, as long as they're compliant with FMVSS as all consumer vehicles are required to be.
> Quite likely this year we will have federal law that will allow selling cars with fully unsupervised self-driving, in which case the insurance/liability will obviously land on the maker of the system, not person present in the car.
This is news to me. This context seems important to understanding Tesla's decision to stop selling FSD. If they're on the hook for insurance, then they will need to dynamically adjust what they charge to reflect insurance costs.
I see. So the Tesla product they are selling insurance around isn't "Full Self-Driving" or "Autonomous" like the page says.
My current FSD usage is 90% over ~2000 miles (since v14.x). Besides driving everywhere, everyday with FSD, I have driven 4 hours garage to hotel valet without intervention. It is absolutely "Full Self-Driving" and "Autonomous".
FSD isn't perfect, but it is everyday amazing and useful.
Without LIDAR and/or additional sensors, Tesla will never be able to provide "real" FSD, no matter how wonderful their software controlling the car is.
Also, self driving is a feature of a vehicle someone owns, I don't understand how that should exempt anyone from insuring their property.
Waymo and others are providing a taxi service where the driver is not a human. You don't pay insurance when you ride Uber or Bolt or any other regular taxi service.
> Also, self driving is a feature of a vehicle someone owns, I don't understand how that should exempt anyone from insuring their property.
Well practically speaking, there’s nothing stopping anyone from voluntarily assuming liability for arbitrary things. If Tesla assumes the liability for my car, then even if I still require my “own” insurance for legal purposes, the marginal cost of covering the remaining risk is going to be close to zero.
If your minor child breaks something, or your pet bites someone, you are liable.
This analogy may be more apt than Tesla would like to admit, but from a liability perspective it makes sense.
You could in turn try to sue Tesla for defective FSD, but the now-clearly-advertised "(supervised)" caveat, plus the lengthy agreement you clicked through, plus lots of lawyers, makes you unlikely to win.
Can a third party reprogram my dog or child at any moment? Or even take over and control them?
Seems like the role of the human operator in the age of AI is to be the entity they can throw in jail if the machine fails (e.g. driver, pilot)
> Surely if it's Tesla making the decisions, they need the insurance?
Why surely? Turning on cruise control doesn't absolve motorists of their insurance requirement.
And the premise is false. While Tesla does "not maintain as much insurance coverage as many other companies do," there are "policies that [they] do have" [1]. (What it insures is a separate question.)
[1] https://www.sec.gov/ix?doc=/Archives/edgar/data/0001318605/0...
Cruise control is hardly relevant to a discussion of liability for autonomous vehicle operation.
Risk gets passed along until someone accepts it, usually an insurance company or the operator. If the risk was accepted and paid for by Tesla, then the cost would simply be passed down to consumers. All consumers, including those that want to accept the risk themselves. In particular, if you have a fleet of cars it can be cheaper to accept the risk and only pay for mandatory insurance, because not all of your cars are going to crash at the same time, and even if they did, not all in the worst way possible. This is how insurance works, by amortizing lots of risk to make it highly improbable to make a loss in the long run.
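The amortization point in the comment above can be illustrated with a toy Monte Carlo sketch. The crash probability and claim size below are invented numbers for illustration, not industry figures; the point is only that a single car's annual loss is all-or-nothing while a large pool's per-car average hovers near the expected value.

```python
import random

random.seed(0)

CRASH_PROB = 0.05      # assumed annual crash probability per car (invented)
MEAN_CLAIM = 20_000    # assumed average claim size in dollars (invented)

def annual_loss():
    """Loss for one car in one year: usually 0, occasionally a large claim."""
    return random.expovariate(1 / MEAN_CLAIM) if random.random() < CRASH_PROB else 0.0

def avg_loss(fleet_size):
    """Average per-car loss across a pooled fleet."""
    return sum(annual_loss() for _ in range(fleet_size)) / fleet_size

# One car's outcome swings between $0 and tens of thousands; a 10,000-car
# pool's per-car average stays close to CRASH_PROB * MEAN_CLAIM = $1,000.
single = [avg_loss(1) for _ in range(10)]
pooled = avg_loss(10_000)
print(single)
print(pooled)
```

This is the whole business model in miniature: the pool's average loss is predictable enough to price a premium on, even though any individual car's loss is not.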
I think there is an even bigger insurance problem to worry about: if autonomous vehicles become common and are a lot safer than manual driven vehicles, insurance rates for human driven cars could wind up exploding as the risk pool becomes much smaller and statistically riskier. We could go from paying $200/month to $2000/month if robo taxis start dominating cities.
> if autonomous vehicles become common and are a lot safer than manual driven vehicles, insurance rates for human driven cars could wind up exploding as the risk pool becomes much smaller and statistically riskier.
The assumption there is that the remaining human drivers would be the higher risk ones, but why would that be the case?
One of the primary movers of high risk driving is that someone goes to the bar, has too many drinks, then needs both themselves and their car to get home. Autonomous vehicles can obviously improve this by getting them home in their car without them driving it, but if they do, the risk profile of the remaining human drivers improves. At worst they're less likely to be hit by a drunk driver, at best the drunk drivers are the early adopters of autonomous vehicles and opt themselves out of the human drivers pool.
Drunk driving isn't the primary mover of high risk driving. Rather you have:
1. People who can't afford self driving cars (now the insurance industry has a good proxy for income that they couldn't tap into before)
2. Enthusiasts who like driving their cars (cruisers, racers, Helcat revving, people who like doing donuts, etc...)
3. Older people who don't trust technology.
None of those are good risk pools to be in. Also, if self driving cars go mainstream, they are bound to include the safest drivers overnight, so whatever accidents/crashes happen afterwards are covered by a much smaller and "active" risk pool. Oh, and those self driving cars are expensive:
* If you hit one and are at fault, you might pay out $100-200k, but most states only require $25k-$50k of coverage...so you need more coverage or should expect to pay more per incident.
* Self-driving cars have a lot of sensors/recorders. While this could work to your advantage (proving that you aren't at fault), it often won't (they have evidence that you were at fault), whereas before, fault might have been much more hazy (both at fault, or both no fault).
The biggest factor comes if self driving cars really are much safer than human drivers. They will basically disappear from the insurance market, or somehow be covered by product liability instead of insurance...and the remaining drivers will be in a pool of the remaining accidents that they will have to cover on their own.
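The "shrinking pool" effect described above is easy to sketch numerically. The risk distribution below is invented for illustration; the point is only the selection effect when the safest drivers leave the human pool first.

```python
# Toy adverse-selection sketch: if the safest drivers switch to self-driving
# cars first, the average risk of the remaining human pool rises, and
# premiums rise with it. All risk numbers are invented.

def remaining_pool_rate(risks, adopted_fraction):
    """Average crash rate after the safest `adopted_fraction` leave the pool."""
    stay = sorted(risks)[int(len(risks) * adopted_fraction):]
    return sum(stay) / len(stay)

# 1,000 drivers with annual crash rates spread evenly from 1% to 10%.
risks = [0.01 + 0.09 * i / 999 for i in range(1000)]

before = remaining_pool_rate(risks, 0.0)   # full pool: ~5.5% average
after = remaining_pool_rate(risks, 0.5)    # safest half adopts AVs: ~7.8%

print(round(before, 4), round(after, 4))
```

Nobody in the remaining pool drove any worse, yet the pool's average claim rate (and hence its premium) went up by more than 40% purely from who left.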
Haha, yes, today already sucks badly in many US markets. Imagine what will happen when the only people driving cars manually are "enthusiasts".
I'm guessing that other developed countries don't need 6-7 figure injury coverage.
That's probably the future; Mercedes currently does do this in limited form:
https://www.roadandtrack.com/news/a39481699/what-happens-if-...
Not "currently," "used to": https://www.theverge.com/transportation/860935/mercedes-driv...
It was way too limited to be useful to anyone.
Because the operator is liable? Tesla as a company isn't driving the car, it's a ML model running on something like HW4 on bare metal in the car itself. Would that make the silicon die legally liable?
Sounds like it's neither self-driving, nor autonomous, if I'm on the hook if it goes wrong.
Yeah, Tesla gets to blame the “driver”, and has a history of releasing partial and carefully curated subsets of data from crashes to try to shift as much blame onto the driver as possible.
And the system is designed to set up drivers for failure.
An HCI challenge with mostly autonomous systems is that operators lose their awareness of the system, and when things go wrong you can easily get worse outcomes than if the system was fully manual with an engaged operator.
This is a well known challenge in the nuclear energy sector and airline industry (Air France 447) - how do you keep operators fully engaged even though they almost never need to intervene, because otherwise they’re likely to be missing critical context and make wrong decisions. These days you could probably argue the same is true of software engineers reviewing LLM code that’s often - but not always - correct.
> has a history of releasing partial and carefully curated subsets of data from crashes to try to shift as much blame onto the driver as possible
Really? That's crazy.
It's neither self-driving, nor autonomous, and eventually not even a car! (as Tesla slowly exits the car business). It will be 'insurance' on Speculation as a Service, as Tesla skyrockets to a $20T market cap. Tesla will successfully transition from a small-revenue to a pre-revenue company: https://www.youtube.com/watch?v=SYJdKW-UnFQ
The last few years of Tesla 'growth' show how this transition is unfolding. S and X production is shut down; just a few more models to go.
Especially since they can push regressions over the air and you could be lulled into a sense of safety and robustness that isn’t there and bam you pay the costs of the regressions, not Tesla.
Who’s the “operator” of an “autonomous” car? If I sit in it and it drives me around, how am I an “operator”?
The point is if the liability is always exclusively with the human driver then any system in that car is at best a "driver assist". Claims that "it drives itself" or "it's autonomous" are just varying degrees of lying. I call it a partial lie rather than a partial truth because the result more often than not is that the customer is tricked into thinking the system is more capable than it is, and because that outcome is more dangerous than the opposite.
Any car has varying degrees of autonomy, even the ones with no assists (it will safely self-drive you all the way to the accident site, as they say). But the car is either driven by the human with the system's help, or is driven by the system with or without the human's help.
A car can't have 2 drivers. The only real one is the one the law holds responsible.
> If it autonomous or self-driving then why is the person in the car paying for the insurance? Surely if it's Tesla making the decisions, they need the insurance?
Suppose ACME Corporation produces millions of self-driving cars and then goes out of business because the CEO was embezzling. They no longer exist. But the cars do. They work fine. Who insures them? The person who wants to keep operating them.
Which is the same as it is now. It's your car so you pay to insure it.
I mean think about it. If you buy an autonomous car, would the manufacturer have to keep paying to insure it forever as long as you can keep it on the road? The only real options for making the manufacturer carry the insurance are that the answer is no and then they turn off your car after e.g. 10 years, which is quite objectionable, or that the answer is "yes" but then you have to pay a "subscription fee" to the manufacturer which is really the insurance premium, which is also quite objectionable because then you're then locked into the OEM instead of having a competitive insurance market.
I like your thesis, but what about this: all this self driving debate is nonsense if you require Tesla to pay all damages plus additional damages, "because you were hit by a robot!". That should make sure Tesla improves the system, and that it operates above human safety levels. Then one can forget about legislation and Tesla can do its job.
So to circle back to your thesis: when the car is operating autonomously, the manufacturer is responsible. If it goes broke then what? Then the owner will need to insure the car privately. So Tesla insurance might have to continue to operate (and be profitable).
The question this raises is if Tesla should sell any self-driving cars at all, or instead it should just drive them itself.
> That should make sure Tesla improves the system, and that it operates above human safety levels.
There are two problems with this.
The first is that insurance covers things that weren't really anyone's fault, or that it's not clear whose fault it was. For example, the most direct and preventable cause of many car crashes is poorly designed intersections, but then the city exempts itself from liability and people still expect someone to pay so it falls to insurance. There isn't really much the OEM can do about the poorly designed intersection or the improperly banked curve or snowy roads etc.
The second is that you would then need to front-load a vehicle-lifetime's worth of car insurance into the purchase price of the car, which significantly raises the cost to the consumer over paying as you go because of the time value of money. It also compounds the cost of insurance, because if the price of the car includes the cost of insurance and then the car gets totaled, the insurance would have to pay out the now-higher cost of the car.
> The question this raises is if Tesla should sell any self-driving cars at all, or instead it should just drive them itself.
This is precisely the argument for not doing it that way. Why should we want the destruction of ownership in lieu of pushing everyone to a subscription service? What happens to poor people who could have had a used car but now all the older cars go to the crusher because it allows the OEMs to sustain artificial scarcity for the service?
Not all insurance claims are based off of the choices of the driver.
It’s because you bought it. Don’t buy it if you don’t want to insure.
Yep, you bought it, you own it, you choose to operate it on the public roads. Therefore your liability.
If they don’t let you buy, you don’t own. If you don’t own, how is that insurance even available to you?
Well, it's the risk, the combination of factors.
It's why young drivers pay more for insurance.
The coders and sensor manufacturers need the insurance for wrongful death lawsuits,
and Musk for removing LIDAR, so the car keeps jumping across high-speed traffic at shadows because the visual cameras can't see true depth.
99% of the people on this website are coders and know how even one small typo can cause random failures, yet you trust them to make you an alpha/beta tester at high speed?
That is the case everywhere. It is common when buying a product for the contract to include who has liability for various things. The price often changes by a lot depending on who has liability.
Cars are traditionally sold with the customer holding liability. Nothing stops a car maker (or even an individual dealer) from selling cars today while taking on all the insurance liability, in any country I know of. They don't, for what I hope are obvious reasons (bad drivers would be sure to buy those cars since it is a better deal for them, and in turn a worse deal for good drivers), but they could.
Self-driving is currently sold with the customer holding liability because that is how it has always been done. I doubt it will change, but only because I doubt there will ever be enough advantage to make it worth it for someone else to take on the liability - but I could be wrong.
It isn't fully autonomous yet. For any future system sold as level 5 (or level 4?), I agree with your contention -- the manufacturer of the level 5 autonomous system is the one who bears primary liability and therefore should insure. "FSD" isn't even level 3.
(Though, there is still an element of owner/operator maintenance for level 4/5 vehicles -- e.g., if the owner fails to replace tires below 4/32", continues to operate the vehicle, and it causes an injury, that is partially the owner/operator's fault.)
Wouldn't that requirement completely kill any chance of a L5 system being profitable? If company X is making tons of self-driving cars, and now has to pay insurance for every single one, that's a mountain of cash. They'd go broke immediately.
I realize it would suck to be blamed for something the car did when you weren't driving it, but I'm not sure how else it could be financially feasible.
The way it works in states like California currently is that the permit holder has to post an insurance bond that accidents and judgements are taken out against. It's a fixed overhead.
I own a Model Y with hardware version 4. FSD prevented me from getting into an accident with a drunk driver. It reacted much faster to the situation than I could have. Ever since, I'm sold that in a lot of circumstances, machines can drive better than humans.
Tesla fans have not realized that every car made since 2021ish can do this.
It does more than AEB. It also knows to swerve out of the way during E: https://www.youtube.com/watch?v=c1MWml-81e0
About once a month my car makes me look like a piece of shit because the AEB gets confused by lane changes when I maintain speed coming up to slow traffic in order to wait for a good spot to move over. As I go to move over, it flips out and brakes as I slide left, and no amount of gas pedal will override it, so I wind up moving over a lane only to brake-check that lane. Thankfully it doesn't do a full stop, just brakes long enough to realize there's nothing there.
0/10. Someone is gonna cause a multi-car pile up with this.
I'm sure it would work great to prevent me from texting my way into the back of stopped traffic though.
My 2016 Honda Civic has automatic braking (and it has lanekeep assist, so it's technologically superior to a 2026 Tesla).
> Politics is not allowed on HN
Nothing in the guidelines says this. What it does require is "thoughtful and substantive" comments, particularly "as a topic gets more divisive."
This is ridiculously wrong and demonstrates a profound lack of insight into both the history of economics[1] and the current political calculus.
Please don't use rules as a cudgel or at least have more tact doing so.
Hacker News likes to keep conversations focused on the topic at hand. I doubt anyone here thinks politics are irrelevant. We just understand basic courtesy. If your goal is indeed to influence change, you do a massive disservice to the cause by acting immature and injecting your politics into other conversations.
Well, as everyone points out: Musk uses Tesla’s stock to fund things and Tesla’s stock is decoupled from fundamentals like revenue so that means that buying his car is decoupled from funding things. Practically a syllogism.
Great, I’m glad your dictionary is happy about deporting 5 year olds.
“Uhm aktually it’s not a genocide it’s just a fascist police state”
Multiple humanitarian organizations define mass displacement as genocide and/or ethnic cleansing.
The holocaust literally started with mass deportations/detentions. Then the nazis figured out that it was easier to kill detainees.
Tesla has their own insurance product, which is already very competitive compared to other providers. Not sure if Lemonade can beat them. Tesla's insurance product already has a similar objective in place, where it rewards self-driving over manual driving.
Tesla is cooperating with Lemonade on this by providing them necessary user driving data.
If Tesla didn't want Lemonade to provide this, they could block them.
Strategically, Tesla doesn't want to be an insurer. They started the insurance product years ago, before Lemonade also offered this, to make FSD more attractive to buyers.
But the expansion stalled, maybe because the state bureaucracy or maybe because Tesla shifted priority to other things.
In conclusion: Tesla is happy that Lemonade offers this. It makes Tesla cars more attractive to buyers without Tesla doing the work of starting an insurance company in every state.
> But the expansion stalled, maybe because the state bureaucracy or maybe because Tesla shifted priority to other things.
If the math was mathing, it would be malpractice not to expand it. I'm betting that their scheme simply wasn't workable, given the extremely high costs of claims (Tesla repairs aren't cheap) relative to the low rates that they were collecting on premiums. The cheap premiums are probably a form of market dumping to get people to buy their FSD product, the sales of which boosts their share price.
It was not workable. They have a loss ratio of >100% [1], as in they paid out more in claims than they received in premiums, before even accounting for any other costs. Industry average is ~60-80% to stay profitable once other costs are included.
They released the Tesla Insurance product because their cars were excessively expensive to insure, increasing ownership costs, which was impacting sales. By releasing the unprofitable Tesla Insurance product, they could subsidize ownership costs, making the cars more attractive to buy right now, which pumped revenues immediately in return for an "accidental" write-down in the future.
[1] https://peakd.com/tesla/@newageinv/teslas-push-into-insuranc...
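For readers unfamiliar with the term: a loss ratio is just claims paid over premiums earned, so the thresholds in the comment above work out like this (the dollar amounts are illustrative only):

```python
def loss_ratio(claims_paid, premiums_earned):
    """Loss ratio: claims paid out as a fraction of premiums taken in."""
    return claims_paid / premiums_earned

# A ratio above 1.0 means the insurer paid out more in claims than it
# collected, before any operating expenses at all. The comment above cites
# ~60-80% as the range where carriers typically stay profitable.
print(loss_ratio(110, 100))   # > 1.0: unprofitable even before expenses
print(loss_ratio(70, 100))    # 0.7: within the typical profitable range
```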
The math should've mathed. Better data === lower losses right? They probably weren't able to get it to work quite right on the tech side and were eating fat losses during an already bad time in the market.
It'll come back.
Lemonade or Tesla if you find this, let's pilot, i'm a founder in sunnyvale, insurtech vertical at pnp
You'd be very surprised. Distribution works wonders. You could have a large carrier taking over Tesla's own vehicles in markets they care about. The difference then would be loss ratios on the data collection, like does LIDAR data really beat Progressive Snapshot?
The two are measuring data for different sources of losses for carriers.
I was curious what the break-even is where the insurance discount covers the $99/mo FSD subscription. I got a Lemonade quote around $240/mo (12k mi/yr lease on a Model 3), so 50% off would save ~$120/mo - i.e. it would cover FSD and still leave ~$21/mo net. Or, "free FSD if you use it".
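A quick sketch of that back-of-envelope math, using only the figures quoted in the comment ($240/mo quote, 50% discount, $99/mo FSD subscription):

```python
FSD_SUBSCRIPTION = 99.0    # $/month FSD subscription, as cited in the comment
PREMIUM_QUOTE = 240.0      # $/month Lemonade quote, as cited in the comment
DISCOUNT = 0.50            # advertised FSD discount

savings = PREMIUM_QUOTE * DISCOUNT     # 120.0: monthly insurance savings
net = savings - FSD_SUBSCRIPTION       # 21.0: left over after paying for FSD

# Break-even premium: the discount exactly pays for FSD when
# premium * DISCOUNT == FSD_SUBSCRIPTION, i.e. at a $198/month quote.
break_even_premium = FSD_SUBSCRIPTION / DISCOUNT

print(savings, net, break_even_premium)
```

So anyone quoted above $198/mo comes out ahead by subscribing to FSD just for the discount, under these assumed figures.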
I believe, at the end of the day, insurance companies will be the ones driving FSD adoption. The media will sensationalize the outlier issues of FSD software, but insurance companies will set the incentives for humans to stop driving.
> $240 per month?
Are Teslas still ridiculously-expensive to repair? (I pay $1,100 a year (~$92/month) to insure my Subaru, which costs more than a Model 3.)
I don't have a car so I don't know what is normal. i just went through the lemonade quote process. (I have a license and my record is clean, though - so there shouldn't be any high-risk flags.)
Yep, also people who will spend thousands of dollars to get a tiny scratch repaired, because for some reason in the US everyone expects cars to be utterly perfect.
Yep - that's the way to get adoption. While people set the bar too high for self-driving cars, the bar should just be "safer than the average person". I'm an old greying socialist saying that capitalism drives the right outcomes here. Same with low-carbon: insurance will help with climate change mitigation.
Hmmm. The source for the "FSD is safer" claim might not be wholly independent: "Tesla’s data shows that Full Self-Driving miles are twice as safe as manual driving"
I would be surprised if that was what they were actually looking at. They are an established insurance company with their own data and the actuaries to analyze it. I can't imagine them doing this without at least validating a substantial drop in claims relating to FSD capable cars.
Now that they are offering this program, they should start getting much better data by being able to correlate claims with actual FSD usage. They might be viewing this program partially as a data acquisition project to help them insure autonomous vehicles more broadly in the future.
They are a grossly unprofitable insurance company. Your actuaries can undervalue risk to the point you are losing money on every claim and still achieve that.
In fact, Tesla Insurance, the people who already have direct access to the data already loses money on every claim [1].
[1] https://peakd.com/tesla/@newageinv/teslas-push-into-insuranc...
With Tesla expanding into insurance actuarial science, isn't it a conflict of interest if they offer it for their own cars?
> They might be viewing this program partially as a data acquisition project to help them insure autonomous vehicles more broadly in the future
What do you mean?
It doesn't really matter because the insurance company itself will learn if that is correct or not when the claims start coming in
Its their own bet to make
> "Tesla’s data shows that Full Self-Driving miles are twice as safe as manual driving"
Teslas only do FSD on motorways where you tend to have far fewer accidents per mile.
Also, they switch to manual driving if they can't cope, and because the driver isn't paying attention this usually results in a crash. But hey, it's in manual driving, not FSD, so they get to claim FSD is safer.
FSD is not and never will be safer than a human driver.
Wrong on every count. You're embarrassing yourself. Your identity is so tied up in this, haha. Yes, it's safer; no, it's not just freeways; and crashes after disengagements still count.
A 50% discount is pretty damning empirical evidence for FSD being better at driving your Tesla than you are.
A discount they get to set, on a subset of miles of their choice, may just be a marketing expense for an insurance startup that makes losses, relies on VC capital, and needs growth: https://en.wikipedia.org/wiki/Lemonade,_Inc.#2015%E2%80%9320... I was impressed by this until I looked Lemonade up.
No, it does not. A 50% discount while the insurer still earns industry-average profit, or is at least profitable at all, would tell you that. Selling at a loss does not indicate your costs are actually lower. You need to wait until we learn whether it is actually sold at a loss.
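The profitability point can be made concrete with a toy loss-ratio calculation. Every figure below is an illustrative assumption, not Lemonade's actual numbers; the halved claims cost corresponds to the "twice as safe" claim quoted upthread:

```python
# Toy underwriting check: a 50% discount only signals lower risk if the
# discounted premium still covers expected claims. All figures assumed.

def loss_ratio(expected_claims, premium):
    """Claims cost per premium dollar; above 1.0 is an underwriting loss."""
    return expected_claims / premium

base_premium = 2000.0   # assumed annual premium without the FSD discount
base_claims = 1400.0    # assumed expected claims cost (a 70% loss ratio)

discounted_premium = base_premium * 0.5   # the advertised 50% discount
safer_claims = base_claims * 0.5          # if FSD really halves crash costs

print(loss_ratio(base_claims, base_premium))         # 0.7: profitable
print(loss_ratio(safer_claims, discounted_premium))  # 0.7: still profitable
print(loss_ratio(base_claims, discounted_premium))   # 1.4: a loss if FSD isn't safer
```

If the discounted book runs at a loss ratio above 1.0, the discount is marketing spend rather than evidence of safety, which is exactly the point being made here.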
Sir, your bias is extreme.
Ah yes, posting well documented video evidence of reality is bias. How silly of me. The only unbiased take is to ignore my lying eyes and make logically unsound arguments in favor of endangering the public. That is what unbiased people do.
I also like how you completely avoided addressing my argument in favor of an attempted ad hominem.
Yeah I'm actually very curious about this, it's the first I've heard.
I'd like to know what data this is based on, and if Tesla is providing any kind of subsidy or guarantee.
There's also a big difference between the value of car damages and, well, death. E.g. what if FSD is much less likely to get into otherwise common fender benders that don't harm you, but more likely to occasionally accidentally drive you straight into a divider, killing you?
A 50% discount when using FSD, or just doubled insurance-company profits when not using FSD. The only evidence that actually matters is cost in comparison to other insurance companies. If this product is cheaper for you, then it probably does indicate FSD is better at driving than you (well, than the average driver in your demographic). Maybe this is damning with faint praise.
Oh, you've totally forgotten about selling to third parties and making tons of money off of what you do and where you go.
Lemonade purchased Metromile and significantly increased prices, 2.5x if I recall correctly. That forced me to move to Geico. Now that prices have increased and the new self-driving car insurance gives a discount, are you effectively paying the same old rate?
Just curious: was this Lemonade's insurance integrated with the Tesla? How's Geico for you? Probably just fine, right? Any differences?
The whole point of self-driving cars (to me) is I don't have to own or insure it, someone else deals with that and I just make it show up with my phone when I need it.
Imagine this for a whole neighborhood! Maybe it'd be more efficient for the transport to come at regular intervals though. And while we're at it, let's pick up other people along the way, you'll need a bigger vehicle though, perhaps bus-sized...
Half-jokes aside, if you don't own it, you'll end up paying more to the robotaxi company than you would have paid to own the car. This is all but guaranteed based on all SaaS services so far.
This only works in neighborhoods that are veritable city blocks, with buildings several stories tall standing close by. Not something like northern Houston, TX; it barely works for places like Palo Alto, CA. You cannot run buses on every lane, at a reasonable distance from every house.
The point of a car is that it takes you door to door. There's no expectation to walk three blocks from a stop; many US places are not intended for walking anyway. Consider heavy bags from grocery shopping, or similar.
Public transit works in proper cities, those that became cities before the advent of the car, and were not kept in the shape of large suburban sprawls by zoning. Most US cities only qualify in their downtowns.
Elsewhere, rented / hailed self-driving cars would be best. First of all, fewer of them would be needed.
> if you don't own it, you'll end up paying more to the robotaxi company than you would have paid to own the car
Maybe for you, I already don't own it and have not found that to be true. I pretty much order an uber whenever I don't feel like riding my bike or the bus, and that costs <$300 most months. Less than the average used car payment in the US before you even consider insurance, fuel, storage, maintenance, etc.
I also rent a car now and then for weekend trips, that also is a few hundred bucks at most.
I would be surprised if robotaxis were more expensive long term.
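As a sanity check, the comparison above works out to rough monthly arithmetic. The ride-hailing side uses the commenter's own ballpark figures ($300/month of rides, a few hundred dollars of occasional rentals); the ownership-side numbers are assumptions, not real market averages:

```python
# Rough monthly cost comparison: owning a car vs. ride-hailing + rentals.
# The ownership figures are illustrative assumptions only.

def monthly_ownership(payment=520, insurance=150, fuel=120,
                      parking=50, maintenance=80):
    """Assumed typical US used-car costs per month, in dollars."""
    return payment + insurance + fuel + parking + maintenance

def monthly_carless(rides=300, rentals_per_year=6, rental_cost=200):
    """Ride-hailing plus occasional weekend rentals (commenter's ballpark)."""
    return rides + rentals_per_year * rental_cost / 12

print(monthly_ownership())  # 920
print(monthly_carless())    # 400.0
```

Under these assumptions the carless option is well under half the cost, though the gap obviously shrinks for people who drive far more than a few rides a week.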
> Maybe it'd be more efficient for the transport to come at regular intervals though
Efficient for whom is the problem
Focusing only on price, renting a beefy shared "cloud" computer is cheaper than buying one and replacing it every 5 years. It's not always an issue for idle hardware.
Cars are mostly idle and could be cheaper if shared. But why make them significantly cheaper when you can match the price and extract more profits?
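A back-of-envelope utilization sketch behind "cars are mostly idle" (both hour figures are assumptions for illustration, not fleet data):

```python
# Toy utilization math: the same capital cost spread over more service hours.

private_hours_per_day = 1.0   # assumed daily driving time for a private car
fleet_hours_per_day = 10.0    # assumed daily service time for a shared robotaxi

private_utilization = private_hours_per_day / 24   # ~4% of the day in use
fleet_utilization = fleet_hours_per_day / 24       # ~42% of the day in use

# Relative capital cost per hour of actual use:
relative_cost = private_hours_per_day / fleet_hours_per_day
print(relative_cost)  # 0.1: a tenth of the per-hour capital cost
```

That 10x gap is what a fleet operator can either pass on to riders or, as noted above, mostly pocket by matching the incumbent price.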
Cars and personal computers have advantages over shared resources that often make them worth the cost. If you want your transport/compute at busy times you may find limitations. (Ever gotten on the train and had to stand because there are no seats? Ever had to wait for your compute job to start because the machines are all busy? Both of these have happened to me.)
> Cars are mostly idle and could be cheaper if shared. But why make them significantly cheaper when you can match the price and extract more profits?
Yeah, this would rely on robust competition.
For the vast majority of people who own a car, continuing to own the car will remain the better deal. Most people need their car during "rush hour", so there aren't any savings from sharing, and worse, some people have "high standards" and will demand the rental be a clean car nicer than you would accept, thus raising the costs (particularly if you drive used cars). Any remaining argument for a shared car dies when you realize that you can leave your things in the car, and you never have to wait.
For the rest - many of them live in a place where not enough others will follow the same system and so they will be forced to own a car just like today. If you live in a not dense area but still manage to walk/bike almost everywhere (as I do), renting a car is on paper cheaper the few times when you need a car - but in practice you don't know about that need several weeks in advance and so they don't have one they can rent to you. Even if you know you will need the car weeks in advance, sometimes they don't have one when you arrive.
If you live in a very dense area such that you almost regularly use transit (but sometimes walk, bike), but need a car for something a few times per year, then not owning a car makes sense. In this case the density means shared cars can be a viable business model despite not being used very much.
In short, what you say sounds insightful, but the reality of how cars are used means it won't happen for most car owners.
> sometimes they don't have one when you arrive.
Or, if they are Hertz, they might have one but refuse to give it to you. This happened to my wife. In spite of payment already being made to Hertz corporate online, the local agent wouldn't give up a car for a one-way rental. Hertz corporate was less than useless, telling us their system said there was a car available, and suggesting we pay them hundreds of dollars again and go pick it up. When I asked the woman from corporate whether she could actually guarantee we would be given a car, she said she couldn't. When I suggested she call the local agent, she said she had no way to call the local office. Unbelievable.
Since it was last minute, there were... as you said, no cars available at any of the other rental companies. So we had to drive 8 hours to pick her up. Then 8 hours back, which was the drive she was going to make in the rental car in the first place.
Hertz will hurt you.
I think this is purely psychological. The notion of paying for usage of some resource that you don't own is really rather mundane when you get down to it.
If OSM is up to date, that is. In many places it is very outdated (in others it is very good).
Law: when a government changes the driving laws. Government can be federal (I have driven to both Canada and Mexico; getting to Argentina is possible, though I don't think it has ever been safe; likewise it is possible to drive over the North Pole to Europe) or state (or whatever the country calls their equivalent). When a city changes the law they put up signs, but if a state passes a law I'm expected to know it even if I have never driven in that state before. Right-turn-on-red laws are the only ones I can think of where states differ, but there are likely others.
Laws also cover new traffic control systems that may not have been in the original program. If the self driving system can't figure out the next one (think roundabout) then it needs to be updated.
That's the point of self-driving fleets. Or maybe a special category of leased vehicles.
This is about a self-driving car you own.
Yes, this is giving away everything about your vehicle's driving to a third party for sale, or to the manufacturer. I don't like this personally and I don't like it for my vehicle either. Where I go in my vehicle and when I do it is my business. With vehicles being IoT-connected, we are forced to surrender that data, as there is no opt-out except disconnecting the antenna. Not to mention what kind of data is pulled off when it goes in to be serviced.
Not directly related to the topic, I spose, but I have a Model 3, and absolutely love it, but the Smart Cruise Control/Driver Assist is, I hate to admit it, pretty annoying (I think it's gotten worse, too). It's incredibly "jumpy" and over-cautious. A car could pull out in your way 300m ahead of you, totally safely, and the car will shit itself and slam on the brakes to be over-cautious. Same thing with pedestrians who are walking alongside the road, posing no risk.
It's so jarring at times that I'll often skip using the Cruise Control if I have my wife in the car (so as not to give her car sickness) or other passengers (so as not to make them think I'm a terrible driver!).
I have now developed a totally new skill, which is to temporarily disengage it when I see a mistake incoming, then re-engage it immediately after the moment passes.
NB I am in Australia and don't have FSD so this is all just using Adaptive Cruise Control. Perhaps the much harder challenge of FSD (or near-FSD) is executed a lot better, but you wouldn't assume so.
Apologies for my novice question: does the deep-learning neural network give rise to hallucinated braking and acceleration?
I haven't noticed inappropriate acceleration (i.e.: driving at something too fast), but "phantom braking" is real. I'm not sure "hallucinated" is the right term for it, since it's not an LLM, but it definitely can get tricked by shadows or bridges in certain circumstances and start slowing down.
So, here's a thought...
If FSD is going to be a subscription and you will never own your fancy autopilot feature, why should the user pay for insurance?
The user is paying for a service that they do not control and whose workings are completely opaque. How can responsibility ever lie with the user in such a situation?
If you buy an autonomous killing robot and ask it to kill someone, who's responsible?
And how much money will Lemonade make from tracking & selling your exact location?
I'm quite skeptical of Tesla's reliability claims. But for exactly that reason, I welcome a company like Lemonade betting actual money on those claims. Either way, this is bound to generate some visibility into the actual accident rates.