Airliner autopilots are also considered to be "mature" technology. Level 2 ADAS, which Tesla's Basic Autopilot is considered to be, must be supervised by drivers at all times. What I would want to see is evidence that Tesla's branding leads to increased driver inattentiveness compared to other ADAS.
Would a typical consumer really look at GM SuperCruise or Subaru EyeSight and think, "I have to supervise these," but look at Tesla Autopilot and think, "Well that obviously drives itself so I can take a nap?"
Should Tesla start actively advertising to counter misunderstandings and misinformation among the general public?
Edited by Fighteer on Sep 22nd 2021 at 1:25:53 PM
Unless you have a staggering amount of faith in human intelligence, yes. Consumers are more likely to think something does what its name says.
Tesla's gone for a phrase in common usage and a descriptive term. These are both loaded with preconceptions about lack of human involvement needed.
It could wastefully spend on trying to re-educate humanity to get what it "really means". Or it could change the bloody name. Or just keep getting criticised because it loves its branding more than it cares about avoidable problems.
Or, you know, try to solve those problems.
For example, the latest over-the-air update adds detection for emergency vehicle lights to the Autosteer system. When these are detected, the vehicle will slow down and sound a chime to alert the driver.
That's just the third choice.
They’d look at the first two and go “That’s a nonsense phrase that means nothing, what does it actually do?” while with the third they go “Oh nice, autopilot, like planes and the cars in MIB, the Bond films and all that shit”.
As annoying as nonsense names generated by a marketing department are, they at least avoid the pitfall of associating the product with a cultural/technological concept that the product very much is not.
And keep the name until the problems are solved?
One day the Autopilot/FSD name will make total sense, but we’re not there yet and Tesla is perfectly capable of backpocketing the name until the technology (and law) is there.
Edited by Silasw on Sep 22nd 2021 at 6:48:41 PM
Okay, so what makes these hypothetical consumers more likely to read and understand the manual and on-screen warnings for GM's and Subaru's systems, but completely ignore the exact same materials for Tesla's? Meanwhile, Tesla's system is objectively far more capable than either of those other two, but all three require driver attentiveness.
Let's hypothesize that Tesla changes the name to "Flarflebloogie". Would we be able to have a conversation about the technology at that point?
Edited by Fighteer on Sep 22nd 2021 at 2:56:33 PM
If they don't read the manual with such systems they won't have any idea what the system does. Tesla Autopilot gives the consumer just enough (mis)information to make them dangerous, while other systems are so blandly named that a consumer who doesn't read the manual won't even know what they've purchased.
People think they know what Autopilot is because they know what autopilot is. People have no reference point for the names of other systems.
I can do multiple conversations at once, but yes, I’d certainly remove my objection if they changed it to even something like “Drive assist” or “Copilot”.
Edited by Silasw on Sep 22nd 2021 at 8:04:09 PM
Well, I'm obviously not a "typical consumer", but when my wife bought her Subaru the salesperson very carefully explained to both of us what EyeSight is, how it works, and that it needs to be supervised. As far as I know, Tesla customers receive similar instruction, whether in-person or via training videos that can be played on the in-car display.
The availability of consumer education is not the problem, although if you are suggesting that most people who buy cars with ADAS don't know they exist or how to use them, that speaks rather poorly of the marketing of those products, does it not?
And that doesn't even cover Ford, which is selling a "handsfree" driving product that is clearly not capable of performing even as advertised, never mind unsupervised.
Edited by Fighteer on Sep 22nd 2021 at 3:08:32 PM
I’m not saying that at all.
I’m saying that there are some pretty dumb consumers out there, and unless we revoke all their licences we should be selling cars with features that said dumb consumers can understand safely.
That is the other option: if we get stuck with driver-supervised systems for a while, we should consider including how to safely use them in driving tests.
It could even be a licence category, like being able to drive an automatic or a manual.
They’re calling something handsfree where you have to use your hands? If so that’s some pretty textbook false advertising.
Edited by Silasw on Sep 22nd 2021 at 8:13:22 PM
You're also fighting the bias of everyone not paying perfect attention, or who just... forgets. As many steps as possible between person and "assume something dangerous" is good.
And if Ford's advertising something as hands free when it isn't, that's also a problem.
Teslarati: Tesla “Request FSD Beta” button formally gets released
Last night, Tesla formally opened the Full Self-Driving Beta program to all Tesla owners who have purchased or subscribed to FSD and who have the most recent core software update (2021.32.22). Per Elon Musk, the actual rollout of the Beta 10.1 software is delayed until the weekend, but owners may begin the process of requesting access now.
When you tap The Button, you will be asked to permit your vehicle to transmit information about your driving habits to Tesla, which will feed into the company's insurance calculator (the same one that will be used to grade your driving for Tesla Insurance). It will then provide direct feedback in terms of a daily "Safety Score" to the user. On a scale of 1 to 100, drivers are scored on the following basic criteria:
Using Autopilot will not trigger any of the above negative scoring events even if the car itself encounters them, but will count towards miles driven for purposes of your Safety Score. In other words, the more time you spend on Autopilot, the less bad driving will count against you.
Maintenance of a high Safety Score for a seven-day period is required to gain access to FSD Beta — the exact requirement is not disclosed. Tesla believes that a typical driver should score at least 80 out of 100.
As should be clear, the point is that Tesla wants safe, attentive drivers to be using the beta software to ensure the lowest possible risk of crashes while it is still imperfect.
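As a toy illustration of why Autopilot miles can only help your score: they add to total mileage but (per the above) cannot generate negative scoring events, so they dilute the per-mile event rate. This is a hypothetical sketch for illustration only, not Tesla's actual Safety Score formula:

```python
def events_per_1000_miles(manual_events: int, manual_miles: float,
                          autopilot_miles: float) -> float:
    """Hypothetical per-mile negative-event rate.

    Autopilot miles count toward total mileage but are assumed to
    produce zero negative scoring events, so adding Autopilot miles
    can only lower this rate.
    """
    total_miles = manual_miles + autopilot_miles
    return 1000 * manual_events / total_miles

# Same manual driving either way; the Autopilot miles halve the rate.
print(events_per_1000_miles(4, 500, 0))    # 8.0 events per 1,000 miles
print(events_per_1000_miles(4, 500, 500))  # 4.0 events per 1,000 miles
```

The function name and the per-1,000-mile normalization are assumptions for the sketch; only the dilution effect itself comes from the post.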
Edited by Fighteer on Sep 26th 2021 at 8:51:37 AM
A Tesla FSD Beta program member just completed a coast-to-coast road trip from Los Angeles to Miami completely on FSD Beta 10.0.1. Their tally: 16 disengagements, not counting Supercharger stops, or one every 169 miles.
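The quoted rate is consistent with the trip length, assuming an LA-to-Miami route of roughly 2,700 road miles (a figure the post does not state):

```python
# 16 disengagements over an assumed ~2,700-mile LA-to-Miami drive;
# the post only gives the per-disengagement figure.
trip_miles = 2_700
disengagements = 16
print(round(trip_miles / disengagements))  # ~169 miles per disengagement
```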
The software is clearly not perfect, but it is improving at a remarkable pace.
At CodeCon today, Elon Musk said something very relevant to the future of self-driving. It applies whether you are talking about Tesla, Waymo, MobilEye, or any other technology:
Can we as a species handle the cognitive biases that are going to arise around self-driving — indeed, are already present? When a human driver makes a mistake and crashes, possibly killing someone, we just sort of brush that off. When a self-driving car does something correctly that a human driver wouldn't, we barely notice, but when a self-driving car crashes, we freak out as if Skynet had come to life.
We insist on driving ourselves even if that carries a 10x or 100x higher probability of crashing because our brains are terrible at statistics and because we tend to dramatically overestimate our own competence.
How do we deal with these biases? They will hold back autonomy for years if not longer if we don't find a solution, and every year that happens another million-plus people die unnecessarily.
Edited by Fighteer on Sep 28th 2021 at 7:57:34 AM
Thing is, we can handle surrendering control. We do it every time we use any form of mass-transit, where either a human that isn’t us or an automated system is in control.
That may cause issues however, as it makes the US (with its historic dislike of mass-transit and obsession with rugged individualism) a very poor choice of country for trying to roll out self-driving cars.
But if EU regulators had their way, self-driving wouldn't come into use for decades. How is it supposed to be trained and get better at driving if it isn't allowed to operate to collect that data?
Wasn't most of the data they wanted to collect about recognising what was going on in the road?
And a small sample size under limited conditions got them to this point anyway, so you're just being hyperbolic.
Are they forbidding data collection or are they forbidding an early-access style system being used for data collection?
Because those two aren’t the same thing, even if Tesla finds them the same thing because of its business model.
Hmm. Maybe someone with more information on the precise EU regulatory environment should contribute. I do know that Elon Musk has discussed the difficulty of getting any sort of self-driving technology approved by regulators. That could be specifically about Tesla's methodology but it didn't sound that way.
Are ADAS-equipped cars allowed to operate in Europe in general?
I don't see why "data gathering" would inherently be a problem, regardless of whether Tesla is collecting data via its fleet of consumer cars or MobilEye is equipping vehicles with lidar and high-resolution cameras, if neither of those is actively driving the cars at the moment.
Edited by Fighteer on Sep 29th 2021 at 3:45:16 PM
According to Wikipedia's source, 8% of new cars sold in Europe in Q2 2019 were sold with Level 2 autonomous driving features.
So it looks like this is a Tesla specific problem.
However, Tesla Autopilot seems to be among the brands permitted to operate. So that's not the problem; it's the Full Self-Driving suite. If Tesla wanted, it could leave Basic Autopilot in place and call it a day, but that's not the agenda.
Edited by Fighteer on Sep 29th 2021 at 4:59:34 AM
As Full Self-Driving isn’t yet a fully developed self-driving system I can see why European regulators are going to be hesitant. By Tesla’s own metrics the system is still in beta testing.
I will admit though, as the driver has to be providing constant supervision to the system I don't actually understand what FSD currently provides to consumers. I get what it will eventually provide (a self-driving car) and I get what people running the software provides to Tesla (a cheap way to test the software), but what does FSD actually give a consumer right now?
Edited by Silasw on Sep 29th 2021 at 10:11:59 AM
This is a very good question, and it's one that I haven't satisfactorily answered for myself. FSD in its current state (not counting the beta) provides a number of very powerful features above Basic Autopilot. The ones that I would consider "complete" in that they can be used without extraordinary supervision include:
Use of these features can save stress and preserve attention for important events while driving. The main issue with long-distance driving is that humans become tired and/or bored, and NOA in particular helps solve that problem.
I think that many Tesla owners would agree that a price tag of $10,000 (or $199 per month) is steep for that sort of functionality. People may joke about "Elon time", but I've spoken to a few who feel that the wait for a "feature complete" system hasn't been worth it.
Others, however, feel that it is worthwhile to participate in the development of a technology that can revolutionize driving and save lives. So there is that to consider. Judging by the interest in FSD Beta based on how many drivers have pressed "the button" to request it, there is an enormous pent-up demand for delivery on Elon's promises.
And, of course, there's the FOMO aspect: buy into FSD now and you are set when the robo-taxi revolution comes and your car starts making money for you as a side gig. This one feels a bit more speculative and I'm not sure that I would pay for it on that promise alone. I might have felt differently a few years ago, but I see too many obstacles to feel completely confident in the outcome.
Edited by Fighteer on Sep 29th 2021 at 5:32:47 AM
So if the first one on your list is feature complete, then it should be able to meet the relevant UN regulation (and thus the EU regulation that is built upon the UN stuff).
Now, that regulation only came into force in January, so it's possible that beforehand there weren't clear criteria, and that's what Musk has been complaining about.
With the other two you're dealing with parking situations. I'd guess that parking in the US is very different from parking in Europe, as we've got lots of semi-pedestrianised areas here, which introduce a lot of additional elements that systems trained on US streets won't be prepared for.
As for FOMO, if a regulatory authority is doing its job, then a bunch of people wanting to buy something purely due to FOMO should be an argument against authorisation, not in favour of it.
Edited by Silasw on Sep 29th 2021 at 10:49:50 AM
When I say "feature complete" I mean that it is capable of end-to-end (on highways) driving as advertised, but it's still considered a Level 2 ADAS in terms of the need for supervision.
It's hard to describe if you haven't personally experienced it, but even in my own limited experience with a very basic ADAS (Subaru EyeSight), I feel less stressed when I don't have to constantly maintain speed and steering and can watch for potential problems that the system can't handle. It would be even better if it could handle lane changes and navigation.
None of this liberates you from monitoring it and taking over if necessary, and any company advertising this (or implying it) would be in major trouble with regulators. That said, we have statistics to prove that ADAS increases overall road safety.
The critical distinction to keep in mind is that most Level 2 ADAS manufacturers are not trying to make their systems Level 5 capable. There is a sharp line between the two, except for Tesla and a few others (MobilEye being one of them). Tesla is trying to take a Level 2 system and teach it to be Level 5 capable. This means it has to go through the growing pains between those stages.
This is hurting my head. If it's a Level 2 system (even on the highway on a pre-planned route) then it's not capable of end-to-end driving, as if it were it would be Level 3 or maybe even Level 4.
Are you trying to restart the naming argument?
Sure, but where it undergoes the growing pains is a matter for debate. You've got everything from secure courses with highly trained drivers to public roads with Joe Bloggs behind the wheel of the test vehicle.
So I think I've found the EU law that draws from the UN regulation; it comes into force in 2022 and supposedly lays out regulatory requirements for automated vehicles. I've not given it a read yet, but if the EU is looking to have some standardised criteria that automakers can start meeting from 2022, then the bar has been set and the manufacturers just have to reach it.