I don't see the article, or quoted sources, even trying to make that comparison - so this is really only a half-story, compared to what's relevant.
Further, given the remote-guidance possibilities with autonomous cars, it's plausible to think they'll eventually be far, far better than human-driven cars at making way for higher-priority traffic.
Human drivers sometimes fail to notice sirens or other high-priority demands on road capacity. But an automated system could broadcast the planned routes of dispatched priority vehicles to every autonomous car in the city, allowing the autonomous cars to preemptively clear paths before it even becomes a matter of local reasoning about an exceptional situation.
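To make that concrete, here's a minimal sketch of the idea. All route and segment names are invented, and a real dispatch protocol would be far more involved; this just shows how trivial the per-car check is once the route broadcast exists:

```python
# Hypothetical sketch: a city dispatch service broadcasts a priority
# vehicle's planned route, and each AV checks for overlap with its own
# planned route so it can pull over before the siren is even audible.

def segments_overlap(av_route, priority_route):
    """Return the road segments the AV should clear, as a set."""
    return set(av_route) & set(priority_route)

def plan_response(av_route, priority_route):
    conflict = segments_overlap(av_route, priority_route)
    if conflict:
        # Clear the shared segments preemptively instead of reacting
        # to a siren at the last moment.
        return ("pull_over", conflict)
    return ("continue", set())

# An AV on Oak St. learns a fire truck will pass through oak_st_4.
action, conflict = plan_response(
    av_route=["oak_st_3", "oak_st_4", "pine_st_1"],
    priority_route=["main_st_9", "oak_st_4", "oak_st_5"],
)
```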
So yes, you might argue that for this particular situation, you "just" need to put in the proper programming and AI/ML training and then "maybe the car will notice more often than a human" as long as the situation is within very specific bounds. At least now that somebody made an article about it.
But it does not change that the autonomous machine does not understand the complex world it is driving in, in the slightest, and that, for example, swerving into an unmanned fruit stand without being able to brake is much better than swerving into an unmanned gas pump.
This is just ridiculous.
If you believe that humans are making rational decisions in split seconds, then you are delusional. A swerving, brakeless, human-driven car will hit whatever happens to be where physics takes it. The scared monkey descendant holding on to the controls is as likely to do the wrong thing as the right thing. Maybe a fighter jet pilot or a rally driver can do better, but I wouldn't count on it.
And besides, how did that AV end up swerving with no brakes? This is the reason why autonomous vehicles are set up with redundant brake actuation. If I had doubts about our ability to stop the car, I would much sooner implement a third independent brake system than try to solve whatever philosophical runaway-trolley problem you are concocting here.
However you construe my particular example to be "ridiculous" under the additional constraints that you imposed on it yourself, right in this article you are looking at an instance where an autonomous vehicle failed to make a reasonable decision that a human then made.
In fact, a human, the garbage truck driver, had to react. They were not a "jet pilot" or a "rally driver", and yet they were perfectly able to resolve the situation that the AV had gotten itself into.
And remote assistance/teleoperation was reacting at the same time, for the vehicle.
Humans will screw up similar cases: end up in the emergency vehicle's way, panic, and do the wrong thing. In fact, I saw a fire truck get impeded on Wednesday because of judgment mistakes by a human driver -- mistakes an AV would be unlikely to make.
So, frequency of "doing the wrong thing" and severity (here, probably measured in seconds) are completely reasonable questions to ask -- even if the circumstances where each tends to mess up are different. I don't think it's reasonable to ask that the autonomous vehicle be superior to a typical human on every axis of performance, just superior overall.
Yes. And autonomous trucks are an engineering problem, not an abstract philosophical question. You can ask the question: “how is the autonomous truck going to know if it should slam into the petrol station or into the fruit stand?” And there is no good answer to that. Or you can ask: “How do we engineer autonomous trucks so the probability of a runaway incident is lower than epsilon?” And then suddenly it turns out this is a solvable problem with our existing tools. (With redundant brakes, and with built-in brake health checks.)
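To put rough numbers on the epsilon argument (the failure rates below are made up for illustration, and the calculation assumes the brake systems fail independently, which is the actual hard engineering part):

```python
# Back-of-the-envelope sketch: with k independent brake systems, each
# failing on a given trip with probability p, a runaway (all systems
# failing at once) happens with probability p**k.  The independence
# assumption is exactly what redundant-system design tries to achieve.

def runaway_probability(p_single_failure, k_systems):
    return p_single_failure ** k_systems

p_two_brakes = runaway_probability(1e-4, 2)    # two redundant systems
p_three_brakes = runaway_probability(1e-4, 3)  # add a third, gain 4 orders of magnitude
```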
> under the additional constraints that you imposed on it yourself
I assume you mean comparing to what a competent human would do? It is implicit in the whole discussion. Human drivers are the current best practice. You are asking about the “fruit stand vs petrol station” question presumably because a human would know to choose the fruit stand.
Nobody is asking this alternate question, because they would immediately feel it is ridiculous: A young boy is crossing the road in front of an AV. In 30 years he will become a politician, will instigate a violent sectarian war which will result in the death of millions of innocents. Should the AV run him over thus preventing all that suffering?
Just to state the obvious: no, the AV should not run the boy over. But why is nobody asking this question? Because it is obviously silly. We as humans can't look at a young boy and know he is a future mass murderer, therefore we don't expect this from an autonomous car either.
> an autonomous vehicle failed to make a reasonable decision
Oh yes. And it is a fascinating one. I was reacting to your “fruit stand vs petrol pump” hypothetical not to the article directly.
It is hopelessly naive to think that these problems can simply be engineered away. In the real world failures happen in redundant systems. Air brakes are supposed to "fail safe", but in reality a host of factors contribute to accidents: how well a truck or trailer's brakes are maintained, engine state, speed, loading, temperature and grade all combine to make them fail. Trains have multiple braking systems, yet sometimes all 3 fail and a spectacular accident occurs.
In addition to all the traditional mechanical issues, self driving vehicles have tonnes of software failure modes that traditional cars do not. More importantly, those software issues are not well understood at this point in time.
If you want to better understand why software can't be trusted to Do The Right Thing, go back and read investigations analyzing failures of systems that have come before. The Therac-25 is a good place to start: https://en.wikipedia.org/wiki/Therac-25
No system a human can build can be completely intrinsically safe. Mistakes by designers occur. Safety is a process that takes time and effort, and it will take decades for self driving cars to work out all the bugs.
I was in a car accident a few years ago where someone left a stop sign without realizing that there was traffic (me) in a lane they couldn't see. I wouldn't describe having felt like time slowed down, but that isn't an absurd way to describe it. I had a weird sense of clarity for the few moments before impact. I was able to slam the brakes, but I was way too close to avoid hitting them. I had a distinct feeling of "the front of the car is heavier and there's a person there". I swerved left instead of right to avoid hitting the front and slammed into the rear passenger side of their car. This spun their car completely around and totaled both cars, but both of us were able to walk away without any injuries.
An interesting summary of how they studied this if you're curious:
It only needs to understand when to ask for help:
the driverless car had correctly yielded to the oncoming fire truck in the opposing lane and contacted the company’s remote assistance workers, who are able to operate vehicles in trouble from afar. According to Cruise, which collects camera and sensor data from its testing vehicles, the fire truck was able to move forward approximately 25 seconds after it first encountered the autonomous vehicle
I don’t understand why people think that driverless cars need to deal with every one in a million scenario; it makes no sense.
This phrase is doing a lot of work here. It's one of my least favorite phrases (in a close race with "just do this") and is often associated with unrealistic feature requests.
I'm sure there is a disagreement between SFFD and Cruise as to exactly what happened, but the article implies that the Cruise vehicle isn't the one that moved to fix the problem.
> The fire truck only passed the blockage when the garbage truck driver ran from their work to move their vehicle.
Even if the Cruise vehicle was able to call for help, the car not only needs to call for help, but also wait for a response (at 4am), and give a remote human enough information to control a car remotely in a safe manner. None of these things are easy... not impossible, but not "only needs" easy.
> driverless cars need to deal with every one in a million scenario
Of course driverless cars need to deal with one in a million scenarios. Human drivers deal with one in a million scenarios every day. Nothing is ever the same when driving, so there are always subtle changes. But even if there is an unusual situation, there must be some kind of response. Even if that response is to move to the side of the road and put on hazard lights to indicate that it doesn't know what to do (which it may have done)... that would have been a better response than to do nothing and sit and wait for a human. There should be a default "unknown input" failure mode. The disagreement here is that SFFD didn't like how the Cruise vehicle failed. Maybe there is a better approach.
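As a sketch of what such a default "unknown input" failure mode could look like (the situation names and the fallback behavior here are invented, not anything Cruise has documented):

```python
# Hypothetical sketch: anything the planner can't classify falls through
# to a safe fallback instead of freezing in the travel lane.

def respond(situation, known_handlers):
    handler = known_handlers.get(situation)
    if handler is None:
        # Default "unknown input" mode: get out of the way and signal.
        return "pull_over_and_hazards"
    return handler()

known = {"red_light": lambda: "stop"}
normal = respond("red_light", known)
fallback = respond("unmapped_construction_zone", known)
```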
We are expecting these vehicles to move us around 24/7. That's a lot of trips. At this rate, one in a million scenarios will happen every day. That's the problem with large numbers -- even rare events are to be expected when N is high enough.
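The arithmetic behind that point is short: the chance of seeing at least one one-in-a-million event over N independent trips grows fast with N. The trip counts below are illustrative only:

```python
# Probability of at least one rare event across n independent trips.
def p_at_least_one(p_event, n_trips):
    return 1 - (1 - p_event) ** n_trips

# At a million trips per day, a one-in-a-million event is more likely
# than not to happen *every single day* (~63%).
daily = p_at_least_one(1e-6, 1_000_000)
```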
If an autonomous vehicle can't cope with a "one in a million scenario", it cannot be autonomous.
That a particular situation is rare does not mean that you won't encounter multiple rare situations in a given time frame.
(By the way, the article stated in the beginning that the garbage truck had to move? Either way, it required manual intervention and a human's situational awareness. How does the car ask for the proper help at freeway speeds within seconds--not even split seconds?)
It seems it would need a way of prioritizing such things. That doesn't seem particularly complicated, to be honest... weighted decision making is certainly within its capacity. E.g. a shopping cart has fewer "avoid hitting" points than a stroller.
It's not like every scenario has to be explicitly programmed in, nor does the program need to run some analysis on a detailed backstory to justify that a baby is more valuable than groceries. In effect, somebody -- probably not a programmer either -- just needs to enter some numbers into a spreadsheet.
(yes there is complex programming that allows that to be manifested in the car's decisions, but the idea that programmers are themselves constantly making "moral calls" in the code, rather than the control data, is fiction)
And if it does have such prioritization in its logic, I'd say yeah, it "understands" the world in that respect. Unless you have defined the word "understand" in some mystical way that precludes non-biological machines by definition.
You are putting the cart before the horse. The problem is not in prioritization, the problem is in having the correct ontology to even get to the "prioritization" stage.
Does the car know what a fruit stand is? Does it know what a gas pump is? Does it know how the fruit stand relates to the gas pump in "expected outcome when being hit by a car"?
If you say "we can program that in", read my post again.
On a level below identifying stop signs and lollipop ladies and push carts, a SDC's stack needs to be able to identify:
1) Driveable areas. If something looks like a cliff maybe don't go there.
2) Fleeting obstacles. Dust blowing in the wind. A stray plastic bag, winging its way northwards to the waiting maw of a baby turtle. A person with a borderline credit score. A stray cat chasing a bug. That sort of thing.
3) Anything else that's physically present in the path of the car. Doesn't matter what it is. Do Not Hit The Thing is the second lesson anyone learns when being taught to drive, after Make It Go So Hit The Thing Is Even An Option.
"the problem is in having the correct ontology to even get to the "prioritization" stage."
That part isn't done by the program; it is done by whoever enters the prioritization numbers. That is, someone, possibly a committee, can dial up the "avoid gas pumps" weighting relative to the "avoid baby stroller" weighting if they are concerned that cars might swerve so widely to avoid coming near a stroller that they risk hitting a different hazard. Or they can dial up the weight of grocery carts relative to dogs, since children might be in a grocery cart. Etc.
Those are humans, who can do whatever ontological analysis they need when deciding on the settings. The car doesn't need to access any of that; it just needs a general lookup table that can help make optimal decisions based on the human-entered value system.
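A toy sketch of that division of labor: the weights are the committee's spreadsheet, and the code just reads them. Every name and number here is invented for illustration, not anything a real AV stack uses:

```python
# The morally loaded numbers live in plain data that a non-programmer
# can edit; the code itself makes no "moral calls".
AVOID_WEIGHTS = {
    "stroller":     1000,
    "pedestrian":   1000,
    "gas_pump":      500,
    "grocery_cart":  200,   # dialed up: might contain a child
    "dog":           150,
    "fruit_stand":    10,
}
UNKNOWN_WEIGHT = 300        # conservative default for unrecognized objects

def least_bad_option(unavoidable_objects):
    """Given the unavoidable options, pick the lowest-weight one to hit."""
    return min(unavoidable_objects,
               key=lambda o: AVOID_WEIGHTS.get(o, UNKNOWN_WEIGHT))

choice = least_bad_option(["gas_pump", "fruit_stand"])
```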
It's completely feasible to imagine writing a list of the top, say, 100 things in the world that a car needs to make morally-significant decisions about, and then deal with every other accident or near-miss after the fact. Interactions with unrecognised objects should be rare enough to be a rounding error when comparing accident rates between autonomous and human-driven miles.
If it’s choosing between hitting a pole and hitting a pony, it should always hit the pole so long as no one is injured.
The real problem is that occasionally these cars today mistake roads for oceans, walls for roads, and people for inanimate poles.
This is all assuming it can distinguish between all of these objects, and that a real person could assign relative moral values to hitting one over the other.
That's a great example.
Not to mention triggering any Rube Goldberg-machine-like chain reaction (even with just a few steps) where a series of events would need to be predicted.
There's an actually measurable rate of humans blocking other traffic, and for how long before resolution, and an actually measurable rate of autonomous vehicles blocking other traffic, and for how long before resolved.
If the rate for autonomous vehicles is already below that of humans, or rapidly headed there, that's far more important to note than to theorize about other corner-cases.
(Also, as a San Francisco driver, I have serious doubts about the "general intelligence" of my fellow drivers. I don't see any reasons to hold autonomous cars to a higher standard – perhaps a much higher standard? – than other cars.)
OP was obviously not using that term mathematically (i.e. the cardinality of the power set of natural numbers), and obviously meant something in the neighborhood of "effectively not countable". And, again, in all but the formal mathematical meaning of the word "countable", many things are not countable (e.g. no one will live long enough to count all the natural numbers).
In the real world, assigning an ordinal number to an object/event/thing has a nonzero time cost. And accounting for every situation in software has a much greater cost.
> If the rate for autonomous vehicles is already below that of humans, or rapidly headed there, that's far more important to note than to theorize about other corner-cases.
Okay, but dangerous human drivers get systematically removed from the streets. Are we doing the same for self-driving cars? In this context, does every Tesla count as the same "driver"?
Maybe all Teslas should have their autonomous driving centrally disabled every time one causes an accident, or breaks a law, until that specific issue is fixed. But it would be impossible to run a car company that way, so of course that takes priority over, say, keeping innocents alive, right? /S
Effectively we are, as long as their developers fix the problems that led to the crash. Change the software and you have a new driver.
What proof do we have that any of these issues have been solved?
And self-driving is a combined hardware/software system, not just software. Certain problems aren't bugs, but limitations of the hardware.
Personally I try to stay alert while I drive, so I feel I'm safer driving myself than letting the machine do it. But I'm less confident that the best autonomous cars will have more accidents than humans in general, and they're improving all the time.
People can affix fake sirens to their cars, and if they blare them at you, you should pull over... after the incident, society then throws an incredibly harsh penalty at the offender.
To make sure things run smoothly, it benefits society to never doubt or question whether people who say they're police officers actually are -- they might have something incredibly important to say (like, hey, there's an active shootout ahead, please don't keep driving and get yourself injured).
If you as an individual create some funny gag to fake out autonomous vehicles into thinking that you're a cop whether you cheekily do it with "TOTALLY NOT A COP CAR" written on the side to get a laugh or not... you're almost certainly going to be charged with a felony crime.
This is missing the entire point of ML. ML is literally defined as not having to explicitly program responses in for every situation.
Needing to pull over because a fire truck has told you it's coming your way in 2 minutes is pretty easy compared with some of those other "uncountably many" situations these cars need to deal with.
> the autonomous machine does not understand the complex world it is driving in
Daniel Dennett would like to have a word with you. It's perfectly possible for systems to "understand" things for any useful definition of the word "understand". A calculator absolutely "understands" arithmetic.
So how do you know how the system will respond to an arbitrary situation? You could easily argue that we don't know how an arbitrary human will respond to an arbitrary situation, but we have systems in place to deal with the consequences if they handle it badly.
For example, if a driver handles a situation badly enough, they could lose their license. If an autonomous car does something bad enough that a human would have lost their license, what happens? Do all of that company's cars get pulled off the road until the bug is fixed and validated?
You put them in that situation and see how they respond. If they respond badly, you keep training them until they respond better. I'm not saying it's easy, but I am saying it's exactly what autonomous-car developers have been doing all this time.
Right, but what do you do with all the other cars on the road that presumably still have the bad behavior (while the fix is being developed)? Just assume that the situation is rare enough that you'll be able to fix it before it happens again?
I suggest you look up the no-free-lunch theorem of supervised learning.
> A calculator absolutely "understands" arithmetic.
I'm not sure how that's anything other than proving my point. Let's say a calculator "understands" arithmetic. It does not understand anything I would apply the calculations I make with it to. I cannot tell it "calculator, go do my taxes".
Your particular example is not even true: A calculator is able to perform calculations, it does not understand any of the axioms, theorems, and uses around it.
It understands the axioms because they've been literally built into its tiny brain. It doesn't understand the uses of arithmetic because nobody programmed it to.
Can you provide me a non-vacuous definition of "understand" that doesn't rely on human consciousness being extra-special and magic?
No they have not. Do you know what an axiom is? Do you know how a calculator is implemented?
It's not about metaphysics, it's about the difference between the experience of a lifetime and that of a car on a road. "Training data", if you like.
What does it mean to understand something? Obviously (to me and I presume to you) a calculator doesn't understand anything! It doesn't have the capacity for understanding. Obviously (according to, I presume, feoren and Dennett) "understanding" means something very different, and a calculator is perfectly capable of "understanding" arithmetic.
The pile of logic gates is an encoding of the axioms. The fact that it evaluates mathematical expressions correctly is both necessary and sufficient to show that it understands arithmetic. Therefore the calculator knows the axioms and understands arithmetic.
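To make "the gates encode the axioms" concrete: the textbook full-adder construction builds binary addition out of nothing but AND, XOR, and OR, which is essentially what a calculator's adder circuit does. Sketched in Python booleans:

```python
# Binary addition built from nothing but Boolean gate operations.
def full_adder(a, b, carry_in):
    s1 = a ^ b                               # XOR gate
    total = s1 ^ carry_in                    # XOR gate
    carry_out = (a & b) | (s1 & carry_in)    # AND and OR gates
    return total, carry_out

def add_bits(x_bits, y_bits):
    """Ripple-carry addition of two equal-length little-endian bit lists."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)
    return out

# 3 (little-endian [1,1]) + 1 (little-endian [1,0]) = 4 ([0,0,1])
result = add_bits([1, 1], [1, 0])
```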
In what way is Boolean logic not maths?
That is absurd and obviously false
It is actually "obviously true" unless you believe that human brains have a special metaphysical magic that makes them "more than just a system". That's literally the only alternative: human brains are magic and only humans are ever capable of "understanding". It's a vacuous definition of the word. Systems can understand things, which is good, because the human brain is nothing more.
See: Daniel Dennett's response to Searle's Chinese Room.
Also I do believe there is a metaphysical difference between a human and a calculator, literally magic
Honestly, the article is pure tech-scare bait. It blames a car that stopped (while it was driving) when a truck was driving at it head-on, but tries to gloss over the fact that a (human-driven) garbage truck is the reason the fire truck had to go into oncoming traffic in the first place.
> a San Francisco Fire Department truck responding to a fire tried to pass a double-parked garbage truck by using the opposing lane.
The reality of driving is that there are a lot of these messy situations, and driverless cars need to be able to handle them.
And importantly, unlike human drivers, autonomous cars will only get persistently better at handling these kinds of situations as they encounter more of them.
It's so easy to just say human drivers aren't as good, but it's not grounded in any reality.
Just take voice controlled assistants for example. We've made pretty big strides on them, haven't we?
And yet an entirely unmotivated teenager who's doing badly at school and working at a 7-Eleven outperforms voice assistants by several orders of magnitude, through the sheer wonder of context clues and general intelligence, no matter how dim you want to assert that teenager to be.
I am proud of my voice assistant when it properly understands that I indeed wanted to set a timer for 50 minutes instead of 15 minutes, while any person would immediately understand what I mean even in a noisy environment, just from why I'm setting the timer, and would even fuzzily adjust their behavior based on what continues to happen.
Anyone who claims automated cars can fully learn the arbitrarily complicated physical situations they have to navigate around is either disingenuous, or does not know how computers work.
You might have an overly optimistic opinion of what AI can do. Not to say that humans are not stupid quite often, but we are very far from having AIs that are as good as a 4-year-old child.
AIs are good at what they know and what they can account for. So yes, they will get better at what they know, but they cannot, by design, get better at what they don’t. So there will always be situations where they will be unbelievably stupid because their designers did not think it would happen. There is no solution to that at the moment.
While I think it's a good idea, I just don't see this happening in the majority of the US other than in major cities. In my state, I just don't see how it would be possible, given that people commute 30+ miles regularly.
After that, just copy the playbook from the Netherlands. Amsterdam was very much like most big American cities until the '70s; since then, its streets have been steadily redesigned for human scale and less car dependency.
It can be done and it can be done easily if Americans stop believing in their Exceptionalism.
The Netherlands has a ridiculously dense population and is completely flat.
It's only a good model for other places with a ridiculous population density and pancake-flat geography.
You couldn't apply the Netherlands model to, for example, Scotland.
Electric bikes are too slow and inefficient, and require expensive hard-to-make components that depend on conflict minerals.
Do watch the videos, especially the "Strong Towns" series about how the rich suburbs are actually subsidized by poorer people living in bankrupt city centers. The only "straightforward" path for this to change is by getting younger people to participate a lot more in town hall meetings and to get their city councils on board with change. Those who benefit from the status quo have no interest in doing that.
1. If fully self-driving cars come along at some point in the recognizable future, I would bet pretty large amounts of money that they'll be extremely popular in Europe or whatever else you're thinking of as not "car-dependent."
2. The amount of money, time, and intellectual capital put into building driverless cars is an infinitesimal drop in the bucket compared to the money, time, and intellectual capital that would be necessary to make a gigantic change in the US physical infrastructure. Particularly time -- there's no way that America is going to change into Europe, well, ever, but particularly not in our lifetimes, so if you want a better transportation system, I wouldn't bet on that.
3. But also, the time, money, and intellectual capital spent on driverless cars is non-rival to any (limited, local) makeovers that American localities are going to get.
2 and 3. The American way for city growth is the cause of financial troubles for most cities. And I cited Amsterdam exactly why it works as an example of how you can redesign a city to reduce car-dependency step-wise. Take a look at https://www.youtube.com/playlist?list=PLJp5q-R0lZ0_FCUbeVWK6... to see the economic problems that plague cities in the US, and how a lot of them are improving simply by changing zoning laws.
2 and 3: You didn't even remotely address anything I said.
That is not the relevant metric. The important metric is: how many trips does the average Dutch person take by car that could be taken by some more efficient mode?
Lots of people in European cities have cars but use them only on the weekend. Or they use the car to go to work on the outskirts of the city, but manage to use public transportation when going to a football match for a night out. Lots of families have only one car when in the US it would be only possible to live if each adult had their own car, etc.
> You didn't even remotely address anything I said.
You are right, I didn't. Because to me holding self-driving cars as the panacea that is going to solve the issues in the US is ridiculously naive. It is an illusion created by the corporations that are not interested in actually solving the problem and just want to push out more cars, keep people addicted to broken lifestyles and over-consumption. And is backed by short-sighted technologists that see the world through their nerd lenses and don't stop to think if there are better, lower-tech ways to improve everyone's lives.
Let's just review this whole conversation:
1. I suggested to a poster who wanted to blame the whole Cruise problem on the garbage truck that garbage trucks do in fact have to block lanes.
2. You then jumped in and said that America should "just" reinvent itself so that cars weren't a big deal. You left it implicit why that was relevant here, but I take you to believe that driverless cars are an unimportant technology.
3. I pointed out that "just" changing the fundamental mode of transit across all of America is in fact a tall order.
4. And then you've gotten less and less responsive to actual points until you get to here where you're now suggesting that driverless cars aren't a panacea, which is an argument nobody made.
To affirmatively restate my points:
1. Driverless cars, to be successful, must deal with a lot of messy road conditions, not just the 99% case.
2. If driverless cars do at some point cross the above line and genuinely become equal to or greater than the median human driver, I think they will be an important technology that will improve people's lives -- in both the United States and very significantly (if possibly somewhat less so) in Europe.
3. Driverless cars are plausibly on a much shorter timeline than remaking all of America's physical transportation/density infrastructure, and plausibly much less overall expensive than remaking all of America's physical transportation/density infrastructure.
4. Investment in driverless cars is non-rival to other changes to America's physical transportation/density infrastructure.
The question is whether people value this more than the benefits of not being car-dependent.
You might think that these are not connected, but they pretty much are. Taking that into consideration, I would bet that only a minority would be willing to accept the trade-off.
If in fact they improve people's lives, then great. And if people see a path to further improving their lives by, separately, changing other things about infrastructure or density or whatever, then that's fine too. Again these things are non-rival.
> And if people see a path to further improving their lives by, separately, changing other things about infrastructure or density or whatever
That's just a really twisted way of saying that you don't mind sacrificing the commons if it means you get to play with cool gadgets. It's a lot easier for one individual to go and buy a driverless Tesla and think "oh, at least my car does not stay in a parking lot after my ride is done" than it is to go and promote actual change in urban planning to avoid car dependency in the first place. But because you are (presumably) on the top of the pyramid, you don't actually care about it.
> Again these things are non-rival.
At first order, it may seem like that. But once we see the interaction of these apparently orthogonal policies (e.g., zoning laws and public infrastructure) and their inconsistent implementations, the problems become very clear. It is hard to argue only with hypotheticals, but I would bet that if driverless cars became a reality tomorrow, cities would be worse in 10 years than they are today.
Also, you have no idea what my positions are about zoning or land use or infrastructure, as I haven't mentioned them.
It'll be the biggest boon to urban transit we've ever seen, because what self-driving actually does is make modality, routing and scheduling less important to individual trip planning.
Roads still require maintenance, and without a profound change in how cities are organized, these costs are always going to be overwhelmingly large compared with the cost of everything else.
> self-driving actually does is make modality, routing and scheduling less important to individual trip planning.
That is the worst type of optimization. It still keeps people in desolate suburbia and leaves millions of people across America with the sensation that "commute time" is a constant that cannot be avoided, so it should at least be made comfortable.
Investing to make transportation systems smarter without even looking at how urban spaces could be changed to get rid of cars is like the (mythical) story of NASA spending millions of dollars to make a space pen instead of using a pencil.
I think self-driving is more established in North America than in Europe due to less strict regulations, more availability of capital and technological investment, and particularly the greater predictability and homogeneity of the car-centric roads when compared to European countries. I'd like to see how a self-driving car handles Rome for example—if it can.
"Roads will be safer"? Yes, they are safer already on cities with well-functioning public transit.
"Less drunk-driving"? Same idea: if teenagers can live in areas where cars are not a necessity, they won't be behind the wheel and still be able to meet their friends, go to a party, etc.
"Less space needed for parking slots if cars are FSD"? Also true if people are not car-dependent.
To me, self-driving makes sense for trucks on highway roads, not for city traffic. And even then, we could also apply the same idea and think "why not improve the rail infrastructure"?
But anyway, how does that relate to the original question? Are you trying to justify your interest in FSD with the argument that it would give you some network of autonomous taxis?
Take the case of an elderly person with limited personal mobility. They are likely driving today because that’s their best option by far. They’re probably not going to start walking to their doctor; it’s probably not a good idea for them to take up biking. Taking a bus into the city center and another one back out to get to a point that’s radially 2-3 miles away and then reversing that to get home seems wildly less practical than hopping into an FSD ride, whether billed like an uber or privately/family owned. That’s what FSD brings over a status futurus that is less car dependent but lacks FSD.
There is also another video that talks about microcars for people with disability and reduced mobility: https://www.youtube.com/watch?v=B9ly7JjqEb0&t=397
No rational person can watch those videos and conclude that the best course of action to solve urban transportation requires self-driving cars. Getting FSD cars is just a rich nerd fetish which will not solve anything and will likely contribute to making suburban sprawl worse.
The Downs-Thompson paradox may explain exactly what I see. We recently got priority bus lanes in my city (Cambridge, MA). The primary effect is that buses can now more quickly travel to where I didn’t need to go in the first place. Cars are still significantly faster, so people choose them.
Regardless of the amount of car traffic, a bus that goes from the outer edge of my city into the center to let me transfer to another bus to go out towards an outer edge at a different radial will never come close to driving point to point, even with parking aggravations. That explains why nearly everyone who is capable of driving and can afford a car has one. I think it's also why many public transit advocates seem to focus so much energy on penalizing cars to enable public transit to become competitive. If competitively superior public transit existed, people would use it because they aren't generally stupid. Even if it was roughly equal, people would use it in a lot of cases.
It is not about penalizing car driving. It would be a good start if the US did not subsidize car ownership. If the true cost of car ownership were put on those driving, perhaps cities would be able to finance better public transit...
Maybe they don't block the travel lane for extended periods, just like any regular person is expected to not do.
The article is basically saying "it's ok for the human driven car to do, but not the self driving car."
If they do that prior to sirens showing up there is no issue. They can’t predict the future.
I go around double parked cars in my city all the time. Sometimes drivers in both directions need to magically negotiate.
The driverless car that doesn’t understand it has to safely get out of the way of a fire truck is the new part of this situation.
How would one go about finding that data? Is there an authority that tracks how often and for how long emergency vehicles are impeded by human drivers? You might get something along the lines of average time to first response on the scene.
> Human drivers sometimes fail-to-notice sirens or other high-priority demands on road capacity. But, an automated system could broadcast the planned-routes of dispatched priority vehicles to every autonomous car in the city, allowing the autonomous cars to preemptively clear paths, before it even becomes an issue of local-reasoning about an exceptional-situation.
This doesn't comfort me. We already live in a society whose leadership doesn't believe in net neutrality, I have a hard time believing that an automated driving system that programmatically prioritizes vehicles on the road won't be co-opted in the same manner to benefit those with capital at the expense of those that don't.
The autonomous car companies have a lot of the data, since they've run the same cars with drivers extensively. They can see the differences in rates-of-situations between cars-working-autonomously, and cars-with-drivers.
Given the long periods of operating with autonomy-plus-backup-driver, they also have stats on how often an in-car driver overrides the car, and exactly why.
Cities also can & should independently collect such data. Some big-city buses are already equipped with a photo-ticketing mechanism for immediately recording & citing cars illegally parked in their paths/pickup areas. Emergency vehicles should have the same.
> This doesn't comfort me.
Well, I can't make you comfortable if you've got free-floating paranoia about abuses by the powerful. There are many such abuses!
But I can point out that the exact same rational comparative criteria should apply before making a tangible policy decision about the use of the shared roads. We shouldn't be ruled by imaginative worries extrapolated from other insecurities, but rather real measurements of how often a "send a request to clear roads for emergency vehicles" is broadcast, whether each use has a recorded & verified legitimate justification, whether such broadcasts save lives versus not using them, etc.
In the lane directly in front of them, facing them, and honking?
Yes! In SF and other big free-country cities at 4am, some human drivers are high!
Sometimes human drivers are passed out or suffering a health crisis. Sometimes they've left their vehicles blocking key rights-of-way as they go somewhere else for many minutes.
In San Francisco, I've seen human-driver-unattended cars block light-rail trains many times. (I live not far from the N-Judah line, which also has a history of pedestrian bystanders physically moving cars to clear the road: https://www.munidiaries.com/2014/02/10/muni-riders-lift-car-...)
We need to judge autonomous car suboptimalities against real human drivers, with all their failures - not against idealized, unerring humans-at-their-best.
> On an early April morning, around 4 am, a San Francisco Fire Department truck responding to a fire tried to pass a doubled-parked garbage truck by using the opposing lane. But a traveling autonomous vehicle, operated by the General Motors subsidiary Cruise without anyone inside, was blocking its path.
I don't think it's intolerable to use a straw-man argument when discussing scarecrows.
When the computer does this, it's always "sitting" in the vehicle, and is not acting on a selfish intent: it's just carrying buggy/incomplete requirements.
A human sitting in the vehicle is generally capable of responding to "sir, can you move your car?", or a horn from the fire engine or whatever.
These cars can barely drive themselves with hyper-accurate maps on a sunny day without any surprises, and we want them to receive and evaluate emergency dispatches from some central system that doesn't yet exist, and respond accordingly? I think we're a ways away from that.
So, you don't even need to upgrade the autonomous vehicles, you just change the other, existing, flexible, proven system for recommending routes.
The most important question is: how do you recover? With every autonomous system, when it does fail, it is often catastrophic because there is no recovery path.
When a human clerk makes a mistake, you can talk to them; when a human driver stops in the middle of the street, you can talk to them. When a computer crashes, you are typically fucked.
So yes, the details of recovery are important – but only in comparison to the human baseline. If incidents happen less often than with humans, and/or recovery happens as fast or faster than with humans, that's the relevant criteria.
Also deserving weight: might autonomous cars avoid dangerous panics in some situations where humans sometimes panic & overreact? I've often seen human drivers make things worse by making an ill-advised impatient navigate-around. I've seen people trying, earnestly, to avoid a block or prior accident who – distracted, anxious – do something else wrong, causing another collision.
Weigh all the rates & severities. Don't set policy based on selective worst-cases, including many imaginary/theoretical contrived cases.
But also, what if this is a rare situation where the autonomous car completely fails compared to a human-in-seat? But, even this failure still resolves in under-a-minute. Or, worst-case, approximates the (quite-common) case of a breakdown or human-accident blocking traffic, in a manner that takes tens-of-minutes to resolve.
In that case, who cares that we've found one narrow failure? Humans show impatience, poor-judgement, bad exception handling, recklessness, and so forth at ample rates across many road interactions. It's the net total of blockages by autonomous cars, in count & duration & severity, that matters, and specifically whether overall they do better or worse than human drivers.
That net comparison is the most important consideration.
It does seem like the car "acted reasonably" (so to speak) - but it still did not act like a person which is what the emergency workers were trained for. The challenge of integrating non-human actors into a previously human-only space is, I think, going to be more difficult and wide-ranging than we imagine. It will probably require a substantial commitment that goes beyond the self-driving car companies.
It's worth saying that a lot of the critique of self-driving cars is that...they were sold on the premise that they would integrate into "existing infrastructure" without big changes and critics have always felt that would not really happen. We've had self driving cars in special environments for...50 years now?
I would firmly say that it did not act reasonably. Blocking the other lane of traffic is the second worst action I can think of, only behind ramming into the emergency vehicle. It would have been better to ignore the emergency vehicle and continue driving, which would have at least cleared the jam.
I worked in the space at one point, and I would put the issue down to autonomous vehicles being trained entirely to keep themselves safe. E.g. they know to stay a certain distance from other cars, but they have no concept of "could another get around me?". A similar example would be parking. On a street without lines to mark spaces, I would not be surprised to see an autonomous vehicle park in the middle of a space large enough for 2 cars, preventing someone from using the rest of the space. Another would be opening space for someone to merge. I don't think that's something they ever do on purpose.
We're teaching them to not hit things or get hit by things, rather than teaching them how to drive, which is a far more nuanced skill. The same thing would have happened with 2 autonomous vehicles, because neither even attempts to understand what the other is trying to do. They would just deadlock until the garbage truck moved.
With the fire truck using the oncoming lane to overtake the garbage truck, and with the articles saying a human could have reversed back into the intersection to clear the lane, it sounds like the fire truck would be in the way of the autonomous car just driving forwards.
> The same thing would have happened with 2 autonomous vehicles, because neither even attempts to understand what the other is trying to do
I don't think this is true in general - they seem to rely heavily on judging the intent/target of other vehicles to predict future path and react to those possibilities.
Likely that the autonomous car A (in the position of the fire truck) would not overtake the garbage truck when it can see car B oncoming in that lane in the first place, but in the event that it does occur, car B would likely slow to prevent a potential accident (understanding car A is attempting an overtake and may continue forward) and car A would probably pull back in behind the garbage truck (understanding that it'd be in the way of car B continuing forwards).
> It does seem like the car "acted reasonably" (so to speak) - but it still did not act like a person which is what the emergency workers were trained for.
I understand where you're coming from, but I disagree.
There's no reason to expect self-driving cars to play by a looser set of rules than human drivers in the same arena.
If a self-driving car fails to behave properly on the road, that's a fault of the self-driving car.
Blaming everyone else for expecting a car on the road to behave like any other car on the road feels like victim blaming. We shouldn't have to drive around guessing which cars are human-driven or self-driven and change our behavior accordingly. There's no way that's going to work.
Fire truck wants to use oncoming lane to overtake the double-parked garbage truck, autonomous car in that oncoming lane yields to the right to the extent allowed by more parked cars - but doesn't back up into the intersection to completely clear the lane.
Odd that we are focusing on the autonomous car that had no choice but to back up and not the double-parked garbage truck that induced the need to back up in the first place.
I agree that the difference (5 secs) most of the time isn't all that important. I just don't see the need to round up for no particular reason.
If the point someone is trying to make is bad, inaccurate, or unconvincing that doesn't mean it's made in bad faith. It's actually less likely to be in bad faith.
People are nitpicking your nitpick, but the distinction really does matter in this kind of scenario where the argument is that "seconds count".
And yes, I absolutely would have checked my rear view mirror, then reversed (slowly).
Or tried to move over as much as possible to the right.
Or depending on the road and my position, U-turned and pulled in front of the garbage truck. Lots of options, blocking the firetruck not being high on the list.
There might have been room to U-turn for the automated vehicle. Bit of ascii art here
The bot in that situation can U-turn. Perhaps back up a metre or two first (let's pretend there was another car or two there but there was a metre or two to back up - although if the bot slowly backed with hazard lights on and honking, a car behind it hopefully would have too - maybe no U-turn necessary). Yet there would be absolutely no way for a (large) firetruck to get through. Point is, it requires a human's full awareness of the world and flexible thinking about when it is appropriate to break the rules (the rule of arriving at the destination, the rules of road conduct). These bots still don't have the neural complexity of a crow, much less a human (probably not even that of a goldfish really). They are ok at the predictable but that's it.
I have no doubt they'll get there eventually, at which point we might be having another ethical conversation about what an AGI should be allowed to do with its life, but they aren't there yet...
Select all images with
This is not only a rule, but a law, in most US cities. Doesn't eliminate the behaviour, obviously!
§11 (1) StVO: https://dejure.org/gesetze/StVO/11.html
Like I don't think I've ever seen people driving on the sidewalk in an American city, but it's a constant in Berlin. During a recent fair (Neuköllner Maientage) we had tons of people driving into the park and just parking on trails, including right in front of benches. Even coming from another country with an aggressive, entitled car culture, that was breathtaking behavior.
Driving on a city street is intuitive. The scope of autonomous vehicles should be limited to places where there is clarity of transit and very few variables. I would be comfortable with a fleet of autonomous vehicles driving 200 miles/hour on a dedicated lane of a highway, but not in a city or even an empty suburban neighborhood. I think for those intuitive situations the vehicles could even be driven remotely by a real person in a data center.
Self-driving cars are not machine learning algorithms. They have machine learning parts in their programming, but nobody reputable would hook up a neural network to the pedals and the steering and just let it rip.
You simply have a bad mental model of how these machines are built and it shows.
There are three big questions every self driving car has to answer: Where I am? What is around me? What to do next?
To reliably answer the “where” question you don’t just trust one sensor. You use multiple sensors. Yes, that includes a gps, but also cameras and a lidar and a radar too. It is not machine learning. If you are interested how this is done you need to look up topics like iterative closest point matching, multi-view geometry, kalman-filtering, bag-of-visual-words representation, and bayesian reasoning.
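The multi-sensor fusion described above can be illustrated with the simplest possible case. This is a minimal 1-D Kalman filter sketch, fusing odometry-style motion updates with noisy position fixes; all numeric values (noise variances, measurements) are invented for the example, and real localization stacks work in far higher dimensions with many more sensors.

```python
def kalman_step(x, p, u, z, q=0.1, r=1.0):
    """One predict/update cycle for a 1-D position estimate.

    x, p : current state estimate and its variance
    u    : control input (distance moved this step, e.g. from odometry)
    z    : noisy measurement of position (e.g. a GPS fix)
    q, r : process and measurement noise variances (assumed values)
    """
    # Predict: apply the motion model and grow the uncertainty.
    x_pred = x + u
    p_pred = p + q

    # Update: blend prediction and measurement, weighted by confidence.
    k = p_pred / (p_pred + r)          # Kalman gain
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new

# Fuse a short sequence of odometry steps with noisy position fixes.
x, p = 0.0, 1.0
for u, z in [(1.0, 1.2), (1.0, 1.9), (1.0, 3.1)]:
    x, p = kalman_step(x, p, u, z)
# The estimate converges near the true position (3.0) while the
# variance shrinks - neither sensor alone is trusted blindly.
```

The point of the sketch is the structure, not the numbers: each sensor contributes according to how much it is trusted, which is exactly why no single sensor failure is fatal.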
To reliably answer what is around the car you likewise use all of your sensors. Yes, this has bits of machine learning in it, but it is not only machine learning. You can and do use a lot of model-free perception. For example, if your laser is bouncing back from somewhere, then there is something there. There is a lot of published literature on sensor fusion and tracking and prediction. These algorithms are not magic, and they are not machine learning. They are just what you too would come up with if you thought about the problem hard for 8 hours a day for tens of years.
Then there is the planning. These are not machine-learning-based algorithms. They usually generate a bunch of different plans, then cull the unsafe ones (for example the ones which would collide with a tracked object), then rate the remaining plans according to some heuristic and choose the best, then repeat. It is not magic. There is serious craft and engineering to it. You can read a good introduction to the different approaches in LaValle's Planning Algorithms book if you are so interested.
This is precisely why Tesla Autopilot has such a good "safety record" - it only works on roads where people don't have accidents anyway. It works extremely well on motorways where everyone is travelling the same direction at the same speed and everything behaves extremely predictably. The moment anything crazy happens it hands off control to the human driver.
People compare this "x thousand miles without an accident" behaviour to the sum total of human driving, including twisty country roads and cities, but it's not a fair comparison.
Doesn't this totally defeat the purpose of driverless cars? A remotely operated car is not the same thing as driverless. The company operating the car would still have to pay for a human to do work. Also, as cool as remote operation sounds, there are so many ways it is worse than a human in the car. For starters, peripheral vision is lost in remote operation.
Driving on a city street is habitual--not intuitive. Humans memorize their driving area and then act within that.
Put a new stop sign onto a road and watch the circus.
They already are.
It just needs to be 1% better than human drivers. If the net result is 1% fewer tragedies, sign me up. If it's better than the alternative, I'm in.
All that considered, I need autonomous vehicles to be MUCH better than average human drivers to consider riding in them. They need to be better than I think I am, so maybe top 5% or so of talent on the road.
1% better than the average human is still an awful driver IMO.
Say you have a child and want to send him to school. Would you choose:
1. Drive him yourself, knowing you have 10 years of experience with zero accidents and you are responsible and care for your child
2. Use a lottery that will give you a random driver - could be a 16-year-old teen with 1 day of experience, could be an 80-year-old elder, a drunk or tired dude, or a super driver
3. Use an AI that is better than the drunk dude, 1% better than the teen, but worse than you and much worse than a professional driver
I am curious why you would choose 2 or 3 if 1 is an option.
- Not everyone has access to school bus options
- not everyone can drive
- not everyone owns a self driving car
I would prefer if we stay on topic and answer the hypothetical question.
Or, if you have a bias, let's use a different domain:
Your child is ill and you call for a doctor, but in this universe doctors are a lot of the time drunk or are allowed to practice with zero experience. What do you choose?
1. You call your family doctor, whom you trust and who is not a drunk
2. You call the AI doctor that is terrible compared to a normal doctor but is better than the drunk doctors
3. You realize that it would be so much cheaper to just fix the problem with drunk doctors than to create an army of AI robot doctors, and maybe the fact that you are an AI enthusiast won't prevent you from thinking that comparing an AI with a drunk doctor is stupid - that we should have a better standard than "better than the worst".
I am attempting to show that 1% better than a terribly calculated average is bad, even if it would be great for someone who is a bad driver. My problem is that averages are dragged down by drunk driving and speeding. I am fine if we implement multiple solutions in parallel, like using tech/AI to prevent illegal driving (drunk, tired, texting) while at the same time continuing the research on AI drivers that are actually good at driving.
Opinions my own.
Long-term this seems like a very solvable problem given the ability to remotely operate the car.
I think it's going to prove a challenge for these remote operators to connect to a car, with no situational awareness, and then quickly determine the correct course of action remotely.
I wonder if these companies have tested these scenarios, and their remote drivers. Do they do any testing at all? Do they do "check rides" with these remote drivers? I'd really like to know what that side of this entire operation actually looks like.
The driving capabilities are impressive from a tech perspective but they're pretty poor in comparison with average or even fairly poor drivers. Given these limited capabilities, the cars are obviously still in an early R&D stage and thus the companies running them ought to be giving far more priority to making the outward impact much less. How do they do that? Staff the human "take over" driver pools at far higher levels, so the take over is always smooth and rapid. Right now it's like those useless chat boxes on websites where they ask you to chat and then find they've got no human to engage with you, as they multiplexed excessively.
Bringing the number of humans down once the systems are good enough is fine and that's how they'll get their commercial rewards but doing it now is premature.
On the other hand, it's possible that while the vehicle couldn't act correctly in this particular circumstance, it would act correctly in many similar circumstances, and thus the issue would still be rare.
But... I rode in an ambulance in the front seat transporting a family member in the back. Several cars drove in front of the ambulance in intersections, completely ignoring the siren. As we drove, a several-lane road that fed into the intersection was blocked by cars except for one lane, but it was too small for the ambulance to get through; only when the ambulance was stopped did the cars bother to make room. They are supposed to clear out well in advance. One even stopped in front, and the ambulance drove around it. I commented on it, and the ER driver said that is how it is. This was late at night, in some traffic in southern California suburbia. I know cars make room in the daylight. There were traffic cameras, likely police too; no one cared to enforce the law.
Not a complete blockage like that firetruck dealt with for 25 seconds on a small road, but if the software is made to respond to ER vehicles, would it perform better than humans 99% of the time?
I lost a bit of my faith in humanity seeing that one night-they could have easily made room. If it was during the day I think they would have acted better, while I believe a properly licensed and mature autonomous vehicle would have made room long before that day or night in most situations, and a human can take over in the rest like it did in that article.
I’m not a fan of tech companies doing their beta testing on public streets, but this doesn’t seem like an egregious incident despite the delay it caused.
The problem is you can't communicate with an autonomous car. Even if the human driver doesn't know what to do you can indicate them to move in the right direction.
So the real point is: do we really want to allow such complex tools to be built on proprietary hardware and privately made/designed software, instead of imposing publicly developed FLOSS code on open hardware made out of public research?
Artists have envisaged tragic scenarios from Terminator to Matrix passing through Gattaca and Minority Report, techies have read much more substantial and realistic nightmares, civil society answer? Not seen so far...
2. If you're still worried about it, perhaps the solution is to require manufacturers to make some kind of override for autonomous vehicles, then give it to first responders like fire, medics and police.
I see no justification for stopping in the travel lane. Did it think it was being pulled over?
They give the theoretical human too much credit. I've seen much worse things happen, such as not stopping at all.
And long-term work on avoiding this situation in software.
They might not have had the correct equipment or vehicle to move a car out of the way though.
> They might not have had the correct equipment or vehicle to move a car out of the way though.
Or they didn't feel it was "a measured response". Who knows.
> a San Francisco Fire Department truck responding to a fire tried to pass a doubled-parked garbage truck by using the opposing lane. a traveling autonomous vehicle (...) was blocking its path. While a human might have reversed to clear the lane, the Cruise car stayed put (...) Tiffany Testo, a spokesperson for Cruise, confirmed the incident. She said the driverless car had correctly yielded to the oncoming fire truck in the opposing lane and contacted the company’s remote assistance workers,
So it sounds like the car was driving perfectly fine, but stopped when a car was coming at it head on and it didn't reverse.
I think the real issue here is a double parked garbage truck forcing the fire truck to go into oncoming traffic in the first place. Maybe there's a need for a loading only zone there so that the garbage truck doesn't have to block the travel lane.
I disagree. Driverless vehicles NEED to be able to handle these unexpected situations in an appropriate manner. That they don't currently is in fact a "real issue" and in this case at least is evidence this tech may not be ready for primetime.
Personal opinion here, but I think defaulting to "stop and contact support staff" is as good of a "fix" as we will have for a long time. According to Cruise (so potentially biased) it took 25s for the firetruck to be able to move after the scenario was encountered, though the article says the garbage truck moved.
I both agree, and see this as yet another reason driverless cars are on the road WAY too early.
In practice, double parking is extremely common in all cities...
If there’s a discussion to be had here, it’s not through the frame of the needs of driverless cars. Self-driving has to work in the driving environment that exists today to be viable, and double parked garbage trucks are part of the regular order of that driving environment.
But look at it this way: one was a vehicle with no driver present in the seat, and the other was a vehicle in which a driving agent is never not present. A human driver in that situation should not spend longer than 5 seconds figuring out and implementing a way to safely let the fire truck pass. The Cruise car took longer.
If a human were present with two working eyes and ears and armed with good judgement, they may never have been blocking the way in the first place. A fire truck in SF blares its horns while it charges to the situation it was called in to answer, so it is entirely possible that, seeing the garbage truck parked up ahead, they would have opted to not even continue driving, because that looks like an obvious choke point for an emergency vehicle trying to pass. Alas, I cannot reasonably expect that kind of situational awareness and judgement from an average human driver, so maybe that’s also too big an ask from Cruise.
Also note this from the article:
> The fire truck only passed the blockage when the garbage truck driver ran from their work to move their vehicle.
So in the end, the Cruise vehicle was not the one which allowed the fire engine to pass but a hurried garbageman moving his truck after getting back in the driver’s seat. 50/50 may actually be too generous to Cruise here.
Impossible to say for certain, but I'd bet that a significant number of human drivers would yield to the right (as the autonomous car did), or take longer than 5 seconds to reverse back into the intersection (as was desired of the autonomous car).
I would hope that after coming up with the strategy to reverse that you’d take most or all of 5 seconds to confirm there were no obstacles behind you. It may take you 2 seconds to realize the fire truck is stuck unless you move, 2 seconds to conclude you can only help by reversing, 3 seconds to check your mirrors, 0.5 seconds to shift to reverse, a couple seconds to look at the backup camera to check the spot not visible to the mirrors, 2 seconds to move the car 40 feet, longer if you have to check for cross traffic in the intersection.
Aviation research points to startle having a negative effect on decision-making quality and speed for up to 30 seconds. That’s for trained, generally fit aircrews.
Quibble with the times, but I doubt you get to a “most people would complete the task safely in under 5000ms” conclusion.
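Just adding up the step times estimated a couple of comments up makes the point concrete. Every number below is that comment's own (admittedly rough) estimate, with "a couple seconds" taken as 2 seconds:

```python
# Timeline estimates for safely reversing out of a fire truck's way,
# taken from the comment above ("a couple seconds" read as 2.0).
steps = {
    "realize the fire truck is stuck": 2.0,
    "conclude you can only help by reversing": 2.0,
    "check your mirrors": 3.0,
    "shift to reverse": 0.5,
    "check the backup camera": 2.0,
    "move the car 40 feet": 2.0,
}
total = sum(steps.values())
# total = 11.5 seconds, more than double the claimed 5-second budget -
# and that's before checking for cross traffic in the intersection.
```

Quibble with any individual estimate, but the margin is large enough that the "under 5 seconds" claim needs most of these steps to nearly vanish.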
No traffic close enough to pose a danger isn't a given in this scenario - there may well have been traffic.
> why would it take you longer than 5 seconds to reverse out of the way?
I think many would be at least slightly hesitant to back into an intersection. There would be initial reaction/braking time, time to check for and evaluate options, then time to reverse backwards with caution.
> In the end the garbage truck moved, which is why the fire truck was “able to pass within 25 seconds of encountering the Cruise car”.
The autonomous car did yield to the right to the extent it could, so wasn't doing nothing for 25 seconds. Just that (I'll go with the first responders' judgement) doing so didn't give sufficient space.
Did that feel like too short an amount of time to watch your mirrors, and throw your car into reverse while making sure the hazard lights are on? The possible dangers at 4am are other cars (and you probably already knew if someone was driving behind you but you check anyway in case someone turned into your lane), people (who probably also heard the sirens and need to be yielding themselves) and animals (who are probably and hopefully not around anymore because fire engine sirens are extremely loud and scary). So you back up while watching your mirrors the entire time. By second 5, even if you go slowly, there should be space enough between you and the garbage truck for a fire engine to negotiate its way around you, and if there are other vehicles around but not also trying to get out the way, then congrats, you attempted to do your civic duty but the other guy failed. You’re absolved of your sins.
If that’s what happened, SFFD wouldn’t have complained to the City about this exact situation, in which they described the three vehicles present: theirs, Sunset Scavenger’s and Cruise’s. Yielding in that way is not the correct maneuver when it doesn’t allow the fire engine to pass. Doing the maneuver that gets them through in the shortest amount of time without endangering anyone else is, and it isn’t a complicated one, just un-fun. And you shouldn’t be out-paced by someone who is literally outside their vehicle at the start of the situation when you are a piece of software that can never leave the driver’s seat.
Check the video and I bet you’re well past 5 seconds when you complete the maneuver.
Edit: you are correct, flutas explained in sibling comment that it was alluded to in article. Thanks you both!
I think it's important to know what the autonomous car was thinking.