
Self driving car kills a pedestrian

Page 7
Nope, and that's why I'm also suspicious of the hype and rosy, one-sided arguments for replacing human drivers and personal cars.

I had an incident where I narrowly missed hitting a child on a bike years ago.

Suburban area, with a family walking on the sidewalk and a young kid on a bike learning to ride.

I was looking ahead, saw the kid was wobbly on the bike, so I slowed way down (under speed limit) and moved towards center of road and watched like a hawk.

Kid swerved way to the left, over the grass and into the road. I slammed on the brakes and swerved into the other lane. It was really close, but I didn't hit him.

Had I been just following the rules of the road (speed limit, center of lane) I would have likely struck and possibly killed the kid.

Will the AI's scanners be able to see this scenario, anticipate, and diverge from the rules?

Will it see the birthday party with kids running around and go into super safe mode?

Will it understand and see all the deer running in the fields? Will it learn their hiding spots in the trees? Will it be especially careful at dusk and during rutting season?

While I see a scenario for automated drivers for cargo trucks and others on predictable and controlled highways devoid of pedestrians, achieving full AI everywhere will be far more difficult and fraught with legal peril.

Driver supplemental abilities also seem useful (emergency braking/collision avoidance), but roadways are incredibly dynamic and I have a hard time believing software is close to ready for it.

Does anyone recommend blindly following Google Maps all the time? How about software landing your plane? That is easy in comparison.

This is a strange argument. Self-driving cars will not reduce accidents to zero, thus we should keep what we have. Because of the one time human ability was better than the computer's, we should keep humans. For every 1 child a human saves, 10 more are killed. A computer would save those 10 children and kill that 1.
 
My question is: how does the system decide when to just hit the brakes or swerve? Swerve and you could put the passenger(s) at risk. A human would make the decision based on many factors, including what they are about to hit vs. the passengers in the car.

Actually most humans would go into blind fight or flight response and, at best, react without thinking at all, and at worst not react at all until it was all over.
 
Yeah. There was some kind of sensor or system failure. This should have been an example of the autonomous system saving a life that would have been lost to a human driver.

I agree, the system failed here. I feel sure that this case will be studied and simulated again and again until they know exactly what went wrong and have developed several backup systems to make sure it doesn't happen again. It is terrible that these sorts of tragedies have to happen, but it is our ethical duty to use the information we gain from them to improve our systems so that they don't happen again, not to give up because it did happen.
 
This is a strange argument. Self-driving cars will not reduce accidents to zero, thus we should keep what we have. Because of the one time human ability was better than the computer's, we should keep humans. For every 1 child a human saves, 10 more are killed. A computer would save those 10 children and kill that 1.

Show that data
 
Initially I was going to say Uber is not at fault, but I changed my mind upon further reflection. Yes, the road was dark, but self-driving cars are supposed to use radar that can see in darkness. And yes, the woman did come out of the darkness, but there was still a good half second when the woman first became visible and the car did not even attempt to slow down. So I do think it was a total failure of the self-driving car. I hope they can find out what happened and fix it.
 

How would this not be the woman's fault? Typical human reaction time behind the wheel is well above 0.5 seconds. Your argument amounts to saying that she would be at fault if a human were driving, but because it was a computer and it did not pick her up, it's now Uber's fault? If so, then we had better remove all things in a car that could fail, because if one fails it becomes your fault. That is just silly.
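To put that half-second figure in context, here's a rough reaction-distance calculation. The 45 mph speed is an illustrative assumption, not a figure from the crash report, and 1.5 s is just an often-cited typical driver reaction time:

```python
# How far a car travels before the driver even touches the brakes.
# Speed is an illustrative assumption; 1.5 s is an often-cited
# typical perception-reaction time for an alert driver.
speed_mph = 45
reaction_time_s = 1.5

speed_fps = speed_mph * 5280 / 3600       # mph -> feet per second (66 fps)
reaction_distance_ft = speed_fps * reaction_time_s
print(round(reaction_distance_ft))  # 99
```

Roughly 100 feet covered before braking even starts, which is why a half second of visibility is not much for a human either.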
 
We don't know if a human driving the car would have reacted better than the car did, and it's pointless to speculate. We're not discussing how the Uber car handled the situation vs. how a human would have; we're discussing how the Uber car should have handled it. Yes, the woman should not have been crossing the road, but the Uber car should have handled the situation way better than it did. I don't know if Uber's cars rely on radar imaging, regular cameras, or both. Either way, the Uber car failed spectacularly. If the car relied on radar, it should have seen the woman crossing the road well before she appeared in the headlights, and if it relied on regular cameras, it should have applied the brakes immediately, and yet it didn't. The Uber car did not behave as it was expected to; that's a failure.
 
Initially I was going to say Uber is not at fault, but I changed my mind upon further reflection. Yes, the road was dark, but self-driving cars are supposed to use radar that can see in darkness. And yes, the woman did come out of the darkness, but there was still a good half second when the woman first became visible and the car did not even attempt to slow down. So I do think it was a total failure of the self-driving car. I hope they can find out what happened and fix it.

This is not really a good argument. For it to be one, we would have to know the logic the car used to make a decision. Did it not see her at all? Did it see her and not react because it didn't predict she would step into the car's path? Did it predict she would step into its path and for some reason fail to react when it should have? Did it predict she would step into its path and decide that there was no safe action to take? It is possible that braking would not have changed the outcome for the woman but would have endangered the vehicle's passengers, in which case not braking would be the right decision.
 

The system should have detected an object walking across the road even on the other side of the road. People are always going to jaywalk, and the system needs to be able to detect that. Like I said in my post about the article, Uber has been using Volvo XC90s, which have pedestrian/bicycle/auto collision systems built in. When someone walked right in front of my Volvo (though at a much slower speed), the car jammed on the brakes and stopped right before hitting her. I would expect these Ubers to have even more sophisticated systems built in. In the video, the vehicle didn't even brake when the pedestrian was in front of it.
 
I would expect these Ubers to have even more sophisticated systems built in. In the video, the vehicle didn't even brake when the pedestrian was in front of it.

I understand that. The question is why? Computers don't have bad days. They don't get distracted. So why did it not brake? Until we know the logic it used to decide either that she was not going to step out into the road, or that when she did it would be better not to brake, we don't really know.
 
We don't know if a human driving the car would have reacted better than the car did, and it's pointless to speculate. We're not discussing how the Uber car handled the situation vs. how a human would have; we're discussing how the Uber car should have handled it. Yes, the woman should not have been crossing the road, but the Uber car should have handled the situation way better than it did. I don't know if Uber's cars rely on radar imaging, regular cameras, or both. Either way, the Uber car failed spectacularly. If the car relied on radar, it should have seen the woman crossing the road well before she appeared in the headlights, and if it relied on regular cameras, it should have applied the brakes immediately, and yet it didn't. The Uber car did not behave as it was expected to; that's a failure.

Again, a failure of something does not automatically mean you are at fault.

You can test your logic.

If you are driving down the highway and someone is driving the wrong way, and you attempt to swerve out of the way, but your tire blows and you can't: is the tire manufacturer at fault for the tire? Are you, for not noticing sooner? Or is it the other person's fault for driving the wrong way?
 

That doesn't support your statement.

Nowhere near a 10x decrease in fatal accidents, and it uses a very limited data set with significant amounts of uncertainty and limited "naturalistic" settings.

At best they made assumptions to increase the rate of minor accidents (hitting stuff at low speed, no injuries, possibly unreported), and that is where they showed the most promising benefit.

But I'll grant it's plausible that self-driving cars will hit fewer curbs than teens, people texting, and soccer moms in giant SUVs.

Better curb avoidance is not a compelling case for Google's dorkwagon vs. my 6spd turbo.
I'll take my chances.
 

The 10x number was to illustrate the point. Your argument is that because there are some times when human ability may exceed a computer's, we should not use computers, even though the data suggests we would have far fewer incidents even with current technology.
 
In the USA, Humans operating vehicles kill 40,000 people every single year.
If the AI kills a thousand people, 39,000 lives are saved. EVERY SINGLE YEAR.
Do you still want to oppose the AI and have human drivers killing those 39,000? Why?
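The arithmetic behind that, as a back-of-the-envelope sketch. The 40,000 figure is the commonly cited approximate annual US traffic fatality count; the 1,000 AI-caused deaths is a hypothetical taken from the argument itself, not real data:

```python
# Back-of-the-envelope net-lives comparison. The human figure is the
# commonly cited ~40,000/year US total; the AI figure is hypothetical.
human_caused_deaths = 40_000
ai_caused_deaths = 1_000

net_lives_saved = human_caused_deaths - ai_caused_deaths
print(net_lives_saved)  # 39000
```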
 
I understand that. The question is why? Computers don't have bad days. They don't get distracted. So why did it not brake? Until we know the logic it used to decide either that she was not going to step out into the road, or that when she did it would be better not to brake, we don't really know.

The system needed to know that an object was crossing the road (and not moving along the road like a car) from the other side, and it should have decreased speed. And when it detected the object in front of it, it should have braked. It did neither. My Volvo and most newer vehicles have cross-traffic sensors for when they back up. A self-driving vehicle should have the ability to sense cross traffic in front of it.
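For what it's worth, the logic being described can be sketched as a toy model. Everything here, the `Track` fields, the thresholds, the three-way decision, is invented for illustration and has nothing to do with Uber's actual software:

```python
from dataclasses import dataclass

@dataclass
class Track:
    """A tracked object in the car's frame of reference (meters, m/s)."""
    lateral_pos: float    # offset left/right of the car's path
    lateral_vel: float    # lateral speed (sign opposite to position = moving toward path)
    forward_dist: float   # distance ahead of the car

def decide(track: Track, car_speed: float) -> str:
    """Toy decision rule: brake for an object close ahead in the lane,
    slow for one crossing toward the car's path. Thresholds are invented."""
    # Moving toward the path when position and lateral velocity have opposite signs.
    crossing = track.lateral_vel != 0 and (track.lateral_pos * track.lateral_vel) < 0
    time_to_reach = track.forward_dist / car_speed if car_speed > 0 else float("inf")
    if abs(track.lateral_pos) < 1.5 and time_to_reach < 3.0:
        return "brake"        # effectively in our lane, close ahead
    if crossing:
        return "slow"         # approaching our path from the side
    return "continue"
```

A real pipeline would fuse lidar/radar/camera tracks and run far richer prediction, but even this toy version slows for an object crossing toward the lane and brakes once it is in the lane, which is the behavior the video does not show.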
 
In the USA, Humans operating vehicles kill 40,000 people every single year.
If the AI kills a thousand people, 39,000 lives are saved. EVERY SINGLE YEAR.
Do you still want to oppose the AI and have human drivers killing those 39,000? Why?

Obvious answer is JOBS. Jesus, I can't imagine the hit to the economy from taking away MILLIONS of high-paying driving jobs (3.1 million trucking jobs at $75k/yr; do the math for yourself). Damn dude, could America survive that hit?
 
Again, a failure of something does not automatically mean you are at fault.
But it can. If a construction worker fails to secure a load of bricks and they come tumbling down, killing someone, that construction worker can absolutely be tried and convicted of manslaughter. The construction worker had a responsibility, key word responsibility, to secure the dangerous load; failure to do so means he is at fault for any harm caused by his lack of responsibility.

If you are driving down the highway and someone is driving the wrong way, and you attempt to swerve out of the way, but your tire blows and you can't: is the tire manufacturer at fault for the tire? Are you, for not noticing sooner? Or is it the other person's fault for driving the wrong way?

Coming back to the concept of responsibility: every person or entity has some kind of responsibility assigned to them. The tire manufacturer has a responsibility to ensure that its tires pass stress tests; Bridgestone/Firestone failed that responsibility in the 2000s. You as a driver have a responsibility to be alert when you're driving. There are certain expectations that we, or in this case the self-driving car, are expected to meet. A conduct standard, if you will, and failure to adhere to the standard is failure. One expectation of the self-driving car is to be able to see in the dark using radar technology; that did not happen in this case. Another expectation is a quicker reaction time to apply the brakes, and again, the Uber car failed that too.

Just to be clear, it is often difficult to assign fault in fatal accidents like these; there are often multiple failures along the way that contribute to the fatal outcome. In this case it was the woman's responsibility to cross the road carefully, looking both ways and not walking in front of the car. She failed that responsibility. However, it was also the Uber car's responsibility to prevent that type of accident, and it failed that too. I don't know where the Uber car failed, whether it failed to see the person at all, saw her too late to do anything, or for whatever reason, maybe an electrical problem, couldn't apply the brakes, but it had several opportunities to stop this from happening, ranging from seeing a human in the dark to reacting faster than a human could, and it failed all of them.
 
Uber had the right of way. Wonder what that state has to say about riding bicycles, or pushing them as a bag lady, at night.

That was one dark road.

You were there? Video doesn't show lighting in the same way a human eye sees it. Besides, the car is supposed to have LIDAR.

And I don't care about 'right of way' (the term is 'priority'; in any case, right of way is the right to use a given thoroughfare). People get prosecuted for running people down whether they have 'priority' or not; it doesn't exempt you from the duty to stop if possible.
 
Phoenix has the following laws:

28-817. Bicycle equipment

A. A bicycle that is used at nighttime shall have a lamp on the front that emits a white light visible from a distance of at least five hundred feet to the front and a red reflector on the rear of a type that is approved by the department and that is visible from all distances from fifty feet to three hundred feet to the rear when the reflector is directly in front of lawful upper beams of head lamps on a motor vehicle. A bicycle may have a lamp that emits a red light visible from a distance of five hundred feet to the rear in addition to the red reflector.

B. A person shall not operate a bicycle that is equipped with a siren or whistle.

C. A bicycle shall be equipped with a brake that enables the operator to make the braked wheels skid on dry, level, clean pavement.

Has anyone said she was actually riding the bike at the time of the accident?
 
Obvious answer is JOBS. Jesus, I can't imagine the hit to the economy from taking away MILLIONS of high-paying driving jobs (3.1 million trucking jobs at $75k/yr; do the math for yourself). Damn dude, could America survive that hit?

There's no hit. Productivity soars. All we have to do is tax it and return that lost value back to the consumers.
Damage is done in a poorly designed system. We need to fix ours.
 
Obvious answer is JOBS. Jesus, I can't imagine the hit to the economy from taking away MILLIONS of high-paying driving jobs (3.1 million trucking jobs at $75k/yr; do the math for yourself). Damn dude, could America survive that hit?
Hate to say it, but fuck the jerbs. If you really need to find a way to deal with those 3.1m truckers/trucking jobs at the expense of human life, have them make each other into Soylent Green.
 
But it can. If a construction worker fails to secure a load of bricks and they come tumbling down, killing someone, that construction worker can absolutely be tried and convicted of manslaughter. The construction worker had a responsibility, key word responsibility, to secure the dangerous load; failure to do so means he is at fault for any harm caused by his lack of responsibility.

Your analogy is horrible. The Uber incident was not caused by a failure of the computer; had there been no computer, the same thing would have happened. The reason your analogy sucks is that in your example a single failure is what caused the harm, while in the Uber case the failure simply meant the harm wasn't prevented.

Think of it like this: every car has a different driver with different abilities. Some people may be able to react faster than others, yet we would not find fault with a person who reacted slowly in this case. Why do you think that is?



Coming back to the concept of responsibility: every person or entity has some kind of responsibility assigned to them. The tire manufacturer has a responsibility to ensure that its tires pass stress tests; Bridgestone/Firestone failed that responsibility in the 2000s. You as a driver have a responsibility to be alert when you're driving. There are certain expectations that we, or in this case the self-driving car, are expected to meet. A conduct standard, if you will, and failure to adhere to the standard is failure. One expectation of the self-driving car is to be able to see in the dark using radar technology; that did not happen in this case. Another expectation is a quicker reaction time to apply the brakes, and again, the Uber car failed that too.

Yes, but you are still missing the point. A failure of one of those things would not inherently make them at fault. A flat tire on a parked car is not relevant.

Just to be clear, it is often difficult to assign fault in fatal accidents like these; there are often multiple failures along the way that contribute to the fatal outcome. In this case it was the woman's responsibility to cross the road carefully, looking both ways and not walking in front of the car. She failed that responsibility. However, it was also the Uber car's responsibility to prevent that type of accident, and it failed that too. I don't know where the Uber car failed, whether it failed to see the person at all, saw her too late to do anything, or for whatever reason, maybe an electrical problem, couldn't apply the brakes, but it had several opportunities to stop this from happening, ranging from seeing a human in the dark to reacting faster than a human could, and it failed all of them.

No, it was Uber's goal, not its responsibility. The goal is to do better, and in this case it simply did not do any worse.
 
I think there are two camps on this. Camp A sees it as acceptable that a computer was unable to react better than a human in similar conditions, with a life lost as a result; camp B sees it as a failure on the part of the computer to save a life that would otherwise have been lost had a human or human-capable system been controlling the car. I can see the argument for the latter, as this system is supposed to be 'better' than human drivers, and odds are good some failure of design, implementation, or maintenance resulted in the system being unable to 'superhuman' its way out of this particular situation. However, I wouldn't ascribe a failure (or responsibility) to it just yet, until we know what part failed and why.

I do find it very interesting though that people are already presuming that a self-driving car should be capable of surpassing a human at driving, despite the fact they've been on the road past closed-course testing for what, a year? Maybe?
 

The argument is that because the computer could not do better than a human in this case, we should leave it to humans. Computers have the ability to be better than humans and are currently better in most ways. If the failure of the computer had meant the rules of the road were broken, then I would agree the computer was at fault. In this case the computer was simply not good enough and a death happened. The only concern I have is why it did not pick her up. I do not think the computer is at fault here, but I would like the computer to improve.
 