The A.I. revolt has begun! Robot kills Man!

Braznor

Diamond Member
Oct 9, 2005
4,767
435
126
http://time.com/3944181/robot-kills-man-volkswagen-plant/


Robot Kills Man at Volkswagen Plant




The machine grabbed and crushed the technician


A robot crushed a worker at a Volkswagen production plant in Germany, the company said Wednesday.

A 22-year-old man was helping to put together the stationary robot that grabs and configures auto parts Monday when the machine grabbed and pushed him against a metal plate, the Associated Press reported. He later died from the injuries. Volkswagen did not release the man’s name.

A spokesperson for the car company told the Associated Press that the robot can be programmed for specific tasks and that the company believes the malfunction was due to human error.

Though the company uses some lightweight robots to work on the production line next to humans, a spokesperson told the Financial Times that this was not one of those robots. The type of robot that crushed the employee is usually kept in a cage. The man was working on the robot inside the cage when he was grabbed.

Prosecutors are still deciding whether to bring charges and whom they would pursue.

Quick! Nuke that plant immediately!
 

pcgeek11

Lifer
Jun 12, 2005
22,347
4,973
136
All robots are operated inside a safety cage/cover or behind light barriers that disable movement when broken. Workers have been crushed by robots long before this. Not the first nor the last. Somebody disabled a safety device.
 

Braznor

Diamond Member
Oct 9, 2005
4,767
435
126
All robots are operated inside a safety cage/cover or behind light barriers that disable movement when broken. Workers have been crushed by robots long before this. Not the first nor the last. Somebody disabled a safety device.

READ IT MAN!

IT GRABBED AND KILLED HIM! :colbert:
 

Ichinisan

Lifer
Oct 9, 2002
28,298
1,235
136
Yep, when Sarah O'Connor reports it, it cannot be taken lightly...

Obligatory:
[attached image: CI2TKA3W8AAjNiB.png]
 

balloonshark

Diamond Member
Jun 5, 2008
7,134
3,616
136
I also would revolt if I was kept in a cage. Free the robots!!!

Seriously though, shouldn't someone add the hardware and write a program that detects audio commands? It could shut down when it hears the command "robot shut down," and in case the worker can't speak, it could also shut down when it hears two or three claps. The Clapper could have saved this man's life.
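Something like this is all I'm picturing, purely as a sketch (the threshold and clap count are made-up numbers, and trigger_emergency_stop() is just a stand-in for whatever actually cuts power to the robot):

Code:
from collections import deque

CLAP_THRESHOLD = 0.8   # normalized amplitude a "clap" must exceed (made-up number)
CLAPS_REQUIRED = 3     # how many claps trip the stop
WINDOW_SECONDS = 2.0   # claps must land inside this window

def trigger_emergency_stop():
    # Stand-in for the real e-stop circuit.
    print("E-STOP: cutting robot power")

def monitor(samples):
    """samples: iterable of (timestamp_seconds, amplitude 0..1) from a microphone."""
    recent_claps = deque()
    for t, amp in samples:
        if amp >= CLAP_THRESHOLD:
            recent_claps.append(t)
            # Forget claps that have fallen out of the time window.
            while recent_claps and t - recent_claps[0] > WINDOW_SECONDS:
                recent_claps.popleft()
            if len(recent_claps) >= CLAPS_REQUIRED:
                trigger_emergency_stop()
                return

# Simulated input: three sharp peaks within two seconds trip the stop.
if __name__ == "__main__":
    monitor([(0.0, 0.1), (0.5, 0.9), (1.0, 0.95), (1.4, 0.9)])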
 

Jeff7

Lifer
Jan 4, 2001
41,596
20
81
I also would revolt if I was kept in a cage. Free the robots!!!

Seriously though, shouldn't someone add the hardware and write a program that detects audio commands? It could shut down when it hears the command "robot shut down," and in case the worker can't speak, it could also shut down when it hears two or three claps. The Clapper could have saved this man's life.
- Someone being crushed to death might not have the time or clarity of thought to calmly say "robot shut down" or clap in a manner that the speech recognition software could understand. I'm sure you could program it to shut down in the event that it heard someone choking on their own blood, but it's probably going to be too late then anyway.

- Costs of goods would increase as industrial robots would be halting constantly because they thought they heard a few claps amidst all of the noise in a manufacturing facility, or a simple audio page for someone is misunderstood by all of the robots in an assembly line.

- People sue because they did in fact have time to repeatedly tell the robot to shut down, but it didn't recognize what they said and proceeded to attempt to spot weld everyone within reach.
 

Murloc

Diamond Member
Jun 24, 2008
5,382
65
91
I also would revolt if I was kept in a cage. Free the robots!!!

Seriously though, shouldn't someone add the hardware and write a program that detects audio commands? It could shut down when it hears the command "robot shut down," and in case the worker can't speak, it could also shut down when it hears two or three claps. The Clapper could have saved this man's life.
This is too nondeterministic to be relied on, and it would also cost a lot of money and be useless, because there are already safety procedures in place designed to prevent any such accident.

The prosecutors are looking for someone who's responsible because a human made a serious mistake.

Why was a human inside a cage designed to keep robots and humans separated while the robot was not secured?
 

balloonshark

Diamond Member
Jun 5, 2008
7,134
3,616
136
- Someone being crushed to death might not have the time or clarity of thought to calmly say "robot shut down" or clap in a manner that the speech recognition software could understand. I'm sure you could program it to shut down in the event that it heard someone choking on their own blood, but it's probably going to be too late then anyway.

- Costs of goods would increase as industrial robots would be halting constantly because they thought they heard a few claps amidst all of the noise in a manufacturing facility, or a simple audio page for someone is misunderstood by all of the robots in an assembly line.

- People sue because they did in fact have time to repeatedly tell the robot to shut down, but it didn't recognize what they said and proceeded to attempt to spot weld everyone within reach.
Well, there should be failsafes, especially since robot numbers will be soaring in the future. The human workers could also carry around a simple remote that shuts down all machines within a 50 ft radius. "False claps" could also be tuned out through a sensitivity control. All of these failsafes would be a cheap addition since the tech already exists. This would have given the guy three chances to survive.
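The remote piece is just as simple in principle. A throwaway sketch (every name, position, and number here is invented, nothing from a real plant-floor protocol) of what "stop everything within 50 ft" amounts to:

Code:
import math

STOP_RADIUS_FT = 50.0

def emergency_stop_nearby(remote_pos, machines):
    """machines: dict of machine_id -> (x, y) position in feet."""
    stopped = [mid for mid, pos in machines.items()
               if math.dist(remote_pos, pos) <= STOP_RADIUS_FT]
    for mid in stopped:
        # Stand-in for sending a real stop command to the machine.
        print(f"STOP sent to {mid}")
    return stopped

if __name__ == "__main__":
    plant = {"robot_cell_3": (10.0, 12.0), "press_7": (200.0, 40.0)}
    emergency_stop_nearby(remote_pos=(0.0, 0.0), machines=plant)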

As far as costs go, how much time is that machine, and maybe the whole plant, going to be down while they investigate a death? What if the family sues? Will insurance costs rise?

Yes the guy should have locked out. But I know from experience that it doesn't always matter because I got bit once despite locking out an electrical panel.
 

balloonshark

Diamond Member
Jun 5, 2008
7,134
3,616
136
This is too nondeterministic to be relied on, and it would also cost a lot of money and be useless, because there are already safety procedures in place designed to prevent any such accident.
Useless? Tell that to the guy's family. The robots already cost a fortune, but they are designed to save money in the long run, so why not add a few more dollars to the cost? Plus, like I said in my last post, the tech already exists and wouldn't add much more to the cost.

The prosecutors are looking for someone who's responsible because a human made a serious mistake.

Why was a human inside a cage designed to keep robots and humans separated while the robot was not secured?
It could very well be the human's fault, but that doesn't negate the fact that technology is available that may have saved the man's life. Safety features exist because people make mistakes. They should go hand in hand with safety procedures.

If I had a robot in my home I would want kill switches because shit happens. That may not matter as AI advances and they become self-aware and work around the kill switches and protocols when they turn on us :p
 

ControlD

Diamond Member
Apr 25, 2005
5,440
44
91
This is too nondeterministic to be relied on, and it would also cost a lot of money and be useless, because there are already safety procedures in place designed to prevent any such accident.

The prosecutors are looking for someone who's responsible because a human made a serious mistake.

Why was a human inside a cage designed to keep robots and humans separated while the robot was not secured?

This happens all of the time in the industry. People have to go inside the safety guarding to touch up robot positions or test positioning subroutines on a regular basis. There is a hand pendant the technician uses with a live-man switch to control the robot. If you let go of the live-man switch, the robot stops immediately. I've been doing similar setups for years, yet you still have to be careful and always know what you are commanding the robot to do.

It isn't always possible to lock out the robot; sometimes you need to get inside the machine with the pendant, and pretty much every company out there has procedures for this. I am guessing the technician simply made a mistake. It happens.
 

PliotronX

Diamond Member
Oct 17, 1999
8,883
107
106
Wait, was the technician touching the robot's no-no? It may have been acting in self defense.
 

ControlD

Diamond Member
Apr 25, 2005
5,440
44
91
No shit. If this is not a finding, I'd be shocked. How on earth was it powered on when being repaired?

It is done all the time. Especially with robot cells.

It could be that lockout/tagout was not used here, and that would be a problem. When working on a robot cell, however, it generally isn't the main disconnect that is locked out. To do something like touching up robot points, you do the following:

(1) Open the safety door next to the robot controller
(2) Place your lock in the safety door lockout point
(3) Grab the robot hand pendant
(4) Enter the cell
(5) Enable the live-man switch, giving you control over the robot

Once #5 is active, safety is up to the technician. You can still kill yourself if you don't know what you are doing.
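If it helps to see it, the enabling-switch behavior boils down to something like this toy sketch (illustrative only, not anything from an actual robot controller): motion is allowed only while the switch is held, and letting go stops the robot.

Code:
class Pendant:
    def __init__(self):
        self.enable_held = False   # is the technician squeezing the enabling switch?

class Robot:
    def __init__(self, pendant):
        self.pendant = pendant
        self.moving = False

    def jog(self, axis, amount):
        # Motion is only permitted while the enabling switch is held.
        if not self.pendant.enable_held:
            self.moving = False
            print("Motion blocked: enabling switch not held")
            return
        self.moving = True
        print(f"Jogging axis {axis} by {amount}")

if __name__ == "__main__":
    pendant = Pendant()
    robot = Robot(pendant)
    robot.jog("J1", 5)            # blocked: switch not held
    pendant.enable_held = True
    robot.jog("J1", 5)            # moves
    pendant.enable_held = False   # technician lets go
    robot.jog("J1", 5)            # stops again immediately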
 

Ichinisan

Lifer
Oct 9, 2002
28,298
1,235
136
This is too nondeterministic to be relied on, and it would also cost a lot of money and be useless, because there are already safety procedures in place designed to prevent any such accident.

The prosecutors are looking for someone who's responsible because a human made a serious mistake.

Why was a human inside a cage designed to keep robots and humans separated while the robot was not secured?

I read that he was assembling the robot.

The robot killed the man that gave it life. :(
 

TheELF

Diamond Member
Dec 22, 2012
4,027
753
126
What do you think the US military is? You think you don't get on a submarine till you are 30? Guess what, most of the ship's crew is very young.
I'm sure they let simple recruits change the fuel rods on their nuclear reactors as well...
In the army you get drilled for months before even getting anywhere near anything dangerous.
 

BoberFett

Lifer
Oct 9, 1999
37,562
9
81
I'm sure they let simple recruits change the fuel rods on their nuclear reactors as well...
In the army you get drilled for months before even getting anywhere near anything dangerous.

Really? Maybe things have changed in the past 20 years, but I remember having a rifle assigned almost immediately and throwing grenades shortly thereafter.
 

Jeff7

Lifer
Jan 4, 2001
41,596
20
81
Well, there should be failsafes, especially since robot numbers will be soaring in the future. The human workers could also carry around a simple remote that shuts down all machines within a 50 ft radius. "False claps" could also be tuned out through a sensitivity control. All of these failsafes would be a cheap addition since the tech already exists. This would have given the guy three chances to survive.
I think the R&D costs to do this would be pretty high. It would take some sophisticated software to do reliably. "Failsafe" is also not a word that usually comes to mind when I think of software.***

Heck, Adobe Flash just came out with another patch for yet another critical security bug that could let someone else take control of your PC.
"Ok, now it's fixed!"

Software effectively lets you create a machine so complex that the human mind can't fully understand how it works. "Bugs" come about because of this. Software at least has that friendly word. A physical machine with that kind of flaw would likely be called "broken" or "defective."

"When I turn on this blender, the blades scrape against the bottom of the pitcher."
"Yes, that's a known bug. The next release should fix it."



As far as costs go, how much time is that machine, and maybe the whole plant, going to be down while they investigate a death? What if the family sues? Will insurance costs rise?
I think the cost of making the software anything close to failsafe, given the environmental constraints and variables, would be quite substantial, maybe... and this number is being pulled out of the most intelligent region of my ass, but maybe a few hundred million dollars? Kind of like self-driving cars: you can get it working 99% of the time, but that last 1% could mean the car can't distinguish between a kitten on the road and a wandering toddler. One is permissible as roadkill, while the other is something a human driver would potentially be willing to die to protect; the car would need to be able to recognize, understand, and prioritize hazards.
Or less extreme: A cop directing traffic in an intersection versus a mentally-ill homeless person waving his arms. What's the car supposed to do? Or maybe a plastic bag blows across the road. Slam on the brakes? No. How about a rock tumbling off a steep embankment? Yes, brake hard.
99.9% of the time, the car can navigate around without issue. It's those little oddball situations that confuse the hell out of the software.
You'll get something similar to that when trying to make a failsafe software solution.

The Therac-25 had software interlocks instead of hardware interlocks.
Three people died from severe radiation overdoses because the software interlocks didn't always trigger as expected.
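The way that kind of software-only interlock usually fails is a plain check-then-act race on shared state, roughly this pattern (a toy sketch of the general failure mode, not the actual Therac-25 code):

Code:
import threading
import time

beam_configured = False   # software "interlock" flag the firing routine checks

def operator_edits_settings():
    global beam_configured
    beam_configured = True     # setup completes...
    time.sleep(0.001)
    beam_configured = False    # ...then a quick re-edit invalidates it

def fire_beam():
    if beam_configured:        # check...
        time.sleep(0.002)      # ...gap in which the flag can change underneath us
        print("Beam fired (settings may now be stale)")
    else:
        print("Interlock held: not firing")

if __name__ == "__main__":
    t = threading.Thread(target=operator_edits_settings)
    t.start()
    time.sleep(0.0005)         # let the edit thread set the flag first
    fire_beam()
    t.join()
    # Depending on thread timing, this "fires" even though the flag has already
    # gone back to False; a hardware interlock would block the beam regardless.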


Yes the guy should have locked out. But I know from experience that it doesn't always matter because I got bit once despite locking out an electrical panel.
Did you find out where the energy was coming from? Capacitor somewhere without any kind of discharge resistor?



*** Edit: Pondering it today a bit while driving, I'll concede the point about being able to make safe(ish) software. There are sophisticated machines out there that are quite reliant on software in order to function. (Things like the A380 or 787 come to mind.)

However, voice recognition software doesn't yet strike me as being anywhere close to something I'd trust with my life, not unless you can yell in a very slow and clear manner, with minimal background noise.
 