Well, there should be failsafes, especially since robot numbers will be soaring in the future. The human workers could also carry a simple remote that shuts down all machines within a 50 ft radius. "False claps" could also be handled through a sensitivity control. All of these failsafes would be a cheap addition, since the tech already exists. That would have given the guy three chances to survive.
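And the logic could be dumb-simple. Here's a rough sketch of what I mean (every number, name, and sensor reading here is invented for illustration):

```python
import math
from dataclasses import dataclass

# Rough sketch only: positions, thresholds, and the clap-confidence
# reading are all made up for illustration.
KILL_RADIUS_FT = 50.0    # shut down any machine within 50 ft of a worker
CLAP_SENSITIVITY = 0.80  # tunable knob so "false claps" can be dialed out

@dataclass
class Position:
    x: float
    y: float

def worker_nearby(machine: Position, badges: list[Position]) -> bool:
    """True if any worker's badge is inside the machine's kill radius."""
    return any(math.dist((machine.x, machine.y), (b.x, b.y)) <= KILL_RADIUS_FT
               for b in badges)

def should_estop(machine: Position, badges: list[Position],
                 clap_confidence: float) -> bool:
    """Fail-safe default: stop on a nearby worker OR a confident-enough clap."""
    return worker_nearby(machine, badges) or clap_confidence >= CLAP_SENSITIVITY
```

Lower the sensitivity and the clap trigger gets more eager; raise it and ambient noise gets filtered out.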
I think the R&D costs to do this would be pretty high. It would take some sophisticated software to do reliably. "Failsafe" is also not a word that usually comes to mind when I think of software.***
Heck, Adobe Flash just came out with another patch for yet another critical security bug that could let someone else take control of your PC.
"Ok,
now it's fixed!"
Software effectively lets you create a machine so complex that the human mind can't fully understand how it works. "Bugs" come about because of this. Software at least has that friendly word. A physical machine with that kind of flaw would likely be called "broken" or "defective."
"When I turn on this blender, the blades scrape against the bottom of the pitcher."
"Yes, that's a known bug. The next release should fix it."
As far as costs go, how long is that machine, or the whole plant, going to be down while they investigate a death? What if the family sues? Will insurance costs rise?
I think the effort to make the software anything close to failsafe, given the environmental constraints and variables, would be quite substantial. Maybe... and this number is being pulled out of the most intelligent region of my ass... a few hundred million dollars? Kind of like self-driving cars: you can get it working 99% of the time, but that last 1% could mean the car can't distinguish between a kitten on the road and a wandering toddler. One is permissible as roadkill, while the other is something a human driver might be willing to die to protect; the car would need to recognize, understand, and prioritize hazards.
Or less extreme: a cop directing traffic in an intersection versus a mentally ill homeless person waving his arms. What's the car supposed to do? Or maybe a plastic bag blows across the road. Slam on the brakes? No. How about a rock tumbling off a steep embankment? Yes, brake hard.
99.9% of the time, the car can navigate around without issue. It's those little oddball situations that confuse the hell out of the software.
You'll run into something similar when trying to build a failsafe software solution.
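Just to make the "recognize, understand, and prioritize" part concrete, here's the easy half as a toy table (all the object classes and responses are invented). The genuinely hard part is the perception step that assigns the labels in the first place:

```python
# Toy hazard-response table: this part is the easy 99.9%. The hard part
# is the classifier that decides WHICH label an object gets; a toddler
# misclassified as a plastic bag makes this table worthless.
HAZARD_RESPONSE = {
    "plastic_bag":   "ignore",           # don't slam the brakes
    "rock":          "brake_hard",
    "kitten":        "brake_if_safe",    # permissible as roadkill, sadly
    "toddler":       "brake_hard",       # protect at (almost) any cost
    "cop_directing": "obey_gestures",
    "person_waving": "slow_and_assess",  # cop or not? software can't tell
}

def respond(detected_class: str) -> str:
    # Fail toward caution when the classifier sees something unknown.
    return HAZARD_RESPONSE.get(detected_class, "slow_and_assess")
```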
The Therac-25 had software interlocks instead of hardware interlocks.
Three people died from severe radiation overdoses because the software interlocks didn't always trigger as expected.
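The rough shape of the failure, as a toy sketch (heavily simplified, and obviously not the actual code; the real machine ran hand-written PDP-11 assembly): a software interlock is just a check-then-act on some flags, and a fast operator could slip an edit into the gap between the check and the act. A hardware interlock blocks the beam physically no matter what the flags say.

```python
import threading, time

# Toy check-then-act race, loosely in the spirit of the Therac-25 bug.
high_power = True
target_in_place = True   # target must be in the beam path at high power

def fire_if_safe():
    # Software interlock: read the flags...
    if (not high_power) or target_in_place:
        time.sleep(0.01)  # setup delay = the race window
        # ...then act on a now-stale check.
        print(f"BEAM ON: high_power={high_power}, target={target_in_place}")

def operator_edits_quickly():
    # A fast operator re-edits the prescription during the setup delay.
    global target_in_place
    time.sleep(0.005)
    target_in_place = False  # hardware retracts the target

threading.Thread(target=operator_edits_quickly).start()
fire_if_safe()  # prints "BEAM ON: high_power=True, target=False"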
Yes, the guy should have locked out. But I know from experience that it doesn't always matter, because I got bit once despite locking out an electrical panel.
Did you find out where the energy was coming from? A capacitor somewhere without any kind of discharge resistor?
*** Edit: Pondering it a bit today while driving, I'll concede the point about being able to make safe(ish) software. There are sophisticated machines out there that are quite reliant on software in order to function. (Things like the A380 or 787 come to mind.)
However, voice recognition software doesn't yet strike me as anywhere close to something I'd trust with my life, not unless you can yell in a very slow, clear manner with minimal background noise.
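If you were forced to use it anyway, the only sane design fails toward "stop." A sketch, with thresholds I've made up:

```python
# Toy sketch: fail-safe handling of a voice "stop" command. The dangerous
# error here is missing a real stop, not halting the line on a false alarm,
# so the bars are deliberately lopsided.
STOP_CONFIDENCE = 0.30  # LOW bar: stop on even a shaky match
RUN_CONFIDENCE = 0.99   # HIGH bar: resume only when nearly certain

def handle_utterance(text: str, confidence: float, running: bool) -> bool:
    """Return the machine's new running state given a recognized utterance."""
    if "stop" in text.lower() and confidence >= STOP_CONFIDENCE:
        return False  # any plausible "stop" halts the machine
    if "resume" in text.lower() and confidence >= RUN_CONFIDENCE:
        return True   # restarting demands near-certainty
    return running    # anything else: no change
```

Of course, in a noisy plant this thing would either trip constantly or miss the one yell that mattered, which is exactly the problem.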