On Saturday, 1 February 2014, at 06:31:07, Sam Gordon wrote:
I liked it, but the realistic portions were destroyed by the A.I. 'argument' with the operator.
I have a hard time believing that to be the future, especially when the video itself shows the A.I. making a poor decision and getting the craft destroyed. I can't imagine a military buying weapons that can routinely overrule their chain of command.
When you put it like that, sure. But put it not as "overruling the chain of command" but as "correcting operator mistakes in accordance with procedures", and it's a whole different story! The latter creates the appearance that a drone would not be able to make any "wrong" decisions, as everything it does would be based on procedures written and implemented by "the right people". What gets hidden in such a scenario is that (obviously):
- procedures are bound to contain mistakes themselves (oh, the irony of unintended consequences!);
- the people implementing them will make mistakes.
But that will not stop the introduction of such drones, if they are properly packaged in marketing mumbo-jumbo. Never underestimate the power of new shiny toys for the uniformed (just one letter away from "uninformed", eh?) boys!
Now explain to me that the female voice was actually a higher-ranking secondary operator, and I'd start to see the benefits.
*Operator 1: do not want to sacrifice craft for discovery of enemy.*
*Operator 2: override previous command; potential enemy discovery is of greater priority.*
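The arbitration logic behind that scenario would be trivial to write down. A minimal sketch in Python, purely for illustration -- the class names, the numeric rank scheme, and the example actions are all made up here, not taken from the video:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Command:
        issuer: str   # operator identifier
        rank: int     # higher number = higher authority
        action: str   # e.g. "hold position", "pursue contact"

    class CommandArbiter:
        """Holds the currently effective command; a new command takes
        effect only if its issuer's rank is at least that of the
        issuer of the command it would override."""

        def __init__(self) -> None:
            self.current: Optional[Command] = None

        def submit(self, cmd: Command) -> bool:
            if self.current is None or cmd.rank >= self.current.rank:
                self.current = cmd
                return True   # accepted, possibly overriding
            return False      # rejected: insufficient rank

    arbiter = CommandArbiter()
    arbiter.submit(Command("operator 1", rank=1,
                           action="hold back, do not sacrifice craft"))
    # Higher-ranking operator 2 overrides operator 1's command:
    arbiter.submit(Command("operator 2", rank=2,
                           action="pursue contact, discovery has priority"))
    print(arbiter.current.action)  # -> pursue contact, discovery has priority

The hard part, of course, is not the arbitration rule but deciding who gets which rank, and whether the "operator" at the top is a person at all.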
Making a superior officer appear to be 'Siri' could reduce confrontation and the feeling of being overruled, and thus increase individual operator happiness, job satisfaction, etc. It could also reduce the chances that an operator takes personal responsibility for their actions.
"That stupid AI fucked up, not me. I KNEW they were innocent!"
Consider:
http://en.wikipedia.org/wiki/Firing_squad#Blank_cartridge
http://en.wikipedia.org/wiki/Diffusion_of_responsibility

--
Regards,
rysiek