So 4,000 Google employees signed a petition urging Google’s leadership to pull out of Project Maven, a project described as “a sweeping effort to enhance its surveillance drones with technology that helps machines think and see.” About a dozen employees also resigned over it. An article on this is available Here.
The US military, and militaries of other countries, are obviously interested in using ML for various tasks. Even if it were ethical to do so, we know these systems are never perfect, and in fact will always be tied to the imperfect training data they’re fed. Imagine a system trained on actual footage from human-controlled drone strikes targeting “bad actors.” How many of those targets were really “bad actors”? And what happens when the data is augmented for better “accuracy”?
Not in the article above, but in another I read and cannot find right now, Google stated that it would be doing nothing more than using software and algorithms already in the “public domain.” Does that mean PyTorch will be used? TensorFlow? Fast.ai?
The article above goes on to note: “Last year, several executives, including Demis Hassabis and Mustafa Suleyman, who run Alphabet’s DeepMind AI lab, and famed AI researcher Geoffrey Hinton, signed a letter to the United Nations outlining their concerns.”
I’m not sure what the right approach is to limiting the use of this imperfect and very dangerous technology in lethal applications, but a good set of international norms, similar to those in place for chemical, biological, and nuclear weapons, seems to me like a good start.
You are all very smart people or you wouldn’t be here. How can we educate our societies about the very real dangers this technology poses before it is too late?