I thought about this for a while and almost fell into a rabbit hole. Incidentally, Wikipedia has an informative page on this.
Not answering your questions directly, but from an engineering standpoint I think it is imperative that we, as software engineers, think of ways to create ethical AI software: asking the right questions of stakeholders, ensuring to the best of our ability that training data is unbiased and anonymized, ensuring the decisions the software makes are explainable, favoring open software over closed, having empathy towards society, and taking a stand whenever necessary.
I am sure the team who created the Compas software (as mentioned in the article) would not have wanted the incident to happen. It would be interesting to know what their retrospective concluded. Unfortunately, the entire software and the team are behind a wall of secrecy.
He also said data from the criminal justice system was often unreliable and that it could call into question results from those algorithms.
Mr. Edwards urged caution, calling for testing of the results and the elimination of any prejudices in the algorithms.
“I think we are kind of rushing into the world of tomorrow with big-data risk assessment,” he said, “without properly vetting, studying and ensuring that we minimize a lot of these potential biases in the data.”
I harken back to the days at defense contractors in the US where scientists and engineers were asked to build a black box, but weren’t told what it was for. Then some of them actually quit their jobs when they found out it was for weapons systems.
The ethics are a good question to ask. But even more so is the legality/liability of these situations. It’s tricky/troubling/exciting/crazy times we’re heading for. Let’s all hope cooler heads prevail than the current orange one sitting on PennAve.
Given the right situation, I would say in a big enough company you could refuse to work on a certain project. But in a 5-10 person startup, you’d probably be shown the door.
There is a growing awareness of ethical responsibility in the industry. The Information Technology Industry Council, a group that represents the IT industry giants, just released the first-of-its-kind principles to advance the benefits and responsible growth of AI. 
I guess interpretability of models will be a big challenge.
However, with the advancement of using AI to build AI systems, how do we “encode” machine ethics?
Towards Practical Neural Network Meta-Modeling
I was interested in this case and the petition to the SCOTUS a few months ago:
Here is some background:
I don’t think I have a problem with algorithms being used for these kinds of tasks (especially if they demonstrably outperform judges), but the secrecy surrounding the inner workings of the algorithms is concerning.
edit: Hah, I just realized I linked the same article in the OP. Sorry about that! For some reason I only saw the facial recognition in China video - should have read more carefully.
A bit tangential to the discussion ongoing here, but Cathy O’Neil has written/published a book that touches on some of these issues. I’d highly recommend a read. https://weaponsofmathdestructionbook.com
This caught my eye. A few months ago, I started working for a company whose primary customers are military and government. Since I do not hold a security clearance and am only a green card holder from Japan, I must be escorted at all times when I am at a client site - even to go to the bathroom!
I do question how effective it is to build a system for which you cannot see the actual data - which is nothing like any other “black boxes” engineers have built in the past. It has only been 4 months since I started learning AI and deep learning. And I am aware of the biases that become a part of it, whether intentional or unintentional. Maybe I am being naive, but I still hope I can be a part of reducing biases and making sure what I create for work is ethical.
This reminds me of something called homomorphic encryption, which according to the wiki is a form of encryption that allows computations to be carried out on ciphertext, generating an encrypted result which, when decrypted, matches the result of the same operations performed on the plaintext.
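To make that definition concrete, here is a minimal sketch of the idea using textbook RSA, which happens to be multiplicatively homomorphic: multiplying two ciphertexts yields an encryption of the product of the plaintexts. This is purely illustrative - the tiny primes and lack of padding make it wildly insecure, and practical homomorphic computation uses dedicated schemes (e.g. Paillier for addition, or lattice-based schemes) - but it shows the core property of computing on data you cannot read:

```python
# Toy multiplicative homomorphism via textbook RSA.
# Insecure by design: tiny primes, no padding - illustration only.

p, q = 61, 53            # small primes (never do this in practice)
n = p * q                # public modulus: 3233
phi = (p - 1) * (q - 1)  # Euler's totient: 3120
e = 17                   # public exponent, coprime to phi
d = pow(e, -1, phi)      # private exponent (modular inverse, Python 3.8+)

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

m1, m2 = 7, 6
c1, c2 = encrypt(m1), encrypt(m2)

# Multiply the ciphertexts without ever decrypting the inputs...
c_product = (c1 * c2) % n

# ...and decrypting the result gives the product of the plaintexts.
assert decrypt(c_product) == (m1 * m2) % n
print(decrypt(c_product))  # 42
```

The point is that whoever performs the multiplication never sees 7 or 6 - only their ciphertexts - which is exactly the property that makes training models on encrypted data conceivable.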
This concept of anonymizing the original data - for sensitivity and regulatory reasons - while preserving the structure that is essential for ML is being somewhat pioneered by the folks at Numerai, a hedge fund driven by algorithms developed on homomorphically encrypted data. You should definitely check it out.
Based on the same concept, Andrew Trask, a researcher at Oxford and author of the Grokking Deep Learning book, recently started an open source deep learning project based on homomorphically encrypted data; the project was on GitHub’s trending list in September 2017.
Through the battle against the epidemic, artificial intelligence has demonstrated its value in helping humans respond to public safety emergencies during a major disease outbreak. On the front line, a series of products and solutions based on big data and AI technology have emerged, playing a role in virus research, epidemic prevention and control, vaccine research and development, and the release of epidemic information. It can be said that this is the first time AI has formed real operational capability in such a crisis.
"At the moment, AI is entering a new stage of large-scale implementation. We are in a critical period when AI enters the field, enters the mining area, and enters thousands of households. We believe that AI technology will Penetrate into every corner of life to build a better society. automation in healthcare, retail, finance, IT will help a lot in industrial automation.