ETHICS of Deep Learning / AI

A place to post anything about ethics and Deep Learning / AI


BerkeleyAI Meetup is hosting an event entitled “Open discussion: Research papers on bias and fairness in AI/ML” on April 25th, 2019, 6-8pm.

They recommend some pre-meetup reading and have ~10 links to relevant papers.

@rachel @nbharatula


This is great, and I’m glad to see this discussion opening up. One thing I suggest all researchers understand is that, regardless of whose direction you are working under, the person writing the code is ultimately responsible for any undue harm or bias that occurs — so be mindful of this before deployment.
I think that this essay offers some valuable insight.

Cross-posting to a wider audience (not sure if everyone can access part 2 yet):

The AI Ethics landscape is large, complex, and still emerging. I am unpacking all of its dimensions over 3 posts. Sharing part 1 (also on Twitter in case you’d like to share!):

The Hitchhiker’s guide to AI Ethics

I look forward to feedback from the community!

This might interest you: Facebook and Udacity recently announced a new scholarship challenge, Secure and Private AI.

It seems to be more focused on privacy than on ethics in general, but it might still be interesting and, at the very least, connect you with other students thinking about these topics.

The course is free, and applications are open until May 21st.


The line “Can I trust an autonomous vehicle to have seen me?” motivated me to contact a friend at Cruise Automation and ask whether they have thought of giving their cars a way to communicate that they have seen me. As a cyclist, I try to make eye contact with drivers. Is there an appropriate equivalent for a self-driving car?


There have been some attempts by startups building autonomous vehicles — from giving the car giant eyes that turn and look at you, to a continuous stream of messages on the vehicle’s body indicating what it is seeing at that moment (I have seen this on the roads). I’m not aware of studies on how effective these really are.


There is interesting research looking at the interior and exterior sound design of future electric vehicles.

Your question made me wonder if anyone has looked at emitting external noise only when required — avoiding noise pollution while enabling context-aware sounds (perhaps high-pitched sirens, inaudible to humans, for an animal on the road; directional warning sounds for pedestrians and cyclists).


My friend from Cruise replied to my enquiry by saying, “I think this is a very interesting problem in the autonomous driving space,” and mentioned that they have a few patents on related mechanisms.

This article he shared notes that companies are looking at reducing noise pollution:


Is there anything we as a community can do to prevent technology from being used for warfare?
One thought would be for people inventing things to release them under some kind of “only for peace” license.

It feels quite terrifying that autonomous killer drones are already in use.


Equity and Ethics in Data Journalism course from the Knight Foundation

I had never considered something like that peaceful license. I’m curious about that too.

On a personal level, what are a few suggestions for avoiding becoming a Bond villain as one grows more powerful with deep learning?

Question inspired by the talk “Artificial Intelligence needs all of us” by Rachel Thomas.

2020: A Critical Inflection Point for Responsible AI with Rumman Chowdhury, Accenture