AI/ML Ethics, Biases, & Responsibility

Ethics of AI/ML is relevant to the entire fast.ai community. This thread is visible to all members across courses and is a place to post and discuss any ethics-related content. It also pulls in resources from ethics threads in course-specific categories.

Learning Resources (courses, videos, blogs, etc.)

Books & Courses:

Blogs, Tweet threads & Links:

Videos, Podcasts, Talks:

Community (people, conferences, events, etc.)

People:

Centers, Institutes, etc.:

Conferences, Events, Meetups, etc.:

Research (papers, reports, standards, etc.)

| Category | Title / Link | Summary |
| --- | --- | --- |
| General | In Favor of Developing Ethical Best Practices in AI Research | Best practices for making ethics part of your AI/ML work. |
| General | Ethics of algorithms | Maps the debate around the ethics of algorithms. |
| General | Mechanism Design for AI for Social Good | Describes the Mechanism Design for Social Good (MD4SG) research agenda, which applies insights from algorithms, optimization, and mechanism design to improve access to opportunity. |
| Bias | A Framework for Understanding Unintended Consequences of Machine Learning | Provides a simple framework for understanding the various kinds of bias that can occur in machine learning, going beyond the simplistic notion of dataset bias. |
| Bias | Fairness in representation: quantifying stereotyping as a representational harm | Formalizes two notions of representational harm caused by "stereotyping" in machine learning and suggests ways to mitigate them. |
| Bias | Man is to Computer Programmer as Woman is to Homemaker? | Paper on debiasing word embeddings. |
| Accountability | Algorithmic Impact Assessments | AI Now paper defining processes for auditing algorithms. |
| Guidelines | Ethics Guidelines for Trustworthy AI | Report by the European Commission's High-Level Expert Group on AI. |
| Guidelines | Ethics of AI in Radiology | North American and European multi-society report. |
| Guidelines | ITI AI Policy Principles | ITI report. |

Second- & Third-Order Effects


@rachel, does it make sense to wikify this? Also, is there a way admins can fold other related threads into this one… I copied the links over but some of them have good conversations that we can archive under this master thread. Thanks!

@jamesrequa @init_27: pulled out content from your awesome threads so it's accessible to all. Please check if I missed anything. I couldn't figure out a way to make the conversations also available to all… some of them are worth reviewing often! If there is a way to pull in or embed whole threads, please let me know.


@nbharatula great idea and thank you very much for doing this!! It looks great 🙂


Thanks for posting and collecting all these valuable resources here!


New Paper on Bias in Analogies
Summary: it turns out "Man is to Computer Programmer as Woman is to Homemaker" came about because of an external constraint placed on the model: it was not allowed to return words/vectors too close to any of the input vectors!

The authors recommend listing the top-N returned words, without any constraints on the model, to determine real bias (as opposed to sensationally cherry-picking returned words). So I did just that, and here is what I found for the query "boy is to handsome as girl is to ?" when the model was trained on Reddit:

[screenshot: top-N words returned for "boy is to handsome as girl is to ?"]

The same query in reverse ("girl is to handsome as boy is to ?") on the same data/model returns:

[screenshot: top-N words returned for the reverse query]
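If you want to try this kind of unconstrained query yourself, here is a minimal sketch in Python. It assumes gensim ≥ 4 and some word2vec-format embedding file (the path below is just a placeholder), and contrasts gensim's built-in analogy query, which silently excludes the input words from its results (the very constraint the paper criticizes), with a plain top-N ranking over the whole vocabulary:

```python
# Minimal sketch: analogy queries with and without the "exclude input
# words" constraint. Assumes gensim >= 4; the embedding path is a
# placeholder -- substitute any word2vec-format file you have locally.
import numpy as np
from gensim.models import KeyedVectors

vecs = KeyedVectors.load_word2vec_format("embeddings.bin", binary=True)

# Constrained query: gensim's most_similar drops the input words
# ("boy", "handsome", "girl") from the returned neighbors.
print(vecs.most_similar(positive=["girl", "handsome"],
                        negative=["boy"], topn=10))

# Unconstrained query: rank the entire vocabulary by cosine similarity
# to the raw analogy vector, input words included.
target = vecs["handsome"] - vecs["boy"] + vecs["girl"]
target /= np.linalg.norm(target)
sims = (vecs.vectors @ target) / np.linalg.norm(vecs.vectors, axis=1)
for i in np.argsort(-sims)[:10]:
    print(vecs.index_to_key[i], round(float(sims[i]), 3))
```

With the unconstrained version, one of the input words (often "girl" itself) typically tops the list, which is exactly why constrained analogy queries can look more sensational than the embedding space really is.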

The paper tests across three word-embedding spaces: Google News, de-biased Reddit, and original Reddit. Google News and de-biased Reddit produced slightly better output.

Which brings me to an important question/observation: why do NLP researchers rely on Reddit for training data when it's well known to be biased (see this and this)? Even GPT-2 is trained on links recommended on Reddit. From OpenAI's blog:

“In order to preserve document quality, we used only pages which have been curated/filtered by humans—specifically, we used outbound links from Reddit which received at least 3 karma.”

Also, the paper says analogies aren’t the best way to identify bias in existing text. If so, what is?
