Lesson 6 In-Class Discussion ✅

Which court and which legal system … historically, companies claimed that if they were US-based (at least in terms of where their stock was listed), then following US regulations was enough. Things like GDPR and general public outrage are thankfully changing that. Not sure if everyone has seen this, but the Guardian's Facebook Files series is fascinating, as they released a lot of leaked documents on the moderation and approval processes: https://www.theguardian.com/news/series/facebook-files.

"acts" stands for "activations".

1 Like

No worries. Just a general reminder (and we'll look at the clean Rossmann notebook tomorrow :wink: )

1 Like

Is the intergalactic court system an option? I was thinking of US courts, but I guess these companies should be held accountable anywhere they can be. Maybe public shaming is more effective than the courts?

2 Likes

The court of public opinion is always dangerous. See: Reddit misidentifying the Boston Marathon bomber.

Even though judicial systems can be cheated, they ultimately exist to protect those who are innocent and unjustly identified as guilty.

Sadly, those with money can hire people who argue their case much better.

3 Likes

Agreed on avoiding dangerous kangaroo courts, etc. At this point, though, the statutes and legal frameworks don't yet support much enforcement in the courts, so influencing companies to adjust their systems (while also working on legal frameworks that allow for a formal approach) seems necessary.

3 Likes

It’s an interesting topic. Maybe companies that care about this should provide an API to make interacting with their models possible. Something like the Rainforest Alliance, but for model diversity testing.

Also, maybe have a model-flaw bounty program as well?

Thank you, fast.ai team, for this amazing lecture, especially that last very interesting bit about ethics in data science!

And for Sylvain, since we now know he absolutely adores cats, here’s a thank you cat :wink:

[image: thank_you_cat]

13 Likes

Great class! Thanks!

You also have the issue that everyone has inherent biases that they may not even realize. When building something, the best way to avoid being exclusionary is to have a diverse group of people involved in the planning, building, testing, and implementation. Go out of your way to talk to groups you haven't consulted with. Diversity is very important!

6 Likes

Great lecture. Thanks.

Yeah, I'm thinking a bounty reward system may be useful, but what metrics would be used to determine a flaw in the system? For example, if a model is 1% better at a task for one group than another, is that something that needs to be addressed, or only if there is a >10% difference? Bug bounty programs are quite a bit less subjective, because you either have a bug or you don't. I'm not sure what would count as a flaw in a model, but I think something like that would be a useful way to crowd-source testing of these models from people with different perspectives.
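To make the metric question concrete, here's a minimal sketch of the kind of per-group comparison a bounty program might evaluate; the group labels, data, and threshold question are all made up for illustration:

```python
import numpy as np

def accuracy_gap(y_true, y_pred, groups):
    """Return the largest pairwise accuracy difference across groups."""
    accs = {}
    for g in np.unique(groups):
        mask = groups == g
        accs[g] = (y_pred[mask] == y_true[mask]).mean()
    return max(accs.values()) - min(accs.values()), accs

# Hypothetical predictions and group labels
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

gap, per_group = accuracy_gap(y_true, y_pred, groups)
# Is a 1% gap a "flaw", or only >10%? Choosing that threshold is the hard part.
print(per_group, gap)
```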

1 Like

Thanks for bringing up data science ethics, biases & responsibility. While we're having so much fun learning and developing AI models, I think we should not forget to consider not only the obvious benefits of our work to society, but also its potential harmful impact. Great lecture.

2 Likes

The issue with this is that, most often, the terrible bias that creeps in is unintentional and can be tough to anticipate and intercept. At the moment I believe that, sadly, there is no foolproof way of resolving all ethical concerns. The most one can do is expose the model to great diversity, test it, and hope for the best. However, most people aren't concerned enough to do this. I hope this will change through more stringent policy and better awareness. I also hope for better ways to find and eliminate systematic bias in the future; this is of great impact, after all!

1 Like

A constant risk in any technical field is that the metrics we use internally and communicate to wider audiences are easily misunderstood, either accidentally or intentionally.

On this note, I would also appreciate it if anyone could point me to any cool research on identifying and eliminating systematic bias in a deterministic way :slight_smile:

A horizontal edge (near a specific row) is associated with a sharp systematic change in brightness along many columns near that row.
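As an illustration (a minimal sketch; the kernel values and the use of `F.conv2d` are my own choice here, not from the lecture), a simple 3x3 kernel that responds to exactly that kind of row-to-row brightness change:

```python
import torch
import torch.nn.functional as F

# A simple horizontal-edge kernel: negative weights above, positive below,
# so it fires where brightness changes sharply from one row to the next.
kernel = torch.tensor([[-1., -1., -1.],
                       [ 0.,  0.,  0.],
                       [ 1.,  1.,  1.]]).reshape(1, 1, 3, 3)

# Toy image: dark top half, bright bottom half -> one horizontal edge.
img = torch.zeros(1, 1, 6, 6)
img[:, :, 3:, :] = 1.0

edges = F.conv2d(img, kernel, padding=1)
print(edges[0, 0])  # large responses along the row where brightness jumps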

One paper that I think is cool is “Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings” https://arxiv.org/abs/1607.06520
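The core trick in that paper is (roughly) to find a "bias direction" in embedding space and project it out of gender-neutral words. A minimal sketch of that projection step, with made-up toy vectors standing in for real embeddings:

```python
import numpy as np

# Toy stand-ins for real word embeddings (made up for illustration).
he, she = np.array([1.0, 0.2, 0.0]), np.array([-1.0, 0.2, 0.0])
programmer = np.array([0.4, 0.9, 0.3])

# Bias direction: difference of a definitional pair, normalized.
g = he - she
g /= np.linalg.norm(g)

# Hard-debias step: remove the component of a neutral word along g.
programmer_debiased = programmer - programmer.dot(g) * g
print(programmer.dot(g), programmer_debiased.dot(g))  # second value ~0
```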

3 Likes

These are the weights/parameters of the convolution layers that we are tuning using gradient descent.
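For concreteness (a minimal PyTorch sketch; the layer shape is arbitrary, not from the lecture), the kernel values live in the layer's `weight` tensor, which is a learnable parameter updated by the optimizer:

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3)
print(conv.weight.shape)          # torch.Size([8, 1, 3, 3]) -- the kernels
print(conv.weight.requires_grad)  # True: gradient descent updates these

# One optimizer step nudges the kernel values toward lower loss.
opt = torch.optim.SGD(conv.parameters(), lr=0.1)
out = conv(torch.randn(1, 1, 6, 6))
out.mean().backward()
opt.step()
```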

Not sure, KarlH. If your logic applies, it should apply to the hidden-layer activations also.

Even though we are not training multiple neural networks, we are adding randomness at the mini-batch and epoch level. So yes, I'm with jcatanza in not understanding the intuition for why dropping inputs is a bad idea.
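For reference, dropout can be applied to inputs just as easily as to hidden activations; common practice is simply a lower probability on inputs. A minimal sketch (the probabilities and layer sizes here are illustrative, not from the lecture):

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.input_drop = nn.Dropout(p=0.1)   # dropout on raw inputs (lower p is typical)
        self.fc1 = nn.Linear(20, 50)
        self.hidden_drop = nn.Dropout(p=0.5)  # dropout on hidden activations
        self.fc2 = nn.Linear(50, 2)

    def forward(self, x):
        x = self.input_drop(x)        # randomly zeroes input features during training
        x = torch.relu(self.fc1(x))
        x = self.hidden_drop(x)
        return self.fc2(x)

net = Net()
net.train()  # dropout is only active in training mode
print(net(torch.randn(4, 20)).shape)  # torch.Size([4, 2])
```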