Lesson 6 In-Class Discussion ✅

The one you pass in your Lambda layer.

Didn’t IBM release a faces dataset to train models to prevent this bias? I thought I saw something about it back in the summer.

1 Like

Thank you. I’ve been stuck on this concept for a while.

This might get answered… but since it’s not clear what latent features actually represent, how can we be sure that our models are not optimizing for ethically problematic hidden latent features?

You can’t. Even if you think you have curated your data perfectly, you need to analyze your predictions with as diverse a team as you can muster (because what you alone can think of is going to be limited).

The predictions of your model are going to reveal to you bias in your data you wouldn’t have suspected.

7 Likes

This review is a great read. It came out two days ago from Google, by Margaret Mitchell and Ben Hutchinson, titled 50 Years of Test (Un)fairness: Lessons for Machine Learning.

5 Likes

Does adjusting contrast/brightness help make models less racially biased? Would using grayscale help?

Can we use hooks to grab the FC layer before passing to the softmax in case of multiclass classification for CNNs? Are there any other ways?
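Yes — a forward hook can capture a layer’s output during the forward pass. A minimal sketch (assuming PyTorch; the toy model and layer sizes here are made up for illustration), registering a hook on the last linear layer to grab the pre-softmax logits:

```python
import torch
import torch.nn as nn

# Toy classifier: the last Linear layer produces logits; softmax comes after.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(8, 16),
    nn.ReLU(),
    nn.Linear(16, 4),  # final FC layer, before softmax
)

features = {}

def save_activations(module, inputs, output):
    # Store the raw logits emitted by the hooked layer.
    features["logits"] = output.detach()

hook = model[3].register_forward_hook(save_activations)

x = torch.randn(2, 8)
probs = torch.softmax(model(x), dim=1)  # normal forward pass
hook.remove()  # detach the hook once you have what you need

# features["logits"] now holds the activations captured before softmax.
```

Another common alternative is simply truncating the model (e.g. slicing off the head) and calling the truncated model directly, which avoids hook bookkeeping.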

Joy Buolamwini and Timnit Gebru released a data set as part of their work (see gendershades.org for more).

IBM did respond to the release of their results.

2 Likes

Another definition question: is a latent feature one of the features of the final feature map?

1 Like

I am thinking of the inputs to the first layer – the training data itself. Applying dropout to the inputs shrinks the size of the training set, which allows the model to train faster.

Some of these face recognition police technologies look a bit like something taken from a cyberpunk novel :sweat_smile:

1 Like

This question is very hard when it comes to ‘races’ that can come in different colors. Detecting race is very tricky; I wonder if it’s even possible to build a good model for it.

For a visual overview of the forward pass in a ConvNet, I built these spreadsheets; they also have a nice little heat map made with conditional formatting :slight_smile:

Supporting blog post here.

12 Likes

How are the values of the convolutional kernels usually selected?

This is answered above: kernel values are weights and are trained with backpropagation.

2 Likes

Not always. Some kernels can be “designed” using fixed values, like edge detectors.

I really love that Jeremy and Rachel are talking about this topic. Very impactful, and a good thing for us to remember as we learn this.

13 Likes

That wouldn’t shrink the inputs. That would just randomly feed 0s in place of real data.
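To illustrate that point (a sketch, assuming PyTorch): input dropout keeps the tensor the same shape and just zeroes entries at random, scaling the survivors by 1/(1-p) so the expected activation is unchanged.

```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
drop.train()  # dropout is only active in training mode

x = torch.ones(1, 10)
y = drop(x)

# y has the same shape as x: roughly half the entries are zeroed,
# and the surviving ones are scaled to 1 / (1 - 0.5) = 2.0.
```

So the training set is never "shrunk" — each batch is the same size, with some values replaced by zeros.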

You can fix kernel values and then freeze those layers, but we usually don’t do that, since networks find features better than humans do.
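As a concrete example of a hand-designed kernel (a sketch, assuming PyTorch; the tiny test image is made up), here is a Sobel filter applied with `F.conv2d` — no training involved, the values are simply fixed:

```python
import torch
import torch.nn.functional as F

# Hand-designed vertical-edge detector (Sobel): fixed values, not learned.
sobel_x = torch.tensor([[-1., 0., 1.],
                        [-2., 0., 2.],
                        [-1., 0., 1.]]).reshape(1, 1, 3, 3)

# Tiny image: dark left half, bright right half.
img = torch.zeros(1, 1, 5, 5)
img[..., :, 3:] = 1.0

edges = F.conv2d(img, sobel_x, padding=1)
# The response is strongest along the vertical boundary between the halves.
```

To use such a kernel inside a network, you would copy it into an `nn.Conv2d` layer’s weight and set `requires_grad = False` so backprop leaves it untouched.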