Lesson 5 - Official Topic

To follow up on Joseph Redmon’s quote, there’s some nice discussion on this Reddit thread.

4 Likes

Even more, can we expand this to the question: how do we build mutual feedback approaches, where humans reduce the bias of models and models reduce the bias of humans?

2 Likes

Should we have a fastai notebook that solves some basic ML problem and shows how a person can systematically remove such biases? For example: how just removing the gender variable doesn’t remove the gender bias (other features can act as proxies for it), but multiple other things can be done to remove that bias. A minimal sketch of the proxy effect is below.
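Here’s a quick sketch of that proxy problem on synthetic, hypothetical data: even after dropping the gender column (“fairness through unawareness”), a correlated feature lets the model reconstruct the biased outcome.

```python
# Minimal sketch (synthetic, hypothetical data): dropping the gender column
# does not remove gender bias when another feature acts as a proxy for it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
gender = rng.integers(0, 2, n)                 # protected attribute
proxy = gender + rng.normal(0, 0.3, n)         # e.g. a job code correlated with gender
skill = rng.normal(0, 1, n)                    # legitimate feature
# Historical labels are biased: they partly reward gender itself
y = (skill + 1.5 * gender + rng.normal(0, 1, n) > 1).astype(int)

X_no_gender = np.column_stack([proxy, skill])  # gender column removed
model = LogisticRegression().fit(X_no_gender, y)
preds = model.predict(X_no_gender)

# The model still treats the two groups very differently, via the proxy
print("positive rate, gender=0:", preds[gender == 0].mean())
print("positive rate, gender=1:", preds[gender == 1].mean())
```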

1 Like

Reminder: my invitation for questions for the AMA Interview with Rachel is still open, if you’d like me to ask anything during the podcast interview.

3 Likes

I’m really sad that something that could be used for so much good is being completely misused. The escalation problem (starting with computer vision for detecting funny faces and ending with mass surveillance of people) feels out of our control and makes me feel so powerless :frowning_face:

I find this topic incredibly interesting and of paramount importance.
Here is a list of links to papers/repos/tools I am currently going through to understand how to check for bias in your datasets/algorithms AND how to address that bias:

AIF360 (ironically from IBM…) is the most complete tool I have explored so far.
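To give a flavor of what it offers, here’s a minimal sketch of checking a dataset for bias and applying one of AIF360’s pre-processing mitigations. The data here is synthetic and hypothetical; only the AIF360 calls are real.

```python
# Minimal AIF360 sketch: measure disparate impact, then reweigh examples.
import numpy as np
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Hypothetical biased data: the privileged group (sex=1) gets the
# favorable label more often
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({"sex": rng.integers(0, 2, n),
                   "feature": rng.normal(size=n)})
df["label"] = ((df["feature"] + df["sex"] + rng.normal(0, 1, n)) > 0.5).astype(int)

dataset = BinaryLabelDataset(df=df, label_names=["label"],
                             protected_attribute_names=["sex"])
privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Disparate impact far from 1.0 is a red flag
metric = BinaryLabelDatasetMetric(dataset, unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
print("disparate impact:", metric.disparate_impact())

# Pre-processing mitigation: reweigh examples so group/label
# combinations are balanced before training
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)
```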

15 Likes

Here’s a scenario: a self-driving car is in a position where it can save either the people inside or the pedestrians, and the accident is bound to happen. How will the AI decide whom it is going to save? And if it chooses either of them, is it safe to say that it’s because of bias in the model?

1 Like

Maybe only the obvious bias information has been removed. A model can still pick up what some people can still see in the modified images, like the different body proportions for male vs. female, weight, etc.

1 Like

Powerful things can be used and misused.

The first hominid in eastern Africa to learn how to start fire was probably astounded by the warmth and ability to cook food. I wonder how long it took for someone to discover they could use it to burn a rival’s dwelling?

I find this class to be empowering, because people like us can spread the word and be vigilant. Jeremy’s mask odyssey is a lesson on this.

Think about his first comments on it, and how hopeless it seemed to spread the word on them. We can do the same here. :slight_smile:

3 Likes

The idea would be to avoid getting into such a situation altogether. Remember, the self-driving car will be an order of magnitude better driver than you will ever be.

How about this for an idea? A system that tells YOU when you are reading something that confirms your own biases AND one that tells you when you are reading something that supports the opposite view?

2 Likes

Thanks for replying. I agree there is a huge ethical error at that step. The problem is that the technology is openly available, those governments exist, and they are not following any ethical principles.

1 Like

I have to say, ethics as a junior developer is really hard. I didn’t know enough about the domain to know what was unethical, even though I was making changes with fairly large impact on customers. I more or less tried to be as transparent as possible, but beyond that I felt there was very little I could do at the time.

1 Like

Let’s just say the AI found itself in such a situation and it’s unavoidable.

Thanks, that is empowering, and it’s true: sharing information and the right message can change how things are done in the future. Thanks for cheering me up :slight_smile:

1 Like

That’s the problem with the straw-man argument, though. These kinds of accidents are completely avoidable; it’s never completely unavoidable. Models don’t take risks, for example, that are not already built in.

@rachel - In banking there is an independent model risk management group that validates/checks all models. Do you think there are ways to implement similar independent reviews in other industries without slowing down innovation?

If yes, this would of course add a visible cost but a possibly invisible benefit, so how do you convince management to do this in the absence of regulation (unlike in banking)?

2 Likes

@jeremy - Do you think transfer learning makes this tougher, i.e. auditing the data that led to the initial model?

4 Likes

Great question! Additionally, can we “fix” a biased model via transfer learning by training it further on non-biased data? Asking @rachel :slight_smile:
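As a concrete sketch of what that fix might look like with fastai (the curated dataset and its path are hypothetical): start from pretrained weights, which may carry their own biases, and fine-tune on carefully curated data. Whether this actually removes the inherited bias is exactly the open question.

```python
# Minimal fastai sketch: fine-tune a pretrained model on curated data.
from fastai.vision.all import *

path = Path("curated_debiased_images")   # hypothetical curated dataset
dls = ImageDataLoaders.from_folder(path, valid_pct=0.2, item_tfms=Resize(224))

# Start from ImageNet-pretrained weights, then fine-tune on the curated data;
# biases learned during pretraining may or may not be trained away
learn = cnn_learner(dls, resnet34, metrics=accuracy)
learn.fine_tune(3)
```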

5 Likes

Thinking about Jeremy’s question about complex models vs. simple models:
I think with a complex model it’s harder to figure out what it’s actually doing, which discourages people from taking a closer look. With a simple model, errors and problems are easier to see. So if there is some obvious bias or error, you will likely find it more quickly in the simpler model?
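A minimal sketch of that contrast, on synthetic, hypothetical data: a logistic regression exposes one weight per feature, so a suspicious weight on a sensitive feature is visible at a glance, while a deep network offers no such direct readout.

```python
# Minimal sketch: a simple model can be audited by reading its coefficients.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))           # features: income, age, gender_proxy
y = (X @ np.array([1.0, 0.1, 2.0]) + rng.normal(0, 0.5, 1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)
for name, w in zip(["income", "age", "gender_proxy"], model.coef_[0]):
    print(f"{name}: {w:+.2f}")           # a large weight on gender_proxy stands out
```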

1 Like