Lesson 5 - Official Topic

Thanks for replying. I agree there is a huge ethical error at that step. The problem is that the technology is openly available, those governments exist, and they are not following any ethical principles.

1 Like

I have to say, ethics as a junior developer is really hard. I didn’t know enough about the domain to know what was unethical, even though I was making changes with a fairly large impact on customers. I more or less tried to be as transparent as possible, but besides that I felt like there was very little I could do at the time.

1 Like

Let’s just say the AI found itself in such a situation and it’s unavoidable.

Thanks, that is empowering, and it is the truth; sharing the right message will be able to change how things are done in the future. Thanks for cheering me up :slight_smile:

1 Like

That’s the problem with the straw man argument, though. These kinds of accidents are completely avoidable; it’s never completely unavoidable. Models don’t take risks, for example, that aren’t built in already.

@rachel - In banking there is an independent model risk management group to validate/check all models. Do you think there are ways to implement similar independent reviews in other industries without slowing down innovation?

If yes, this would of course add visible cost but possibly invisible benefit, so how do you convince management to do this in the absence of regulation (like in banking)?

2 Likes

@jeremy - Do you think transfer learning makes this tougher - auditing the data that led to the initial model?

4 Likes

Great question! Additionally, can we “fix” a biased model via transfer learning by training it further on non-biased data? Asking @rachel :slight_smile:
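
Roughly, I’m imagining something like the sketch below (fastai-style; the curated dataset path and labeling rule are made-up placeholders) - the open question being whether fine-tuning on audited data actually undoes bias baked into the pretrained weights:

```python
from fastai.vision.all import *

# Hypothetical: a dataset that has been audited for balance across the
# groups we care about. The path and labeling rule are placeholders.
curated_path = Path('data/curated_images')
def get_label(fname): return fname.name.split('_')[0]

dls = ImageDataLoaders.from_name_func(
    curated_path, get_image_files(curated_path),
    label_func=get_label, item_tfms=Resize(224))

# Start from pretrained (possibly biased) ImageNet weights, then
# fine-tune on the curated data. fine_tune() trains the new head first,
# then unfreezes and trains the whole network.
learn = cnn_learner(dls, resnet34, metrics=accuracy)
learn.fine_tune(3)
```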

5 Likes

Thinking about Jeremy’s question about complex models vs. simple models.
I think with a complex model it is harder to figure out what it’s actually doing, and that discourages people from taking a closer look. With a simple model, errors and problems are easier to see. So if there is some obvious bias or error, you will likely find it quicker in a simpler model?
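
To illustrate (a toy sketch with made-up data and feature names): with a linear model, the “explanation” is just the learned coefficients, so a surprising weight on a feature that shouldn’t matter is easy to spot.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up data: three features, where 'feature_c' should be irrelevant.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# With a simple model, the "explanation" is just the coefficients;
# a surprisingly large weight on an irrelevant feature is easy to spot.
for name, coef in zip(['feature_a', 'feature_b', 'feature_c'], model.coef_[0]):
    print(f'{name}: {coef:+.2f}')
```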

1 Like

Physics version of the philosophy tool - if there is an exception to a rule, and the exception can be proven, the rule is wrong.

3 Likes

My question didn’t get asked. Would be great if you can ask this to @rachel during your interview. :slight_smile:

You’ll need more "like"s! :slight_smile:

To start, how do we confirm data is not biased?

I think there has been good progress on attributing feature input values to the downstream prediction of the model, for black-boxy models like deep nets. However, I’m not sure what people do with that information to reduce bias. Removing biased features, along with features strongly correlated with them, seems to be a common approach.
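
For example, a sketch of that workflow with scikit-learn (the CSV, column names, and the 'zip_code' proxy feature are all hypothetical, and the columns are assumed to be numeric already): permutation importance attributes the model’s performance to each input feature, and a correlation check shows which other columns would leak the removed feature back in.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical tabular data; 'zip_code' stands in for a proxy feature
# that may be strongly correlated with a sensitive attribute.
df = pd.read_csv('applications.csv')
X, y = df.drop(columns=['approved']), df['approved']
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=42)

model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

# Attribute predictions to input features: how much does shuffling
# each column hurt validation performance?
imp = permutation_importance(model, X_val, y_val, n_repeats=10,
                             random_state=42)
for name, score in sorted(zip(X.columns, imp.importances_mean),
                          key=lambda t: -t[1]):
    print(f'{name}: {score:.3f}')

# Dropping a sensitive feature is not enough if proxies remain; check
# which remaining columns correlate with it before removing it.
print(X.corr()['zip_code'].sort_values())
```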

I also had a question around this; I wished it could get asked, so I’ll repeat it:

Some friends in Mexico do not believe in COVID because of some Facebook ads that have been spreading disinformation. Is it really technology’s responsibility to fix itself (fixing the disinformation with another AI model)? Would that then control what people get to see? Should people be more involved, or how can technology help detect and prevent disinformation?

1 Like

This is a real issue. Also, a lot of the time, if you are in a minority it is genuinely hard to raise issues even if you have the domain knowledge to do so. However, being aware of issues and possible blind spots arms you with the right questions to ask when you’re able to!

The onus really must lie with the organisation to train its developers and data scientists or hire an inter-disciplinary team to mitigate ethical issues.

Twitter Profile Link: @infodemicblog

I think FB should flag this as misleading. They should have some responsibility.

I think this requires human inspection, preferably someone who has domain knowledge on the dataset.

Is there a resource to get a quick glance at the ethical issues/debacles companies have had, per company? Think BBB.

1 Like