Lesson 5 - Official Topic

Link to the article: The fundamental problem with Silicon Valley’s favorite growth strategy

4 Likes

I believe this strategy is going to change after COVID. I think we will be more likely to create and support products that make us more resilient, rather than optimizing for better, faster, cheaper.

Do we think that Deep Learning, with its ability to recognize patterns, will be able to recognize ethical behavior? Will it ever be able to do a better job than humans?

3 Likes

What are some strategies for finding the disconnect between the metric you’re optimizing and your actual goal?

1 Like

Is it possible to apply the philosophy of Dropout to metrics and error calculations, to avoid undesirable positive feedback loops?

This is such a great topic – I wanted to share a real-world example of trying to do data science ethically. I was asked to build a candidate recommendation engine at work about a year ago, and it immediately set off some red flags, so I posted here and started reviewing some of the great resources around here.

It’s been a major uphill battle with the business / product team to avoid bias, negative feedback loops, etc. It is REALLY hard to explain to business stakeholders that machine learning doesn’t just fix everything, and that you still need people. It took months of back and forth, but we eventually decided to abandon the recommendation engine and refocus the product to give some tools to help humans review candidates, rather than trying to automate everything.

Original post here:

8 Likes

Is it always possible to reach a consensus on what is ethical?

1 Like

Why are the current platforms unique in spreading disinformation? And do you think the platforms that encourage disinformation are also the means that cast light on the problem?

Some friends in Mexico do not believe in COVID because of Facebook ads that have been spreading disinformation. Is it really technology's responsibility to fix itself (countering disinformation with another AI model)? Would that then control what people want to see? Should people be more involved, and how can technology help detect and prevent disinformation?

From mohamedelhassan of TWiML Study Group: I think that ethics are very important, but in many cases (e.g. the Volkswagen diesel emissions scandal) management drives (and sometimes even rewards) unethical behavior. What can an individual engineer do in a case like this? Especially in a place like Silicon Valley, where people move between companies so often?

How do we make Ethics in AI sexier? I feel it is a similar problem to trying to make people think about the environmental consequences of training a large model for a long time.

1 Like

I think making Ethics sexier is the first step :slight_smile:

1 Like

Q for Rachel from naneetkrch of TWiML study group: “How will AI help in tackling Fake news/propaganda?”

1 Like

I like this. Including a human in the loop augments the human, rather than replacing the human with metrics that can give faulty results.

2 Likes

I came across another problem related to data ethics. When we use Beautify mode on recently developed smartphone cameras, the algorithm tends to lighten the skin tone to "beautify" the image, which I believe is somewhat related to the examples where we got "healthy skin" results on Google.

I’ve been worried about COVID-19 contact tracing and the erosion of privacy (location tracking, private surveillance companies, etc.) - what can we do to protect our digital rights post-COVID? Can we look to any examples in history of what to expect?

7 Likes

I’ve seen results like this. I think it correlates to where the app was developed sometimes. Beauty is subjective and different countries view beauty in ways that are not always inclusive of all types of people.

Does this have a technology solution? Or are we looking for the keys where there is light?

My 2 cents: from an impact perspective it does not matter! If AI Ethics focuses on ensuring its impact is fair/just/ethical then it has to drive accountability for all decisions/actions, intentional or not.

2 Likes

I feel like there have always been disinformation/ethics issues in media. Why is this more relevant now? Is it because media is not as centralized? Should companies like Facebook decide what content to show us because they own the algorithms they develop?

1 Like