Lesson 5 - Official Topic

@nbharatula I’m going to dig a little deeper. While I agree that it’s important to consider how Rachel feels about it and how Jeremy responds, that line of thought raises the question of whether something is “right”/acceptable/appropriate based on how the receiver feels about it. Many people in a vulnerable (inequitable) position are repeatedly treated a certain way over long periods of time. They may come to perceive it as the “normal” state. So, is it fine then?

1 Like

@rachel According to the NY Times, Apple and Google are building software into smartphones to alert people if they’ve come into contact with someone who had coronavirus. Apparently, it will be built into the operating system of iPhone and Android devices to “constantly log other devices they get close to, enabling ‘contact tracing’ of the disease”. What could possibly go wrong with THAT?

Added the ethics discussion thread as a link to the wiki. Really great resource, thanks for putting that together @nbharatula!

1 Like

Perhaps I should have clarified further. No, I don’t think it is just about how the receiver feels, but in this case I felt that was an important factor, given the observation was a result of topic, personalities, context, etc.

As a society we do have laws as well as unwritten social norms that dictate what is right/acceptable… these vary with time and context and culture… but do exist everywhere.

Fantastic lecture! I just finished watching it and I feel I now have reading material for as long as the confinement lasts :open_book:

SHOULD FACIAL RECOGNITION BE BANNED?

I am posting about this because @rachel emphasized the importance of developing good policies, and right now the EU Commission is asking AI developers and deployers, companies, academics, citizens, etc. whether facial recognition should be banned and, if not, under which conditions it should be allowed. This question is part of the public consultation on the White Paper on Artificial Intelligence - a European Approach. I made a short video on YouTube to raise awareness about it, but here is the critical information you need to be aware of:

I believe this is a very important topic, and it is very unfortunate that due to another very important topic (COVID-19) it is not receiving any media attention at the moment. Most people I have talked to, despite living in the EU and working in AI, are not even aware of it… so I would like you to be part of the minority that is!

Until 31 May 2020 (midnight Brussels time) you have the chance to make sure your voice is heard by responding to this EU questionnaire. :eu:

3 Likes

Would these tools have privileged rights and thus be suspect themselves? It’s not impossible.

But then you hide the 20% (80/20 rule) who intend no benefit to the user.

Trust no one???

@rachel About Claire Wardle’s trumpet of amplification: are there documented cases in which politicians planted seeds on 4chan / 8chan to serve their agenda?

:wave:t6: I wanted to share some of my work on this critical topic – I research Ethics, Technology and Human Rights at Harvard, and I focus on extending the conversation from western norms of ethics, to thinking of other ethical systems that could provide stronger protections and inclusion.

I’m writing a book chapter to be released very soon, but here’s a short piece on an alternate conception of ethics and technology.

Hi sabzo, I hope you are having a truly wonderful day!

I found your post interesting, informative, and insightful.

Given that the points you mention have occurred throughout history to some degree whenever new technology has been deployed, from technologies as simple as guns and steel and other factors (https://en.wikipedia.org/wiki/Guns,_Germs,_and_Steel):

Is it almost inevitable that, with probably the greatest technology that man has created so far (when I say greatest, I mean none of the other technologies I am aware of had the ability to outperform a human mentally), the impact of any atrocities or beautiful wonders committed with the use of this technology will be on a scale relative to the power of that technology?

Also, having traveled to 58 different countries in my life (my target is 150), I have met many people whom I perceived as ethical (I have to be careful not to be biased), but I don’t believe I have been to any culture or country that is ‘totally ethical’, whatever that may mean.

Is it actually possible for humans to be totally ethical, given that our emotional state changes from moment to moment?

One of the consistent themes I see across the world is the constant tussle between opposing sides on any subject you can think of, and the quest for things which could come under the headings of wealth, health, love, and survival.

The other thing I have noticed in my travels is that governments, companies, and organizations don’t do great things; it is individuals who are prepared to suffer who do great things.

In terms of ethics, the two things I fear most are the ethics of how technology is used and the ethics of organizations of all shapes and sizes. https://en.wikipedia.org/wiki/The_True_Believer

Cheers mrfabulous1 :smiley: :smiley:

You’re not alone. We are concerned too. I’m aware there’s a lot of work on privacy-preserving contact-tracing apps, including the recent collaboration between Google and Apple, and Moxie Marlinspike (Signal protocol co-author) taking a first look at the Apple/Google contact tracing framework. There’s a lot of talk but too little work during this crisis.

Here’s a little story of mine. I’ve been voluntarily using Singapore’s COVID-19 contact-tracing app, TraceTogether, for almost a month now.

FAQs on app permissions and privacy.

Technically, TraceTogether is built on the BlueTrace protocol. If you’re interested, you can read more about the BlueTrace Manifesto here.

Up till now, all is good, and the tech behind the app is open source. But I’m a privacy freak; we need proof and data, and data don’t lie. So a group of us reverse-engineered the app just to verify that things work exactly as they say. Here’s what we found:

tl;dr: The app works as claimed. It does not store PII (Personally Identifiable Information) or location data; it only stores encrypted temporary IDs, which keep changing even for the same device. The only PII collected is a phone number, which is securely stored by the health authority.
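As a side note for anyone curious about how the rotating temporary IDs can work: here’s a minimal sketch loosely following the BlueTrace whitepaper’s description (a TempID is an encryption of the user ID plus a validity window, under a key held only by the health authority). This is my own illustrative code, not TraceTogether’s actual implementation, and all names and parameters are made up.

```python
import base64
import os
import struct
import time

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Assumed: a symmetric key held only by the health authority.
AUTHORITY_KEY = AESGCM.generate_key(bit_length=256)

def make_temp_id(user_id: bytes, lifetime_s: int = 15 * 60) -> str:
    """Encrypt (user_id, start, expiry) into an opaque, short-lived ID.

    Phones broadcast these over Bluetooth; only the authority can
    decrypt a TempID back to a user, and the ID rotates every
    `lifetime_s` seconds so observers cannot track a device.
    """
    start = int(time.time())
    payload = user_id + struct.pack(">II", start, start + lifetime_s)
    nonce = os.urandom(12)  # fresh nonce per TempID
    ciphertext = AESGCM(AUTHORITY_KEY).encrypt(nonce, payload, None)
    return base64.b64encode(nonce + ciphertext).decode()

# Same user, different broadcasts: the ciphertexts never repeat.
print(make_temp_id(b"user-0001"))
print(make_temp_id(b"user-0001"))
```

The design choice worth noticing is that linkability is centralized: nothing on the phone or in the Bluetooth exchange reveals who you are, but the authority can resolve TempIDs once a positive case uploads their encounter log.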

Disclaimer: I’m not a security and privacy expert.

3 Likes

Another example of racial bias in the wild:

Black person with hand-held thermometer = firearm.
Asian person with hand-held thermometer = electronic device.


Source: https://twitter.com/nicolaskb/status/1244921742486917120

1 Like

I just listened to an interesting interview discussing disinformation, social media and recommendation algorithms by a security researcher (Daniel Miessler) with Renee DiResta at Stanford Internet Observatory - and thought about the crossover with this lesson and deep learning in general.

https://omny.fm/shows/unsupervised-learning/a-conversation-with-ren-e-diresta-disinformation-a

In summary: the focus is mostly on state-actor disinformation methods, tools, and campaign examples, though it also covers more local actors. There are interesting points by the interviewer about attribution.

On racial bias.

It seems FaceDepixelizer has seen more images of white men than of black people of any gender, and ends up depixelizing e.g. a blurred Obama image into a “standard” white man:
[image: blurred Obama photo reconstructed as a white man]

Or depixelizing a black woman into a face with male traits:
[image: depixelized black woman reconstructed with male facial traits]

Or this:
[image]

Note: I’m not here to shame authors. I just stumbled upon these samples on Twitter. I still need to get my hands on the model and play with it by myself. In the context of this thread I believe it’s a good illustration of bias.

See also https://twitter.com/osazuwa/status/1274444300894572546

2 Likes

Interesting new paper, “An unethical optimization principle” by Nicholas Beale, Heather Battey, Anthony C. Davison, and Robert S. MacKay (https://doi.org/10.1098/rsos.200462), which concludes that if an AI aims to maximize risk-adjusted return, then under mild conditions it is disproportionately likely to pick an unethical strategy unless the objective function allows sufficiently for this risk.

“Consider for example using AI to set prices of insurance products to be sold to a particular customer. There are legitimate reasons for setting different prices for different people, but it may also be profitable to ‘game’ their psychology or willingness to shop around. The AI has a vast number of potential strategies to choose from, but some are unethical—by which we mean, from an economic point of view, that there is a risk that stakeholders will apply some penalty, such as fines or boycotts, if they subsequently understand that such a strategy has been used.”
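As a toy illustration of that conclusion (my own sketch, not from the paper, and all numbers are made up): even if only a small fraction of strategies are unethical, a small return edge makes a naive argmax pick them far more often than their base rate, unless the objective prices in the expected penalty.

```python
import numpy as np

rng = np.random.default_rng(42)
n_strategies, n_trials = 10_000, 1_000
unethical_frac = 0.02  # assumed: 2% of strategies are unethical
edge = 0.5             # assumed: extra return from gaming customers
penalty = 1.0          # assumed: expected cost of fines/boycotts

naive_hits = penalized_hits = 0
for _ in range(n_trials):
    returns = rng.normal(size=n_strategies)
    unethical = rng.random(n_strategies) < unethical_frac
    returns[unethical] += edge
    # Naive objective: maximize return, ignoring ethics.
    naive_hits += unethical[np.argmax(returns)]
    # Adjusted objective: subtract the expected penalty for unethical picks.
    penalized_hits += unethical[np.argmax(returns - penalty * unethical)]

print(f"base rate of unethical strategies:   {unethical_frac:.0%}")
print(f"naive optimizer picks unethical:     {naive_hits / n_trials:.0%}")
print(f"penalized optimizer picks unethical: {penalized_hits / n_trials:.0%}")
```

With these made-up numbers the naive optimizer lands on an unethical strategy far above the 2% base rate, which is exactly the “disproportionately likely” effect the authors describe.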

1 Like

Hi AlisonDavey
Great find!
mrfabulous1 :smiley: :smiley:

1 Like

Hi, I’m checking the loss function that is used when you create a learner with cnn_learner, and I see FlattenedLoss of CrossEntropyLoss. What is the difference between this and the normal cross-entropy loss?
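Not an authoritative answer, but as I understand it, a flattening wrapper simply reshapes the model output and targets before applying the base loss, so the same loss works when the output has extra axes (e.g. segmentation masks). A rough sketch of the idea (simplified; not fastai’s actual code):

```python
import torch
import torch.nn.functional as F

def flattened_cross_entropy(pred, targ, axis=1):
    """Move the class axis to the end and flatten everything else,
    so the base loss always sees predictions of shape (N, C) and
    targets of shape (N,)."""
    n_classes = pred.shape[axis]
    pred = pred.movedim(axis, -1).reshape(-1, n_classes)
    return F.cross_entropy(pred, targ.reshape(-1))

# Plain classification: same value as the unwrapped loss.
pred = torch.randn(8, 10)                 # (batch, classes)
targ = torch.randint(0, 10, (8,))
assert torch.allclose(flattened_cross_entropy(pred, targ),
                      F.cross_entropy(pred, targ))

# Segmentation-shaped output: the extra spatial axes are flattened away.
pred = torch.randn(8, 10, 32, 32)         # (batch, classes, H, W)
targ = torch.randint(0, 10, (8, 32, 32))
print(flattened_cross_entropy(pred, targ))  # one scalar loss over all pixels
```

For ordinary (batch, classes) outputs the values should be identical; the wrapper mainly exists so the same Learner code works regardless of how many axes the model output has.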

Who has watched the new documentary The Social Dilemma?
It touches on a lot of issues in data ethics with regard to social media.

2 Likes

Whataboutism.

The link to the book Weapons of Math Destruction is redirecting to an unrelated website, both here and on the course page. Where can this be reported?

Thank you. I’ve edited the link above to point at the Wikipedia page for the book now.

On the course page, you can report it via the ‘report issue’ link.

1 Like