Week 6: Algorithmic Colonialism, and Next Steps

When corporations from one country develop and deploy technology across many other countries, extracting data and profits, often with little awareness of local cultural context, a range of ethical issues can arise. Here we will explore algorithmic colonialism. We will also consider next steps: how students can continue to engage with data ethics and take what they’ve learned back to their workplaces.

Required Reading:

Abeba Birhane, The Algorithmic Colonization of Africa

Amy Maxmen (Nature), Can tracking people through phone-call data improve lives?

Adrienne LaFrance, Facebook and the New Colonialism

Optional Reading:

Joe Parkinson et al., Huawei Technicians Helped African Governments Spy on Political Opponents

Davey Alba, How Duterte Used Facebook To Fuel The Philippine Drug War

Rumman Chowdhury, Algorithmic Colonialism

Daniel Greene et al., Better, Nicer, Clearer, Fairer: A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning

Jess Whittlestone et al., The role and limits of principles in AI ethics: towards a focus on tensions

Sareeta Amrute, Tech Colonialism Today

This got buried in the chat rush, but Maciej Cegłowski has “six fixes” for restoring privacy that I’m a fan of:

Right to Download
Right to Delete
Right to Go Offline
Limits on Behavioral Data Collection
Ban on Third-Party Ad Tracking
Privacy Promises

Here’s the police ‘mindfulness’ study I was talking about in the chat: https://behavioralpolicy.org/wp-content/uploads/2017/05/Goff-web.pdf

And an interesting study on bias training (at a VA hospital, so a slightly different context, but it does show the ‘rebound effect’ that can solidify some cultural biases): https://www.researchgate.net/publication/301625572_Evaluation_of_a_Pilot_Program_to_Improve_Patient_Health_Care_Experiences_through_PACT_Cultural_Competency_Training_about_Unconscious_Bias

Links on when diversity washing/diversity training can be harmful:

How Diversity Branding Hurts Diversity: a post I wrote linking to a bunch of research on this

Fantastic Twitter thread from Y-Vonne Hutchinson on all the problems with the SF police department’s unconscious bias training. A few quotes from her thread:

  • People are motivated by incentives. Training is basically useless if other parts of the culture say that problematic behavior is just fine (if not encouraged).
  • It’s hard to change explicit bias with a training. But even if you could help some ppl, that just scratches the surface. Why? Because the bias we see in policing is both historical and systemic.
  • Say you manage to reach one of the “good guys” that “doesn’t have a racist bone in his body”. Good luck turning around that dept. Police operate in a rigid hierarchical environment with formal policies/quotas and informal practices and traditions. There are networks of loyalty and influence that keep people in check. And the strength of their relationships could have life or death consequences. There are very few rewards if someone makes waves and pisses off the folks that are just fine with all of their racist bones.
  • These broken corporate solutions cause real harm. It’s bad enough if Karen from HR leaves a training pissed b/c of the SJWs that wasted her time. It’s even worse when we push this on ppl who have the power to destroy the lives of POC and face little accountability if they do so.

Here are the videos for week 6:

Venture Capital, Hypergrowth, & Blitzscaling (Data Ethics Lesson 5.3)

Algorithmic Colonialism & Next Steps (Data Ethics Lesson 6)

Hi Rachel, would you kindly post the latest slides from Thursday on algorithmic colonialism to Canvas, so we can have them as a resource while we prep for the final and during the final itself? Thank you!


Hi Rachel, could you also please post the week 6, 7, and 8 lecture videos?