Week 1: Disinformation - Discussion

This is a place to post thoughts about the readings, questions about disinformation, or to share your own links or resources related to disinformation.

Required Reading:

  • Will Oremus, The Simplest Way to Spot Coronavirus Misinformation on Social Media: Profile of Mike Caulfield’s work. Caulfield is a digital information literacy expert at Washington State University Vancouver. If you are interested, check out the Twitter account @infodemic for shareable threads about how to tell fact from fiction on the web regarding COVID-19.
  • Guillaume Chaslot, How Algorithms Can Learn to Discredit the Media: Chaslot is a former Google/YouTube engineer and founder of the non-profit watchdog group AlgoTransparency. He has probably done more than anyone else to bring attention to the role of recommendation systems in radicalization. For a counter-view on radicalization, see Rebecca Lewis’s work below.
  • Renee DiResta, Mediating Consent: DiResta is a top expert on computational propaganda. She led one of the two teams that analyzed the dataset on Russian interference in the 2016 election for the Senate Intelligence Committee, and she now works at the Stanford Internet Observatory.

Optional Reading:

Optional Lab for Coders:

Intro to Language Modeling & Text Generation: video lecture and jupyter notebook (from my NLP course)

Video (from previous iteration of course): Lesson 1, Disinformation


Hi Rachel, is the online link for the class being posted somewhere?

Here are a few of the links I mentioned last night:

Let me know if there are other articles or links I mentioned that you can’t find.

Also, I will post the video here once I’ve finished processing (should be this afternoon).


Other articles that were mentioned in class:

Here is a recommended Twitter list of experts: https://twitter.com/i/lists/1224493763583598592


Here is an interesting article on Brazilian President Bolsonaro and how he won the 2018 election thanks to WhatsApp:



It was mentioned that a better way to evaluate an article or piece of information may be to leave the page and look into other references to, or mentions of, that information, rather than relying solely on one’s critical thinking about the piece itself.

Since this is a largely domain-agnostic way to evaluate the credibility of information, wouldn’t that make it particularly well suited to an algorithmic solution? The form or structure of a piece of information relative to other information (such as a citation network analysis) is presumably easier to analyze than its content or domain.
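To make the citation-network idea concrete, here is a minimal sketch of what such an algorithmic approach could look like: ranking sources by how often other sources cite them, using PageRank over a toy citation graph. All the source names and the graph itself are made up for illustration; real systems would need far more signals than link structure alone.

```python
# Hypothetical sketch: score sources by their position in a citation network,
# rather than by analyzing their content. Source names are invented.
import networkx as nx

# Edge (A, B) means "A cites B".
citations = [
    ("blog_post", "news_outlet"),
    ("forum_thread", "news_outlet"),
    ("news_outlet", "wire_service"),
    ("blog_post", "wire_service"),
    ("forum_thread", "wire_service"),
]

G = nx.DiGraph(citations)

# PageRank: a source scores higher when it is cited by many sources,
# especially by sources that are themselves highly cited.
scores = nx.pagerank(G)

for source, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{source}: {score:.3f}")
```

The appeal of this kind of structural signal is exactly the domain-agnostic property raised above: the same graph computation works whether the topic is medicine or politics, with no content understanding required.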


Here is the recording from yesterday’s MSDS class:

Lesson 1: Disinformation

And here is part 2 of the disinformation lesson: Disinformation (Data Ethics Lesson 1.2)