The first piece on privacy, Ceglowski’s, takes apart Facebook and Google. LinkedIn, like Facebook, has cracked “real identity.” What truly qualifies as “responsible AI” or responsible data collection for platforms that have more or less cracked “real identity,” versus social media platforms such as Reddit and Twitter, where users may hold multiple, “fake,” or anonymous accounts? The question of anonymity, including the right to be forgotten (even after death) or to remain anonymous, is one that no platform has solved yet.
In Rogaway’s piece, he writes about his surprise that young people rarely consider the morality of the employers and institutions whose jobs they are applying to.
“I found on a Google search of deciding among job offers, not one suggests
considering the institutional goals of the employer or the social worth of what
they do.” Does the “don’t be evil” tenet at Google, or the newly rolled-out AI principles, count or factor into the decision making of prospective applicant engineers? I’m not sure this is an entirely fair depiction, but I would say that many data scientists and engineers lack a foundation in ethics because the field is obsessed with having enough data to be statistically significant, to “science” or experiment upon in the first place. Ethics has been an afterthought, as has privacy. Outside of biotech, bioethics, and tech ethics, I’ve found far fewer parameters governing data collection at B2C or B2B software companies, which deal in data and a virtual identity with no bodily part to examine. Rogaway points to technological optimists versus pessimists, arguing that context matters and that not everything in the world of tech should be seen through rose-colored glasses.
“People are often happy to get funding, regardless of its source. But I would
suggest that if a funding agency embraces values inconsistent with your own,
then maybe you shouldn’t take their money. Institutions have values, no less
than men. Perhaps, in the modern era, they even have more.”
So if someone were contracted to help Google with Responsible AI, as a woman, should she decline because of the centralization of power at Google, even though its leadership (Sundar Pichai) has expressed a desire for some form of regulation on AI?
I’m not sure this is as black and white as “never work for an institution whose values conflict with your own”; that framing seems polarizing, with almost no middle ground for anyone. As for the suggestion to “Think twice, and then again, about accepting military funding,” perhaps some people will think twice, but many others will still conform. While there was much buzz about Project Maven and Dragonfly, talking to those directly involved may tell you something different from the news (one physicist told me it’s not about security, just satellites in space), though according to the latest New York Times article, even the scientists and engineers working there are not aware of how the products they are developing may really be used, potentially for mass surveillance.