Introduce yourself here!

Please use this thread to tell us a bit about yourself. What got you interested in data ethics? What are you hoping to get out of the course? Feel free to give some background about your other career or personal interests as well.


I will start: I became increasingly interested in data ethics over the last few years, and began doing more reading, writing, and speaking on the topic, particularly about bias, disinformation, and surveillance. For the course, I’m really looking forward to the discussions we can have with such an interesting and varied group!

As background, I have a PhD in math and previously worked as a software developer, data scientist, and bootcamp instructor. Four years ago I co-founded a non-profit research lab with the goal of making deep learning more accessible to a broader and more diverse group of people. I had experienced some of the toxicity related to the lack of diversity in both academia and the tech industry, and I believe this is linked to many of the ethical issues we see in tech, so my growing focus on data ethics felt like a natural extension of that work.

In terms of personal interests, I have a 4-year-old daughter, who is a lot of fun and keeps me busy! I also like blogging (which I think of as a cross between a personal & professional interest), yoga, and weight-lifting. I grew up in Texas and lived in Pennsylvania and North Carolina before moving to San Francisco 8 years ago.


Hi! I’m Collin Lysford, a report developer at El Camino Hospital, he/him pronouns. Essentially, I write database code and front-end reports to answer questions for clinicians and hospital administrators - so obviously a field where it’s important to steward data with care. As for why I’m interested, this hits at the intersection of two of the big research questions I’m interested in. I’ll just copy-paste from my resume rather than re-articulate:

Independent Research Questions with (Very Partial) Listing of Sources Studied
Unintended Consequences of Metrics and Algorithms: How does the act of measuring data change its predictive value? Goodhart’s Law, Weapons of Math Destruction (O’Neil), The Ethical Algorithm (Kearns & Roth), Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech (Wachter-Boettcher)

Causal Inference and Intelligence: How much of our understanding can we encode into computers and AI? What limits exist for AI today? The Book of Why (Pearl), Artificial Intelligence: A Guide for Thinking Humans (Mitchell), Law as Data: Computation, Text, and the Future of Legal Analysis (Livermore)

Broadly speaking, I think that how we’re handling a lot of algorithmic decision making is unsustainable both from an ethical and technical perspective, and so having more tools in my toolbelt to identify when ethical concerns need to be raised is pivotal. Technical problems at least have direct evidence when they go wrong - whereas a lot of ethics problems often (in my experience, anyway) require you to proactively identify them off of your own system-level understanding, without a lot of good “clues” that they’re occurring. Phrased another way, the second research question isn’t precisely ethical, but it’s important that solutions to the technical issues are also ethically sound ones.

I spend a lot of time reading, cycling, and playing board games (or digital adaptations thereof - currently competing in the Meeple League for Through the Ages, and always happy to play more). I was born in Minnesota and did college at the University of Nebraska-Lincoln, so I’m thoroughly a Midwest boy; I only moved here in Nov ’18 for this job. I have two extremely cute cats, and if you search #gaynerdcats on Instagram you can see 'em.


I’m Anna. I come from a non-technical background. When I was young, women didn’t have the STEM support they have now, so I ended up going into the humanities. I was also raised in Texas. I moved to California to attend UC Berkeley, where I received my BA in Sociology. I then spent over 20 years working in the legal field as a paralegal, librarian, and in various other administrative capacities. Last year, I decided to pivot and pursue an MA in TESOL at USF. I combined my experience in law with TESOL to focus on the intersection of language issues in law and equity. I’ve enjoyed my time at USF, so I’ve been actively pursuing other programs and learning opportunities.

My interest in data ethics has come about in a number of ways. In the course of doing research for conference presentations, I’ve felt stymied by a lack of data and have been left with questions about how institutions and researchers decide which data to collect and report, and how a lack of data can make it harder to discern the presence of bias. I love reading. Like Collin, I read Weapons of Math Destruction (O’Neil); I’ve also read Invisible Women (Criado-Perez), Everybody Lies (Stephens-Davidowitz), and Dataclysm (Rudder).

Moreover, as a woman in society I’m constantly faced with the practical applications of data in a world built by men, for men (Muni seats so high women’s feet can’t touch the ground, kitchen shelves too high for women to reach, long lines for women’s bathrooms at every venue, etc.). I have also seen bias play out in academia (publication, citation frequency, etc.) as well as in the news (how women are portrayed compared to men). Women also experience bias in the business world (banking, credit, pitching ideas, voicing ideas, being taken seriously, etc.). As a woman, I experience bias every day. So I’m interested in learning more about the algorithms, machine learning, natural language processing, and data that help facilitate it, in the hopes of finding some solutions. I’m looking forward to learning more about ethical concerns with both data use and data collection. I’m really excited to be a part of this course!

I spend the majority of my free time reading or watching movies. I also enjoy spending time with friends. I’m an introvert but I love learning so I’m constantly observing and questioning.


Hi everyone,

Adrianna here. I am the director of product for San Francisco Digital Services, the city’s digital team. I am also the founder of a coding-for-underprivileged-kids org.

As a politics and tech nerd, I am interested in the ways tech intersects and, increasingly, interferes with human stories.

I hope I can bring some international aspect to our discussion here as well as in the classroom, as what tech companies do here impacts the entire world. My background as a person from Southeast Asia who has lived and worked in developed and emerging economies in Asia, Europe and the US will inform most of those thoughts. Every day, I see big tech companies look to emerging economies with impunity: ‘the next billion users’ in countries that are legislatively unable to impose any administrative penalties for action (or inaction) that impacts those countries. I am personally interested in the similarities (and differences) between global tech and the walled garden of Chinese-language, Chinese-hosted networks. Sometimes, it feels like two completely different Internets. In all media and all languages, I am disdainful of authoritarians manipulating people through technology.

I hope to be able to eventually do research that can surface questions and discussion about data ethics across the areas (professional and geographical) that I care most about.

Hit me up on Twitter @skinnylatte, I am always up for an after-class beer, wine or non-alcoholic beverage!


Hey there - I’m Brett. I’ve been working on the web since the early days and a technologist/internet enthusiast earlier than that. I’ve held various roles and worked on all kinds of products in all kinds of fields. I currently run US operations for a boutique global product services company, Favorite Medium.

My interest in ethics has grown over the last 7-8 years, starting the first time I turned down a job for ethical reasons - building an EMR platform for a company that supplies prescription pads to doctors. It was going to have a recommendation component for pharma companies.

Earlier in my career - the early 2000s - I did a bunch of early social network and platform work for media companies serving tween audiences. We dealt a lot with COPPA issues. Luckily, I was surrounded by people who were as interested in protecting the community as in providing them compelling experiences. There was a lot to learn from that experience.

Since that time I’ve seen more and more that smells wrong - I did some work with one of the big social media companies about 7 years ago and the things they bragged about made me ashamed for them. I should have raised my hand earlier.

Recently I’ve attended a couple of All Tech is Human conferences - they are great. I also attended the HAI conference at Stanford in the fall. I’m in full on learning mode around ethics in tech. I’m currently partnering with All Tech is Human on a survey that is hopefully launching next week to baseline the level of ethics conversations within tech.

I’d like to add an ethical component to my consulting practice. I’m particularly interested in the space where security practice and ethics meet, and in what ethics can learn from how security changed the tech industry by earning a seat at the table.

I can’t wait to learn from all of you and learn about your experiences.



Hi everyone! My name is Sravya and I am a senior machine learning engineer at Adobe. I am generally a cautious technology optimist. It is amazing to see the rapid advances in technology and the availability of data, which have the potential to improve the world in many unforeseen ways. At the same time, this can be extremely dangerous: most changes have widespread impact and can go down a very negative trajectory if not thought through well, especially if we don’t consider users beyond the first-layer users for whom the technology was originally built.

I am a longtime student of Rachel and Jeremy and think it’s a great opportunity to learn more about how we can equip ourselves better in this age of rapid technological and data innovation. Thanks for offering this course!

Outside of data and ML, I have many other interests, including dance and music. I’m also a mom of two little kids who keep me on my toes :slight_smile: and help me see the world from a very optimistic perspective.


Hi everyone! My name is Tyler and I’m a Machine Learning Engineer at LinkedIn, where I’ve mostly worked on natural language processing, active learning, and recommendation systems. Before working in tech, I studied applied mathematics and biology as an undergrad.

Beyond a general interest in ethics, justice, and of course data/AI, I was drawn to this course because I want to better understand the structural forces and mechanisms that lead some technology companies to make unethical choices with severely negative effects, while other companies manage to mainly steer clear of moral hazard.

Why is Facebook in the news so much more often than, say, Pinterest? What specific engineering, design, business, and policy choices have tended to produce harm? What are the moral foundations on which we should base future decision-making so that we can have healthy, non-exploitative, and non-invasive online ecosystems?

My suspicion is that very few tech companies have been thinking explicitly about the ethical implications of their product decisions, and as a result, companies like Facebook have built some very destructive systems. The corollary to this hypothesis, however, is that companies like Pinterest can’t exactly claim the moral high ground — they haven’t been thinking about ethics either, and there’s no telling when their luck will run out. I’m hoping that courses like this one can help to develop a new moral core in tech, ultimately ensuring that we don’t repeat the many mistakes of the past decade.


Hey! I’m Amulya, and I’m currently a user researcher working on ads at LinkedIn. Previously, I was researching community creation and moderation at Reddit, and before that I was a Code for America fellow working on social services for New Yorkers.

It’s been fascinating to see how the work I’ve been doing intersects with ethics in so many ways, and quite terrifying to realize that neither I nor the folks I work with have been approaching this work with any intentional, ethical frame. I’m excited to take this class to start formalizing my way of thinking and to become a resource for those I work with.

Outside of work, I love to read (I think Jenny Odell’s How to Do Nothing approaches many of the topics we’re discussing, albeit from a different angle), cook, powerlift and hang out with my dog Kulfi. I’m always picking up new hobbies; at the moment I’m training to be a doula, (re)learning how to do calculus, and attempting to knit.

I’m looking forward to learning from all of you! Feel free to reach out at


Hi all! I am Raquel, I am from Lima, Peru and I am in the Data Science team at Indeed. Before Indeed, I was pursuing a PhD in Economics but left it to work in Industry. I moved from Lima to the US almost 9 years ago and have lived in North Carolina, Texas and California. Before moving to the US, I worked in consulting and the Peruvian government.

I love learning and I have always been interested in ethics. I believe that “the unexamined life is not worth living” a little bit too much, and always find myself going over the decisions I made that day and wondering whether they make me a “good” or “bad” person. I am really interested in the ethical decisions that people in different companies make, the short- and long-term consequences those decisions bring to their users, and their impact on the world. And that goes not only for tech companies but for any company that pollutes, overcharges people for things they need (pharmaceutical companies), or makes people go into debt with little ROI (for-profit universities).

Ever since I moved to the United States I have been learning what it means to be a Latin woman in America (which is very different from being a woman in Peru).

I spend my free time with my two cats, watching movies or shows, and listening to tons of podcasts ranging from politics to human interest pieces in Spanish to economics. I also love to read. I am currently trying to get back into a daily gym routine, which I just started today.


Hi all! :slight_smile:

My name is Eduardo and I am a Machine Learning Engineer at Directly, where my current focus is on NLP and Deep Learning. I am originally from Madrid, Spain and I have previously lived in San Diego, London and Oxford, before moving to San Francisco 2 years ago.

My motivation for taking this course is to get a better understanding of the impact that my work and the work of others in the field have on society. It is nice to take a break from writing code and spend time thinking about the implications of your work, making sure you are making a positive impact. I am a strong believer in social justice and equal opportunities, and I think technology should move us towards a fairer society rather than the other way around.

My educational background is in neuroscience and healthcare, so I also have a strong interest in applications of AI in medicine, where data protection is critical. I have also worked on a project regarding private AI in healthcare, using techniques such as federated learning and differential privacy.

In my free time, I enjoy hanging out in the city, weekend getaways or going to concerts and shows! In addition to being a data nerd, I am also a big music nerd :nerd_face: I have also been trying to learn how to surf for a while! :slight_smile:


Hi everyone! I’m Tuan, currently a data scientist at Salesforce. I’ve worked on relevance ranking for search engines, and more recently an AutoML system for structured data. I’m originally from Hanoi, Vietnam, and I moved here for my undergrad at Swarthmore College.

I’m excited about this class because I want to understand how different platforms built by tech companies become such powerful tools for misuse, and the reasons behind big tech’s reluctance/inability to address these issues at their core. I am also interested in technical solutions to detect disinformation, mitigate bias, increase privacy, etc at scale. As someone who’s working in the field and producing machine learning models that can affect many people, I’d love to spend more time studying and thinking more critically about the impact of my work.

In my free time I enjoy exploring the city on foot, with occasional road trips to places with nice waterfronts. I love cooking, music (and concerts), and all things comedy.


Hi. I’m Claudia. I attended the previous Deep Learning pt. 2 class in the hopes of gaining enough knowledge to start developing models that could more accurately identify fine needle aspiration images in cancer diagnosis, but alas, my coding is not there yet, and my current job as an Oncology Data Coordinator keeps me quite busy. However, I do have many concerns regarding medical data, security, the effects of AI on people with disabling conditions, and the acquisition of medical data sets by tech companies.

My Master’s is in vocational rehab/mfc counseling and I have worked in a variety of mental health and health care settings. Currently reading: Invisible Women: Data Bias in a World Designed for Men by Caroline Criado Perez. In my free time, I eat and go to live shows. WuTang Forever!!


Hi all,

My name is Anna and I am an author and scholar based in SF, currently a visiting scholar at UC Berkeley’s Center for the Study of Religion. My academic writing is highly interdisciplinary and spans a wide range of topics in the humanities. I was a philosophy major as an undergrad at NYU and have for a long time been interested in the way that human biases relate to the construction of faulty master narratives in our society.

Over the past few years, I have begun to write for a popular audience about issues of intellectual and cultural biases. An article I wrote for discusses the bias in academia for research on death over birth: How Childbirth Became Philosophy’s Last Taboo. I also discussed this topic in my first book, a work that looks closely at social ontology and the construction of meaning in the context of birth. More recently, I wrote an extensive article for the SF Bay View Newspaper, republished in Counterpunch, that examines bias and discrimination in the San Francisco Unified School District (SFUSD). Some aspects of the article that pertain to data ethics include the use of as the main form of communication in school communities, as well as SFUSD’s “lottery” system, which is computerized and has been found to benefit middle class and wealthy families (Segregation, Wealth and Education: The Politics of Liberal San Francisco’s ‘Separate But Equal’).

I learned about Data Ethics from my husband, a Deep Learning student, and so thought I would check it out. We live with our two kids (10 and 8) and a dog in SF.


Hi! I’m Mo.

I joined the class because I’ve been involved in community management, speaking, moderation & engagement with the field of tech/data ethics, but I’d like to do more projects focused on research, writing and policy. I’m looking forward to setting aside some time for myself to read and learn more about data ethics.

Most recently, I joined a multi-stakeholder project called Lowering Online Violence and Exclusion (Project Love), a New Zealand based group focused on increasing inclusion and healthy conversations in digital spaces and online communities in response to Islamophobia. Before that, I was the community lead for the project I mentioned above — the Global Data Ethics Project, a multi-stakeholder project that focused on developing a set of community-driven ethical principles for adoption in data science and technology. This provides the best summary of the work -

Before moving into the tech field, I worked in public health, policy and humanitarian data. I did most of this work as a field researcher in Texas and Peru, interacting with vulnerable communities, community-based organizations, government and nonprofits. While at the Texas Department of State Health Services, I brought together policy makers and researchers to discuss salient issues in preventive medicine, and analyzed data to drive policy initiatives. My main focus was infectious diseases. I studied anthropology and archaeology and originally planned to work in the national park system, but I still have a deep recreational love of the outdoors.

On twitter, I’m @moridesamoped

Looking forward to meeting everyone!


Hello everyone -

In early 2019, I completed an eight-month contract at Facebook as a User Experience (UX) Research Program Manager (RPM) for the Operations (Ops) team. During this time, I was responsible for redesigning and launching the UX Research Onboarding program, with the primary goal of condensing the orientation period from two weeks to one. My redesign of the program included not just a logistical and content overhaul, but also a cultural and behavioral shift for the operations team. The success of the onboarding process depends on ensuring that employees receive the appropriate content at a manageable pace in order to perform their jobs effectively. Most notably, I implemented the GDPR compliance/policy training course for all 500 researchers. The program was launched, well-received, and adopted by other cross-functional teams, and is still in use today.

I am currently an M.S. in Organization Development candidate at USF. The top 5 things I would like to learn at USF:

  • Be able to understand, apply and present OD theories & practices.
  • Demonstrate quantitative and qualitative research methods.
  • Improve my presentation skills.
  • Develop my data analytics skills.
  • Apply and incorporate OD skills with Human Factors.

I have taken the data analytics proficiency certificate program and this Data Ethics class, and will be taking the upcoming SQL class. My goal is to become a permanent Research Program Manager and be able to demonstrate & quantify my impact. Please feel free to connect with me on LinkedIn.


Hello! My name is Ugaso (You-gah-so) and my pronouns are she/her/hers. I’ve become interested in data ethics over the last several years, and increasingly after the last election. I’m concerned about discrimination, privacy/surveillance, and the automation of policing.

My undergraduate background is somewhat multidisciplinary; I majored in economics with a minor in math, but initially entered the field of economics from a social justice angle. After undergrad, I attended a coding bootcamp, and I’ve spent the last 5+ years as a full-stack software engineer, taking a break the last couple of years to care for aging family members. This year I am spending time learning and pursuing my goal of a career in deep learning. I took Deep Learning 1 & 2 last year, and I plan to retake the updated version of Deep Learning 1 this spring and finish my degree in math. I’m interested in problems at the intersection of computer vision and public art, and in social justice issues in tech (e.g., disability access/rights, mental health, and inclusion of marginalized populations’ needs in the tech design process). In my free time I like to swim, play video games (Death Stranding and Furi are my current go-tos), and play with my toy poodle puppy. I love drawing and painting, and I’m a member of an art collective in Minnesota called Rogue Citizen.


Dear All:

My name is Razvan Amironesei. I am a Visiting Scholar in Philosophy at UC San Diego. My interests are in data ethics, in particular, questions on algorithmic bias, privacy and cyber-surveillance. At UC San Diego, I have organized several events on algorithmic power and culture and I have also recently published on topics in environmental politics and political theory.

My training is in philosophy and political theory. I am interested in learning about relevant case studies in data ethics, which are amenable to fruitful theorizing in relation to larger societal problems, as well as collaborating with practitioners on these technical and moral questions.

Rachel kindly allowed me to audit this class online from San Diego, and I am eager to learn from all of you about these topics of crucial importance.


loved your article about childbirth! we’ve been talking a lot about how humans experience birth and its analogies to death in doula training, and what you’ve written has really helped contextualize my thoughts.


Hello everyone,

My name is Glen Salazar. I am a graduate student at USF in the Master of Arts in International Studies program. My thesis focuses on the role AI played in Brazil’s 2018 presidential elections and how democracy in general is threatened by AI.

I decided to take this class to gain more understanding of, and more perspectives on, AI. I hope that my research will become more focused and concise, so that I can develop a strong thesis backed by the best information available for my premises.

I am interested in all things AI. Last year I attended a workshop here in San Francisco on the European Union’s AI strategy, which was really informative. I got to hear not only from EU representatives but also from tech giants like Google about what’s being done to research and advocate for effective AI policy.