From the book:
Evan Estola, lead machine learning engineer at Meetup, discussed the example of men expressing more interest than women in tech meetups. Taking gender into account could therefore cause Meetup’s algorithm to recommend fewer tech meetups to women, and as a result, fewer women would find out about and attend tech meetups, which could cause the algorithm to suggest even fewer tech meetups to women, and so on in a self-reinforcing feedback loop. So, Evan and his team made the ethical decision for their recommendation algorithm to not create such a feedback loop, by explicitly not using gender for that part of their model.
But isn’t this exactly what we want from a recommendation system? Not many women attended the tech meetups, so it seems to make sense to show them other meetups they’d have a higher chance of enjoying.
Obviously, the feedback loop it resulted in is horrible, and I have no doubt that isn’t what we want from a RS. But I have a hard time pinpointing the problem here. Wouldn’t removing gender just push the issue elsewhere, creating feedback loops based on race or income instead? And even if we used only a user’s history, the feedback loop seems inevitable, as the first interactions would affect the following ones, and so on.
It appears to me that feedback loops are an inherent issue with these systems. Perhaps there’s a meaningful difference between types of feedback loops, e.g. gender-based vs. history-based?
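To make my question concrete, here is a toy sketch of the loop as I understand it (entirely my own assumption, not how Meetup’s system actually works): recommendations scale with a group’s observed attendance rate, and the resulting attendance becomes the next round’s observed rate, so any group that starts with a lower signal gets less and less exposure each round.

```python
def simulate(initial_rate, interest=0.8, rounds=8, pool=1000):
    """One group's trajectory under a self-reinforcing loop.

    Each round, the number of recommendations shown is proportional to
    the previously observed attendance rate; attendance is a fixed
    fraction (`interest`) of what is shown; the new observed rate then
    feeds back into the next round's recommendations.
    """
    rates = [initial_rate]
    for _ in range(rounds):
        shown = pool * rates[-1]        # recommendations track observed rate
        attended = shown * interest     # underlying interest never changes
        rates.append(attended / pool)   # observed rate shrinks and feeds back
    return rates

men = simulate(0.40)    # hypothetical higher initial attendance rate
women = simulate(0.20)  # hypothetical lower initial attendance rate
print(men[-1], women[-1])
```

Even though `interest` is identical for both groups, the group with the lower initial rate gets fewer recommendations in absolute terms every single round, and both trajectories decay purely because the system keeps consuming its own output. This is what makes me think the loop is structural rather than specific to the gender feature.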
Thanks a lot to anyone willing to help me make sense of all this.