Part 2 Lesson 13 Wiki


(Brian Muhia) #81

Maybe setting some minimum distance, and rejecting anything that crosses the threshold? Could even be an adaptive distance based on a distribution learned from the dataset…
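That adaptive rejection threshold could be sketched like this (a hypothetical illustration, not from the lesson; the function names and the `mean + k·stdev` rule are my own assumptions about what "learned from the dataset" might mean):

```python
import statistics

def learn_threshold(distances, k=2.0):
    """Learn a rejection threshold from the distribution of distances
    observed in the dataset: mean plus k standard deviations."""
    mu = statistics.mean(distances)
    sigma = statistics.stdev(distances)
    return mu + k * sigma

def is_rejected(distance, threshold):
    """Reject anything whose distance crosses the learned threshold."""
    return distance > threshold
```

A fixed cutoff would work too; learning it from the data just means the threshold adapts when the dataset's distance distribution shifts.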


(Arvind Nagaraj) #82

Yes, and it’s our human responsibility to check and de-bias the data we use. The argument that the data just mirrors problems in society is not sufficient; we all have to do more.


(Alex Rigler) #83

Really inspiring to see Jeremy and Rachel talk about ethics in class! Just more evidence that fast.ai is such a unique and valuable course.

Google’s engineering blog recently put up a post about how they are starting to look at bias in embedding models; it’s really worth a read.


(Bart Fish) #84

Very true, but have you looked at the ‘bias’ against cats in the ImageNet images?


(Arvind Nagaraj) #85

Yes - the AI hype graph in the mainstream media has an exploding gradient. It’s important that great researchers like him are speaking out!


(Arvind Nagaraj) #86

@Ducky has…


(Ganesh Krishnan) #87

I work and think a lot about bias in models and causality. 3 things I’ve found useful in my work:

  1. Think about the incoming data distribution.
  2. Factor other considerations into your loss function; even if not explicitly, then at least in decision making.
  3. Always build an exploration-vs-exploitation break into the model’s loop. The meetup example that Jeremy gave is great: instead of always picking the top choices, pick some others a random fraction of the time.
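Point 3 is essentially epsilon-greedy selection. A minimal sketch (the function name and the 10% exploration rate are illustrative assumptions, not from the lesson):

```python
import random

def pick_recommendation(ranked_items, epsilon=0.1, rng=random):
    """Return the top-ranked item most of the time, but with
    probability `epsilon` pick a random lower-ranked item, so the
    model keeps exploring instead of only exploiting its ranking."""
    if len(ranked_items) > 1 and rng.random() < epsilon:
        return rng.choice(ranked_items[1:])  # explore
    return ranked_items[0]  # exploit
```

Over many calls, roughly `1 - epsilon` of the picks are the top item, and the rest give lower-ranked items a chance to prove themselves in the feedback data.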

(Amrit ) #88

@rachel, I don’t believe so, as I did not see any cameras there. I did take pictures of a couple of his slides, and they were very thought-provoking.


(Brian Muhia) #89

So we need to have some ideas for building computable conditional probability distributions? Probabilistic programming makes it easy to add them to neural nets!


(Ganesh Krishnan) #90

That’s a great idea. I’ve been looking into Pyro recently. I’ve been using Stan so far.


(Bart Fish) #91

which notebook is this?


(Brian Holland) #92

SUPER trivial question: What does Jeremy’s shirt say?


(Brian Muhia) #93

I’ve been studying probabilistic programming since the Church days :slight_smile: The introduction of Pyro was awesome timing for part 1.


(blake west) #94

Non adversarial adversarial learning


(Rachel Thomas) #95

Non-adversarial Adversarial Learning


(Brian Muhia) #96

Any chance we’ll study probabilistic programming in Pyro? I’ve seen some interesting autoencoders built in Pyro.


(Erin Pangilinan) #97

Yes, wasn’t there a Jupyter link to this? Outside of IA (information architecture)?


(Matt Trent) #98

I don’t have a Twitter citation, but I’ve seen some anecdotal evidence that folks still use VGG for image-processing applications, because they find that ResNet etc. doesn’t have the capacity to represent the nuance of pixel-level changes.


(blake west) #99

FYI y’all, I have a friend who created a new loss function for style transfer. It gets some cool results:

His code is here: https://github.com/VinceMarron/style_transfer
His paper is here: https://github.com/VinceMarron/style_transfer/blob/master/style-transfer-theory.pdf


(Erin Pangilinan) #100

I asked for Jeremy’s thoughts on Pyro last class (last term, for Part 1); it would be cool to learn!