Part 2 blogs

Hi All,

I've written a couple of blog posts on dropout and its effect on RNNs.

Part 1 is a high-level overview of dropout, from Hinton et al.'s 2012 paper through to last year.

Part 2 shows the results of experiments varying the dropout parameters for the fastai language-modelling and translation tasks, as well as for Merity et al.'s awd-lstm-lm.

Any feedback is welcome. Note that @sgugger has made fixes to weight drop since I initially ran these experiments.

I'm also not clear on why the weight-drop results for wdrop > 0.7 differed so much between fastai and awd-lstm-lm; the code for WeightDrop looks almost identical. I'll follow this up when I can.
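For readers unfamiliar with the technique being compared here: weight drop is DropConnect applied to the recurrent (hidden-to-hidden) weight matrix. A minimal pure-Python sketch of the idea, assuming the usual inverted-dropout rescaling — the function name and list-of-lists representation are my own, not the fastai or awd-lstm-lm code:

```python
import random

def weight_drop(weight_matrix, p, training=True, seed=None):
    """DropConnect on a weight matrix: zero each element with
    probability p and rescale survivors by 1/(1-p), so the
    expected value of each weight is unchanged during training."""
    if not training or p == 0.0:
        return [row[:] for row in weight_matrix]
    rng = random.Random(seed)
    scale = 1.0 / (1.0 - p)
    return [[w * scale if rng.random() >= p else 0.0 for w in row]
            for row in weight_matrix]
```

The real implementations sample a fresh mask once per forward pass and reuse it across timesteps, which is what distinguishes weight drop from ordinary dropout on activations.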


I've written a technical article on Wasserstein GANs for the Intel Developer Zone:
https://software.intel.com/en-us/articles/better-generative-modelling-through-wasserstein-gans


Have a read of my recent blog post on self-attention GANs.

It's been a while since my last post, but I've had this one in mind since the Part 2 lessons; glad that summer brought me the time!


These articles, the first one in particular, are masterful. Thank you for sharing.

Hi all,
I wrote a short blog post explaining the idea of “GAN as a loss function” that Jeremy mentioned in lesson 12.
If anyone can give me some feedback, that would be great.

Hey everyone, I've just written my first ever blog posts! Part 1 is an explanation of the adaptive softmax, which is up to 10x faster than the full softmax, and Part 2 is a walk-through of a PyTorch implementation.
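For context, the speed-up comes from splitting the vocabulary by frequency so most tokens only need a small "head" computation. A minimal pure-Python sketch of the two-level idea, assuming a single head/tail split — real implementations use several tail clusters with smaller projections, and the function names here are illustrative:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def adaptive_probs(head_logits, tail_logits):
    """Two-level adaptive softmax: the head scores the frequent
    words plus one 'tail cluster' token (last position); rare-word
    probability is p(cluster) * p(word | cluster)."""
    head = softmax(head_logits)
    p_cluster = head[-1]
    tail = softmax(tail_logits)
    return head[:-1] + [p_cluster * t for t in tail]
```

Since the tail projection is only evaluated when a tail probability is actually needed, the per-token cost is dominated by the (much smaller) head, which is where the headline speed-up comes from.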

This is the first time I've ever shared code publicly, so I'd really appreciate any feedback on the blog posts and the GitHub code:

In Part 2 I give a shout-out to fast.ai and @jeremy's course, which were instrumental in getting me to where I am now :slight_smile:

Thanks so much in advance for any helpful feedback or advice!

Check out my latest blog post! I use fast.ai tools to build a convolutional neural network (CNN) for natural language processing (NLP): https://towardsdatascience.com/how-to-build-a-gated-convolutional-neural-network-gcnn-for-natural-language-processing-nlp-5ba3ee730bfb. I'd appreciate any feedback. Thanks!
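For anyone curious about the gating mechanism a gated CNN is built around, here is a minimal pure-Python sketch of a gated linear unit (GLU) in the style of Dauphin et al.; the function name and plain-list inputs are illustrative assumptions, not the post's actual code:

```python
import math

def glu(a, b):
    """Gated linear unit: output = a * sigmoid(b), where a and b
    are the two halves of a convolution's output channels (here
    just flat lists). The sigmoid gate controls how much of each
    linear activation passes through."""
    return [x * (1.0 / (1.0 + math.exp(-g))) for x, g in zip(a, b)]
```

With the gate at zero the unit passes half of each activation; as the gate logit grows large, the unit approaches the identity on `a`.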


Hey guys, I just wrote a blog post explaining the SSD (single-shot object detector) that Jeremy covers in lesson 9 of the 2018 course.

I'd appreciate your feedback. Thanks in advance.


Is the notebook for SSD still accessible?

I recently completed a draft of a blog post looking at the various sorts of boxes in an SSD. I'd really appreciate any comments on it: https://medium.com/@jackchungchiehyu/94d8b0cf5c16 Thanks!
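As a quick illustration of the kind of boxes involved, here's a minimal pure-Python sketch of SSD-style default (anchor) box generation for one square feature map; the function name and the exact scale/aspect-ratio convention are assumptions for illustration, not taken from the post:

```python
def default_boxes(grid_size, scales, aspect_ratios):
    """Generate SSD-style default boxes for one feature map.
    Returns (cx, cy, w, h) tuples in [0, 1] coordinates: one box
    per grid cell per (scale, aspect ratio) pair, centred on the
    cell, with width/height stretched by sqrt(aspect ratio)."""
    boxes = []
    for i in range(grid_size):
        for j in range(grid_size):
            cx = (j + 0.5) / grid_size
            cy = (i + 0.5) / grid_size
            for s in scales:
                for ar in aspect_ratios:
                    boxes.append((cx, cy, s * ar ** 0.5, s / ar ** 0.5))
    return boxes
```

In a full SSD, each feature map at a different resolution contributes its own set of these boxes, and predictions are offsets relative to them.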
