Part 2 blogs

Thanks for the feedback! I added a summary of the impact to the intro (just the error decrease for now, though I think the time-related one is really interesting for future exploration). Also changed “higher” -> “better” and put in a hypothesis about why—I do think reflection padding might be at least part of it.
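(For anyone curious about that hypothesis, here's a minimal PyTorch sketch of the difference; the tensor values are just illustrative. Zero padding surrounds the image with artificial dark borders, while reflection padding mirrors real pixel values.)

```python
import torch
import torch.nn as nn

x = torch.arange(1., 17.).reshape(1, 1, 4, 4)  # tiny 4x4 "image"

# Zero padding fills the border with 0s, which the next conv sees as
# artificial dark edges around every image.
zero_pad = nn.ZeroPad2d(1)

# Reflection padding mirrors the pixels next to the border instead, so the
# padded region keeps the same statistics as the rest of the image.
refl_pad = nn.ReflectionPad2d(1)

print(zero_pad(x)[0, 0])
print(refl_pad(x)[0, 0])
```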

Just published the post! https://medium.com/@hortonhearsafoo/adding-a-cutting-edge-deep-learning-training-technique-to-the-fast-ai-library-2cd1dba90a49

6 Likes

There is always gonna be a Prometheus among Zeus and the others… for history repeats itself… :thinking:

1 Like

An invaluable tool for bloggers:

http://www.hemingwayapp.com/

5 Likes

This is awesome, William! I’ve been looking for an example of a callback implementation in fast.ai. Nice post.

1 Like

That is awesome. I’m all red… Time to simplify and make it actually readable. Is passive voice a bad thing in a blog post?

Have you tried www.grammarly.com? It helps a lot; highly recommended.

Hey guys, here’s an interesting MSFT blog on Transfer Learning for Text - https://blogs.technet.microsoft.com/machinelearning/2018/04/25/transfer-learning-for-text-using-deep-learning-virtual-machine-dlvm/

2 Likes

I’m looking for feedback on this blog post I’m putting together. It’s definitely still a rough draft, but I’m wondering if you guys think I’m putting too much in the middle section about the variables. Honestly, each of those could probably have a blog post of its own, so I tried to keep the advice practical and friendly for people who maybe haven’t built a language model before.

This may be particular to me, but I feel like there is way too much text to wade through. Perhaps more images? Just my opinion though :slight_smile:

1 Like

It’s definitely a lot of text. I also considered doing a lot less explanation of what good picks seem to be, which would reduce the text size. I may take screenshots of the code and paste them in. Thanks for the feedback.

My shot at explaining GANs.

3 Likes

I realized I was trying to use the wrong medium to share what I had learned, so I used a gist instead, and I’m pretty happy with the outcome. Thanks again for the feedback!

2 Likes

Hi all,

Below is the link to my first blog post; it’s about things I learned about neural style transfer.

https://t.co/2QG8vJqMYp

I think blogging is a great way not just to share what you’ve learned but also to deepen your own understanding.

PS: I got to draw everything on my whiteboard, and that helps with remembering the concepts.

2 Likes

I found this blog post explaining einsum very useful.
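If einsum is new to you, here are a couple of toy examples of the notation (a sketch in PyTorch; numpy’s einsum reads the same):

```python
import torch

A = torch.randn(3, 4)
B = torch.randn(4, 5)

# Matrix multiply: the repeated index k is summed over.
C = torch.einsum('ik,kj->ij', A, B)     # same as A @ B

# Batched matrix multiply: the batch index b is carried through untouched.
X = torch.randn(10, 3, 4)
Y = torch.randn(10, 4, 5)
Z = torch.einsum('bik,bkj->bij', X, Y)  # same as torch.bmm(X, Y)

# Dropping an index from the output sums over it, e.g. row sums:
row_sums = torch.einsum('ij->i', A)     # same as A.sum(dim=1)
```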

2 Likes

Hello,
I created a small blog post about neural style transfer using what I learned from Lesson 13. Please check it out and give me some feedback so I can improve the content.
Thanks…!! :smiley:

I finished my implementation of the Deep painterly harmonization paper. The blog post is here and the notebook there.

I’m not perfectly happy with it because I didn’t get one part of their process (the histogram loss) to work, in the sense that I get the same result with or without it. It’s driving me crazy, so I’ll get back to it later. Any feedback is welcome!
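For anyone poking at the same part, here’s the rough kind of thing I mean by a histogram loss, as a crude sketch rather than the paper’s exact remapping; it just matches sorted activations per channel, and `feat` / `target_feat` are stand-ins for the relevant layer activations:

```python
import torch

def sorted_histogram_loss(feat, target_feat):
    # feat, target_feat: activations of shape (channels, height, width),
    # assumed to have the same spatial size. Sorting each channel's values
    # and comparing them position by position compares the two empirical
    # distributions while ignoring spatial layout -- a cheap 1-D histogram
    # match, not the paper's full remapping procedure.
    c = feat.shape[0]
    f_sorted, _ = torch.sort(feat.reshape(c, -1), dim=1)
    t_sorted, _ = torch.sort(target_feat.reshape(c, -1), dim=1)
    return ((f_sorted - t_sorted) ** 2).mean()
```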

8 Likes

Great article and results! The histogram loss part (and accompanying code example) was interesting. Did you include this from the beginning or after the second pass, for fine-tuning?

Only for the second step, like they did in their article.

Thanks, great article! I have started using VAEs on data that is not image-related and found two things I needed to consider. The first was that a lot of care needs to be taken with real-life data: if you are not careful, the output (usually from a sigmoid) simply can’t match the input. Indeed, I found I needed to do quite a lot of feature engineering, finishing off with a sigmoid on the inputs, to make sure I wasn’t building in reconstruction error the model could never remove because of the 0-to-1 limit.
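To make that concrete, here’s a minimal sketch of what I mean (just column-wise min-max scaling so every input lands in [0, 1], the range a sigmoid decoder can actually produce; the function names are mine):

```python
import numpy as np

def to_unit_range(X, eps=1e-8):
    # Scale each column of X into [0, 1] so a sigmoid decoder can actually
    # reproduce it; keep lo/span so the transform can be inverted later.
    lo = X.min(axis=0)
    span = X.max(axis=0) - lo + eps
    return (X - lo) / span, (lo, span)

def from_unit_range(X01, lo, span):
    return X01 * span + lo
```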

The other thing I have started to look at is changing the ratio of reconstruction loss to KL loss. People always seem to leave the two weighted equally, but I think some experimentation there can give better results.
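Concretely, the knob I mean is just a weight on the KL term, beta-VAE style. A sketch, assuming a PyTorch model whose forward pass returns `recon`, `mu`, and `logvar`:

```python
import torch
import torch.nn.functional as F

def vae_loss(recon, x, mu, logvar, beta=1.0):
    # Reconstruction term: how well the decoder output matches the input
    # (both assumed to live in [0, 1], hence the BCE).
    recon_loss = F.binary_cross_entropy(recon, x, reduction='sum')
    # KL term: how far the encoder's posterior is from the unit Gaussian prior.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # beta > 1 pushes the posterior toward the prior at the cost of
    # reconstruction quality; beta < 1 does the opposite.
    return recon_loss + beta * kl
```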

Many thanks again; I look forward to reading future articles.

1 Like