Part 2 blogs

Well this post from @irhumshafkat is already taking off just a few hours after I tweeted it, with hundreds of likes, including from many top researchers in the field.

Not bad for a high school student from Bangladesh, I think. :wink:

21 Likes

Only the natural result of open source deep learning frameworks, open access papers and preprints, and free, great resources like fast.ai to learn from and actually understand them all :grin:

Absolutely elated to see so many have found this a clear and useful article!

15 Likes

Sharing some Docker files I use for fast.ai, in the hope they may be helpful to those getting started or interested in trying out Docker. Feedback welcome!

blog

repo

Not quite directly related to what we are learning in class, since this is reinforcement learning without gradient descent, but I just published a reproduction post for Uber’s neuroevolution paper. It’s still about neural networks, so I’m guessing people would be interested in checking it out and seeing how a very different use of neural nets can work.

Besides, I think people will be curious to see what I write about using spot instances on AWS and what I learned from the paper, since I found what I think are fairly significant flaws in the results (though it’s still quite interesting).
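If anyone wants a feel for what “reinforcement learning without gradient descent” means before reading the post, here is a toy sketch of the kind of simple genetic algorithm at the heart of the paper (the fitness function and hyperparameters here are placeholders, not the paper’s actual setup):

```python
import numpy as np

def evaluate(params):
    # Placeholder fitness: in the real setting this would run an episode
    # of the environment with a network built from `params` and return
    # the total reward.
    return -np.sum(params ** 2)

def simple_ga(n_params=100, pop_size=50, n_elite=10, sigma=0.1, n_gens=100):
    # Initial population of random parameter vectors
    population = [np.random.randn(n_params) for _ in range(pop_size)]
    for _ in range(n_gens):
        # Truncation selection: rank by fitness, keep the top n_elite
        ranked = sorted(population, key=evaluate, reverse=True)
        elite = ranked[:n_elite]
        # Next generation: mutated copies of random elites. Note there is
        # no gradient anywhere - just mutation and selection.
        population = []
        for _ in range(pop_size):
            parent = elite[np.random.randint(n_elite)]
            population.append(parent + sigma * np.random.randn(n_params))
    return max(population, key=evaluate)
```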

5 Likes

Finally got around to reading it. Thanks a lot for this really well written post, and I’m looking forward to your future posts. :clap:

1 Like

Very well written blog, I really enjoyed it, great job! To be honest, it’s hard for me to understand how VAEs work. I get the general idea, but it’s still hard for me to figure out what is happening under the hood; for instance, when you talk about latent spaces, I still can’t visualise what they are in my mind :sweat_smile: . It’s really impressive for someone your age to know so much!

2 Likes

Not impressive enough if I fail to get some of the ideas across :stuck_out_tongue:! I’ll definitely write up another post explaining the whole latent vectors/encodings idea sometime in the near future. Thank you for the feedback.

2 Likes

Not sure if this helps, but in my mind this is very similar to the distributed representations / embeddings we use to represent words (or concepts). You get some vector and you try to cram as much info into it as you can, in such a way that something down the road can decode it. And in the act of doing so, you get interesting representations, some of which are quite interpretable (like word vectors) or actionable (like the example of adding glasses to an image from @irhumshafkat’s wonderful post).
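To make the “adding glasses” example concrete: the trick is just vector arithmetic in the learned latent space. A minimal sketch, with toy linear stand-ins for a trained encoder/decoder (all names and shapes here are made up for illustration):

```python
import torch
import torch.nn as nn

# Toy stand-ins for a trained VAE's encoder/decoder
encoder = nn.Linear(784, 32)   # flattened image -> latent vector
decoder = nn.Linear(32, 784)   # latent vector -> flattened image

faces_with_glasses = torch.randn(64, 784)     # batch of images with glasses
faces_without_glasses = torch.randn(64, 784)  # batch of images without
my_face = torch.randn(1, 784)

with torch.no_grad():
    # Average encoding of each group; their difference is a
    # "glasses" direction in latent space
    glasses_direction = (encoder(faces_with_glasses).mean(dim=0)
                         - encoder(faces_without_glasses).mean(dim=0))
    # Move any face's encoding along that direction and decode it
    new_face = decoder(encoder(my_face) + glasses_direction)
```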

I nearly always mention a paper from Hinton I really enjoyed whenever I get the chance. And guess what, today I typed it into Google and it turned up a result I had not seen before :slight_smile: Another paper on this subject. This is crazy. There is another one.

Anyhow - sorry if this isn’t helpful :slight_smile: Either way, I am looking forward to @irhumshafkat giving this subject a proper treatment in another of his articles :slight_smile: Frankly speaking, at this point I am looking forward to whatever @irhumshafkat writes :slight_smile:

6 Likes

This is always helpful! It depends on the person, but for me, adding little pieces of information here and there allows me to see the same concept from different perspectives and finally understand the big picture. I still have to read the “No bullshit guide to linear algebra” book to build the foundations I’m missing. I think without that prior reading it will be hard to get to your level of understanding. Thank you, by the way :slight_smile:

1 Like

One of the most unbelievable things about these papers is simply how old they are while still being highly relevant; one of them is from 1986! These are some fantastic links, thank you @radek.

1 Like

Thank you very much @Ekami, you are very kind, as always :slight_smile: To be completely honest, I do not know much about linear algebra - at this point I am not sure whether that is helping or hindering my learning :wink:

In all honesty, I am planning to learn more linear algebra (do Rachel’s course; I even bought the book for it, which I proudly keep on the shelf as it accumulates dust - it gives me peace of mind to have it, though). But for now it is all hands on deck to make the most of the course while it is in session, so may the detection and localization continue! :slight_smile:

1 Like

I also really like papers such as this, as they read like a good blog post or an email from a friend :slight_smile: A person telling you something interesting in a very unassuming fashion. The focus is on explaining an interesting idea, not on drowning you in math notation :slight_smile:

Apologies for spamming you with links to papers, but there is one other paper that you might find interesting if you haven’t seen it yet. Though I suspect you might have come across it already.

A motivation for dropout comes from a theory of the role of sex in evolution (Livnat et al., 2010). Sexual reproduction involves taking half the genes of one parent and half of the other, adding a very small amount of random mutation, and combining them to produce an offspring.

Talk about neat ideas! :slight_smile:
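For concreteness, the mechanism this quote motivates is tiny: during training, randomly silence units so that none of them can depend on a fixed set of “partners”. A minimal sketch of inverted dropout:

```python
import torch

def dropout(x, p=0.5, training=True):
    # Keep each unit with probability 1 - p and scale the survivors so
    # the expected activation is unchanged; at test time, do nothing.
    if not training or p == 0:
        return x
    mask = (torch.rand_like(x) > p).float()
    return x * mask / (1 - p)
```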

2 Likes

Found a good trick in Jupyter: https://medium.com/infinity-aka-aseem/experimenting-with-python-you-can-copy-paste-more-than-you-think-is-possible-87c8968b33e2

Here is a blog post I wrote a while back which I guess might be relevant to our last class. Basically, it tries to tell the story of how computer vision tasks evolved and how neural nets can be adjusted for each task (like we covered in class): classification -> single object localization -> multi-object detection using a sliding window -> YOLO -> anchor boxes and non-max suppression. I am going to add a link to a notebook with code for each task, which will just be running the lesson 8 and 9 code :smiley: . Hope it turns out to be a little helpful :slight_smile:
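Since the pipeline ends with non-max suppression, here is a toy sketch of that step (boxes as [x1, y1, x2, y2] numpy arrays; this is an illustration, not the notebook code):

```python
import numpy as np

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-max suppression: repeatedly keep the highest-scoring
    box and drop any remaining box that overlaps it too much."""
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]  # indices by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the chosen box with all remaining boxes
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # Keep only boxes with low enough overlap with the chosen box
        order = order[1:][iou <= iou_threshold]
    return keep
```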

5 Likes

Are there any current competitions that involve object detection and localization?

Sort of - the 2018 Data Science Bowl involves finding the masks of individual cell nuclei, and detection and localization can be a crucial first step for that.

2 Likes

An interesting take on the future of AI (AGI, actually) and robotics:

“We will be Zeus. We will be Prometheus, for these new beings. Do we give them fire?”

2 Likes

Hey guys,
Today I’m proud to announce the first post of a 3-part blog series.
In this blog post you’ll learn how I implemented the SRGAN and SRPGAN papers, which perform single-image super resolution (upscaling while enhancing images, in other words). You can try a live demo of this work here and understand why some images indeed look awesome while others are “not so great” (little clue: it’s related to when @jeremy said in lesson 1 of part 2: “The research level code is just good enough that they were able to run their particular experiments”).

This blog post will be split into 3 parts:

  • Part 1: Implementing the research paper
  • Part 2: Creating an API endpoint for your model
  • Part 3: Creating the product frontend

For Part 2 I used my own library based on PyTorch, along with Algorithmia, and I will explain why I now think PyTorch is not meant for production (nor is Algorithmia) and all the kinks I went through.
I hope you enjoy this blog post and I’m open to any feedback (even grammatical feedback, as English is not my native tongue :slight_smile: ) and also any contributions. The code is open source and the script is standalone (you just have to run it to set everything up). There are a few things, like CGANs, that I didn’t implement, so there is still a lot of room for improvement :slight_smile:

14 Likes

Thanks for sharing! I’m not sure what you mean about the 2nd image - the GAN version looks way better to me! The windows are sharper, the cars are clearer; nearly every object I looked at was better defined. What didn’t you like about this output?

But you’re right - you might get even better results by adding different types of noise to your downscaled images, based on what kinds of noise happen in the real world. You could add a little blurring, or JPEG-compress with a lot of compression, etc.
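A minimal sketch of that kind of degradation pipeline using PIL (the blur radius and JPEG quality ranges here are arbitrary guesses, not tuned values):

```python
import io
import random
from PIL import Image, ImageFilter

def degrade(img, scale=4):
    """Downscale an image and add real-world-style noise:
    a little blur plus a round-trip through lossy JPEG."""
    img = img.convert("RGB")  # JPEG has no alpha channel
    # Mild Gaussian blur before downscaling
    img = img.filter(ImageFilter.GaussianBlur(radius=random.uniform(0, 1.5)))
    # Downscale by the super-resolution factor
    img = img.resize((img.width // scale, img.height // scale), Image.BICUBIC)
    # Round-trip through JPEG at a low quality setting
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=random.randint(30, 70))
    buf.seek(0)
    return Image.open(buf)
```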

I don’t think it’s true to say “PyTorch is not meant for production”, BTW. I know, for example, that all the AI2 web apps run on PyTorch with simple Flask endpoints.
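For reference, the bare-bones version of that pattern looks something like this (the model path and preprocessing are placeholders, not AI2’s actual code):

```python
import io
import torch
from flask import Flask, request, jsonify
from PIL import Image
from torchvision import transforms

app = Flask(__name__)
model = torch.load("model.pt")  # placeholder path to a saved model
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

@app.route("/predict", methods=["POST"])
def predict():
    # Read the uploaded image, preprocess, and run a forward pass
    img = Image.open(io.BytesIO(request.files["image"].read())).convert("RGB")
    x = preprocess(img).unsqueeze(0)  # add a batch dimension
    with torch.no_grad():
        out = model(x)
    return jsonify(prediction=out.argmax(dim=1).item())

if __name__ == "__main__":
    app.run()
```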

Minor point - you probably should run a spell checker on your post.

1 Like

Then we really have a different view of the world. Like you, a few people told me the second picture was better; a few others told me the 1st one was. For me, the second image does indeed look sharper, but it has some kind of noise I can’t describe. Anyway, compared to the image with the crocodile, we can clearly see how the results compare.

But you’re right - you might get even better results by adding different types of noise to your downscaled images, based on what kinds of noise happen in the real world. You could add a little blurring, or JPEG-compress with a lot of compression, etc.

I’m sure I can still add a lot of optimizations on top of what I already have :slight_smile: The thing is: what kind of real-world noise are we talking about?

I don’t think it’s true to say “PyTorch is not meant for production”, BTW. I know, for example, that all the AI2 web apps run on PyTorch with simple Flask endpoints.

I’m sure they do, as I did with my demo here. But there are a lot of kinks you can’t solve with PyTorch, for instance using my model on the CPU (see the issue here) or exporting your models to a library with lower compute/memory requirements, etc. Creating an API off of PyTorch is one thing; maintaining it with the right tools is another :slight_smile: I’ll share my experience with this in part 2 :slight_smile:
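(For anyone who hits the same CPU issue: the one workaround I know of is remapping storages to the CPU at load time, roughly like this - the model and checkpoint path below are placeholders:)

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # placeholder; build the real architecture here

# Load a checkpoint saved on a GPU onto a CPU-only machine by remapping
# every tensor's storage to the CPU at load time
state = torch.load("checkpoint.pth",
                   map_location=lambda storage, loc: storage)
model.load_state_dict(state)
model.cpu().eval()
```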

Minor point - you probably should run a spell checker on your post.

Oops, I’m sorry I didn’t do that sooner. Everything should be fixed now, thanks to @init_27 for telling me about Grammarly :slight_smile: