Well this post from @irhumshafkat is already taking off just a few hours after I tweeted it, with hundreds of likes including many top researchers in the field.
Not bad for a high school student from Bangladesh, I think.
Only the natural result of open source deep learning frameworks, open access papers and preprints, and free, great resources like fast.ai to learn from and actually understand them all
Absolutely elated to see so many have found this a clear and useful article!
Sharing some Dockerfiles I use for fast.ai in the hope they may be helpful to those getting started or interested in trying out Docker. Feedback welcome!
Not quite directly related to what we are learning in class, since this is reinforcement learning without gradient descent, but I just published a reproduction post for Uber's neuroevolution paper. It's still about neural networks, so I'm guessing people would be interested in checking it out and seeing how a very different use of neural nets can work.
Besides, I think people will be curious to see what I write about using spot instances on AWS and my learnings from the paper, since I found what I think are fairly significant flaws in the results (though it's still quite interesting).
Finally got around to reading it. Thanks a lot for this really well written post, and looking forward to your future posts.
Very well written blog, I really enjoyed it, great job! To be honest, it's hard for me to understand how VAEs work. I get the general idea, but it's still hard for me to figure out what is happening under the hood; when you tell me about latent spaces I still can't visualise what they are in my mind. It's really impressive for someone of your age to know so much!
Not impressive enough if I fail to get some of the ideas across! I'll definitely write up another post explaining the whole latent vectors/encodings idea sometime in the near future. Thank you for the feedback.
Not sure if this helps, but in my mind this is very similar to the distributed representations / embeddings that we use to represent words (or concepts). You get some vector and you try to cram as much info into it, in some way that something down the road can decode it. And in the act of doing so you get interesting representations, some of which are quite interpretable (like word vecs) or actionable (like the example of adding glasses to an image from the wonderful @irhumshafkat's post).
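To make the latent-arithmetic idea concrete, here is a tiny NumPy sketch. All the vectors are made up for illustration, and no real encoder or decoder is involved:

```python
import numpy as np

# Suppose an encoder maps each face image to a 3-d latent vector
# (real latent spaces are much larger; these numbers are invented).
face_no_glasses = np.array([0.2, -1.0, 0.5])
face_with_glasses = np.array([0.9, -1.1, 0.4])

# A "glasses" direction is the difference between paired encodings
glasses_direction = face_with_glasses - face_no_glasses

# Adding that direction to a new face's encoding, then decoding it,
# would (ideally) produce the same face wearing glasses.
new_face = np.array([-0.3, 0.7, 1.2])
new_face_with_glasses = new_face + glasses_direction
```

The interesting part is that nothing forces the encoder to learn such a direction; it just tends to emerge when the model is made to compress images into a small vector.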
I nearly always mention, when I get a chance, a paper from Hinton I really enjoyed. And guess what: today I typed it into Google and it turned up a result I had not seen before, another paper on this subject. This is crazy. There is another one.
Anyhow, sorry if this isn't helpful. Either way, I am looking forward to @irhumshafkat giving this subject a proper treatment in another of his articles. Frankly speaking, at this point I am looking forward to whatever @irhumshafkat will write.
This is always helpful! It depends on the person, but for me adding little pieces of information here and there allows me to see the same concept from different perspectives and to finally understand the big picture. I still have to read the "No Bullshit Guide to Linear Algebra" book to get the basics I do not have. I think without this prior reading it will be hard to get to your level of understanding. Ty btw
One of the most unbelievable things about these papers is simply how old they are while still being highly relevant; one of them is from 1986! These are some fantastic links, thank you @radek.
Thank you very much @Ekami, you are very kind, as always. To be completely honest, I do not know much about linear algebra; at this point I am not sure whether that is a help or a detriment to learning.
In all honesty, I am planning to learn more about linear algebra (do Rachel's course; I even bought the book for it, which I proudly keep on the shelf as it accumulates dust, though it gives me peace of mind to have it). But for now it is all hands on deck to make the most of the course while it is in session, so may the detection and localization continue!
I also really like papers such as this, as they read like a good blog post or an email from a friend: a person telling you something interesting in a very unassuming fashion. The focus is to explain an interesting idea, not to drown you in math notation.
Apologies for spamming you with links to papers, but there is one other paper you might find interesting, if you haven't already come across it:
A motivation for dropout comes from a theory of the role of sex in evolution (Livnat et al., 2010). Sexual reproduction involves taking half the genes of one parent and half of the other, adding a very small amount of random mutation, and combining them to produce an offspring.
Talk about neat ideas!
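As a toy sketch of the mechanism the quote motivates: dropout randomly zeroes units during training, so no unit can rely on a fixed set of partners, much like genes can't rely on always co-occurring with the same genes. A minimal NumPy version (the function name and constants are my own):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, p=0.5):
    """Inverted dropout: zero each unit with probability p at training
    time, scaling survivors so the expected output is unchanged."""
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

x = np.ones(10)
y = dropout(x, p=0.5)
# Each surviving unit becomes 2.0, dropped units become 0, so the
# layer's expected output matches the no-dropout case.
```

At test time the whole network is used with no mask, which loosely mirrors averaging over the ensemble of thinned networks seen during training.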
Found a good trick in Jupyter: https://medium.com/infinity-aka-aseem/experimenting-with-python-you-can-copy-paste-more-than-you-think-is-possible-87c8968b33e2
Here is a blog post I wrote a while back which I guess might be relevant to our last class. Basically, it tries to tell the story of how computer vision tasks evolved and how neural nets can be adjusted for each task (like we covered in class): Classification -> Single object localization -> Multi-object using sliding windows -> YOLO -> Anchor boxes and non-max suppression. I am going to put up a link to a notebook with code for each task, which will essentially be running the lesson 8 and 9 code. Hope it turns out a little helpful.
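For anyone curious about the last step in that chain, here is a minimal sketch of non-max suppression in plain NumPy; the box format ([x1, y1, x2, y2]) and the threshold are my own choices, not taken from the blog post:

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    """Keep the highest-scoring box, drop boxes overlapping it, repeat."""
    order = list(np.argsort(scores)[::-1])
    keep = []
    while order:
        best = order[0]
        keep.append(int(best))
        order = [i for i in order[1:]
                 if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]])
scores = np.array([0.9, 0.8, 0.7])
print(non_max_suppression(boxes, scores))  # → [0, 2]
```

The two near-identical boxes collapse into the higher-scoring one, while the distant box survives, which is exactly the clean-up step needed after a detector proposes many overlapping candidates.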
Are there any current competitions that involve object detection and localization?
Sort of: the 2018 Data Science Bowl involves finding the masks of individual cell nuclei, and detection and localization can be a crucial first step for that.
An interesting take on the future of AI (AGI, actually) and robotics:
"We will be Zeus.
We will be Prometheus, for these new beings.
Do we give them fire?"
Hey guys,
Today I'm proud to announce the first post of a 3-part blog series.
In this blog post you'll learn how I implemented the SRGAN and SRPGAN papers, which do single-image super-resolution (upscaling while enhancing images, in other words). You can try a live demo of this work here and understand why some images indeed look awesome while others are "not so great" (little clue: it's related to what @jeremy said in lesson 1 of part 2: "The research level code is just good enough that they were able to run their particular experiments").
This blog post will be cut into 3 parts:
For Part 2 I used my own library based on PyTorch and Algorithmia, and I will explain why I now believe PyTorch is not meant for production (nor is Algorithmia) and all the kinks I went through.
I hope you enjoy this blog post, and I'm open to any feedback (even grammatical feedback, as English is not my native tongue) and also any contributions. The code is open source and the script is standalone (you just have to run it to set everything up). There are a few things like CGANs that I didn't implement, so there is still a lot of room for improvement.
Thanks for sharing! I'm not sure what you mean about the 2nd image: the GAN version looks way better to me! The windows are sharper, the cars are clearer, nearly every object I looked at was better defined. What didn't you like about this output?
But you're right - you might get even better results by adding different types of noise to your downscaled images, based on what kinds of noise happen in the real world. You could add a little blurring, or JPEG-compress with heavy compression, etc.
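To illustrate, here is a minimal sketch of degrading training images along those lines; the helper name and parameters are my own, and a real pipeline would also add JPEG artifacts (which plain NumPy can't easily simulate):

```python
import numpy as np

rng = np.random.default_rng(42)

def degrade(image, blur_radius=1, noise_std=0.02):
    """Roughly simulate real-world degradation of a float image in [0, 1]:
    a simple box blur followed by Gaussian sensor-style noise."""
    k = 2 * blur_radius + 1
    # Box blur: average the k*k shifted copies of an edge-padded image
    padded = np.pad(image, blur_radius, mode="edge")
    blurred = np.zeros_like(image)
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    blurred /= k * k
    noisy = blurred + rng.normal(0.0, noise_std, image.shape)
    return np.clip(noisy, 0.0, 1.0)

clean = np.eye(8)       # toy 8x8 "image"
lowq = degrade(clean)   # blurred + noisy version for (low, high) training pairs
```

The idea is that the network then learns to undo degradations it will actually see at inference time, rather than only the clean downscaling used to build the dataset.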
I don't think it's true to say "PyTorch is not meant for production", BTW. I know, for example, that all the AI2 web apps run on PyTorch with simple Flask endpoints.
Minor point - you probably should run a spell checker on your post.
Then we really have a different view of the world. Like you, a few people told me the second picture was better; a few others told me the 1st one was. For me the image does indeed look sharper, but the second image has some kind of noise I can't describe. Anyway, compared to the image with the crocodile, we can clearly see how the results compare.
But you're right - you might get even better results by adding different types of noise to your downscaled images, based on what kinds of noise happen in the real world. You could add a little blurring, or JPEG-compress with heavy compression, etc.
I'm sure I can still add a lot of optimization on top of what I already have. Thing is: what kind of real-world noise are we talking about?
I don't think it's true to say "PyTorch is not meant for production", BTW. I know, for example, that all the AI2 web apps run on PyTorch with simple Flask endpoints.
I'm sure they did, as I did with my demo here. But there are a lot of kinks you can't solve with PyTorch, for instance using my model on the CPU (see the issue here) or exporting your models to a library with lower compute/memory requirements, etc. Creating an API off of PyTorch is one thing; maintaining it with the right tools is another. I'll share my experience on this in Part 2.
Minor point - you probably should run a spell checker on your post.
Oops, I'm sorry I didn't do that sooner. Everything should be fixed now, thanks to @init_27 for telling me about Grammarly.