Blog Posts, Projects and Articles


(Gavin Francis) #42

There was a great presentation on drug discovery at this week’s Machine Learning Meetup in London. Among the topics presented by the speaker were

  • the application of convolutions to the structural graphs of molecules with the aim of identifying their properties
  • how to create generative models to propose molecules with desirable features
  • use of recurrent networks to exploit the grammar of text-based molecular formulae
  • use of autoencoders to create embeddings that convert the discrete space of molecules into a continuous space where gradient descent can be applied to search for molecules with optimal properties, such as the ability to bind to a target without causing toxicity.

He provided a link to his slides: http://www.ymer.org/papers/files/2017-London-ML-Meetup.pdf
Link to the meetup for anyone in London: https://www.meetup.com/London-Machine-Learning-Meetup/
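As a toy sketch of that last idea (gradient search in a learned continuous latent space), here is what the optimization step could look like. Everything below is a stand-in of my own, not from the talk: the linear `predictor` stands in for a trained property model, and in the real setting a trained decoder would map the optimized latent vector back to a molecule.

```python
import torch

# Hypothetical stand-ins: in the talk's setting, `predictor` would be a trained
# network scoring a latent vector (e.g. predicted binding affinity), and a
# trained decoder would map latent vectors back to molecules.
torch.manual_seed(0)
predictor = torch.nn.Linear(8, 1)

z = torch.zeros(8, requires_grad=True)   # start from some point in latent space
opt = torch.optim.Adam([z], lr=0.1)

for _ in range(100):
    opt.zero_grad()
    loss = -predictor(z).sum()           # maximise the predicted property
    loss.backward()
    opt.step()

# z has moved toward higher predicted scores; decoding z would propose a molecule.
```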

(Kishore P. V.) #43

I started a new blog a few months ago about Computer Science topics. It currently has two posts: one on the “Rule of 72” and another on matrix multiplication. I am working on a new post about the precision and recall metrics. Feedback is welcome! :slight_smile:

http://kishorepv.github.io/
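As a quick illustration of the first topic (my own example, not taken from the blog): the Rule of 72 approximates the time for an investment to double as 72 divided by the annual growth rate in percent, which tracks the exact answer closely for typical rates.

```python
import math

def doubling_time(rate_percent):
    """Rule of 72: approximate years to double at a given annual rate (%)."""
    return 72 / rate_percent

def exact_doubling_time(rate_percent):
    """Exact doubling time from compound growth: log(2) / log(1 + r)."""
    return math.log(2) / math.log(1 + rate_percent / 100)

# At 6% annual growth: rule of 72 says 12.0 years; the exact value is ~11.9.
print(doubling_time(6), exact_doubling_time(6))
```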


#44

Is there any sample code to save the output of the layer before the final FC layer of VGG16 for all the training images, and then use those saved outputs as inputs to train the FC layer as a linear model? I understand the performance benefit, but I have not done it before. Can you point me to any reference code? Thanks.


(Matthijs) #45

You can find code that does this on the Keras blog: https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html


(Pietz) #46

I wrote my bachelor’s thesis on age assessment of teenagers using MR images of their knees. This was a study for the German Research Foundation to help with asylum applications in Europe. It was a fun project because I got to apply interesting image preprocessing, segmentation, regression, transfer learning and also some shallow machine learning techniques. I was awarded this year’s innovation award of my university. Thanks to Jeremy for his great course!

Here is a quick write-up on Medium:

I also just open-sourced the code and the full thesis:


(Corbin Albert) #47

Made a blog post on my new deep learning box! Hopefully it can help some people out!


(Pietz) #48

I wrote a little bit about different types of convolutions because I found the topic quite confusing. Maybe this helps a few of you :slight_smile:
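One variant that trips people up is the depthwise separable convolution. As a sketch (my own example, not necessarily from the article), it splits the spatial filtering (a grouped convolution with `groups=in_channels`) from the channel mixing (a 1x1 convolution), matching a standard convolution’s output shape with far fewer parameters:

```python
import torch

# Standard convolution: 32 -> 64 channels with a 3x3 kernel.
standard = torch.nn.Conv2d(32, 64, kernel_size=3, padding=1)

# Depthwise separable: per-channel 3x3 filtering, then 1x1 channel mixing.
separable = torch.nn.Sequential(
    torch.nn.Conv2d(32, 32, kernel_size=3, padding=1, groups=32),  # depthwise
    torch.nn.Conv2d(32, 64, kernel_size=1),                        # pointwise
)

x = torch.randn(1, 32, 16, 16)
assert standard(x).shape == separable(x).shape   # same output shape...

params = lambda m: sum(p.numel() for p in m.parameters())
# ...with far fewer parameters: 18496 vs 2432 here.
print(params(standard), params(separable))
```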


(Pete Condon) #49

In the spirit of Cunningham’s Law, I’ve finally received permission to put together a few posts about some of the more interesting topics we’re working on. Very keen for feedback:


(QuantScientist) #50

PyTorch Model Ensembler + Convolutional Neural Networks (CNNs)

Here, we investigate the effect of ensembling PyTorch models by combining the top-N single models crafted during the training phase. The results demonstrate that model ensembles can significantly outperform conventional single-model approaches. Moreover, the method constructs an ensemble of deep CNN models with different architectures that complement each other.
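A minimal sketch of one common way to combine top-N models (averaging their softmax outputs; the post’s exact method may differ, and the linear models below are stand-ins for trained CNN checkpoints):

```python
import torch

# Stand-ins for the top-5 model checkpoints saved during training.
torch.manual_seed(0)
models = [torch.nn.Linear(10, 3) for _ in range(5)]

def ensemble_predict(models, x):
    """Average the softmax outputs of all member models."""
    with torch.no_grad():
        probs = torch.stack([torch.softmax(m(x), dim=1) for m in models])
    return probs.mean(dim=0)

x = torch.randn(4, 10)
avg = ensemble_predict(models, x)   # 4 x 3 averaged class probabilities
```

Averaging probabilities (rather than logits or hard votes) is one simple choice; it keeps the output a valid distribution over classes.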


(Aless Bandrabur) #51

I’ve been reading this super interesting article:
https://www.technologyreview.com/s/610278/why-even-a-moths-brain-is-smarter-than-an-ai/

I would love to build a model with the fast.ai library that contains such a simulated version of octopamine and compare it with classical models. Is anyone interested in neuroscience who would like to collaborate?


(Pietz) #52

I wrote an article about how popular convolutional blocks can be implemented in simple code. Maybe it helps a few people here.
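As an example of such a block (my own sketch; the article’s implementations may differ), here is a basic ResNet-style residual block, whose defining feature is the skip connection that adds the input back to the convolved output:

```python
import torch

class ResidualBlock(torch.nn.Module):
    """Basic residual block: two 3x3 convs plus a skip connection."""

    def __init__(self, channels):
        super().__init__()
        self.conv1 = torch.nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = torch.nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = torch.nn.ReLU()

    def forward(self, x):
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        return self.relu(out + x)   # skip connection: add the input back

block = ResidualBlock(16)
y = block(torch.randn(2, 16, 8, 8))   # shape is preserved: 2 x 16 x 8 x 8
```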


(Jayesh Saita) #53

Hello guys,
I’ve finally written my first blog post!
It explains how I did speech command recognition and how I was able to reduce training time by 96% using a simple trick on Google Colab!
Check out the post here - https://towardsdatascience.com/ok-google-how-to-do-speech-recognition-f77b5d7cbe0b