Thanks ahahha
I will update with fine-tuning as soon as I find out how to save the vocab in fastai v2.
Edit:
For those wondering how to save the vocab:
import pickle

with open(LM/'vocab.pkl', 'wb') as f:
    pickle.dump(learn.dls.vocab, f)
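To fine-tune later, the saved vocab can be loaded back the same way. A minimal, self-contained sketch (the `LM` path and the vocab list here are stand-ins, not the actual `learn.dls.vocab`):

```python
import pickle
from pathlib import Path

LM = Path(".")                       # stand-in for the language-model folder
vocab = ["xxunk", "xxpad", "the"]    # stand-in for learn.dls.vocab

# Save the vocab alongside the model weights
with open(LM/"vocab.pkl", "wb") as f:
    pickle.dump(vocab, f)

# Load it back when rebuilding the dataloaders for fine-tuning
with open(LM/"vocab.pkl", "rb") as f:
    loaded = pickle.load(f)
```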
You can also do torch.save(dls.vocab, path) and then load it back in with torch.load(path).
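For reference, torch.save takes the object plus a destination file. A quick sketch with a stand-in vocab list (not the actual dls.vocab):

```python
import torch

vocab = ["xxunk", "xxpad", "the"]    # stand-in for dls.vocab
torch.save(vocab, "vocab.pth")       # torch.save(obj, path)
loaded = torch.load("vocab.pth")     # round-trips plain Python objects too
```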
@muellerzr do you think it's worth me submitting a dls.save-type PR? v1 had something similar, right? It would be especially handy for NLP, where you'd like to keep the vocab intact.
Hi y'all! I only started writing a research blog on Deep Learning x Astrophysics a few months ago, and wanted to share some highlights:
Using hooks, we can use dimensionality reduction tools to visualize the feature space (activation maps) of galaxy morphology (post).
Also, I joined Twitter not too long ago; feel free to follow if you're interested in astronomical applications for deep learning!
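As a rough illustration of the hook-then-reduce idea (toy model, sizes, and layer names are mine, not the blog's actual code):

```python
import torch
import torch.nn as nn

# Toy CNN standing in for a galaxy-morphology model
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 4),
)

feats = []
def hook(module, inp, out):
    feats.append(out.detach())       # capture activations as they flow past

# Register a forward hook on the penultimate (Flatten) layer
handle = model[3].register_forward_hook(hook)
with torch.no_grad():
    model(torch.randn(32, 3, 16, 16))
handle.remove()

acts = feats[0]                      # (32, 8) activation matrix
# Reduce to 2-D for a scatter plot, e.g. with a low-rank PCA
u, s, v = torch.pca_lowrank(acts, q=2)
coords = acts @ v                    # (32, 2) points to visualize
```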
Hey guys,
I only recently started my own data science blog, following along with the beautiful book "Deep Learning for Coders with fastai & PyTorch". So yes, I use fastpages for my blog.
In my latest blog post I used fastai's DataBlock to build a classifier for Lego figurines. Absolutely glorious.
https://lschmiddey.github.io/fastpages_/2020/09/16/Lego-Classification.html
I hope you enjoy it!
Lasse
Meh, not really as impressive as teddies vs grizzly bears though, is it?
Hi lschmiddey, hope you're having a lovely day!
Great concise and enjoyable post!
Cheers mrfabuous1
Upvote!
Just posted @Juvian's and my results on CamVid. Short answer: there is no consistency in how models are trained for benchmarks, so we can't truly compare anything. A better option is to train on the Cityscapes dataset, as it's set up more like Kaggle, so there is no possible way to actually get the true test set and reach the same conclusion (though it is for research purposes only). Along with this, if we were to "compare" how fastai's dynamic UNet does, it's pretty decent with only a ResNet-34 backbone, and we saw a significant boost when applying Mish to the head of the network (we couldn't quite get it to replace the actual backbone's activation functions well). If you'd like to read more, see here: https://muellerzr.github.io/fastblog/papers/2020/09/18/CAMVID.html
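The "Mish in the head only" swap can be sketched in plain PyTorch. The head below is illustrative, not the actual DynamicUnet head; the point is that the replacement walks the head's modules and leaves the pretrained backbone untouched:

```python
import torch
import torch.nn as nn

# Illustrative segmentation-style head
head = nn.Sequential(
    nn.Conv2d(512, 256, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(256, 32, 3, padding=1), nn.ReLU(inplace=True),
)

def relu_to_mish(module):
    # Recursively replace ReLU activations with Mish in this subtree only
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, nn.Mish(inplace=True))
        else:
            relu_to_mish(child)

relu_to_mish(head)   # backbone is never passed in, so it keeps its ReLUs
```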
Hi guys,
I'm experimenting with transferring PyTorch/Keras/TF models to fastai. My last post was about molecule generation using an LSTM. Today I bring you a Message Passing Neural Network for bioactivity prediction!
The model architecture comes from this paper: https://jcheminf.biomedcentral.com/articles/10.1186/s13321-019-0407-y
And you can find the original implementation here: https://github.com/edvardlindelof/graph-neural-networks-for-drug-discovery
For those of you looking for new, cool models for drug discovery studies using fastai, here's my notebook showing the working model:
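As a rough sketch of the message-passing idea the paper builds on: each round, every atom aggregates messages from its neighbours and updates its state. The dimensions and the GRU-style update below are common MPNN choices, not the repo's exact implementation:

```python
import torch
import torch.nn as nn

class MPLayer(nn.Module):
    """One round of message passing: aggregate neighbour features, then update."""
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(dim, dim)   # message function
        self.upd = nn.GRUCell(dim, dim)  # node-state update function

    def forward(self, h, adj):
        # h: (n_atoms, dim) node features; adj: (n_atoms, n_atoms) adjacency
        m = adj @ self.msg(h)            # sum of messages from bonded neighbours
        return self.upd(m, h)            # GRU update of each atom's state

# Toy molecule: 4 atoms in a chain
h = torch.randn(4, 16)
adj = torch.tensor([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], dtype=torch.float)
out = MPLayer(16)(h, adj)                # (4, 16) updated node states
```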
File not found for the link shared.
Just edited.
Hey guys!
I started a little project where I created a web app with Binder that predicts my rating for a book from the text in an uploaded image.
I didn't get Jeremy's example (voila_bears) working, so if any of you are trying to make Binder work, check out my repo: https://github.com/lschmiddey/book_recommender_voila
My blog posts on this project can be found here: https://lschmiddey.github.io/fastpages_/2020/09/28/Build-binder-app-Part4.html
I hope you enjoy it!
Lasse
Hi all,
I just published a two-part introductory article on building your own image classifier, aimed at deep learning beginners. If you've already completed Lessons 1-3 this is nothing new, but my target audience is myself before I started deep learning two years ago.
It's a hands-on article, so it walks you through building a dataset and a model on Colab and running it on Binder. If you have problems with the walk-through, I'd appreciate a comment so I can fix it.
Here's the link: https://medium.com/@butchland/build-and-run-your-own-image-classifier-using-colab-binder-github-and-google-drive-part-1-bd1aebc626e
By the way, I forgot to mention: thanks to @joedockrill for the jmd_imagescraper package and @vikbehal for the Binder instructions; hope you get to see your contributions in the article!
Best regards,
Butch
Hi all, my name is Dino and I am new to fastai. I absolutely love the material and the teaching style. I've read the first 7 chapters of the book and listened to the lectures online. Before moving further into the book, I decided to jump into a project, and I was able to create a multilabel classification model on the "chinese-mnist" dataset. I would appreciate any feedback, and I am really excited about moving forward with this course.
https://www.kaggle.com/dinodelao/fastai-v2-gpu-for-chinese-mnist-prediction
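For context, multilabel classification typically pairs a sigmoid-per-class head with binary cross-entropy (fastai's multilabel setup uses a BCE-with-logits loss under the hood). A tiny plain-PyTorch sketch with fake data; the class count and shapes are illustrative, not the notebook's:

```python
import torch
import torch.nn as nn

n_classes = 15                          # e.g. one class per chinese-mnist numeral
model = nn.Sequential(nn.Flatten(), nn.Linear(28*28, n_classes))

x = torch.randn(8, 1, 28, 28)           # fake batch of images
y = torch.zeros(8, n_classes)           # multi-hot targets, not one-hot
y[torch.arange(8), torch.randint(0, n_classes, (8,))] = 1.0

loss = nn.BCEWithLogitsLoss()(model(x), y)    # independent sigmoid per class
preds = (model(x).sigmoid() > 0.5)            # threshold instead of argmax
```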
Hi All,
I have written a couple of blog posts explaining the workings of fastai optimizers and the fastai training loop. I have read the source code for these and demonstrated my understanding with an image classification example. Find the links to my posts below. Hope this is helpful.
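For readers who want the gist before diving into the posts: fastai's fit wraps a standard PyTorch training loop with callbacks and its own optimizer wrappers. The bare loop looks roughly like this (synthetic data and plain SGD, purely for illustration):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
x, y = torch.randn(64, 10), torch.randint(0, 2, (64,))

losses = []
for epoch in range(3):
    opt.zero_grad()                  # fastai: handled inside Learner's one_batch
    loss = loss_fn(model(x), y)
    loss.backward()                  # compute gradients
    opt.step()                       # fastai optimizers add per-param state here
    losses.append(loss.item())
```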
Regards
Rakesh Sukumar
I've written a blog post about stacking tensors using PyTorch. As an example, I've used the MNIST dataset. This is my first blog post on deep learning, so please give it a read and let me know if any changes need to be made.
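As a quick illustration of tensor stacking with MNIST-sized images (torch.cat shown alongside for contrast; the fake digits are just random tensors):

```python
import torch

imgs = [torch.randn(28, 28) for _ in range(3)]    # three fake MNIST digits

stacked = torch.stack(imgs)          # (3, 28, 28): stack adds a new dimension
catted = torch.cat(imgs)             # (84, 28): cat joins along an existing one
batch = torch.stack(imgs).unsqueeze(1)   # (3, 1, 28, 28): NCHW image batch
```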
Thank you
Just added my first blog post on image classification using fast.ai's vision library and jmd_imagescraper! Please feel free to critique it.
You can find it at: A Beginner's Guide to Deep Learning
New blog post:
Check it out and please let me know if you find the visualizations helpful, or what can be improved.