I enjoyed reading your article https://tapesoftware.net/fastai-onnx/ and the code works great with your dataset. Unfortunately, I'm having a problem with inference (under Windows) when using my own dataset (jpg images). The model gives wrong/different predictions on Windows compared to the same image on Colab.
On Colab, I got an accuracy of ~90%.
Maybe some parameters need to be changed, perhaps something like the code below?
var imageInfo = new SKImageInfo( …
…
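One way to narrow this down is to run the exported ONNX model from Python on the exact tensor fastai produces and compare the outputs. A minimal sketch, assuming hypothetical file names ('export.pkl', 'model.onnx', 'sample.jpg'):

import onnxruntime as ort
from fastai.vision.all import *

learn = load_learner('export.pkl')        # the learner that works on Colab
dl = learn.dls.test_dl(['sample.jpg'])    # one of the jpg images that misbehaves
x, = dl.one_batch()                       # the resized + normalized tensor fastai feeds the model

sess = ort.InferenceSession('model.onnx')
inp_name = sess.get_inputs()[0].name
onnx_preds = sess.run(None, {inp_name: x.cpu().numpy()})[0]
print(onnx_preds.argmax(axis=1))          # should agree with learn.get_preds(dl=dl)

If those outputs agree, the difference is on the C#/SkiaSharp side, most often channel order, 0–255 vs 0–1 scaling, or the ImageNet mean/std normalization.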
I'd like to share with those of you interested in time series tasks that I've just finished a major update to the tsai (timeseriesAI) library and added lots of new functionality, models, and tutorial notebooks. It now works well with fastai v2 and PyTorch 1.7.
If you are interested, you can find more details in this blog post or in the repo.
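For anyone who has not used tsai before, here is a rough sketch of a training run in the style of the library's README at that time. The dataset name, batch size, and epoch count are just examples, and the exact calls should be treated as assumptions about that version of the API:

from tsai.all import *

X, y, splits = get_UCR_data('NATOPS', return_split=False)   # any UCR dataset as a toy example
tfms = [None, [Categorize()]]
dsets = TSDatasets(X, y, tfms=tfms, splits=splits)
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=64,
                               batch_tfms=[TSStandardize()])
learn = Learner(dls, InceptionTime(dls.vars, dls.c), metrics=accuracy)
learn.fit_one_cycle(25, lr_max=1e-3)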
This is my first fastai app. It's called black bird detector and was trained to distinguish between blackbirds, ravens, and crows. I used 100 images of each bird species to train a transfer learning model with a resnet18. After adding image augmentation, the accuracy is roughly 88%.
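For anyone wanting to reproduce something similar, here is a minimal fastai v2 sketch of that kind of setup, assuming one folder per class. The folder names, image size, and epoch count are placeholders, not the app's actual code:

from fastai.vision.all import *

path = Path('birds')                      # birds/blackbird, birds/raven, birds/crow
dls = ImageDataLoaders.from_folder(
    path, valid_pct=0.2, seed=42,
    item_tfms=Resize(224),
    batch_tfms=aug_transforms())          # the image augmentation step
learn = cnn_learner(dls, resnet18, metrics=accuracy)   # transfer learning from ImageNet
learn.fine_tune(5)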
Captured images predict throttle and steering to navigate my basement track. I am doing the training on an NVIDIA Jetson Xavier. Training data is collected by controlling the car with a Bluetooth Xbox game controller. An onboard NVIDIA Jetson Nano does the inference when the car is driving itself.
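The post does not include code, but one way to frame this in fastai is image regression with two continuous targets. A sketch under assumed names (the CSV file, its columns, the frames folder, and the output range are all hypothetical):

from fastai.vision.all import *
import pandas as pd

df = pd.read_csv('drive_log.csv')         # assumed columns: fname, throttle, steering
dblock = DataBlock(
    blocks=(ImageBlock, RegressionBlock(n_out=2)),        # predict two floats per image
    get_x=ColReader('fname', pref='frames/'),
    get_y=ColReader(['throttle', 'steering']),
    splitter=RandomSplitter(valid_pct=0.2),
    item_tfms=Resize(224))
dls = dblock.dataloaders(df, bs=32)
learn = cnn_learner(dls, resnet18, y_range=(-1, 1))       # controller values assumed in [-1, 1]
learn.fine_tune(5)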
Hey guys,
I've written an in-depth tutorial on image colorization using a U-Net and a conditional GAN and published the project on Towards Data Science.
You don’t need fancy GPUs or huge datasets to train this model. I’ve developed a strategy which allows you to train the whole model in less than three hours on only 8000 images and still get great results!
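The article has the full details; the sketch below is just a compact PyTorch illustration of what I understand to be the usual pix2pix-style objective behind conditional-GAN colorization: an adversarial term plus an L1 term between predicted and real color channels. It is not the article's actual code, and G, D, and the 100.0 weight are placeholders:

import torch
import torch.nn as nn

bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
lambda_l1 = 100.0                                 # typical pix2pix weighting

def generator_loss(G, D, L, ab):
    # L: grayscale (lightness) input, ab: ground-truth color channels
    fake_ab = G(L)
    pred_fake = D(torch.cat([L, fake_ab], dim=1)) # discriminator is conditioned on the input
    adv = bce(pred_fake, torch.ones_like(pred_fake))
    return adv + lambda_l1 * l1(fake_ab, ab)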
Yesterday, a new competition was launched on Kaggle. An image classification competition!
I wrote a quick fastai starter that does quite well right now. Hope it is helpful!
Here is my attempt. I had to iterate several times before I managed to get it into production. Thanks to everyone on the forums for helping out with the issues.
Here is my GitHub repo. And my app looks like this:
@ilovescience Thanks for sharing this notebook. Really interesting to look through.
Regarding the rule of not allowing internet access in this particular Kaggle competition: I assume that means you couldn't simply use ‘resnet18’, for example, like we do in the fastai lessons, because the pretrained weights can't be downloaded?
You can add the pretrained model weights as a Kaggle dataset and move them to the location where PyTorch/fastai looks for previously downloaded weights. Then you can use pretrained models without any issue. This is shown in my code as well (under the “model training” section).
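For anyone trying this for the first time, here is a short sketch of the idea. The dataset path and weight file name are assumptions, and the cache directory can differ between torch versions (older ones use ~/.cache/torch/checkpoints):

from pathlib import Path
import shutil

cache = Path('~/.cache/torch/hub/checkpoints').expanduser()
cache.mkdir(parents=True, exist_ok=True)
shutil.copy('/kaggle/input/resnet18-weights/resnet18-5c106cde.pth', cache)

# From here on, cnn_learner(dls, resnet18, pretrained=True) finds the
# weights locally instead of trying to download them.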
I finally bought the book and it's even better now that my eyes don't hurt from my notebook screen.
I decided to start blogging. I managed to set up my blog on GitHub, but it's not ready yet. So, I will share my notebooks for those of you interested in cheminformatics and bioinformatics. The goal of this blog is to adapt the fastai book to my specific area of research (cheminformatics).
My first post will be similar to chapter_2, showing how to train a model end-to-end.
But that's not all! In the notebook we will implement a CNN called Chemception for bioactivity prediction using only images of molecules. No fancy features or any other kind of explicit chemical information. The question is: how much chemical information is necessary to train a chemical model?
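As a taste of the "images of molecules" idea: RDKit can render a SMILES string straight to an image, and those images are all the CNN gets to see. The SMILES and file name below are just an example, not taken from the notebook:

from rdkit import Chem
from rdkit.Chem import Draw

smiles = 'CC(=O)Oc1ccccc1C(=O)O'                      # aspirin, as an example
mol = Chem.MolFromSmiles(smiles)
Draw.MolToFile(mol, 'aspirin.png', size=(224, 224))   # one image per molecule, no descriptors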