I saw this on the Twitter profile of the student @jeremy mentioned in the 1st lecture, the one who founded Delta and works at GoogleAI. @rachel
Previous students, mentors, and those who are more proficient can apply for these, and maybe students like me can participate. (I am not aware of how the participation works; I am still going through the site.)
I would like to use a VGG network with create_cnn but am unable to find out how to achieve this. It seems that we can only use ResNet through create_cnn's model argument. Also, how can I override the last dense layers created by default?
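One way that may work, assuming fastai v1's create_cnn: pass one of the VGG architectures re-exported from torchvision (e.g. models.vgg16_bn) as the arch argument, and supply a custom_head to replace the default dense layers. A minimal sketch; the layer sizes in the head (in particular the 1024 input features) and the cut point are assumptions that may need adjusting for your fastai/torchvision versions:

```python
from fastai.vision import *

# data is assumed to be an ImageDataBunch built as in the lesson notebooks.
custom_head = nn.Sequential(
    AdaptiveConcatPool2d(),   # pooling layer fastai uses at the top of its default heads
    Flatten(),
    nn.Linear(1024, 512),     # 1024 = 2 * 512 channels from vgg16_bn's last conv block (assumption)
    nn.ReLU(inplace=True),
    nn.Dropout(0.5),
    nn.Linear(512, data.c),   # data.c = number of classes
)

learn = create_cnn(data, models.vgg16_bn, custom_head=custom_head)
```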
I am trying to plot the losses using interp.plot_top_losses(4, figsize=(15,11)), and it seems that fast.ai does not let me show the image name along with the loss. It would be a great help if we could have the image name too, so that I can decide on a further course of action for those images.
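In case it helps, a minimal sketch assuming fastai v1's ClassificationInterpretation (learn and data are assumed to be already defined as in the lesson notebooks): top_losses returns the loss values and the indices of the corresponding validation items, so the file names can be looked up in the validation dataset.

```python
interp = ClassificationInterpretation.from_learner(learn)

# Get the 4 largest losses and the indices of the items they belong to.
losses, idxs = interp.top_losses(4)
for loss, idx in zip(losses, idxs):
    # data.valid_ds.x.items holds the file paths of the validation images.
    print(data.valid_ds.x.items[int(idx)], loss.item())
```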
I tried to install fastai again on a new EC2 instance, but I got an error when installing torchvision-nightly using “conda install -c fastai torchvision-nightly”. The error is:
Solving environment: failed
PackagesNotFoundError: The following packages are not available from current channels:
I want to take a minute to thank everyone on the forums, not just for sharing their amazing knowledge but also for going out of their way to take time from their lives to correct beginners like myself and always answer questions, even when the answers are just a few Google searches away.
Also, thanks to everyone on the forums who posts their amazing ideas. The list would be too long, and I would annoy many people by tagging them.
I wish everyone on the fast.ai community a very happy new year!
May we continue to take on Kaggle leaderboards and keep breaking SOTA in existing and new fields.
I have tried to plot the images from the top losses through the ImageCleaner class, but it is quite slow. So I feel that showing the image name with the top losses would be much easier, and then we could take action on those particular images.
What tools are used to make a segmentation dataset? I want to create the masks from the raw images and then feed those to a network. I came across U-Net for the segmentation problem itself, but how do I make the custom dataset from the raw images?
The task is to detect different types of red blood cells (different classes) in a microscopic image, to give a count of each type of cell present and to detect diseases if any. Basically it is a segmentation and classification problem.
Initially, the image is to be segmented, and then patches are taken which are later fed to a network for classification. I am not confident in hand-crafting the features myself (for segmentation).
@jeremy Also looking for your advice or suggestions, if any.
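For building the masks themselves, annotation tools such as labelme or the VGG Image Annotator are commonly used to draw polygons over the raw images; the polygons can then be rasterised into one mask PNG per image. A minimal sketch with PIL, where the image size, polygon coordinates and class indices are all placeholders:

```python
from PIL import Image, ImageDraw
import numpy as np

width, height = 640, 480                    # size of the raw image (placeholder)
mask = Image.new('L', (width, height), 0)   # 0 = background class
draw = ImageDraw.Draw(mask)

# Each annotation is a list of (x, y) vertices plus the class index it belongs to.
annotations = [
    ([(100, 100), (200, 120), (180, 220), (90, 200)], 1),  # e.g. one cell type
    ([(300, 300), (380, 310), (360, 400)], 2),              # e.g. another cell type
]
for polygon, class_idx in annotations:
    draw.polygon(polygon, fill=class_idx)   # burn the class index into the mask

mask.save('mask_0001.png')                  # one mask per raw image
print(np.unique(np.array(mask)))            # sanity check: [0 1 2]
```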
I’d be careful of calling it production ready, @cyberdroidmann, but I deployed to a public endpoint on Azure; see my Medium post https://medium.com/@lunchwithalens/deploying-my-fastai-predictor-to-microsoft-azure-c7e635d464a1 for more details. I have since migrated the App Service from P1V2 to a cheaper B2 tier successfully, and I do work for Microsoft so I had some Azure credits. I think a B2 runs around $75/month, so if you just wanted to push it out, try it, and then tear it down, it wouldn't be that expensive. Let me know if there are questions the post doesn't address.
For data containing multiple labels, how do you effectively split the train and valid sets from a csv? The problem occurs when I split the data randomly: this generates an error because the validation split contains labels that are not present in the training split.
I always first identify the minority classes, do image augmentation for those classes only, and add the augmented images to my input data; then I apply the train/validation split. Another approach can be to have two models, one exclusively for the minority classes, so that the train/test split is more reasonable with respect to the other classes' data.
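A different option from the augmentation approach above is a label-aware split: make a random split, then pull rows back into the training set whenever a label would otherwise be missing from it, and drive the split from the resulting is_valid column (e.g. with fastai v1's .split_from_df(col='is_valid')). A minimal pandas sketch, assuming a csv with a space-separated 'labels' column; the file and column names are placeholders:

```python
import pandas as pd
import numpy as np

df = pd.read_csv('labels.csv')                  # placeholder file name
np.random.seed(42)
df['is_valid'] = np.random.rand(len(df)) < 0.2  # ~20% of rows go to validation

all_labels = set(l for labels in df['labels'] for l in labels.split())
train_labels = set(l for labels in df.loc[~df['is_valid'], 'labels'] for l in labels.split())

# For any label missing from the training split, pull one row carrying it back into train.
for label in all_labels - train_labels:
    carriers = df[df['labels'].str.split().apply(lambda ls: label in ls)]
    df.loc[carriers.index[0], 'is_valid'] = False

df.to_csv('labels_split.csv', index=False)      # is_valid now covers every label in train
```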
@jeremy Are there any plans to offer part 2 of the course to remote participants? (Thought of asking this on the forum since I might have missed the announcement.)