I used ImageCleaner to clean up the data as suggested in lesson 2. However, when I try to fit the learner with the new DataBunch I run into two issues.
Issue 1: while the model is fitting, the valid_loss column shows #na# values. That does not happen with the DataBunch from before I ran ImageCleaner.
Issue 2: if I ignore the #na# values and try to use ClassificationInterpretation.from_learner(), I get: "IndexError: index 0 is out of bounds for axis 0 with size 0".
Hello Monica, I am running into the same problem: all the valid_loss values are #na#. Can I ask how you solved it?
I use ImageDataBunch to load the CIFAR-10 dataset.
I'm having the same issue. How would you recommend manually checking the validation set?
I looked at the my_data.valid_ds and my_data.valid_dl attributes, but I don't understand the outputs well enough for that to be helpful. Do you have another suggestion?
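For reference, this is roughly what I tried (a minimal sketch assuming fastai v1 and a DataBunch named my_data; the variable name is mine):

from fastai.vision import *  # fastai v1, as used in the course

# An empty validation set is the usual cause of #na# in the valid_loss column.
print(len(my_data.valid_ds))   # number of validation items; 0 would explain the #na#
print(len(my_data.valid_dl))   # number of validation batches
x, y = my_data.valid_ds[0]     # raises IndexError when the set is empty
print(x.shape, y)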
In my case, I had not understood the concept of a validation set; this mistake is how I learnt it. If it happens to you, read up on validation: you need to have a folder set up for the validation data. Now it works for me. See the sketch after this post for what that looks like.
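For example, something like this (a minimal sketch assuming fastai v1; the dataset path is a placeholder):

from fastai.vision import *

path = Path('/path/to/dataset')  # placeholder; contains train/ and valid/ folders

# Either point fastai at an explicit validation folder...
data = ImageDataBunch.from_folder(path, train='train', valid='valid',
                                  ds_tfms=get_transforms(), size=224)

# ...or, if there is no valid/ folder, carve one out of the training data:
# data = ImageDataBunch.from_folder(path, train='.', valid_pct=0.2,
#                                   ds_tfms=get_transforms(), size=224)

print(len(data.train_ds), len(data.valid_ds))  # valid_ds must not be empty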
I am having exactly the same problem as you. Did you find a solution? I have tried tweaking almost everything, but it keeps happening (even on different datasets). Someone help me please…
I read the whole thread and checked my validation set to make sure it was consistent, but the problem remains the same for me.
Whenever I unfreeze my learner (even before using the cleaner), lr_find() shows #na# in the valid_loss column on every run, while train_loss seems to compute fine.
However, fit_one_cycle() computes the validation loss just fine!
I explored the documentation for lr_find and other threads, but I can't find a proper and definitive answer as to why lr_find would report valid_loss as #na#.
I had the same issue while using the from_folder method. Upon checking, I found that the valid LabelList had zero items, so I looked inside my validation folder and found a file that was not an image: the .DS_Store file that macOS creates. I deleted it and everything worked perfectly. So my advice: check whether the folder that contains the validation data has any non-image files. These files can be hidden, so use the terminal with ls -la to list all of them. The sketch after this post shows one way to scan for them from Python.
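If you would rather scan from Python (handy on Colab, where there is no file browser shortcut), something like this works; it is a minimal sketch using only the standard library, and valid_path is a placeholder for your validation folder:

from pathlib import Path

valid_path = Path('/path/to/data/valid')  # placeholder for your validation folder
image_exts = {'.jpg', '.jpeg', '.png', '.bmp', '.gif'}

# List every file (including hidden ones) whose extension is not a known
# image type; .DS_Store and other stray files will show up here.
for f in valid_path.rglob('*'):
    if f.is_file() and f.suffix.lower() not in image_exts:
        print('suspect file:', f)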
Hi elie, thank you for your input. Do you mind sharing how you found the validation folder? I am on Colab and I am not able to see any validation folder created.
Hi @GrigorijSchleifer. I first connect Colab to my Google Drive using the following:
from google.colab import drive
drive.mount('/content/drive')
Then I change the download path to point at one of the directories in my Drive. In your case, the valid folder will be created in your Drive and you will see it listed. Here is an example of how I downloaded an orange dataset into a folder that I named navel inside a folder called oranges:
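Roughly like this (a minimal sketch assuming fastai v1 and lesson 2's download_images workflow; urls.csv is a hypothetical file of image URLs, one per line):

from fastai.vision import *

# Destination inside my mounted Drive, so the downloaded files (and the
# valid folder created later) are visible in Google Drive.
dest = Path('/content/drive/My Drive/oranges/navel')
dest.mkdir(parents=True, exist_ok=True)

# urls.csv is a hypothetical file of image URLs exported as in lesson 2.
download_images('urls.csv', dest, max_pics=200)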
Alternatively, you can change the fastai config to make it use your Drive directories instead of the default ones:
from fastai.vision import *  # Config comes with the fastai v1 imports

Config.DEFAULT_CONFIG = {
    'data_path': '/content/drive/My Drive/your folder to data',
    'model_path': '/content/drive/My Drive/folder where you want to keep the model'
}
You can then save this configuration to a config file so that you don't have to repeat the above steps. A sketch follows.
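One way to persist it (a minimal sketch, assuming fastai v1 reads its settings from ~/.fastai/config.yml; the paths are the same placeholders as above):

import yaml
from pathlib import Path

cfg_path = Path('~/.fastai/config.yml').expanduser()
cfg_path.parent.mkdir(parents=True, exist_ok=True)

# Write the same mapping we assigned to Config.DEFAULT_CONFIG above.
with cfg_path.open('w') as f:
    yaml.dump({
        'data_path': '/content/drive/My Drive/your folder to data',
        'model_path': '/content/drive/My Drive/folder where you want to keep the model',
    }, f)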