There is a small confusion. The total data needs to be split in two stages: first into 1) a train/validation set and 2) a test set. Then the train/validation set needs to be split into 1) train and 2) validation, at 80% and 20% respectively. I think I might still need split_by_rand_pct for that second stage. What do you think?
I generally avoid adding test data to the same databunch, since in most cases I am not able to. You only need the test set when you are ready for deployment, so having it as just a dataloader is sufficient.
You are right, I don’t need my test data for training the model. But the dataset I am using has all the images inside one folder, and the authors provide training_val_list.txt and test_list.txt to segregate the two. Since I am new to fastai, I don’t know whether there is anything built in to process this kind of data.
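To make the two-stage split concrete, here is a minimal plain-Python sketch (no fastai dependency) that reads the dataset’s two list files and carves a random 80/20 train/validation split out of the train/val portion, keeping the test files separate. The helper name split_from_lists is hypothetical, not part of any library:

```python
import random
from pathlib import Path

def split_from_lists(trainval_list, test_list, valid_pct=0.2, seed=42):
    """Read the dataset's file lists and carve a validation set out of
    the train/val portion. Returns (train, valid, test) filename lists."""
    trainval = Path(trainval_list).read_text().split()
    test = Path(test_list).read_text().split()
    rng = random.Random(seed)            # fixed seed -> reproducible split
    rng.shuffle(trainval)
    n_valid = int(len(trainval) * valid_pct)
    return trainval[n_valid:], trainval[:n_valid], test
```

With fastai v1 you could achieve the same effect by filtering the item list down to the files named in training_val_list.txt (for example with filter_by_func) and then calling split_by_rand_pct(0.2); the exact chain depends on your data block setup, so treat this as a sketch rather than a drop-in recipe.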
What you want is completely achievable with FastAI… but it’s not necessarily going to be an “off the rack” solution that just automatically works with your data set. Spend some time looking at the documentation I linked to, and I think you’ll figure it out pretty soon. If you still end up stuck, come back and ask for help.