Validating pretrained models (available in torchvision) on the ImageNet validation set

Hi,

I have almost finished the DL part 1 and now I wanted to get my hands dirty directly with PyTorch (I am doing research at the intersection of information theory and deep learning, so I thought I should be able to quickly try out new ideas directly in PyTorch).

The problem I am facing is as follows: I want to evaluate the performance of pretrained models (pretrained on ImageNet) available in the PyTorch model zoo (torchvision.models) on the ImageNet validation set. (Afterwards I want to code some adversarial attacks and test their impact, but that comes once I have solved the current issue.)

As it turns out, ImageNet is apparently not so easy to get one's hands on (even though every paper uses it and all these models are pretrained on it), especially if I only want the validation set: the whole dataset is too big and would occupy a large portion of the SSD on the server, and I don't need the training set right now. I finally found the validation set via this link:

http://www.image-net.org/challenges/LSVRC/2012/nonpub-downloads

The validation set comes as a single flat folder. In order to use one of the predefined ways of loading datasets in PyTorch (specifically torchvision.datasets.ImageFolder, as this seemed to be the best choice), I used the validation_set_creator notebook, whose core is sketched below. It uses ILSVRC2012_validation_ground_truth.txt to split the validation set into 1000 subfolders, one per class. This code works fine, and I verified that the folder names (which, as far as I understood, become the class labels when torchvision.datasets.ImageFolder is used) correspond to the class labels used by ImageNet: the second column of map_clsloc.txt matches the folder names in my directory structure. I also checked many folders manually to confirm that each contains only images of the true category for that label.
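Roughly, the splitting works like this (a minimal sketch; the paths are placeholders for my actual setup):

```python
import os
import shutil

VAL_DIR = 'val_images'  # placeholder: flat folder holding the 50000 ILSVRC2012_val_*.JPEG files
GT_FILE = 'ILSVRC2012_validation_ground_truth.txt'  # one class number (1..1000) per line

# The ground-truth file lists one label per image, in image order
with open(GT_FILE) as f:
    labels = [line.strip() for line in f]

# Zero-padded filenames sort into the same order as the ground-truth lines
images = sorted(f for f in os.listdir(VAL_DIR) if f.endswith('.JPEG'))
assert len(images) == len(labels)

# Move each image into a subfolder named after its class number
for image, label in zip(images, labels):
    class_dir = os.path.join(VAL_DIR, label)
    os.makedirs(class_dir, exist_ok=True)
    shutil.move(os.path.join(VAL_DIR, image), os.path.join(class_dir, image))
```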

Then I tried to check the validation performance of different models via the code in the Imagenet_validation_pretrained notebook. At first I directly used the target variable obtained from the line

for data, target in test_loader:

as the target labels to compare against, but I realized (by plotting the images in the minibatch together with the corresponding values of target) that torchvision.datasets.ImageFolder does some internal relabelling, so target does not correspond to the actual folder numbers (which equal the second column of map_clsloc.txt). After some digging I found that the folder names (which are also the target labels) can be recovered via

class_names = val_dataset.classes

target_classes = [class_names[x] for x in target]

and I verified (again manually) that this now provides the correct target labels for the images in the minibatch. Now comes the problem that I am currently facing and couldn't find a solution to: the labels generated by the pretrained models (I checked at least AlexNet and ResNet50) do not match the target labels at all. For both I get around 50 correctly guessed labels out of 50000, which is basically equivalent to random guessing. Based on my online searches over the last couple of days, it seems these pretrained nets use some other label ordering, but I couldn't find how to map the two to each other.
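For reference, the relevant part of my evaluation loop looks roughly like the sketch below (the dataset path and batch size are placeholders; the normalization constants are the standard ones for the torchvision pretrained models). The final comparison is exactly where the accuracy collapses to chance:

```python
import torch
import torchvision.models as models
import torchvision.transforms as transforms
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader

# Standard preprocessing expected by the torchvision pretrained models
transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

val_dataset = ImageFolder('val_images', transform=transform)  # placeholder path
test_loader = DataLoader(val_dataset, batch_size=64, shuffle=False)
class_names = val_dataset.classes  # the (alphabetically sorted) folder names

model = models.resnet50(pretrained=True)
model.eval()

correct = 0
with torch.no_grad():
    for data, target in test_loader:
        pred = model(data).argmax(dim=1)
        # map ImageFolder's internal target indices back to folder names
        target_classes = [class_names[t] for t in target]
        # the model's output indices and the folder numbers follow
        # different label orderings, so this gives ~chance accuracy
        for p, c in zip(pred.tolist(), target_classes):
            correct += int(str(p) == c)

print('top-1 accuracy:', correct / len(val_dataset))
```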

Has somebody else perhaps faced this issue and can guide me on how to do this mapping, or does somebody know of a resource doing a similar task in PyTorch? I was also thinking that if I do this using the fastai wrappers for dataloaders, I will probably run into the same issue (unless I remove the fully connected layers and retrain them, in which case the labelling mismatch would resolve itself, but I do not want to modify the pretrained models in any way; I want to use them as they are).

I have been stuck on this small issue for some time now and would be really thankful if somebody could guide me a bit.

(I have been learning from this forum for some time, but I am new to posting here and couldn't figure out a nice way to put my code in the post, so I have attached the necessary files as PDFs. Hopefully that is fine.)

Imagenet_validation_pretrained.pdf (81.9 KB)
validation_set_creator.pdf (56.5 KB)
ILSVRC2012_validation_ground_truth.pdf (930.5 KB)
map_clsloc.pdf (49.5 KB)

Hello!
A late response, as I faced this problem myself just recently. I solved it by renaming all the folders from their class number to their synset ID, for example 1 --> n02119789, and then evaluating with target used as usual, without mapping target to target_classes.
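A minimal sketch of the renaming, assuming the 1000 numbered class folders sit under 'val_images' (a placeholder) and that map_clsloc.txt has lines of the form 'n02119789 1 kit_fox':

```python
import os

VAL_DIR = 'val_images'        # placeholder: directory holding the 1000 numbered class folders
MAP_FILE = 'map_clsloc.txt'   # lines like: n02119789 1 kit_fox

# Build class-number -> synset-id from map_clsloc.txt
num_to_synset = {}
with open(MAP_FILE) as f:
    for line in f:
        synset, num, _name = line.split()
        num_to_synset[num] = synset

# Rename each numbered folder to its synset id, e.g. 1 -> n02119789
for folder in os.listdir(VAL_DIR):
    os.rename(os.path.join(VAL_DIR, folder),
              os.path.join(VAL_DIR, num_to_synset[folder]))
```

This works because ImageFolder assigns target indices by sorting the folder names alphabetically, and the alphabetical order of the synset IDs is exactly the label order the torchvision pretrained models were trained with, so model(data).argmax(dim=1) can be compared to target directly.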