Getting better results from second test of exported model

I am using Colab as my environment and I am quite new to fastai. I train a model (vgg13_bn) with the one-cycle policy, then export it and test it again on the same validation set, and I get the same validation results. But when I close the session and start a new one, I get better results, an increase of about 2%.
I searched for how to safely export a model; all I found was to change this

  learn.export()

to be

  learn = learn.to_fp32()
  learn.export()

but that did not help either.
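One thing worth checking first is whether the saved file really round-trips losslessly at all. A minimal stdlib sketch of the idea (pickle and a made-up weights dict standing in for fastai's learn.export()/load_learner):

```python
import pickle

# Serialize some "weights", load them back, and confirm nothing changed.
# If the round trip is lossless, the export itself cannot explain a
# different validation score. (Fake values standing in for a real model.)
weights = {'conv1': [0.12, -0.5, 0.33], 'fc': [1.0, 0.07]}

with open('export_check.pkl', 'wb') as f:
    pickle.dump(weights, f)
with open('export_check.pkl', 'rb') as f:
    reloaded = pickle.load(f)

print(weights == reloaded)  # True: the round trip is lossless
```

With fastai you would compare the reloaded learner's state_dict against the in-memory one the same way.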

I tested it multiple times and the improvement is still there; it is fixed, not random.
I always get 0.9899 instead of the original 0.9649 validation accuracy once the exported model's results change. I repeated the training again and again, but the exported model always ends up beating the validation results saved during training.
I save the exported models on Google Drive as storage for Colab. Before I export the model, I use a callback to load the best result across all epochs:

from fastai.vision import *  # fastai v1; brings in Recorder, MetricsList, Any

class SaveBestModel(Recorder):
    def __init__(self, learn, name='best_model'):
        super().__init__(learn)
        self.name = name          # checkpoint file name
        self.best_loss = None
        self.best_acc = None

    def save_when_acc(self, metrics):
        # metrics order depends on the learner's metric setup;
        # here metrics[0] is validation loss and metrics[2] is accuracy
        loss, acc = metrics[0], metrics[2]
        if self.best_acc is None or acc > self.best_acc:
            self.best_acc = acc
            self.best_loss = loss
            self.learn.save(self.name)
            print("Saved best accuracy {:.5f}".format(self.best_acc))
        elif acc == self.best_acc and loss < self.best_loss:
            # tie on accuracy: prefer the lower validation loss
            self.best_loss = loss
            self.learn.save(self.name)
            print("Accuracy tied, saved lower loss {:.5f}".format(self.best_loss))

    def on_epoch_end(self, last_metrics: MetricsList = None, **kwargs: Any):
        self.save_when_acc(last_metrics)

then

learn.load("best_model")
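For reference, here is a framework-free sketch of the selection rule the callback implements, with made-up epoch numbers:

```python
# Keep the epoch with the highest accuracy, breaking ties on accuracy
# by the lower validation loss.
def keep_best(history):
    best = None  # (epoch, loss, acc)
    for epoch, (loss, acc) in enumerate(history):
        if best is None or acc > best[2] or (acc == best[2] and loss < best[1]):
            best = (epoch, loss, acc)
    return best

# (loss, accuracy) per epoch, made-up numbers
history = [(0.40, 0.950), (0.31, 0.965), (0.35, 0.965), (0.28, 0.960)]
print(keep_best(history))  # (1, 0.31, 0.965): index 2 ties on accuracy but has higher loss
```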

Unfortunately, I could not test this further because I do not have a test set, so I do not know whether this improvement is good or bad.

Any ideas what could be causing the improvement?


I retrained my model with a train/validation/test split, and the results increased for the test set just as I described above for the validation set.
I suspected the shutil library of somehow editing the weights, since I use it to move the files to Drive and rename them, so I zipped the exported file before moving it, but it is still happening. Now I suspect Drive itself of somehow editing the saved file, since I save the files under the same base name with the accuracy of that training run appended to tell them apart.
Either way, both the lowest and the best accuracy change whenever I save a new file. That is all I know about how it happens; I do not know why, or even whether this is really what is going on.
Now it gets crazier: even the test accuracy increases. Any ideas? 😅
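If the suspicion is that the file is being altered in transit, a checksum settles it. A small stdlib sketch, with a fake payload standing in for the exported .pkl:

```python
import hashlib, os, shutil, tempfile

# Hash the exported file before and after the move. shutil copies bytes
# verbatim, so the digests should always match; if they ever differ,
# the storage layer really did alter the file.
def md5(path):
    with open(path, 'rb') as f:
        return hashlib.md5(f.read()).hexdigest()

src = os.path.join(tempfile.mkdtemp(), 'export.pkl')
with open(src, 'wb') as f:
    f.write(b'fake model weights')   # stand-in for the real export

dst = src + '.moved'
shutil.copy(src, dst)                # stand-in for the move to Drive
print(md5(src) == md5(dst))          # True: bytes unchanged
```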

After going over every part of the code, I finally tracked down the problem that was tricking me: the random seed in Colab, which changes every time a session ends.
Here is what happens: my dataset is zipped in a file that I unzip each session. The code then builds a dataframe from the unzipped folder and passes it to train_test_split with a fixed random seed, but that folder changes depending on this seed, which changes between sessions.
So how could I not see this?
I was printing the train, validation, and test dataframes after the split with the fixed random state, and they looked identical every run, but I was only looking at the first five and last five rows of each frame, which is all print(df) shows in a notebook.
The change between sessions moved only a few paths from test to train and from train to test,
so the model is safe; the test dataframe is the thing that changes.
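The head-and-tail trap is easy to reproduce. A small sketch with made-up file names: two splits that print identically in a notebook but differ in the middle.

```python
# A notebook's print(df) shows only the first and last rows, so two
# splits can agree there and still differ in between.
split_a = [f'img_{i:03}.jpg' for i in range(100)]
split_b = list(split_a)
split_b[50] = 'img_999.jpg'          # one path quietly moved between splits

head_tail = lambda rows: rows[:5] + rows[-5:]
print(head_tail(split_a) == head_tail(split_b))   # True: looks identical
print(set(split_a) == set(split_b))               # False: it is not
print(set(split_a) ^ set(split_b))                # the culprit rows
```

Comparing the full sets of paths (or their sorted hash) between sessions would have caught the moved files immediately.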
Sorry it took me a while to reply; I was busy with something else, and I wrote this comment as soon as I got free.