Create your own image transfer learning data?

Hi everyone!

I have a crop image dataset that I am trying to predict with a new dataset of images.

There is a huge public dataset already out there that I could use. I’m trying to figure out whether I could do my own ResNet-style transfer learning on that data and bring the resulting weights into my ResNet model to increase my accuracy. Perhaps I’m missing something super obvious about the best way to do this.

Ideas?
Best,
Charlie

Hi!

I’m not sure what to make of this:

I have a crop image dataset that I am trying to predict with a new dataset of images.

Maybe I’m missing something here, but have you tried training on the public dataset first (usually starting from an already ImageNet-pretrained model), and then further finetuning that model with your custom dataset?
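Roughly something like this, as a minimal sketch in plain PyTorch/torchvision (the dataset paths, model size, and hyperparameters below are just placeholders for your own setup):

```python
# Stage 1: ImageNet-pretrained ResNet -> public crop dataset.
# Stage 2: reuse those weights and finetune on your own images.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

tfms = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def train(model, loader, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()

# Stage 1: train on the large public dataset (path is hypothetical).
public_ds = datasets.ImageFolder("data/public_crops", transform=tfms)
public_dl = DataLoader(public_ds, batch_size=32, shuffle=True)

model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, len(public_ds.classes))
model = model.to(device)
train(model, public_dl, epochs=5, lr=1e-4)
torch.save(model.state_dict(), "resnet_public.pth")

# Stage 2: keep the backbone, swap the head, finetune on your images.
my_ds = datasets.ImageFolder("data/my_crops", transform=tfms)
my_dl = DataLoader(my_ds, batch_size=32, shuffle=True)

model.fc = nn.Linear(model.fc.in_features, len(my_ds.classes)).to(device)
train(model, my_dl, epochs=3, lr=1e-5)
```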

Hi.
I have, but what I’m trying to sort through is that I have already saved and exported my model.
I found this thread and wanted to do something similar:

I’ve trained a model on my images (call that set imageSet1) and trained another on the new image set (imageSet2) using a similar basic setup.

What I think I’m missing is how to take the model trained on imageSet1 and use it as the starting point for further transfer learning with ResNet, so that the final model benefits from both sets. Again, maybe I’m missing something obvious.

I have saved and exported both; I’m just missing the connection between them.
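Concretely, what I have looks roughly like this (the file names and class counts are just placeholders):

```python
# Two separately trained ResNets, each exported with torch.save.
import torch
import torch.nn as nn
from torchvision import models

def load_resnet(path, n_classes):
    m = models.resnet34(weights=None)
    m.fc = nn.Linear(m.fc.in_features, n_classes)
    m.load_state_dict(torch.load(path, map_location="cpu"))
    return m

model1 = load_resnet("model_imageSet1.pth", n_classes=10)  # trained on imageSet1
model2 = load_resnet("model_imageSet2.pth", n_classes=10)  # trained on imageSet2
# ...and this is where I'm stuck: how do I end up with a single model
# that benefits from both trainings?
```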

Hi, I’m not sure what the best way to deal with this is.

If you don’t want to train again, try using both models and averaging their predictions, just like ensembling any other models.
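A minimal sketch of what I mean, assuming both models are already loaded (e.g. as in your snippet above) and share the same class ordering:

```python
# Ensemble by averaging the softmax outputs of the two models.
import torch
import torch.nn.functional as F

@torch.no_grad()
def ensemble_predict(model1, model2, batch):
    model1.eval()
    model2.eval()
    p1 = F.softmax(model1(batch), dim=1)
    p2 = F.softmax(model2(batch), dim=1)
    avg = (p1 + p2) / 2          # simple unweighted average
    return avg.argmax(dim=1)     # predicted class per image
```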

I don’t know if there is a way to combine them in a single model (what you might be asking for?).

If you can afford to train again, I’d say try combining both datasets and training a new model on the combined set. Or take the model that was trained on the large dataset and finetune it with your smaller dataset for a few epochs; that should be faster.
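For the first option, something like torch.utils.data.ConcatDataset works, assuming both image sets use the same class folder names so the label indices line up (paths and settings below are placeholders):

```python
# Train a single model over both image sets by concatenating them.
import torch.nn as nn
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, models, transforms

tfms = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
set1 = datasets.ImageFolder("data/imageSet1", transform=tfms)
set2 = datasets.ImageFolder("data/imageSet2", transform=tfms)
combined = ConcatDataset([set1, set2])
loader = DataLoader(combined, batch_size=32, shuffle=True)

model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, len(set1.classes))
# ...then train as usual. For the second option, load the weights from
# the large-dataset model instead of ImageNet and finetune on the small
# set for a few epochs.
```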

So I have found the easiest approach is to retrain the entire model once xx number of false positives have been identified. Retraining on just a few new examples didn’t seem to have much of an impact, and it also made the model less general. It’s likely I’ve missed a simple step, but that approach has worked in our production scenario.