Model can only recognize birds when they are close to the camera

You can prevent cropping by providing the image size as a tuple,
e.g. size=(200,300). This resizes the image instead of cropping it.
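For example, a minimal sketch (fastai v1); the 'images' folder path and the batch size here are just placeholders:

from fastai.vision import *

# size as a tuple -> every image is resized to 200x300 (no cropping);
# a single int like size=224 would crop to a 224x224 square instead
data = ImageDataBunch.from_folder('images',
        ds_tfms=get_transforms(),
        size=(200, 300),
        bs=32).normalize(imagenet_stats)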
Hope this helps.

thanks @tapashettisr, I checked the source code of

from_folder

@classmethod
def from_folder(cls, path:PathOrStr, train:PathOrStr='train', valid:PathOrStr='valid', test:Optional[PathOrStr]=None,
                valid_pct=None, seed:int=None, classes:Collection=None, **kwargs:Any)->'ImageDataBunch':

it does not even have a size parameter! :flushed:

Am I missing something?

Also, why doesn't it crop images when a tuple is provided?

The size parameter gets passed upstream to the image-handling functions; it is part of **kwargs. It is mentioned somewhere in the documentation/forum that if size is passed as a tuple,
the image gets resized. You can test it by passing a size tuple and then using
data.show_batch(8, figsize=(10,10)), which displays a grid of random images after transformation. You will see the difference between using size=225 and size=(225,225).
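To make the comparison concrete, a quick sketch (fastai v1), assuming path points at your image folder:

data_crop = ImageDataBunch.from_folder(path, ds_tfms=get_transforms(), size=225)
data_crop.show_batch(3, figsize=(10, 10))      # int size: images cropped to 225x225 squares

data_resize = ImageDataBunch.from_folder(path, ds_tfms=get_transforms(), size=(225, 225))
data_resize.show_batch(3, figsize=(10, 10))    # tuple size: images resized instead of cropped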


thanks @tapashettisr, I tried it and it does the job.
Thanks!

Now I need to find out why my model still cannot recognize birds when they move away from the camera. :weary:

Try using a higher max_zoom value like 1.5-1.8.

Where does max_zoom get applied?
Could you please show me a line of code?

Sorry, I just started and have only finished one and a half lessons >_<;

Thanks a lot~!

data = ImageDataBunch.from_folder(dataPath, train='.', valid_pct=0.2,
        ds_tfms=get_transforms(do_flip=True, flip_vert=False, max_zoom=1.1, max_lighting=0.2),
        size=(150,50), num_workers=1, bs=64).normalize(imagenet_stats)

Was it of any use?

hi @tapashettisr thanks for your help.

I will give it a try once I'm back home tonight.

Once again, thank you for your help.

Hi @tapashettisr, I have set max_zoom=1.5 in my code, however I am stuck on a problem:

train_loss decreases for 3 epochs and then keeps increasing.

So I cannot verify your suggestion.

Could you please help?

Here is the detailed problem:

Thank you so much~!

Have you used lr_find to get the optimal learning rate? Will you be able to post the lr_find plot?

Also use do_flip=True
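For reference, a minimal lr_find sketch (fastai v1), assuming learn is your existing learner; the max_lr value below is just illustrative:

learn.lr_find()                        # run the LR range test
learn.recorder.plot()                  # plot loss vs. learning rate
# pick a rate where the loss is still falling steeply, e.g.:
learn.fit_one_cycle(4, max_lr=1e-3)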

Thanks @tapashettisr for your suggestions, I have applied all the tricks in my code.

Yes, I did use lr_find() to set my lr.
do_flip is set to True by default in get_transforms().

Yes, the lr_find() plot is also available in that question.

You could also try progressive resizing; see the CamVid segmentation lesson for an example :slight_smile:
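The idea, as a rough sketch (fastai v1): train at a small image size first, then swap in larger images and fine-tune. get_data below is a hypothetical helper, and dataPath is the folder used earlier in this thread:

def get_data(size, bs=64):
    # rebuild the DataBunch at the given image size
    return ImageDataBunch.from_folder(dataPath, train='.', valid_pct=0.2,
            ds_tfms=get_transforms(max_zoom=1.5), size=size, bs=bs
            ).normalize(imagenet_stats)

learn = cnn_learner(get_data(64), models.resnet34, metrics=error_rate)
learn.fit_one_cycle(4)        # train on small images first

learn.data = get_data(128)    # switch to larger images
learn.fit_one_cycle(4)        # fine-tune at the bigger size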

thanks, is it Lesson 3?

Can you post a screenshot of the training and validation error/loss as training progresses?
I suspect the learning rate might be too high.

hi @tapashettisr sorry man, I gave you the wrong question link:

here is the correct one, which has everything you asked for, e.g. the screenshot of train_loss and valid_loss during training:

OK, now here is my explanation:
You are achieving an error rate of 0.005, i.e. 0.5%, which is excellent. The figure of merit to use is the error rate, not the train/valid loss. If you do not enable early stopping, you will find that the train loss decreases further. For a problem where one class ("others") is not well defined, this error rate is excellent.
Pl try running the training for 35-40 epochs without any callback. I think you will be able to see the train loss reduce.
Pl let me know what you find.
Regards

hi @tapashettisr thanks a lot for taking the time and effort to help me.

I think you are correct.

After watching the Lesson 3 video, I realized that fit_one_cycle() and EarlyStoppingCallback() are not a good pair.

The reason is explained in the video starting at 1h23min20s. @jeremy explains how fit_one_cycle() varies its LR over the cycle: the LR increases to its maximum and then decreases back to its minimum. BUT EarlyStoppingCallback (ESCB) stops the training as soon as it finds the model is not improving. So my training always stopped around the peak of the LR schedule, instead of waiting for the LR to come back down to its minimum.

I guess that's why my training always seemed to get worse. If I remove ESCB, I should be able to see the model improving in the last few epochs.
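In code terms, a sketch of the two setups (fastai v1), assuming an existing learn; the epoch count and patience are just illustrative:

from fastai.callbacks import EarlyStoppingCallback

# problematic: early stopping can fire near the LR peak of the one-cycle
# schedule, before the decreasing-LR phase where the model improves again
learn.fit_one_cycle(20, callbacks=[EarlyStoppingCallback(learn, patience=3)])

# better here: let the full one-cycle schedule run to completion
learn.fit_one_cycle(20)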

Good suggestion. Thank you ~!

Btw, what is "Pl"?
E.g. you said: "Pl let me know what you find."
Is it "Please"?

okie, I have the new train_loss and valid_loss after removing EarlyStoppingCallback()

I also uploaded my plots of train vs valid loss, lr/iteration, and lr/batch.

Please see them in my update 1 in this post.