I believe the …normalize(resnet) bit will do precisely that.
Yep, as you guessed, the size parameter does that on the fly (applying the transformations each time and taking slightly different crops).
If your images are huge and resizing is taking a long time, you can preprocess with
verify_images, which has a parameter that controls the maximum output image size.
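For what it's worth, the idea behind that kind of max-size preprocessing is just an aspect-ratio-preserving cap on the longest side. A pure-Python sketch (the helper below is hypothetical, not fastai's actual implementation of verify_images):

```python
# Rough sketch of a "cap the longest side" resize computation,
# the kind of thing a max-size preprocessing step typically does.
# (Illustrative only; not fastai's code.)

def capped_size(width, height, max_size):
    """Return (new_width, new_height) with the longer side capped at max_size."""
    longest = max(width, height)
    if longest <= max_size:
        return width, height  # already small enough, leave untouched
    scale = max_size / longest
    return round(width * scale), round(height * scale)

print(capped_size(1050, 700, 500))  # → (500, 333)
print(capped_size(100, 50, 500))    # → (100, 50), untouched
```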
Thanks for the response. Is there any way I can stop the crop while resizing? I think the cropping is affecting my network's prediction accuracy…
Here is the output, and I am not sure why I can still see varying image sizes even after the transform. Does this mean that resizing is not happening? I also don't want any cropping… Can you please guide me on how this is possible?
data_whale = (src_whale.transform(tfms_whale, size=224, bs=32)
y: CategoryList (1582 items)
[Category w_3de579a, Category w_3de579a, Category w_3de579a, Category w_3de579a, Category w_3de579a]…
x: ImageItemList (1582 items)
[Image (3, 700, 1050), Image (3, 600, 1050), Image (3, 700, 1050), Image (3, 591, 1050), Image (3, 700, 1050)]…
y: CategoryList (395 items)
[Category w_efbdcbc, Category w_75f6ffa, Category w_23a388d, Category w_0369a5c, Category w_fd3e556]…
x: ImageItemList (395 items)
[Image (3, 484, 1050), Image (3, 522, 914), Image (3, 600, 1050), Image (3, 600, 1050), Image (3, 700, 1050)]…
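For what it's worth, varying sizes in a printout like this usually mean the transforms are applied lazily, when an item is drawn for a batch, not when the list is built, so the stored items still report their original sizes. A toy sketch of that pattern (hypothetical class, not fastai internals):

```python
# Toy illustration of lazy transforms: stored items keep their original
# sizes; the resize only happens when an item is fetched.
# (Hypothetical class for illustration; not fastai's code.)

class LazyResizeDataset:
    def __init__(self, sizes, target=224):
        self.sizes = sizes      # original (height, width) pairs, kept as-is
        self.target = target    # size applied only at fetch time

    def __repr__(self):
        # Printing the dataset shows the *original* sizes, like the output above.
        return f"LazyResizeDataset({self.sizes})"

    def __getitem__(self, i):
        # The resize is applied here, on access, not at construction.
        return (self.target, self.target)

ds = LazyResizeDataset([(700, 1050), (600, 1050), (591, 1050)])
print(ds)     # still shows the original, varying sizes
print(ds[0])  # → (224, 224): the transform kicks in when an item is drawn
```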
I want to use text_classifier_learner but there is no argument for changing metrics. Can anyone suggest how to manually change the metrics from accuracy to Fbeta?
Restarting the kernel is incredibly useful in Python.
When is lecture 7? Wasn't it supposed to be today?
Can anyone please guide me on how I can use SMOTE to generate images for a rare class? I know that image augmentation can be used to synthesize examples for rare classes, but I am exploring the SMOTE route…
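For reference, the core of SMOTE is interpolating between a minority sample and one of its nearest minority-class neighbours; on images this is usually done on flattened pixel vectors (or learned features). A toy sketch of just that interpolation step (hypothetical helper, not a full SMOTE implementation like imbalanced-learn's):

```python
import random

def smote_sample(x, neighbour, lam=None):
    """Create one synthetic sample on the segment between x and a neighbour.

    x, neighbour: equal-length feature vectors (e.g. flattened image pixels).
    lam: interpolation factor in [0, 1]; drawn at random if not given.
    """
    if lam is None:
        lam = random.random()
    return [xi + lam * (ni - xi) for xi, ni in zip(x, neighbour)]

# Midway between two 4-pixel "images":
print(smote_sample([0, 0, 0, 0], [10, 10, 10, 10], lam=0.5))  # → [5.0, 5.0, 5.0, 5.0]
```

Note that on raw pixels this tends to produce blurry blends, which is one reason plain augmentation is usually preferred for images.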
How do I change the beta value to 1 in the Fbeta metric here?
learn = text_classifier_learner(data_clas, drop_mult=0.5)
learn.metrics = [acc_02, f_score]
where the f_score code is:
f_score = partial(fbeta, thresh=0.2)
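To answer the beta question above: if I remember right, fastai's fbeta takes a beta keyword (defaulting to 2), so you can pre-bind it the same way thresh is bound, e.g. `f_score = partial(fbeta, thresh=0.2, beta=1)`. The partial pattern itself, shown with a stand-in metric (hypothetical, not fastai's fbeta):

```python
from functools import partial

# Stand-in metric to illustrate the pattern (not fastai's fbeta):
def fbeta_demo(precision, recall, beta=2):
    """F-beta from precision/recall; beta > 1 weights recall more heavily."""
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Pre-bind beta=1 exactly like thresh is pre-bound in the post above:
f1 = partial(fbeta_demo, beta=1)

print(round(f1(0.8, 0.4), 3))          # → 0.533 (plain F1)
print(round(fbeta_demo(0.8, 0.4), 3))  # → 0.444 (default beta=2 penalises the low recall more)
```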
You can choose a different size. 224 is the usual size for resnet34, but I found improved accuracy with 448, though it does create a bigger model, which is a consideration for deployment. I played with resnet50 too; resnet34 at the larger size was comparable with the largest size I could use in resnet50 before hitting memory issues. Jeremy shows an approach in the pets-more notebook where you increase the size towards the end of training.
I had it on my calendar as well but could not attend. I'm not seeing a link or discussion on the boards, so I am not sure if it happened.
Never mind, saw this from Jeremy on a post:
“Ah the last class has been moved. Dec 12th is the correct new date. Perhaps some kind person can created corrected calendars we can link to?”
Amazingly helpful!
Pleased to see the stable Windows version of PyTorch 1.0! https://pytorch.org/ - time to see if I can get a FastAI 1.0 environment working on my Win10 GTX1080 setup!
Do we have to update our environments (like GCP) for the stable version or leave it as it is?
Got into a bit of a pickle with the Windows version - @jeremy says it may take a few weeks to fix. Back to my Azure Ubuntu system for now.
Is there any way I can join two ImageDataBunch objects? It looks like they are list-type objects…
I was trying data_low = data_low.append(data) and getting an error…
data_low = (src_low.transform(tfms_whale, size=(224, 224), bs=8)
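One way to think about joining two datasets is concatenation at the dataset level; plain PyTorch has torch.utils.data.ConcatDataset for exactly this. A minimal pure-Python sketch of the idea, with hypothetical list-backed datasets:

```python
class ConcatOfTwo:
    """Minimal concatenation of two indexable datasets, in the spirit of
    PyTorch's ConcatDataset (illustrative sketch, not the real class)."""

    def __init__(self, a, b):
        self.a, self.b = a, b

    def __len__(self):
        return len(self.a) + len(self.b)

    def __getitem__(self, i):
        # Indices first walk through a, then continue into b.
        if i < len(self.a):
            return self.a[i]
        return self.b[i - len(self.a)]

joined = ConcatOfTwo(["img0", "img1"], ["img2"])
print(len(joined))  # → 3
print(joined[2])    # → img2
```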
Looking for some help here
I want to split the data into train/val sets based on the names of the files contained in the lists trn, valn.
But I get an error message that a list-type object has no attribute label_from_df.
(ImageItemList.from_csv(path, 'train.csv', folder='train', suffix='.png')
 .label_from_df(sep=' ', classes=[str(i) for i in range(28)]))
Can someone please help here with the correct way of doing it…
The above works fine if I use a random split by percentage.
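If it helps, fastai's data block API has split methods that take a list of filenames or a predicate (split_by_files and, if I remember right, split_by_valid_func). The predicate idea can be sketched in plain Python; valn here is a hypothetical validation-filename list standing in for the one in the post:

```python
from pathlib import Path

# Hypothetical file names, standing in for the valn list from the post above:
valn = {"whale_003.png", "whale_007.png"}

def is_valid(file_path):
    """Predicate in the style fastai's split_by_valid_func expects:
    True if this file belongs in the validation set."""
    return Path(file_path).name in valn

files = ["train/whale_001.png", "train/whale_003.png", "train/whale_007.png"]
valid = [f for f in files if is_valid(f)]
train = [f for f in files if not is_valid(f)]
print(train)  # → ['train/whale_001.png']
print(valid)  # → ['train/whale_003.png', 'train/whale_007.png']
```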
Without going into too technical an answer, I believe Jeremy commented on this, saying that because of PyTorch's inner workings this is quicker.
It's also normal practice for coders to scale such values along powers of two.