I had this problem, and reducing the batch size, etc. didn't do anything. Finally I noticed I had a few other notebooks running (not actively training, just running). Killing all the other notebooks got rid of the error.
I'm pretty new to Jupyter notebooks…
Is that the only file that’s causing an issue? What if we delete that image, does that solve the problem?
I had a similar issue, not exactly the same.
Maybe try creating a new directory in Jupyter that is not prefixed with '.', for example 'fastai' instead of '.fastai'.
Jupyter's folder-creation suggestion prepends a '.', which caused me issues.
Yeah, I made a video on learning rate vs. batch size based on that paper. Enjoy!
Deleting the image worked, thanks! Very strange that I couldn't find another way to fix it.
Yeah, I made a read-along video trying to explain that paper. That is an amazing discovery!
Hi @reshama
I am working on paperspace gradient, and I have been running a cell with the following commands at the top of every notebook.
# do this at the start of every notebook
# update the fastai repo in the course-v3 directory
!pip install --upgrade pip
!cd /notebooks/course-v3 && git pull
!pip install fastai --upgrade
(Note that each ! line runs in its own subshell, so a bare !cd does not carry over to the next line; chaining with && keeps the git pull inside the repo directory.)
I don't know if this will cause problems down the road, but so far, so good. In fact, when I upgrade pip first, I don't get the spacy error that comes up if I don't.
I have a test CSV file without labels; I'm unable to load that data and make predictions.
I am facing the same issue while running the lesson3-camvid.ipynb file.
I also tried adding padding_mode='zeros' and num_workers=0 to the databunch call, but it does not solve the problem.
data = (src.datasets(SegmentationDataset, classes=codes)
        .transform(get_transforms(), size=size, tfm_y=True)
        .databunch(bs=bs, padding_mode='zeros', num_workers=0)
        .normalize(imagenet_stats))
Please suggest how to fix this issue.
Thanks,
Ritika
In my case, the issue was that I didn't have PyTorch v1 installed.
Thanks for your suggestion. I have already installed PyTorch v1 and am still facing the issue.
Thanks,
Ritika
It seems Keras' default input size for ResNet50 is 224x224, so maybe ResNet50 was trained on that size?
But I agree there is no harm in trying 299x299, and it seems it usually gives better results.
The 299x299 default in Keras applies to:
- InceptionResNetV2
- InceptionV3
- Xception
224x224 is the default for all other Keras models:
- VGG16
- VGG19
- ResNet50
- MobileNet
- DenseNet
- NASNet
- MobileNetV2
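If it helps, the defaults above can be kept in a small lookup table so the sizes aren't hard-coded all over a script. The dictionary below simply transcribes the list from this post; it is not queried from Keras itself, and the helper name is just illustrative:

```python
# Default ImageNet input sizes for common Keras applications,
# transcribed from the list above (not read from the Keras API).
KERAS_DEFAULT_INPUT_SIZE = {
    "InceptionResNetV2": (299, 299),
    "InceptionV3": (299, 299),
    "Xception": (299, 299),
    "VGG16": (224, 224),
    "VGG19": (224, 224),
    "ResNet50": (224, 224),
    "MobileNet": (224, 224),
    "DenseNet": (224, 224),
    "NASNet": (224, 224),
    "MobileNetV2": (224, 224),
}

def default_input_size(model_name):
    """Return the (height, width) a pretrained model expects by default."""
    return KERAS_DEFAULT_INPUT_SIZE[model_name]
```

Then resizing code can ask `default_input_size("ResNet50")` instead of repeating 224 everywhere.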
Hey guys, I am getting this error while running Lesson 1 on my dataset. Can someone tell me how to correct it? Thanks in advance.
AttributeError: module 'PIL.Image' has no attribute 'register_decoder'
Here are some of the questions I have on the image classification task.
Suppose we have a label set with multiple levels of classification.
For example, labels.csv will look something like:
img, cat_1, cat_2
- img_a, 1, 11
- img_b, 1, 12
- img_c, 2, 21
- img_d, 2, 24
- img_e, 1, 13
As shown above, we have two categories in the cat_1 column and more than two in the cat_2 column.
- One approach is using the label_col attribute when we create the DataBunch (see https://docs.fast.ai/vision.data.html#ImageDataBunch.from_csv) and performing the classifications separately. Is that the right approach?
- Also, what does the fn_col attribute do in the above call? I couldn't find an example for it.
- When image sizes in the training set vary (100x100…400x400), how do we resize and perform the classification? Will rand_resize_crop work in these cases?
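On the "perform the classifications separately" idea from the first question: a minimal sketch, using only the standard library and data shaped like the labels.csv above (the function name and inline CSV are illustrative, not fastai API), is to split the multi-column label file into one image-to-label mapping per category column, then build one classifier per mapping:

```python
import csv
import io

# Example rows in the shape of the labels.csv shown above.
LABELS_CSV = """img,cat_1,cat_2
img_a,1,11
img_b,1,12
img_c,2,21
img_d,2,24
img_e,1,13
"""

def split_label_columns(csv_text):
    """Split a multi-column label CSV into one {img: label} mapping
    per category column, so each task can be trained separately."""
    reader = csv.DictReader(io.StringIO(csv_text))
    per_task = {}
    for row in reader:
        img = row.pop("img")
        for col, label in row.items():
            per_task.setdefault(col, {})[img] = label
    return per_task

tasks = split_label_columns(LABELS_CSV)
# tasks["cat_1"] maps each image to its cat_1 label,
# tasks["cat_2"] to its cat_2 label.
```

Each resulting mapping could then be written back out as its own single-label CSV and fed to a separate DataBunch.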
Hi, I have created a GCP instance for the fast.ai course.
However, when trying to access the notebooks from the browser using http://localhost:8080/tree, I am getting the following error.
Any pointer on what I am doing wrong?
@dinesh.chauhan You are trying to connect to localhost, which is your local machine. On your machine there is no server listening for connections, hence the connection refused.
To connect to your GCP instance there must be a public URL available, something that is not localhost.
Please check out this forum or post the question there: https://forums.fast.ai/t/platform-gcp/27375
The command that you type in the terminal
gcloud compute ssh --zone=$ZONE jupyter@$INSTANCE_NAME -- -L 8080:localhost:8080
needs to include your ZONE and your INSTANCE_NAME. You can see these on your GCP Compute Engine page. The guide https://course-v3.fast.ai/start_gcp.html goes through all the steps.
Can anyone explain the regular expression used?
r'/([^/]+)_\d+.jpg$'
I don't get how it extracts only the name of the dog. Shouldn't it extract the name of the dog together with the number and the .jpg extension?
For example, if the path is '/tmp/.fastai/data/oxford-iiit-pet/images/keeshond_104.jpg', shouldn't it extract keeshond_104.jpg rather than only keeshond?
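For what it's worth, the parentheses in that pattern form a capture group, and fastai reads group 1 rather than the whole match; the `_\d+.jpg$` tail still has to match, but it sits outside the group. A quick check with the standard library:

```python
import re

# Same pattern as in the lesson; the parentheses capture everything
# between the last '/' and the trailing _<digits>.jpg.
pat = re.compile(r'/([^/]+)_\d+.jpg$')

path = '/tmp/.fastai/data/oxford-iiit-pet/images/keeshond_104.jpg'
m = pat.search(path)

# group(0) is the whole match, group(1) is only the captured breed name.
print(m.group(0))  # '/keeshond_104.jpg'
print(m.group(1))  # 'keeshond'
```

The greedy `[^/]+` backtracks until the remainder can satisfy `_\d+.jpg$`, which is why `keeshond_104` is trimmed back to `keeshond` in the captured group.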