This V2 is very different from V1: it focuses on learning the high-level picture first (a top-down approach) by abstracting more of the implementation into the fast.ai library. It is not a course for learning how to use TensorFlow, Keras, etc.
Is there more information (maybe a paper/post and author) about the epithelial/stroma classifier mentioned at 47:18? I’m interested in further reading about this.
How to change the batch size?
I can’t find this variable in the code!
The batch size is set in the get_augs() function as bs.
You can change your batch size like so:
data = ImageClassifierData.from_paths(path, tfms=tfms, bs=30, ...)
Hope this helps.
I have a question…
How can the classifier assess itself on the test set when it doesn't know the test set's correct answers?
I mean, how can the classifier be sure it will achieve a certain accuracy, say 98% or 99%, when it only knows the correct answers for the validation set, not the test set?
How could it be sure it will achieve exactly the same accuracy on the test set as on the validation set?
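To make the question concrete, here is a minimal sketch of the usual train/validation/test split (using scikit-learn for brevity rather than the fast.ai library; the dataset and model here are just illustrative). The validation score is only an estimate of what the model will do on unseen data; the test labels are held back until the very end.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)

# Split once into train+validation and a final held-out test set.
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Validation accuracy guides model choices during development; test
# accuracy is measured once at the end, on labels the model never saw.
# The two are usually close but not guaranteed to be identical.
print("val :", model.score(X_val, y_val))
print("test:", model.score(X_test, y_test))
```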
Hey guys, check out my first Medium post; it's based on an image classifier I wrote.
Please let me know your thoughts.
I'm having a hard time using the fast.ai library on a Linux machine (Scientific Linux 7) that I SSH into. In short: when building resnet50, the remote machine is unable to locate the pre-trained model.
I set up fast.ai on the machine by following the instructions on the wiki. When I try to build the fast.ai model:
PATH = 'my_data/hep_images/'
sz = 300
arch = resnet50
data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz), bs=32)
learn = ConvLearner.pretrained(arch, data, precompute=True)
This results in the following error:
FileNotFoundError Traceback (most recent call last)
2 arch = resnet50
3 data = ImageClassifierData.from_paths(PATH,tfms=tfms_from_model(arch, sz),bs=32 )
----> 4 learn = ConvLearner.pretrained(arch, data, precompute=True)
/mnt/scratch/eab326/fastai/courses/dl1/fastai/conv_learner.py in pretrained(cls, f, data, ps, xtra_fc, xtra_cut, custom_head, precompute, pretrained, **kwargs)
111 pretrained=True, **kwargs):
112 models = ConvnetBuilder(f, data.c, data.is_multi, data.is_reg,
--> 113 ps=ps, xtra_fc=xtra_fc, xtra_cut=xtra_cut, custom_head=custom_head, pretrained=pretrained)
114 return cls(data, models, precompute, **kwargs)
/mnt/scratch/eab326/fastai/courses/dl1/fastai/conv_learner.py in __init__(self, f, c, is_multi, is_reg, ps, xtra_fc, xtra_cut, custom_head, pretrained)
38 else: cut,self.lr_cut = 0,0
--> 40 layers = cut_model(f(pretrained), cut)
41 self.nf = model_features[f] if f in model_features else (num_features(layers)*2)
42 if not custom_head: layers += [AdaptiveConcatPool2d(), Flatten()]
/mnt/scratch/eab326/anaconda3/envs/fastai/lib/python3.6/site-packages/torchvision/models/resnet.py in resnet50(pretrained, **kwargs)
186 model = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs)
187 if pretrained:
--> 188 model.load_state_dict(model_zoo.load_url(model_urls['resnet50']))
189 return model
/mnt/scratch/eab326/anaconda3/envs/fastai/lib/python3.6/site-packages/torch/utils/model_zoo.py in load_url(url, model_dir, map_location)
55 model_dir = os.getenv('TORCH_MODEL_ZOO', os.path.join(torch_home, 'models'))
56 if not os.path.exists(model_dir):
--> 57 os.makedirs(model_dir)
58 parts = urlparse(url)
59 filename = os.path.basename(parts.path)
/mnt/scratch/eab326/anaconda3/envs/fastai/lib/python3.6/os.py in makedirs(name, mode, exist_ok)
--> 220 mkdir(name, mode)
221 except OSError:
222 # Cannot rely on checking for EEXIST, since the operating system
FileNotFoundError: [Errno 2] No such file or directory: '/home/eab326/.torch/models'
I’d appreciate any help resolving this issue. Thank you!
I guess your question got lost among the others, and since I didn't see anyone answer, I'll try to explain (you probably know this by now, but at least it will be recorded for posterity).
So, an epoch is one pass through the entire training set. The batch size is how many images you analyse at once. Images are represented as tensors (n-dimensional arrays), and you can run the calculations on big tensors or small ones.
Using big tensors is faster because you need fewer computations to pass through all your data; in other words, fewer iterations to finish an epoch. But you also need more memory to hold that big tensor (the batch) in GPU RAM. In general, you want to keep batches as big as your GPU memory allows.
To choose a batch size, try different sizes and keep an eye on your GPU usage (with the nvidia-smi command). If you set it too high, your code will fail with an out-of-memory runtime error; if that happens, reduce the batch size.
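As a rough back-of-the-envelope check, the memory one batch of input images occupies grows linearly with the batch size (the helper below is purely illustrative, not part of fast.ai, and only counts the input tensor; activations and gradients inside the network add a large multiple on top):

```python
def batch_mem_mb(bs, channels=3, height=300, width=300, bytes_per_float=4):
    """Approximate memory (in MB) for one batch of float32 image tensors.

    bs * channels * height * width values, each 4 bytes for float32.
    """
    return bs * channels * height * width * bytes_per_float / 1e6

# Doubling the batch size doubles the input-tensor memory:
print(batch_mem_mb(32))  # 34.56 (MB for the input tensor alone)
print(batch_mem_mb(64))  # 69.12
```

This is why halving the batch size is the usual first response to a CUDA out-of-memory error.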
Hey guys, check out the next part of my blog, on data visualization techniques. Any suggestions, queries, and comments are most welcome.
ConvLearner.pretrained should work in Kaggle Kernels now if you turn on the new Internet connected option. I’ve got lesson 1 running in this kernel: https://www.kaggle.com/hortonhearsafoo/fast-ai-lesson-1
I am Murali from India. I feel this course is the perfect complement to Andrew Ng's deep learning specialization.
I started a series, 'Fast.AI Deep Learnings', where I practically implement each topic and share my experiences.
Here is the first post.
Please provide your feedback.