You can find the content of part 2 below:
https://course19.fast.ai/videos/?lesson=8
Can you try creating a separate code cell with the line !pip install -Uqq fastbook and running it? After that, try running import fastbook.
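That is, two separate cells, run in order; the first installs the package and the second imports it:

!pip install -Uqq fastbook

import fastbook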
Here you go:
@jeremy I have learned about indexed images. They are very useful for mask files in semantic segmentation: you can store values from 0 to 255 as uint8 and map each value to any display color you want:
from PIL import Image

img = Image.open(file_name)
img = img.convert("L")   # grayscale: one class index per pixel
# (alternatively: img = img.convert("P", palette=Image.ADAPTIVE, colors=2))
img.putpalette([0, 0, 0,         # index 0 -> black
                255, 255, 255])  # index 1 -> white; putpalette switches an "L" image to "P" mode
img.save(file_name.replace(".png", ".tif"))
However, MaskBlock doesn’t work with this kind of file, because it opens mask files in L mode, while indexed images use P mode.
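To illustrate the difference, assuming a two-class mask saved as above (the file name here is illustrative):

from PIL import Image
import numpy as np

mask = Image.open("mask.tif")
print(mask.mode, np.unique(np.asarray(mask)))    # P [0 1]: raw class indices
as_l = mask.convert("L")                         # what an L-mode open produces
print(as_l.mode, np.unique(np.asarray(as_l)))    # L [0 255]: indices mapped through the palette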
I have tried monkey-patching it as follows:
from fastai2.vision.core import PILBase, AddMaskCodes, image2tensor
from fastai2.data.block import TransformBlock
from fastai2.data.transforms import IntToFloatTensor, ToTensor
from fastai2.torch_core import TensorMask

class PILMaskV2(PILBase): _open_args = {'mode':'P'}   # open masks in indexed ("P") mode
PILMaskV2._tensor_cls = TensorMask

@ToTensor
def encodes(self, o:PILMaskV2): return o._tensor_cls(image2tensor(o)[0])

def MaskBlockV2(codes=None):
    "A `TransformBlock` for segmentation masks, potentially with `codes`"
    return TransformBlock(type_tfms=PILMaskV2.create, item_tfms=AddMaskCodes(codes=codes),
                          batch_tfms=IntToFloatTensor)
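And this is roughly how I plug it into a DataBlock (a sketch; get_msk, the codes, path, and bs are placeholders):

from fastai2.vision.all import *

dblock = DataBlock(blocks=(ImageBlock, MaskBlockV2(codes=['background', 'object'])),
                   get_items=get_image_files,
                   get_y=get_msk,
                   splitter=RandomSplitter())
dls = dblock.dataloaders(path, bs=8)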
I have also passed the CrossEntropyLossFlat version to unet_learner, and it fails at runtime with the following error:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-24-f6d02b91c02c> in <module>
1 learn.load("unet-manual-no-data-augmentation-before-unfreeze-best")
----> 2 learn.validate()
~/anaconda3/envs/ex1/lib/python3.7/site-packages/fastai2/learner.py in validate(self, ds_idx, dl, cbs)
215 with self.added_cbs(cbs), self.no_logging(), self.no_mbar():
216 self(_before_epoch)
--> 217 self._do_epoch_validate(ds_idx, dl)
218 self(_after_epoch)
219 return getattr(self, 'final_record', None)
~/anaconda3/envs/ex1/lib/python3.7/site-packages/fastai2/learner.py in _do_epoch_validate(self, ds_idx, dl)
181 try:
182 self.dl = dl; self('begin_validate')
--> 183 with torch.no_grad(): self.all_batches()
184 except CancelValidException: self('after_cancel_validate')
185 finally: self('after_validate')
~/anaconda3/envs/ex1/lib/python3.7/site-packages/fastai2/learner.py in all_batches(self)
151 def all_batches(self):
152 self.n_iter = len(self.dl)
--> 153 for o in enumerate(self.dl): self.one_batch(*o)
154
155 def one_batch(self, i, b):
~/anaconda3/envs/ex1/lib/python3.7/site-packages/fastai2/learner.py in one_batch(self, i, b)
159 self.pred = self.model(*self.xb); self('after_pred')
160 if len(self.yb) == 0: return
--> 161 self.loss = self.loss_func(self.pred, *self.yb); self('after_loss')
162 if not self.training: return
163 self.loss.backward(); self('after_backward')
~/anaconda3/envs/ex1/lib/python3.7/site-packages/fastcore/utils.py in _f(*args, **kwargs)
429 init_args.update(log)
430 setattr(inst, 'init_args', init_args)
--> 431 return inst if to_return else f(*args, **kwargs)
432 return _f
433
~/anaconda3/envs/ex1/lib/python3.7/site-packages/fastai2/layers.py in __init__(self, axis, *args, **kwargs)
300 "Same as `nn.CrossEntropyLoss`, but flattens input and target."
301 y_int = True
--> 302 def __init__(self, *args, axis=-1, **kwargs): super().__init__(nn.CrossEntropyLoss, *args, axis=axis, **kwargs)
303 def decodes(self, x): return x.argmax(dim=self.axis)
304 def activation(self, x): return F.softmax(x, dim=self.axis)
~/anaconda3/envs/ex1/lib/python3.7/site-packages/fastcore/utils.py in _f(*args, **kwargs)
429 init_args.update(log)
430 setattr(inst, 'init_args', init_args)
--> 431 return inst if to_return else f(*args, **kwargs)
432 return _f
433
~/anaconda3/envs/ex1/lib/python3.7/site-packages/fastai2/layers.py in __init__(self, loss_cls, axis, flatten, floatify, is_2d, *args, **kwargs)
277 def __init__(self, loss_cls, *args, axis=-1, flatten=True, floatify=False, is_2d=True, **kwargs):
278 store_attr(self, "axis,flatten,floatify,is_2d")
--> 279 self.func = loss_cls(*args,**kwargs)
280 functools.update_wrapper(self, self.func)
281
~/anaconda3/envs/ex1/lib/python3.7/site-packages/torch/nn/modules/loss.py in __init__(self, weight, size_average, ignore_index, reduce, reduction)
929 def __init__(self, weight=None, size_average=None, ignore_index=-100,
930 reduce=None, reduction='mean'):
--> 931 super(CrossEntropyLoss, self).__init__(weight, size_average, reduce, reduction)
932 self.ignore_index = ignore_index
933
~/anaconda3/envs/ex1/lib/python3.7/site-packages/torch/nn/modules/loss.py in __init__(self, weight, size_average, reduce, reduction)
17 class _WeightedLoss(_Loss):
18 def __init__(self, weight=None, size_average=None, reduce=None, reduction='mean'):
---> 19 super(_WeightedLoss, self).__init__(size_average, reduce, reduction)
20 self.register_buffer('weight', weight)
21
~/anaconda3/envs/ex1/lib/python3.7/site-packages/torch/nn/modules/loss.py in __init__(self, size_average, reduce, reduction)
10 super(_Loss, self).__init__()
11 if size_average is not None or reduce is not None:
---> 12 self.reduction = _Reduction.legacy_get_string(size_average, reduce)
13 else:
14 self.reduction = reduction
~/anaconda3/envs/ex1/lib/python3.7/site-packages/torch/nn/_reduction.py in legacy_get_string(size_average, reduce, emit_warning)
35 reduce = True
36
---> 37 if size_average and reduce:
38 ret = 'mean'
39 elif reduce:
~/anaconda3/envs/ex1/lib/python3.7/site-packages/fastai2/torch_core.py in _f(self, *args, **kwargs)
270 def _f(self, *args, **kwargs):
271 cls = self.__class__
--> 272 res = getattr(super(TensorBase, self), fn)(*args, **kwargs)
273 return retain_type(res, self)
274 return _f
RuntimeError: Boolean value of Tensor with more than one value is ambiguous
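For reference, a typical instantiation of this loss for segmentation is just the following (a sketch, with dls and the architecture assumed; not necessarily the exact call that failed). The RuntimeError itself comes from a multi-element tensor reaching PyTorch’s if size_average and reduce: check, as the bottom of the traceback shows.

from fastai2.vision.all import *

# axis=1 is the class dimension of the (N, C, H, W) U-Net output
learn = unet_learner(dls, resnet34, loss_func=CrossEntropyLossFlat(axis=1))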
Could you support palette-based masks too, please?
I’m getting this error on a FastAI GPU from Paperspace that was working fine this morning. Is this related to the v2 release?
You’re using v1 code here, as you’re using TextLMDataBunch. I invite you to read the book or the course notebooks to see how the names have changed, as nothing is the same in the newest version.
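For example, the old TextLMDataBunch workflow maps to something like this in v2 (a sketch; path is assumed to be a folder of text files):

from fastai2.text.all import *

dls = TextDataLoaders.from_folder(path, is_lm=True, valid_pct=0.1)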
I’m happy to look into it. Can you please create a GitHub issue with this information?
Thank you very much for answering, @jeremy. I created the following issue on GitHub, with a more extended explanation: https://github.com/fastai/fastai/issues/2663
Thank you. Can I still run my v1 notebook on a v1 Paperspace instance? The reason I am asking is that some libraries that were previously loaded automatically with fastai (pre-v2) are no longer available on the v1 Paperspace instance, and I can’t terminal into that machine to install them with conda install. What is the best way for me to continue working with my fastai v1 code on Paperspace? Is this still possible? Thank you.
Never mind… I just read in the previous post that we should use !pip install to install missing libraries.
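e.g., from a notebook cell on the instance:

!pip install fastai==1.0.61   # version pin illustrative; pick the v1 release you need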
They’re working on a migration ATM. For now, do a ?? and see the affiliated notebook in the fastai repo.
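For example (the object here is illustrative; ?? works on any fastai function):

# In Jupyter/IPython, ?? shows an object's source and the file it lives in;
# fastai2 module headers name the notebook each file was exported from
from fastai2.text.all import *
TextDataLoaders.from_folder??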
Yep, I just wanted to mention it in case it was not expected; I’m reading the docs notebooks from the repo.
New DNS is gradually propagating - apologies for any site errors whilst it does.
Sorry if this is an obvious newb question. I’ve been banging my head against this for a few weeks now and haven’t found anything on the forums.
Since I can’t run fastai2 code locally on my computer (no CUDA GPU), I am trying to run fastbook (or even parts of it) on Google Colab or from Binder. I always run into basic install problems, like this one on Google Colab:

---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-...> in <module>()
      2 #hide
      3 get_ipython().system('pip install -Uqq fastbook')
----> 4 import fastbook
      5 fastbook.setup_book()
Or, on Binder, I get what might be an out-of-memory error at line [4] of the intro notebook.
I am just running the first chapter of fastbook from the Binder site, which seems to indicate this should be possible:
https://hub.gke.mybinder.org/user/fastai-fastbook-3jjfpv1t/notebooks/01_intro.ipynb
What am I missing here?
I can’t imagine it’s possible to run any of the chapters on Binder. Use one of the recommended options from course.fast.ai.
I am now finally successfully running the intro notebook in Google Colab, up to line 31 so far… I had to add a git clone line to get the fastai tree and change a path here and there. I suspect I would have had to make the same changes in Binder. But at least now it does finally work!
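In case it helps anyone else, the extra cell looked roughly like this (which repo you need and the exact paths may differ for you):

!git clone https://github.com/fastai/fastai.git
# then change the notebook's hard-coded paths to point into the cloned tree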