Lesson 1 official topic

Thank you.

Is there a step-by-step procedure to install RISE in a Kaggle notebook? I have not been able to get the command palette button or the cell toolbar. I used:
pip install RISE as explained in RISE — RISE 5.7.1
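
For reference, here is roughly what the install looks like in a classic Jupyter Notebook (a hedged sketch: the explicit nbextension steps are only needed on older RISE releases, and RISE targets the classic notebook interface, so the toolbar button may simply never appear in Kaggle's hosted editor):

# Install RISE into the same environment the notebook kernel uses.
!pip install RISE
# Recent RISE releases register the notebook extension automatically on install;
# older releases needed the explicit steps below.
!jupyter-nbextension install rise --py --sys-prefix
!jupyter-nbextension enable rise --py --sys-prefix

After installing, the notebook server usually needs a restart before the slideshow button shows up.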

Hi, I really hope this is the right place to ask my question.

I'm getting an error when I run the first example of the fastbook (the one that trains a model, shows an upload button, then determines whether my image is of a cat vs a dog).

  • I was able to successfully train the model
  • I was able to successfully load my image (tested that the problem was not with the upload button by showing a small thumbnail of the image I uploaded via to_thumb())
  • every time I run
     is_cat,_,probs = learn.predict(img)
    
    the predict function throws an error :frowning:

The error starts with learn.predict and ends with:

File ~/jupyter/lib/python3.10/site-packages/PIL/Image.py:529, in Image.__getattr__(self, name)
    527     deprecate("Image categories", 10, "is_animated", plural=True)
    528     return self._category
--> 529 raise AttributeError(name)

AttributeError: read

I can post the full backtrace but it's quite long. Any help would really be appreciated.

Note: I'm running Jupyter on my Linux laptop inside a venv. Python version is 3.10.6.
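
In case it helps narrow this down, here is roughly how the fastbook chapter-1 notebook builds img from the upload widget before calling predict (a minimal sketch, assuming uploader is the notebook's widgets.FileUpload() button and learn is the model trained above). The same AttributeError: read shows up later in this thread with the bird notebook, where pinning the fastai version helped, so a fastai/Pillow version mismatch may also be worth checking.

from fastai.vision.all import *
import ipywidgets as widgets

uploader = widgets.FileUpload()          # the upload button shown in the notebook
# ... after choosing a file in the widget:
img = PILImage.create(uploader.data[0])  # uploader.data[0] is the raw bytes (older ipywidgets API, as used in the book)
is_cat, _, probs = learn.predict(img)    # learn is assumed to be the trained cat/dog model
print(f"Probability it's a cat: {probs[1].item():.6f}")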

Hi, I'm new to the course and I'm just trying to train the cats & dogs model. Using Google Colab it takes ages (more than 1 hour and still running) to run the "fine_tune" function.

I'm also trying it on Kaggle; it looks a bit faster, but it's still taking a lot of time. Is this normal?

Hi,

I don't think that is normal. It looks like you are not using a GPU.
On Google Colab, you can click the Runtime menu at the top and click "Change runtime type". If it's set to CPU, you should change it to a GPU.
On Kaggle, you can take a similar approach.
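
If you want to confirm that the runtime actually picked up a GPU before re-training, a quick check like this works in both Colab and Kaggle (a small sketch, nothing fastai-specific):

# Verify that PyTorch can see a CUDA device in the current runtime.
import torch

if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
else:
    print("No GPU visible - check the accelerator / runtime type setting")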


OK! Thank you for the tip. I thought the GPU choice in Kaggle was a paid option or something like that. I see they're limited to a certain number of hours per week, which, of course, is enough for a student like me.

It took only 2 minutes now to train the cats & dogs model. What a difference :smiley:

Since it took me so long without the GPU, I set up a conda environment locally, and it took 7 minutes on my desktop with a non-Nvidia GPU, which was still much better than the 1h on Google Colab & Kaggle without a GPU. But now that I have the GPU option on Kaggle, I'll do things there.


Hello! I am new and am excited to check out this course. I noticed there's an issue with the very first Kaggle demonstration notebook (Is it a bird?).

If this should be posted elsewhere, please let me know, but PLEASE do not be mean like many on StackOverflow. I attempted to solve the error with GPT-4 as well, and it was having a tough time.

Without making any modifications, and just running the code as is, the last cell produces an error. I find this unfortunate, and it does not aid my learning. All cells in the notebook were run sequentially.

I have attempted to run the cells in both Google Colab and Kaggle environments, and both produce the same error. I also could not find similar questions posted.

Here's the last cell I ran:

is_bird,_,probs = learn.predict(PILImage.create('bird.jpg'))
print(f"This is a: {is_bird}.")
print(f"Probability it's a bird: {probs[0]:.4f}")

Error:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
/tmp/ipykernel_17/1213330699.py in <module>
----> 1 is_bird,_,probs = learn.predict(PILImage.create('bird.jpg'))
      2 print(f"This is a: {is_bird}.")
      3 print(f"Probability it's a bird: {probs[0]:.4f}")

/opt/conda/lib/python3.7/site-packages/fastai/learner.py in predict(self, item, rm_type_tfms, with_input)
    319     def predict(self, item, rm_type_tfms=None, with_input=False):
    320         dl = self.dls.test_dl([item], rm_type_tfms=rm_type_tfms, num_workers=0)
--> 321         inp,preds,_,dec_preds = self.get_preds(dl=dl, with_input=True, with_decoded=True)
    322         i = getattr(self.dls, 'n_inp', -1)
    323         inp = (inp,) if i==1 else tuplify(inp)

/opt/conda/lib/python3.7/site-packages/fastai/learner.py in get_preds(self, ds_idx, dl, with_input, with_decoded, with_loss, act, inner, reorder, cbs, **kwargs)
    306         if with_loss: ctx_mgrs.append(self.loss_not_reduced())
    307         with ContextManagers(ctx_mgrs):
--> 308             self._do_epoch_validate(dl=dl)
    309             if act is None: act = getcallable(self.loss_func, 'activation')
    310             res = cb.all_tensors()

/opt/conda/lib/python3.7/site-packages/fastai/learner.py in _do_epoch_validate(self, ds_idx, dl)
    242         if dl is None: dl = self.dls[ds_idx]
    243         self.dl = dl
--> 244         with torch.no_grad(): self._with_events(self.all_batches, 'validate', CancelValidException)
    245 
    246     def _do_epoch(self):

/opt/conda/lib/python3.7/site-packages/fastai/learner.py in _with_events(self, f, event_type, ex, final)
    197 
    198     def _with_events(self, f, event_type, ex, final=noop):
--> 199         try: self(f'before_{event_type}');  f()
    200         except ex: self(f'after_cancel_{event_type}')
    201         self(f'after_{event_type}');  final()

/opt/conda/lib/python3.7/site-packages/fastai/learner.py in all_batches(self)
    203     def all_batches(self):
    204         self.n_iter = len(self.dl)
--> 205         for o in enumerate(self.dl): self.one_batch(*o)
    206 
    207     def _backward(self): self.loss_grad.backward()

/opt/conda/lib/python3.7/site-packages/fastai/data/load.py in __iter__(self)
    125         self.before_iter()
    126         self.__idxs=self.get_idxs() # called in context of main process (not workers/subprocesses)
--> 127         for b in _loaders[self.fake_l.num_workers==0](self.fake_l):
    128             # pin_memory causes tuples to be converted to lists, so convert them back to tuples
    129             if self.pin_memory and type(b) == list: b = tuple(b)

/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py in __next__(self)
    519             if self._sampler_iter is None:
    520                 self._reset()
--> 521             data = self._next_data()
    522             self._num_yielded += 1
    523             if self._dataset_kind == _DatasetKind.Iterable and \

/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py in _next_data(self)
    559     def _next_data(self):
    560         index = self._next_index()  # may raise StopIteration
--> 561         data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
    562         if self._pin_memory:
    563             data = _utils.pin_memory.pin_memory(data)

/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index)
     32                 raise StopIteration
     33         else:
---> 34             data = next(self.dataset_iter)
     35         return self.collate_fn(data)
     36 

/opt/conda/lib/python3.7/site-packages/fastai/data/load.py in create_batches(self, samps)
    136         if self.dataset is not None: self.it = iter(self.dataset)
    137         res = filter(lambda o:o is not None, map(self.do_item, samps))
--> 138         yield from map(self.do_batch, self.chunkify(res))
    139 
    140     def new(self, dataset=None, cls=None, **kwargs):

/opt/conda/lib/python3.7/site-packages/fastcore/basics.py in chunked(it, chunk_sz, drop_last, n_chunks)
    228     if not isinstance(it, Iterator): it = iter(it)
    229     while True:
--> 230         res = list(itertools.islice(it, chunk_sz))
    231         if res and (len(res)==chunk_sz or not drop_last): yield res
    232         if len(res)<chunk_sz: return

/opt/conda/lib/python3.7/site-packages/fastai/data/load.py in do_item(self, s)
    151     def prebatched(self): return self.bs is None
    152     def do_item(self, s):
--> 153         try: return self.after_item(self.create_item(s))
    154         except SkipItemException: return None
    155     def chunkify(self, b): return b if self.prebatched else chunked(b, self.bs, self.drop_last)

/opt/conda/lib/python3.7/site-packages/fastai/data/load.py in create_item(self, s)
    158     def retain(self, res, b):  return retain_types(res, b[0] if is_listy(b) else b)
    159     def create_item(self, s):
--> 160         if self.indexed: return self.dataset[s or 0]
    161         elif s is None:  return next(self.it)
    162         else: raise IndexError("Cannot index an iterable dataset numerically - must use `None`.")

/opt/conda/lib/python3.7/site-packages/fastai/data/core.py in __getitem__(self, it)
    456 
    457     def __getitem__(self, it):
--> 458         res = tuple([tl[it] for tl in self.tls])
    459         return res if is_indexer(it) else list(zip(*res))
    460 

/opt/conda/lib/python3.7/site-packages/fastai/data/core.py in <listcomp>(.0)
    456 
    457     def __getitem__(self, it):
--> 458         res = tuple([tl[it] for tl in self.tls])
    459         return res if is_indexer(it) else list(zip(*res))
    460 

/opt/conda/lib/python3.7/site-packages/fastai/data/core.py in __getitem__(self, idx)
    415         res = super().__getitem__(idx)
    416         if self._after_item is None: return res
--> 417         return self._after_item(res) if is_indexer(idx) else res.map(self._after_item)
    418 
    419 # %% ../../nbs/03_data.core.ipynb 53

/opt/conda/lib/python3.7/site-packages/fastai/data/core.py in _after_item(self, o)
    375             raise
    376     def subset(self, i): return self._new(self._get(self.splits[i]), split_idx=i)
--> 377     def _after_item(self, o): return self.tfms(o)
    378     def __repr__(self): return f"{self.__class__.__name__}: {self.items}\ntfms - {self.tfms.fs}"
    379     def __iter__(self): return (self[i] for i in range(len(self)))

/opt/conda/lib/python3.7/site-packages/fastcore/transform.py in __call__(self, o)
    206         self.fs = self.fs.sorted(key='order')
    207 
--> 208     def __call__(self, o): return compose_tfms(o, tfms=self.fs, split_idx=self.split_idx)
    209     def __repr__(self): return f"Pipeline: {' -> '.join([f.name for f in self.fs if f.name != 'noop'])}"
    210     def __getitem__(self,i): return self.fs[i]

/opt/conda/lib/python3.7/site-packages/fastcore/transform.py in compose_tfms(x, tfms, is_enc, reverse, **kwargs)
    156     for f in tfms:
    157         if not is_enc: f = f.decode
--> 158         x = f(x, **kwargs)
    159     return x
    160 

/opt/conda/lib/python3.7/site-packages/fastcore/transform.py in __call__(self, x, **kwargs)
     79     @property
     80     def name(self): return getattr(self, '_name', _get_name(self))
---> 81     def __call__(self, x, **kwargs): return self._call('encodes', x, **kwargs)
     82     def decode  (self, x, **kwargs): return self._call('decodes', x, **kwargs)
     83     def __repr__(self): return f'{self.name}:\nencodes: {self.encodes}decodes: {self.decodes}'

/opt/conda/lib/python3.7/site-packages/fastcore/transform.py in _call(self, fn, x, split_idx, **kwargs)
     89     def _call(self, fn, x, split_idx=None, **kwargs):
     90         if split_idx!=self.split_idx and self.split_idx is not None: return x
---> 91         return self._do_call(getattr(self, fn), x, **kwargs)
     92 
     93     def _do_call(self, f, x, **kwargs):

/opt/conda/lib/python3.7/site-packages/fastcore/transform.py in _do_call(self, f, x, **kwargs)
     95             if f is None: return x
     96             ret = f.returns(x) if hasattr(f,'returns') else None
---> 97             return retain_type(f(x, **kwargs), x, ret)
     98         res = tuple(self._do_call(f, x_, **kwargs) for x_ in x)
     99         return retain_type(res, x)

/opt/conda/lib/python3.7/site-packages/fastcore/dispatch.py in __call__(self, *args, **kwargs)
    118         elif self.inst is not None: f = MethodType(f, self.inst)
    119         elif self.owner is not None: f = MethodType(f, self.owner)
--> 120         return f(*args, **kwargs)
    121 
    122     def __get__(self, inst, owner):

/opt/conda/lib/python3.7/site-packages/fastai/vision/core.py in create(cls, fn, **kwargs)
    123         if isinstance(fn,bytes): fn = io.BytesIO(fn)
    124         if isinstance(fn,Image.Image) and not isinstance(fn,cls): return cls(fn)
--> 125         return cls(load_image(fn, **merge(cls._open_args, kwargs)))
    126 
    127     def show(self, ctx=None, **kwargs):

/opt/conda/lib/python3.7/site-packages/fastai/vision/core.py in load_image(fn, mode)
     96 def load_image(fn, mode=None):
     97     "Open and load a `PIL.Image` and convert to `mode`"
---> 98     im = Image.open(fn)
     99     im.load()
    100     im = im._new(im.im)

/opt/conda/lib/python3.7/site-packages/PIL/Image.py in open(fp, mode, formats)
   2919         exclusive_fp = True
   2920 
-> 2921     prefix = fp.read(16)
   2922 
   2923     preinit()

/opt/conda/lib/python3.7/site-packages/PIL/Image.py in __getattr__(self, name)
    539             )
    540             return self._category
--> 541         raise AttributeError(name)
    542 
    543     @property

AttributeError: read

Looks like this may work, but it begs the question of why no one has updated the default notebook. Perhaps because it's a free offering and no one is financially dependent on ensuring this course stays a good product for years to come…

if iskaggle:
    !pip install -Uqq fastai==2.7.10 duckduckgo_search
import fastai
fastai.__version__

Thanks @bencoman


Another note: it seems -Uqq still updates to the newest version. When I removed the 'U', it seemed to install the right one:

if iskaggle:
    !pip install -qq fastai==2.7.10 duckduckgo_search
import fastai
fastai.__version__

I've started the course. I have nearly zero prior experience, other than trying different data analytics and Python courses here and there. I feel quite excited! I hope I can come back to this post of mine in 3-4 months with a smile :slight_smile: Here is my first result of identifying two different swimming strokes: fly or breast. Apparently, I need to learn how to train the model manually :slight_smile:

Impressive… you seem to have avoided all the issues I and others have had in the past (regarding the out-of-date notebooks). You seem very knowledgeable in the field already!

Thank you. I appreciate your kind support.

Hi, I'm pretty new to the course. One thing I'd like to share with other newcomers:

If you're working on Kaggle and you change the accelerator (None/GPU), you'll need to restart and run everything from the beginning.

I struggled a bit with the "is it a bird" practice because the "vision_learner" function was not found. Then at some point I restarted and ran everything without an accelerator, and everything worked fine.

Then I switched to GPU T4 x2 and GPU P100 and it didn't find the function again, but I restarted the notebook and ran it all again… and voilà, the function was found.

So switching the accelerator means restarting and running it all again.

Am I wrong?
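
A quick way to sanity-check the new session before re-running the training cells (a small sketch; default_device is a fastai helper, so this also confirms the fastai import itself works):

from fastai.vision.all import *
import fastai, torch

print(fastai.__version__)          # the preinstalled fastai version can differ between sessions
print(torch.cuda.is_available())   # True when the GPU accelerator is attached
print(default_device())            # fastai's default device; should be a cuda device in a GPU session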

I noticed that as well. Also, wasn't there a breakthrough in deep learning in 2012 (2 years prior to this comic) with AlexNet that could classify images?

I know this is trivial stuff and doesn't contribute to making models, but I am a bit nerdy about these things :sweat_smile:.

Hello,

I am trying to follow the first lesson and code from scratch in my own Kaggle notebook.

When running this code block I am getting an error that there is no module named fastbook:

!pip install -Uqq fastbook
import fastbook
from fastbook import *
from fastai.vision.all import *

What am I doing wrong?


For reading the chapters, I recommend following the links to the chapters provided in (Practical Deep Learning for Coders - 1: Getting started).

In case you want to write the is_a_cat? model from scratch, you don't need to install fastbook or import it; just use:

from fastai.vision.all import *
import ipywidgets as widgets

Hi @eyp

Thanks for answering. Could you explain why you don't need to import either fastbook or fastai? Without importing them, my notebook would not have access to either of those libraries, no?

Your code worked, but when I try to run the next part of the lesson's code:

from fastai.vision.all import *
path = untar_data(URLs.PETS)/'images'

def is_cat(x): return x[0].isupper()
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))

learn = vision_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)

I get the following error:

gaierror: [Errno -3] Temporary failure in name resolution

During handling of the above exception, another exception occurred:

URLError                                  Traceback (most recent call last)
/tmp/ipykernel_27/1897567515.py in <module>
      3 # CLICK ME
      4 from fastai.vision.all import *
----> 5 path = untar_data(URLs.PETS)/'images'
      6 
      7 def is_cat(x): return x[0].isupper()

It seems whatever URL is at URLs.PETS is not valid.

However, this code works when running it from here: Google Colab
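
One hedged way to tell whether the problem is the URL itself or the network: gaierror is a DNS-resolution failure, so checking whether the kernel can resolve the dataset's host at all narrows it down (on Kaggle, the notebook's Internet setting also has to be enabled for downloads to work):

# Check whether this kernel can resolve the host that untar_data would download from.
import socket
from urllib.parse import urlparse
from fastai.vision.all import URLs

host = urlparse(URLs.PETS).netloc
try:
    socket.getaddrinfo(host, 443)
    print(f"DNS resolution for {host} works - the URL itself may be the problem")
except socket.gaierror as e:
    print("No DNS / no internet access from this kernel:", e)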


Hello guys, I've followed the instructions in chapter 1 and wanted to train a model to check for busy vs empty supermarket parking photos (e.g. 'Lidl parking supermarket photos'). However, the accuracy is quite low at 0.4017. Any ideas how I can improve it?

from fastai.vision.all import *   # brings in Path, download_images, resize_images, verify_images, DataBlock, ...
from time import sleep
# search_images is assumed to be the small DuckDuckGo helper defined earlier in the notebook

searches = 'empty lidl parking photo', 'busy lidl parking photo'
path = Path('xoxo')

for o in searches:
    dest = (path/o)
    dest.mkdir(exist_ok=True, parents=True)
    download_images(dest, urls=search_images(f'{o} photo'))
    sleep(10)  # Pause between searches to avoid over-loading server
    download_images(dest, urls=search_images(f'{o} sun photo'))
    sleep(10)
    download_images(dest, urls=search_images(f'{o} shade photo'))
    sleep(10)
    download_images(dest, urls=search_images(f'{o} dark photo'))
    sleep(10)
    resize_images(path/o, max_size=400, dest=path/o)

failed = verify_images(get_image_files(path))
failed.map(Path.unlink)
len(failed)

dls = DataBlock(
    blocks=(ImageBlock, CategoryBlock), 
    get_items=get_image_files, 
    splitter=RandomSplitter(valid_pct=0.3, seed=42),
    get_y=parent_label,
    item_tfms=[Resize(1024, method='squish')]
).dataloaders(path, bs=32)

dls.show_batch(max_n=6)

learn = vision_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(4)

is_parking,_,probs = learn.predict(PILImage.create('bird.jpg'))
print(f"This is a: {is_parking}.")
print(f"Probability it's a lidl parking: {probs[0]:.4f}")

Probability it's a lidl parking: 0.4017
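
For digging into why the score is low, fastai's interpretation tools are the usual next step - a small sketch, assuming learn is the vision_learner trained in the cell above:

from fastai.vision.all import *

# Look at where the model is actually going wrong.
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()        # how often 'busy' and 'empty' get mixed up
interp.plot_top_losses(6, nrows=2)    # the images the model is most wrong about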

Well, this is what I've got in my Kaggle notebook, and it ran without errors: