Beginner: Basics of fastai, PyTorch, numpy, etc ✅

Yeah, I just dropped a PR to fix that → Fix error AttributeError: read by drsmog · Pull Request #52 · fastai/course22 · GitHub


I'm not sure whether the cause of this is permanent, or just the result of an unintentional library change in the latest release (2.7.11).


For the following code block:

#hide
# For the book, we can't actually click an upload button, so we fake it
uploader = SimpleNamespace(data = ['dog.jpg'])

#img = 'images/chapter1_cat_example.jpg' #PILImage.create('images/chapter1_cat_example.jpg')
is_cat,_,probs = learn.predict('dog.jpg')
print(f"Is this a cat?: {is_cat}.")
print(f"Probability it's a cat: {probs[1].item():.6f}")

I get the following error:


FileNotFoundError                         Traceback (most recent call last)
<ipython-input> in <module>
      1 #img = 'images/chapter1_cat_example.jpg' #PILImage.create('images/chapter1_cat_example.jpg')
----> 2 is_cat,_,probs = learn.predict('dog.jpg')
      3 print(f"Is this a cat?: {is_cat}.")
      4 print(f"Probability it's a cat: {probs[1].item():.6f}")

25 frames
/usr/local/lib/python3.9/dist-packages/PIL/Image.py in open(fp, mode, formats)
   2973
   2974     if filename:
--> 2975         fp = builtins.open(filename, "rb")
   2976         exclusive_fp = True
   2977

FileNotFoundError: [Errno 2] No such file or directory: 'dog.jpg'

No such file or directory: 'dog.jpg'

In the folder where you are running this code, is there such a file, 'dog.jpg'?
Try…

!ls 'dog.jpg' 
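If it isn't there, you can either upload a dog photo next to the notebook and name it dog.jpg, or download one first. Here's a minimal sketch, assuming you have a direct image URL to hand (the URL below is only a placeholder):

from pathlib import Path
from urllib.request import urlretrieve

dest = Path('dog.jpg')
if not dest.exists():
    # Placeholder URL - replace it with any direct link to a dog photo
    urlretrieve('https://example.com/dog_photo.jpg', dest)

print(dest.exists())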

Hello all!
I'm also having a similar problem with the last step in Lesson 1. I have tried the fix mentioned further above, which says to replace the PILImage portion of that line of code with simply 'bird.jpg'.
This does indeed get rid of the error, but (at least for me) it does not actually work.
For example, if I save an image of a car in the directory and call it 'bird.jpg', the notebook still says there is a 100% probability of it being a bird.
If I replace the bird image with a human cartoon, it also says there is a 100% probability of it being a bird.
In summary, the fix mentioned above gets rid of the error, but the last step is not able to differentiate between a bird and things that are not birds.

OK, I've managed to figure out the answer to my own question. I'll post it here in case it helps others (and hopefully I'm right in what I'm saying).
What I said above is true: if I feed the notebook a picture of a coffee or a car or a beer, it says it is a bird. But I think I understand now that we haven't really trained (fine-tuned) the model to recognize birds (so the xkcd cartoon at the beginning of the lecture is a little bit misleading); rather, what we've done is teach the model to tell the difference between a bird and a forest. That's a significant difference.
I expanded the loop in the notebook to also get pictures of 'car' and of 'beer', then tested the notebook with new pictures of birds, cars, beers, and forests. It was very accurate in recognizing each new picture! But if I give it something new, like a cooking pan with some stuff in it, it says it's a bird.
So I think what this notebook does is teach a model to distinguish a fixed set of categories from each other, not to recognize things in general.
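In case anyone wants to reproduce the experiment, this is roughly how I extended the download loop from the notebook (it assumes the imports and the search_images helper from the earlier cells; the extra categories and the 400px resize are just what I happened to use):

# Extend the category list and download/resize images for each one
searches = 'bird', 'forest', 'car', 'beer'
path = Path('bird_or_not')

for o in searches:
    dest = path/o
    dest.mkdir(exist_ok=True, parents=True)
    download_images(dest, urls=search_images(f'{o} photo'))
    resize_images(dest, max_size=400, dest=dest)

# Drop any files that failed to download as valid images
failed = verify_images(get_image_files(path))
failed.map(Path.unlink)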


Indeed, if you train a classifier on pictures of birds and forests, it will always output one of those two classes, even if you give it a picture of a car.

What you could do is use multi-label classification; in principle, that should also be able to output "nothing" in case you run inference on an image of a class it wasn't trained on.
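If you want to try that, a rough sketch with the fastai DataBlock API could look like the snippet below, reusing the folder layout from the lesson (wrapping the folder label in a list and the 0.5 threshold are my own choices here, not code from the notebook):

from fastai.vision.all import *

path = Path('bird_or_not')  # assumes the folder structure from the lesson

# MultiCategoryBlock gives every class its own independent probability,
# so an image can score low on all of them ("none of the above").
dls = DataBlock(
    blocks=(ImageBlock, MultiCategoryBlock),
    get_items=get_image_files,
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    get_y=lambda o: [parent_label(o)],   # wrap the single folder label in a list
    item_tfms=[Resize(192, method='squish')],
).dataloaders(path, bs=32)

learn = vision_learner(dls, resnet18, metrics=partial(accuracy_multi, thresh=0.5))
learn.fine_tune(3)

# At inference time, if no class clears your threshold, treat the image as "unknown"
_, _, probs = learn.predict(PILImage.create('car.jpg'))  # 'car.jpg' is a stand-in test image
print(dict(zip(dls.vocab, map(float, probs))))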


Hello everyone,

As I’m a total beginner, I still feel a bit overwhelmed browsing the https://docs.fast.ai docs, especially when I need to look for a very specific piece of information.

In lesson 1 of the course, the fine_tune method is called on a vision learner, like this:

learn = vision_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(3) 

What is the integer parameter passed to the fine_tune method?

Thanks in advance!

Hi Yacine,

You pass the number of epochs you want the model to train for, i.e. the number of times the model gets to improve using the entire dataset.

For more details, have a look at my answer here: "fine_tune" vs. "fit_one_cycle" - #7 by zerotosingularity
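As a rough mental model (simplified; the real method also lowers the base learning rate and uses discriminative learning rates after unfreezing), learn.fine_tune(3) behaves roughly like this:

learn.freeze()            # first train only the newly added head
learn.fit_one_cycle(1)    # freeze_epochs defaults to 1
learn.unfreeze()          # then train the whole network
learn.fit_one_cycle(3)    # the 3 you passed in = epochs with everything unfrozen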


Thanks a lot! Checking your linked answer right now.

Small update: in v2.7.12 (PyTorch 2.0.0) you can use PILImage again to make a prediction:

img_file = "my_image.jpg"
img = PILImage.create(img_file)
learn.predict(img)

or

learn.predict(img_file)

is_bird,_,probs = learn.predict(PILImage.create('bird.jpg'))
print(f"This is a: {is_bird}.")
print(f"Probability it's a bird: {probs[0]:.4f}")

For this last cell in Lesson 1's code, how do we know which index of probs refers to 'bird'? I tried a very simple change in the notebook, replacing every 'bird' with 'dog', but to get the relevant prediction I had to use probs[1] instead of probs[0]. What gives?

To match the probability with the corresponding index, change the first line to

is_bird, index, probs = learn.predict(PILImage.create('bird.jpg'))

and in the last line, use that index as {probs[index]:.4f}.

That takes care of the issue by itself, without us having to work out which position to use!

Here is what happens behind the scenes.
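If you are curious where that order comes from in the first place: the classes live in learn.dls.vocab, and fastai builds the vocab from the sorted label names, so the index depends on what you called your categories. A quick way to see the mapping (assuming a bird.jpg on disk):

print(learn.dls.vocab)   # e.g. ['bird', 'forest'] - the order that probs follows

pred_class, pred_idx, probs = learn.predict(PILImage.create('bird.jpg'))
print(f"This is a: {pred_class}.")
print(f"Probability of '{pred_class}': {probs[pred_idx]:.4f}")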


I want to create an instance of ResNet for 3D, 1-channel images (MRIs). All the examples and libraries seem to assume 2D and/or 3-channel images.

How do I create an instance of ResNet for 3D, 1-channel images?

@paul.reiners

Perhaps the Faimed3D extensions (or members in that topic) can help.

I was trying to run this cell in the fourth notebook of the book:

#hide_output
im3_t = tensor(im3)
df = pd.DataFrame(im3_t[4:15,4:22])
df.style.set_properties(**{'font-size':'6pt'}).background_gradient('Greys')

and got this error:

AttributeError                            Traceback (most recent call last)
/usr/lib/python3/dist-packages/IPython/core/formatters.py in __call__(self, obj)
    343 method = get_real_method(obj, self.print_method)
    344 if method is not None:
--> 345     return method()
    346 return None
    347 else:

~/.local/lib/python3.10/site-packages/pandas/io/formats/style.py in _repr_html_(self)
    381 Hooks into Jupyter notebook rich display system, which calls _repr_html_ by
    382 default if an object is returned at the end of a cell.
--> 383 """
    384 if get_option("styler.render.repr") == "html":
    385     return self.to_html()

~/.local/lib/python3.10/site-packages/pandas/io/formats/style.py in to_html(self, buf, table_uuid, table_attributes, sparse_index, sparse_columns, bold_headers, caption, max_rows, max_columns, encoding, doctype_html, exclude_styles, **kwargs)
   1306 Whether to sparsify the display of a hierarchical index. Setting to False
   1307 will display each explicit level element in a hierarchical key for each
--> 1308 column. Defaults to pandas.options.styler.sparse.columns value.
   1309
   1310 .. versionadded:: 1.4.0

~/.local/lib/python3.10/site-packages/pandas/io/formats/style_render.py in _render_html(self, sparse_index, sparse_columns, max_rows, max_cols, **kwargs)
    203 Renders the Styler including all applied styles to HTML.
    204 Generates a dict with necessary kwargs passed to jinja2 template.
--> 205 """
    206 d = self._render(sparse_index, sparse_columns, max_rows, max_cols, " ")
    207 d.update(kwargs)

~/.local/lib/python3.10/site-packages/pandas/io/formats/style_render.py in _render(self, sparse_index, sparse_columns, max_rows, max_cols, blank)
    160 Also extends the ctx and ctx_index attributes with those of concatenated
    161 stylers for use within _translate_latex
--> 162 """
    163 self._compute()
    164 dxs = []

~/.local/lib/python3.10/site-packages/pandas/io/formats/style_render.py in _compute(self)
    255 self.ctx_columns.clear()
    256 r = self
--> 257 for func, args, kwargs in self._todo:
    258     r = func(self)(*args, **kwargs)
    259 return r

~/.local/lib/python3.10/site-packages/pandas/io/formats/style.py in _apply(self, func, axis, subset, **kwargs)
   1665 "hidden_rows",
   1666 "hidden_columns",
--> 1667 "ctx",
   1668 "ctx_index",
   1669 "ctx_columns",

~/.local/lib/python3.10/site-packages/pandas/core/frame.py in apply(self, func, axis, raw, result_type, args, **kwargs)
   9421 Apply a function along an axis of the DataFrame.
   9422
--> 9423 Objects passed to the function are Series objects whose index is
   9424 either the DataFrame's index (axis=0) or the DataFrame's columns
   9425 (axis=1). By default (result_type=None), the final return type

~/.local/lib/python3.10/site-packages/pandas/core/apply.py in apply(self)
    676 # "Union[Series, DataFrame, GroupBy[Any], SeriesGroupBy,
    677 # DataFrameGroupBy, BaseWindow, Resampler]"; expected "Union[DataFrame,
--> 678 # Series]"
    679 return self.obj.index  # type:ignore[arg-type]
    680

~/.local/lib/python3.10/site-packages/pandas/core/apply.py in apply_standard(self)
    796 """
    797 we have an empty result; at least 1 axis is 0
--> 798
    799 we will try to apply the function to an empty
    800 series in order to see if this is a reduction function

~/.local/lib/python3.10/site-packages/pandas/core/apply.py in apply_series_generator(self)
    812 from pandas import Series
    813
--> 814 if not should_reduce:
    815     try:
    816         if self.axis == 0:

~/.local/lib/python3.10/site-packages/pandas/core/apply.py in f(x)
    131
    132 self.result_type = result_type
--> 133
    134 # curry if needed
    135 if (

~/.local/lib/python3.10/site-packages/pandas/io/formats/style.py in _background_gradient(data, cmap, low, high, text_color_threshold, vmin, vmax, gmap, text_only)
   3627 -------
   3628 self : Styler
--> 3629
   3630 See Also
   3631 --------

AttributeError: 'ColormapRegistry' object has no attribute 'get_cmap'

Hey, your code looks correct, and the error log indicates that the libraries pandas and matplotlib have difficulties talking to each other. So this seems like a dependency issue. Maybe you don’t have the current versions of libraries installed?

You can update them with !pip install -Uqq pandas matplotlib (here, U means upgrade and qq means do a quiet update, i.e. don't give me any logs). Run this command in Jupyter and try again. Does it work then?
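If the upgrade alone doesn't fix it, it's worth checking which versions you actually ended up with; my reading of the traceback is that the installed pandas and matplotlib are out of step with each other (pandas is calling a colormap method the installed matplotlib doesn't have), so the two need to be upgraded together:

# Check the installed versions after upgrading (run in a notebook cell, then restart the kernel)
import pandas as pd
import matplotlib

print("pandas:", pd.__version__)
print("matplotlib:", matplotlib.__version__)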

Afaik, there is no pre-built 3D ResNet in fastai, or anywhere else I've come across.

So you would have to adapt the resnet architecture to use nn.Conv3d instead of nn.Conv2d and train from scratch.
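A minimal sketch of what that adaptation could look like in plain PyTorch (the class names are made up for illustration; this is not a fastai or torchvision API):

import torch
import torch.nn as nn

class BasicBlock3d(nn.Module):
    """Residual block built from 3D convolutions (illustrative, not a fastai class)."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv3d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm3d(out_ch)
        self.conv2 = nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm3d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        # 1x1x1 projection so the skip connection still matches when the shape changes
        self.shortcut = nn.Sequential()
        if stride != 1 or in_ch != out_ch:
            self.shortcut = nn.Sequential(
                nn.Conv3d(in_ch, out_ch, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm3d(out_ch),
            )

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.shortcut(x))

class TinyResNet3d(nn.Module):
    """Very small 3D ResNet for 1-channel volumes such as MRIs (sketch only)."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=7, stride=2, padding=3, bias=False),  # 1 input channel
            nn.BatchNorm3d(32), nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=3, stride=2, padding=1),
        )
        self.layer1 = BasicBlock3d(32, 32)
        self.layer2 = BasicBlock3d(32, 64, stride=2)
        self.head = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(64, n_classes))

    def forward(self, x):  # x: (batch, 1, depth, height, width)
        return self.head(self.layer2(self.layer1(self.stem(x))))

# Sanity check on a fake 1-channel volume
model = TinyResNet3d(n_classes=2)
print(model(torch.randn(2, 1, 32, 64, 64)).shape)  # torch.Size([2, 2])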


Hello, I cannot get past the beginning of the course. When I reach this part of the code:

from duckduckgo_search import ddg_images
from fastcore.all import *

def search_images(term, max_images=200): return L(ddg_images(term, max_results=max_images)).itemgot('image')

and then call:

urls = search_images("dog images", max_images=10)
print(urls[0])

I get a 403 (Forbidden) message back. I have been stuck here for some days, despite even asking ChatGPT.

@ericvondike

The latest version of DuckDuckGo Search API follows a different format.

Can you try the following code snippet?
It just worked for me.

from duckduckgo_search import DDGS
from fastcore.all import *

def search_images(term, max_images=200): 
    return L(DDGS().images(term, max_results=max_images)).itemgot("image")
    
urls = search_images("dog images", max_images=10)
print(urls[0])
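If the snippet above still errors for you, it may simply be an older duckduckgo_search install; the DDGS class only exists in the newer releases, so upgrading inside the notebook first is worth a try:

!pip install -Uqq duckduckgo_search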