Beginner: Beginner questions that don't fit elsewhere ✅

Yup, you’ll need to run that line every time you open the notebook, as it will install the library.

1 Like

Hi All, Good day. I started the course “Practical Deep Learning for Coders 2022 part 1 – version 5” today and noticed that https://cups.fast.ai/fast/ is not active and possibly redirects to some advertisements. If that’s known, that’s fine.

Hi all, I just started the 2022 Part 1 course. Running the bird kaggle code, I got an error I’m not sure how to interpret:

from duckduckgo_search import ddg_images
from fastcore.all import *

def search_images(term, max_images=30):
    print(f"Searching for '{term}'")
    return L(ddg_images(term, max_results=max_images)).itemgot('image')

NameError                                 Traceback (most recent call last)
/tmp/ipykernel_17/1717929076.py in <module>
      1 from duckduckgo_search import ddg_images
----> 2 from fastcore.all import *
      3
      4 def search_images(term, max_images=30):
      5     print(f"Searching for '{term}'")

/opt/conda/lib/python3.7/site-packages/fastcore/all.py in <module>
      1 from .imports import *
      2 from .foundation import *
----> 3 from .dispatch import *
      4 from .utils import *
      5 from .parallel import *

/opt/conda/lib/python3.7/site-packages/fastcore/dispatch.py in <module>
    170
    171 # %% ../nbs/04_dispatch.ipynb 87
--> 172 @typedispatch
    173 def cast(x, typ):
    174     "cast `x` to type `typ` (may also change `x` inplace)"

/opt/conda/lib/python3.7/site-packages/fastcore/dispatch.py in __call__(self, f)
    146         else: nm = f'{f.__qualname__}'
    147         if isinstance(f, classmethod): f=f.__func__
--> 148         self.d[nm].add(f)
    149         return self.d[nm]
    150

/opt/conda/lib/python3.7/site-packages/fastcore/dispatch.py in add(self, f)
     93         if t is None:
     94             t = _TypeDict()
---> 95         self.funcs.add(a0, t)
     96         t.add(a1, f)
     97

/opt/conda/lib/python3.7/site-packages/fastcore/dispatch.py in add(self, t, f)
     58     def add(self, t, f):
     59         "Add type `t` and function `f`"
---> 60         if not isinstance(t, tuple): t = tuple(L(union2tuple(t)))
     61         for t_ in t: self.d[t_] = f
     62         self._reset()

NameError: name 'union2tuple' is not defined

So it looks like there is something wrong with the fastcore/dispatch.py method (add).

Can anyone help a novice debug this simple error?

Thank you!

Try updating fastcore with pip install -U fastcore, then restart the kernel so the new version is picked up.

This was intended just for the live version of the course for Jeremy to know if everyone was following along ok.

1 Like

In general, for most problems, fine-tuning is dramatically more efficient than training from scratch and is the preferred method. Typically far less labelled data is required, performance is significantly better on smaller datasets, and it takes less time and compute. If you’re training from scratch, you typically need a lot more labelled data, compute, and time.

In general you should have an intuition or a strong reason why training from scratch is required or better than fine-tuning; otherwise you should probably be fine-tuning.

2 Likes

My JupyterLab kernel keeps dying during training. Any ideas on how to fix it?

I’ve installed:

  1. Ubuntu on a Windows machine
  2. fastsetup on Ubuntu
  3. JupyterLab

and am trying to run 02-production.ipynb.

I’ve also tried uninstalling/reinstalling some packages as per [python - kernel keeps dying in jupyter notebook - Stack Overflow]

thanks!

Hi, I am a beginner in this space and need to set up a laptop to get started. The one I currently use heats up a lot, so I think it won’t be suitable for the fast.ai course.

I need to buy a new one, so I’m wondering about these:

  1. Does it help to buy a new laptop with a GPU? I understand we will mostly use online resources, but is there any scenario I might come across later where having a local GPU helps? If yes, how much video RAM is decent enough?
  2. Any recommendations on RAM size?
  3. Any recommendations on OS?

Sorry if I’m covering old ground. Thanks in advance for your time helping me through this.

I would just do it one at a time, see which is best after each iteration, and replace it, because I wouldn’t want to train multiple models at once if you have to deal with GPUs.

In the “Is it a bird?” notebook, learner.predict returns a probability value that indicates how likely the model thinks the picture is to be a bird.

print(f"Probability it's a bird: {probs[0]:.4f}")

How would you change the code if you want instead the probability that the picture is a forest? How does the learner object know you are looking for birds and not forests? Is something in the DataBlock parameters saying “bird” is the thing to look for, rather than forest?

heyy, this one might be too basic but I am facing an error while using ImageDataLoaders.from_name_func


here’s the code
please help


source in case needed

two changes…
print(f"Probability it's a forest: {probs[1]:.4f}")

and more generally…

pred,pred_ndx,probs = learn.predict(im)  # im is the image you're classifying
print(f"Probability it's a {pred}: {probs[pred_ndx]:.4f}")
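To see why index 1 means forest: the DataBlock sorts the class names alphabetically into learn.dls.vocab, and probs lines up with that order. A minimal plain-Python sketch (the vocab and probability numbers here are illustrative, not from a real run):

```python
# Illustrative sketch of how fastai lines up class names and probabilities.
# The vocab order comes from the DataBlock (alphabetical by default);
# the numbers are made up, not from a real trained model.
vocab = ["bird", "forest"]   # what learn.dls.vocab would hold
probs = [0.97, 0.03]         # what learn.predict might return as probs

idx = vocab.index("forest")
print(f"Probability it's a forest: {probs[idx]:.4f}")
```

So nothing in the DataBlock says “look for birds”: the model scores every class, and you pick which probability to print by its position in the vocab.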
2 Likes

Thank you! I get it now.

1 Like

I am also getting the same error. I tried the install below:

!pip install -Uqq fastcore     
!pip install -Uqq fastai  duckduckgo_search

Surprisingly, it worked the first time. Can someone please advise on this NameError issue?

I’m working through chapter 4 and I’m struggling with this passage. I’m sure I’m suffering from some basic misunderstanding.

“To decide if an output represents a 3 or a 7, we can just check whether it’s greater than 0.0.” Do I understand correctly? Is that saying that if a “preds” value is > 0 then the prediction is 3, and if it’s < 0, then the prediction is 7? I don’t understand why that should be so. We’re just multiplying pixel values by random weights. Why should negative values mean one thing and positive values another?

1 Like

This is not unrelated to my previous question. Here’s another quotation from Ch 4:

We need a label for each image. We'll use 1 for 3s and 0 for 7s:

That’s not intuitive. Why not 3 for threes and 7 for sevens? Or why not make the labels “three” and “seven”? I think the choice of 1 for threes has something to do with convenience later on, when the predictions are interpreted as boolean values and True converts to 1. Basically, the label for threes (1) can also be interpreted as True, so there’s a little sleight-of-hand in choosing the label values. Am I on the right track?
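The bool-to-label convenience can be sketched in plain Python (the prediction numbers here are made up for illustration):

```python
# Sketch (illustrative numbers): labels 1 (for 3s) and 0 (for 7s) line up
# with Python booleans, so "output > 0" compares directly to the label.
preds = [2.5, -1.3, 0.7]   # made-up raw model outputs
labels = [1, 0, 1]         # 1 = "is a 3", 0 = "is a 7"

# (p > 0) is True/False; since True == 1 and False == 0 in Python,
# the comparison against the numeric label just works.
correct = [(p > 0) == y for p, y in zip(preds, labels)]
accuracy = sum(correct) / len(correct)
print(accuracy)
```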

What happens if you try to deal with more than two categories? What if we had threes, sevens, and nines? Or all ten digits? I’m having trouble generalizing this procedure.

1 Like

The initial examples are binary classification / single label. The course and book move on to multiclass later.

This tutorial covers both a binary example (cat vs. dog) and a multiclass example (pet breeds): fastai - Computer vision intro
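On generalizing past two classes: instead of one output thresholded at 0, the model emits one score per class and softmax converts those scores into probabilities. A minimal plain-Python sketch (the scores are illustrative, not from a trained model):

```python
import math

# Sketch: multiclass prediction via softmax, for three classes.
def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

scores = [1.2, -0.4, 2.0]   # made-up raw outputs for classes "3", "7", "9"
probs = softmax(scores)     # probabilities summing to 1
pred = max(range(len(probs)), key=lambda i: probs[i])  # most probable class index
print(pred, [f"{p:.3f}" for p in probs])
```

The predicted class is simply the index with the highest probability, which is exactly what fastai does under the hood for multiclass models.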

Greetings, forum members.

I have a quick doubt based on the context below:
Could you please share one example where the loss and metric are the same, and one example where they are not? I am still a little confused about the difference between the two.

A metric is a function that measures the quality of the model’s predictions using the validation set. This is similar to the loss, which is also a measure of performance of the model. However, loss is meant for the optimization algorithm (like SGD) to efficiently update the model parameters, while metrics are human-interpretable measures of performance. Sometimes, a metric may also be a good choice for the loss.

One example where loss and metric are not the same is a basic MNIST digit classifier. In most tutorial projects, you’ll have cross entropy as the loss function and accuracy as the metric. That is, cross entropy loss (or mean squared error in simpler cases; a number that should ideally approach 0) is used internally to adjust the weights of the model, while accuracy (measured in percent) is displayed to you to evaluate the model’s overall performance.
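A small plain-Python sketch of that contrast, using made-up predicted probabilities for a 3-example batch:

```python
import math

# Made-up predicted class probabilities for a 3-example batch, and true labels.
probs = [[0.7, 0.3], [0.2, 0.8], [0.6, 0.4]]
labels = [0, 1, 1]

# Cross-entropy loss: smooth and differentiable, so SGD can use its gradient
# to nudge the weights. Lower is better; 0 would mean perfect confidence.
loss = -sum(math.log(p[y]) for p, y in zip(probs, labels)) / len(labels)

# Accuracy: easy for a human to read, but it only changes when a prediction
# flips class, so it gives SGD no useful gradient to follow.
preds = [p.index(max(p)) for p in probs]
accuracy = sum(int(pr == y) for pr, y in zip(preds, labels)) / len(labels)

print(f"loss={loss:.4f}, accuracy={accuracy:.0%}")
```

Note the third example: the model is wrong there, so accuracy drops by a whole third, while the loss records exactly how wrong (log of 0.4 rather than of 0.6) in a way the optimizer can act on.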

1 Like