Wiki: Lesson 2

I have found it a useful practice to create a new ipynb notebook and write the code from the video myself - it gives a better understanding of what's going on.

10 Likes

When I run lesson1-rxt50.ipynb I get an error about a missing resnext_50_32x4d.pth file (I am using my own PC for learning). Where can I download this file?

P.S. Found it: http://files.fast.ai/models/weights.tgz
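In case it helps anyone else, here's a minimal sketch of fetching and unpacking it in Python (the destination path is an assumption - point it at wherever your fastai setup expects the weights folder):

import urllib.request, tarfile

# download and unpack the pretrained weights; the destination below is an assumption,
# adjust it to the weights/ location your notebooks expect
urllib.request.urlretrieve('http://files.fast.ai/models/weights.tgz', 'weights.tgz')
with tarfile.open('weights.tgz') as tgz:
    tgz.extractall('fastai/')   # the archive unpacks into a weights/ folder (assumed)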

20 Likes

Do we have some kind of homework assignment for these lessons?

2 Likes

It’s there in the fast.ai lessons folder when you clone the repo from GitHub.

I’ve been following the videos… but is there any update on what the homework is for this week?

In another post Jeremy mentioned the homework for Lesson 2 v2: Clarity on homework / "what's due"

1 Like

Not sure if this is the right place to raise a question / observation - if it’s wrong, please point me in the right direction!

These are the results I get when running the “differential learning rate” section from lesson 1 (shown around 45m into lesson 2).

learn.fit(lr, 3, cycle_len=1, cycle_mult=2)
[ 0. 0.04332 0.02481 0.98975]
[ 1. 0.04008 0.02324 0.99268]
[ 2. 0.03363 0.02141 0.99316]
[ 3. 0.03325 0.02021 0.99121]
[ 4. 0.02273 0.0233 0.99072]
[ 5. 0.02452 0.02243 0.99268]
[ 6. 0.0247 0.02174 0.9917 ]
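
For reference, each row above is [epoch, training loss, validation loss, accuracy], and with 3 cycles, cycle_len=1 and cycle_mult=2 you get 1 + 2 + 4 = 7 epochs, which is why there are 7 rows. A minimal sketch of that arithmetic, assuming fastai's SGDR-style cycles:

# how the 7 rows above come about with 3 cycles, cycle_len=1, cycle_mult=2
# (just a sketch of the arithmetic behind SGDR-style restarts)
n_cycles, cycle_len, cycle_mult = 3, 1, 2
total_epochs = sum(cycle_len * cycle_mult ** i for i in range(n_cycles))
print(total_epochs)  # -> 7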

I notice the training loss drops every epoch… is this a sign of over-fitting? Notice in the video that Jeremy's training loss seems to go even lower - lower than the validation loss, even. Over-fitting?

If the training loss is very much lower than the validation loss, i.e. the difference is notable, then we can say the model is overfitting…

And if they are close to each other (or the training loss is higher), then the model is still underfitting…
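
Something like this rough sketch captures that heuristic (my own sketch, not fastai code; the 0.5 threshold is arbitrary, not from the course):

train_loss, val_loss = 0.0247, 0.02174   # last epoch from the output earlier in the thread
if train_loss < 0.5 * val_loss:          # "much lower" - threshold chosen arbitrarily
    print("training loss much lower than validation loss -> likely overfitting")
else:
    print("losses are close (or training loss is higher) -> probably still underfitting; keep training")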

I have studied this from the famous repo…

@ratio_an

1 Like

Hi, I’m new to this course. I’m having some trouble understanding the parameter transforms_side_on of the function tfms_from_model. Since I’m not very familiar with English, I didn’t quite understand the difference between the parameters side_on and top_down. Can anybody help me understand it?

Like just flipping the image right to left and vice versa…?

Thanks! So side_on means flipping horizontally, and top_down means flipping vertically? Is that right?

Sounds perfect…
It can be verified on an image if you want…
Just take the pixel columns and reverse their left-to-right order to create a new image, and then plot both to compare…

I guess this will do what we want…
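
For example, a minimal sketch of that check (the image filename here is just a placeholder):

import matplotlib.pyplot as plt

img = plt.imread('cat.jpg')   # any test image; the filename is hypothetical
flipped = img[:, ::-1]        # reverse the pixel columns, i.e. a left/right flip (same as np.fliplr)

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(img);     ax1.set_title('original')
ax2.imshow(flipped); ax2.set_title('horizontal flip')
plt.show()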

Hi everyone! Please help me with an error in the lesson2 notebook “Multi-label classification”.

I am using Paperspace, and I ran this cell:

# Data preparation steps if you are using Crestle:
os.makedirs('data/planet/models', exist_ok=True)
os.makedirs('/cache/planet/tmp', exist_ok=True)
!ln -s /datasets/kaggle/planet-understanding-the-amazon-from-space/train-jpg {PATH}
!ln -s /datasets/kaggle/planet-understanding-the-amazon-from-space/train_v2.csv {PATH}
!ln -s /cache/planet/tmp {PATH}

Here is the error output.

Any ideas? Thank you in advance!

2 Likes

@GregFet
If you’re not using Crestle, you don’t need to run that block… you can safely ignore it :)
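
If you want the notebook to work in both places, a small guard like this (my own sketch; the path is the Crestle one from the cell above) only runs the symlink setup when the Crestle data actually exists:

import os

# only run the Crestle-specific setup when the Crestle dataset path exists
crestle_data = '/datasets/kaggle/planet-understanding-the-amazon-from-space'
if os.path.exists(crestle_data):
    os.makedirs('/cache/planet/tmp', exist_ok=True)
    # ... then run the !ln -s lines from the notebook cell ...
else:
    print('Not on Crestle - skip this block and put the planet data under data/planet yourself')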

Thanks for your patience, got it!

@ibunny, @ecdrid

You’re correct that transforms_side_on flips the image left and right.

However, transforms_top_down is more than just vertical flipping. It’s vertical flips + horizontal flips + every possible 90-degree rotation.

I believe the naming comes from the idea that some images you would capture from the side (like taking a photo of a cat or dog) vs. some you take top-down (like satellite images, or food photos on Instagram…). In the side-on case, realistic data augmentations would be horizontal flips (except in the occasional case of the sideways or upside-down hanging cat/dog…). In top-down imaging like satellite photos, you can rotate and flip the image in every direction and it could still look like a plausible training image.

Here are some examples generated using the transform functions with cat/dog lesson1 images:

original cat image:

[image: orig_cat2]

transforms_side_on, 12 examples:

[image: side_on_cat]

transforms_top_down, 12 examples (note the mirror images + rotations):

[image: top_down_cat]

Here’s a look at transforms.py:

transforms_basic    = [RandomRotateXY(10), RandomLightingXY(0.05, 0.05)]
transforms_side_on  = transforms_basic + [RandomFlipXY()]
transforms_top_down = transforms_basic + [RandomDihedralXY()]

class RandomDihedralXY(CoordTransform):
  def set_state(self):
    self.rot_times = random.randint(0,4)
    self.do_flip = random.random()<0.5

  def do_transform(self, x):
    x = np.rot90(x, self.rot_times)
    return np.fliplr(x).copy() if self.do_flip else x

class RandomFlipXY(CoordTransform):
  def set_state(self):
    self.do_flip = random.random()<0.5

  def do_transform(self, x):
    return np.fliplr(x).copy() if self.do_flip else x

Note that with both settings, a bit of slight random rotation and brightness adjustment is included by default as well (via transforms_basic).
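
To make “every possible orientation” concrete, here is a small standalone sketch (my own, not fastai code) that enumerates the 8 dihedral variants RandomDihedralXY effectively samples from:

import numpy as np

a = np.arange(6).reshape(2, 3)   # tiny stand-in for an image

# 4 rotations x (no flip / flip) = the 8 dihedral orientations
variants = [np.fliplr(np.rot90(a, rot)) if flip else np.rot90(a, rot)
            for rot in range(4) for flip in (False, True)]

for v in variants:
    print(v, '\n')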

14 Likes

Thanks a lot @daveluo
Can you go into more detail on how you got those plots (insights)?
By calling the transformation functions and then plotting?
Thanks…

1 Like

Wow, thanks a million! That's pretty straightforward :heart_eyes:

1 Like

Yup, exactly. Slightly adjusted lesson1.py code in the augmentation section.

I changed the index from [1] to [0] to show a different cat than the original notebook. For the augmented images, I switched between aug_tfms=transforms_side_on and aug_tfms=transforms_top_down and adjusted the range/number of rows:

def get_augs():
    data = ImageClassifierData.from_paths(PATH, bs=2, tfms=tfms, num_workers=1)
    x,_ = next(iter(data.aug_dl))        # pull one randomly augmented batch
    return data.trn_ds.denorm(x)[0]      # denormalize image [0] for plotting

# swap transforms_side_on for transforms_top_down to get the other set of examples
tfms = tfms_from_model(resnet34, sz, aug_tfms=transforms_side_on, max_zoom=1.0)
ims = np.stack([get_augs() for i in range(12)])
plots(ims, rows=3)

For the original photo, removed all transforms:

tfms = tfms_from_model(resnet34, sz, aug_tfms=[])
data = ImageClassifierData.from_paths(PATH, bs=2, tfms=tfms, num_workers=1)
x,_ = next(iter(data.aug_dl))            # re-fetch a batch from the non-augmented loader
plt.imshow(data.trn_ds.denorm(x)[0])

6 Likes

Thanks a lot…
Makes things transparent…

1 Like