Lesson 3 In-Class Discussion ✅

Hi Peter, I am using LIDAR images, which sounds similar. Height as greyscale works just fine. Just 40 training images sounds challenging, though; I used 13,000 and got 92 percent. Consider getting more samples.

Thank you!

Hi William,

Is the second part of the U-Net the reverse of the CNN in the first part?

Cheers.

Functionally it is meant to “reverse” the effect of the first part, which reduces the image dimensions, but the architecture may not be the exact reverse. For example, if you adapt resnet34 for the first part, the second part will not be the exact reverse of resnet34.
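
In case a concrete example helps, here is a minimal sketch (fastai v1, using the lesson's CamVid data; the image size and batch size are placeholders) where the first part is the resnet34 encoder and the second part is a stack of fastai UnetBlocks rather than a reversed resnet34:

from fastai.vision import *

path = untar_data(URLs.CAMVID)
codes = np.loadtxt(path/'codes.txt', dtype=str)
get_y_fn = lambda x: path/'labels'/f'{x.stem}_P{x.suffix}'

data = (SegmentationItemList.from_folder(path/'images')
        .split_by_fname_file('../valid.txt')
        .label_from_func(get_y_fn, classes=codes)
        .transform(get_transforms(), size=128, tfm_y=True)
        .databunch(bs=4)
        .normalize(imagenet_stats))

learn = unet_learner(data, models.resnet34)
# learn.model[0] is the resnet34 encoder (the downsampling path); the remaining
# layers are UnetBlocks that upsample back to the input size.
print(learn.model)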

Apologies if there is a similar question out there, but can someone point me to a Planet-like dataset? I want to try out my knowledge on another dataset.

Hi all,

Houston, we have a problem. The train files for the Kaggle Planet dataset are no longer downloadable; we can torrent them ourselves, or you can follow the steps below.

I hope this is useful for Colab users; it should work for everyone else too (minus the Colab-specific steps).

https://pastebin.com/K9q2c7gY

I had the same issue with the CamVid dataset as well.
This worked like a charm! Thank you for posting!!

Hi @tarun98601. There is a similar dataset here.

It's a movie genre classification dataset based on movie posters.

Hi guys, I’ve been trying to torrent the Planet datasets for two days, but it seems like there aren’t any seeders available!

If anyone has any alternative download locations, I’d be grateful. Cheers!

Can someone please explain: in the head-pose lesson, while creating the data bunch, we write something like split_by_valid_func(lambda o: o.parent.name == '13'). How does the lambda function work here to create a validation dataset for the model?
Thanks in advance.
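
In case it helps to see the lambda in isolation, here is a minimal sketch (plain Python; the file names are made up for illustration) of what split_by_valid_func does with it: the function is called on each item's file path, and items for which it returns True go into the validation set, the rest into the training set.

from pathlib import Path

# Same predicate as in the head-pose notebook: True when the file's
# parent folder is named '13', i.e. person 13 of the BIWI dataset.
is_valid = lambda o: o.parent.name == '13'

print(is_valid(Path('biwi_head_pose/13/frame_00003_rgb.jpg')))  # True  -> validation
print(is_valid(Path('biwi_head_pose/05/frame_00003_rgb.jpg')))  # False -> training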

Hi @jin_and_tonic, were you able to download the file? I have the same problem, no seeders available.

Hi Folks, Hope y’all are doing well.

I have the following metrics from training a learner on the BIWI head pose dataset. My training loss is higher than my validation loss. I tried increasing the learning rate (from 2e-2 to 8e-2), but it still behaves the same. Can someone comment on this behavior?

Thanks,

/bin/bash: move: command not found

I am getting this error when I try to execute the following commands from the Planet competition code in lesson 3:

! mkdir %userprofile%.kaggle
! move kaggle.json %userprofile%.kaggle

What is the issue? Could anyone please help to resolve?

Hi saurabh_wadhawan, I hope you are having a beautiful day!

Have you tried mv instead of move? :frowning_face:

Cheers mrfabulous1 :grinning: :grinning:

Thanks. This worked!

But I'm facing another error now.

! kaggle competitions download -c planet-understanding-the-amazon-from-space -f train-jpg.tar.7z -p {path}
! kaggle competitions download -c planet-understanding-the-amazon-from-space -f train_v2.csv -p {path}
! unzip -q -n {path}/train_v2.csv.zip -d {path}

When I execute these, I get this error: “Could not find kaggle.json. Make sure it’s located in /root/.kaggle. Or use the environment method…” But I have already uploaded kaggle.json.

Also, this command doesn’t work:
! conda install --yes --prefix {sys.prefix} -c haasad eidl7zip
Error: /bin/bash: conda: command not found

Kindly help to resolve. Thanks.

Can you try running only the non-Windows commands:

! mkdir -p ~/.kaggle/
! mv kaggle.json ~/.kaggle/
# For Windows, uncomment these two commands
# ! mkdir %userprofile%\.kaggle
# ! move kaggle.json %userprofile%\.kaggle

I believe it should solve your problem.
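
For the second error (conda: command not found), note that Colab does not ship with conda. Assuming all the notebook needs from the eidl7zip package is the 7za binary, one possible workaround is to install p7zip via apt instead and then re-run the notebook's extraction cell:

! apt-get install -y -qq p7zip-full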

Hi, I’m having trouble creating a databunch for image classification. The dataset is called MIT indoor scenes.
The images are in folders named after their labels, but the file names that should go into the train and test sets are listed in a text file. How do I create the data bunch? Please help.
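
A minimal sketch (fastai v1) of one way to do it, assuming the usual MIT Indoor Scenes layout of Images/<label>/<file>.jpg plus a TestImages.txt listing relative paths such as kitchen/int474.jpg; the folder and file names here are assumptions, so adjust them to your copy, and this simply uses the test list as the validation set:

from fastai.vision import *

path = Path('indoor_scenes')
test_fnames = set((path/'TestImages.txt').read_text().split())

data = (ImageList.from_folder(path/'Images')
        # send an image to the validation set if its 'label/name.jpg' is listed
        .split_by_valid_func(lambda o: f'{o.parent.name}/{o.name}' in test_fnames)
        .label_from_folder()          # labels come from the parent folder names
        .transform(get_transforms(), size=224)
        .databunch(bs=32)
        .normalize(imagenet_stats))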

Hi All,

I have tried to do multi-label image classification on the dataset below:

While training, I am getting negative training and validation losses. What could be going wrong here? I am running on Colab.

Thanks
Mainak

The code is available here:

I tried to dig deeper into mixed precision training and how new it is.
Found this in the NVIDIA docs:

|Model|Speedup (with PyTorch)|
|---|---|
|NVIDIA Sentiment Analysis|4.5X speedup|
|FAIRSeq|3.5X speedup|
|GNMT|2X speedup|
|ResNet-50|2X speedup|

Q: Is Automatic Mixed Precision (AMP) dependent on a PyTorch version or can any PyTorch version enable AMP?

A: AMP with CUDA and CPP extensions requires PyTorch 1.0 or later. The Python-only build might be able to work with PyTorch 0.4, however, 1.0+ is strongly recommended.

https://docs.nvidia.com/deeplearning/performance/mixed-precision-training/index.html#faq-pytorch
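
To try it with this course's library, fastai v1 exposes mixed precision through to_fp16(); a minimal sketch (using the MNIST sample data purely as a placeholder, and assuming a GPU runtime) looks like this:

from fastai.vision import *

path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)

# to_fp16() wraps the learner with fastai's mixed precision callback; the
# large speedups in the table above need Tensor Core GPUs (e.g. V100 or T4).
learn = cnn_learner(data, models.resnet18, metrics=accuracy).to_fp16()
learn.fit_one_cycle(1)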

My question concerns the image segmentation lecture in lesson 3. I have created my own dataset. How do you determine the number of neurons in the input layer of the neural network, given that the train data is:

ImageDataBunch;

Train: LabelList (160 items)
x: SegmentationItemList
Image (3, 128, 128),Image (3, 128, 128),Image (3, 128, 128),Image (3, 128, 128),Image (3, 128, 128)
y: SegmentationLabelList
ImageSegment (1, 128, 128),ImageSegment (1, 128, 128),ImageSegment (1, 128, 128),ImageSegment (1, 128, 128),ImageSegment (1, 128, 128)
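
One thing that may help with the question above: in a fully convolutional network such as the U-Net, the input layer is determined by the number of image channels rather than by the number of pixels, so the 3 x 128 x 128 images shown here need a first layer with 3 input channels. A minimal sketch using torchvision's resnet34 (the usual encoder in this lesson) to check this:

import torchvision.models as models

resnet = models.resnet34()
print(resnet.conv1)
# Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False):
# in_channels=3 matches the 3-channel images; the 128x128 spatial size is handled
# by convolutions and strides, not by a fixed count of input neurons.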