Platform: Colab ✅

Yes, when I try to `untar_data` `URLs.PETS` into a path variable.

When I print path I get the following output: <bound method of PosixPath(’/root/.fastai/data/oxford-iiit-pet’)>

Not sure how to proceed further.

Oh, I got it. There’s nothing wrong; the data is being extracted to the desired location. You just need to make a soft link to that location to be able to browse it. Execute the following line and you’ll be able to see the directory :slightly_smiling_face:

!ln -s /root/.fastai/data /content

Hi,
I am new to Colab and facing an issue. Shouldn’t there be a folder called ‘fastai-v3’ created in my Google Drive after the authentication? There is no such folder present.

Also, as I understand it, this should save the progress of my notebook and I should be able to pick up from the cell where I left off after starting a new session, but I am not able to do so; the previous instance is wiped completely. Is this correct or not?

Hi yashbansal6, hope you are having a beautiful day.

When you run the notebook, nothing is saved permanently. Anything you explicitly save to your Drive will persist; everything else is only kept while you are using the session. When that session ends, you are not able to start where you left off.

You must save the notebook as you go. Google Colab normally times out after 90 minutes of no mouse activity, and the session itself is killed every 12 hours. As you stated, the previous session is wiped out completely.

However, when using a notebook I normally rename it and save it intermittently to my Google Drive as I am working on it. I usually find that if I leave the browser running and my account logged in, I can just reconnect and all the cells are still there; I just need to re-run them.

This process can be a bit tedious, but it’s free. I believe they do a paid version in the States; not sure if that works the same way.

Cheers mrfabulous1 :smiley: :smiley:

If training is taking a long time on Google Colab, check that you are connected to a Hosted Runtime rather than a Local Runtime (top right of the page).

To be able to use the "/" operator, the object needs to be an instance of Path. So simply wrap your path like so:

path = Path("/content/drive/My Drive/Umes")

This will make all the subsequent variables of type Path when you use the “/” operator, so update path_img and path_lbl to:

path_img = path/'Images'
path_lbl = path/'labels'

The main issue was path_lbl not being an instance of Path.
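For anyone wondering why this works: pathlib’s `Path` overloads the `/` operator to join path components, and the result is itself a `Path`. A minimal standalone sketch (the folder names are just the examples from the post above):

```python
from pathlib import Path

path = Path("/content/drive/My Drive/Umes")
path_img = path / "Images"   # joining with / returns another Path
path_lbl = path / "labels"

print(type(path_img).__name__)  # PosixPath on Linux/Colab
print(path_lbl.name)            # labels
```

A plain string on the left of `/` would raise a `TypeError`, which is why wrapping the root in `Path(...)` fixes everything downstream.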


Hello!
I was able to take the datasets I created for image segmentation and run training using the U-Net model. I was also able to load the model and get good predictions on the input images. My question is that I am still unfamiliar with the in-depth concepts of the loaded model. I ran learn.load and learn.summary() to see the architecture. How am I able to determine the number of neurons, weights, biases, and layers used in the U-Net model that the fastai library offers?


I’m not sure if you’re talking about fastai v1 or v2 but in general:

  • Try to understand the DynamicUnet module in fastai
  • Learn how PyTorch hooks are used to build the decoder part of the U-Net
  • A simple exercise would be to modify the U-Net into an autoencoder (no skip connections)
  • You can customize the backbone of the network, so try different flavors of ResNet

The docs would be an ideal place to learn more about them.
fastai v1: https://docs.fast.ai/vision.models.unet.html
fastai v2: https://dev.fast.ai/vision.models.unet
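As a starting point for the “how many weights and layers” question: a fastai Learner exposes its underlying PyTorch module as `learn.model`, and you can count parameters and leaf layers of any `nn.Module` yourself. A hedged sketch (the helper name `model_stats` is my own, not a fastai API), demonstrated on a toy model:

```python
import torch.nn as nn

def model_stats(model: nn.Module) -> dict:
    """Count total/trainable parameters and leaf layers of a PyTorch module."""
    total = sum(p.numel() for p in model.parameters())
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    # leaf modules (no children) are the actual layers: Conv2d, BatchNorm2d, ReLU, ...
    layers = [m for m in model.modules() if not list(m.children())]
    return {"total_params": total, "trainable_params": trainable, "n_layers": len(layers)}

# toy demo; for the real thing you'd pass learn.model instead
toy = nn.Sequential(nn.Linear(10, 5), nn.ReLU(), nn.Linear(5, 2))
print(model_stats(toy))  # {'total_params': 67, 'trainable_params': 67, 'n_layers': 3}
```

The same call on a DynamicUnet will give you the parameter count of the whole encoder + decoder, which is usually the number people mean by “how big is the model”.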

Hello @kshitijpatil09, thank you for your response! Is it possible to use the saved model and test it on images that were not used in training? I have been studying inference learning. How do I go about this process?

I’m not well versed with fastai v1. Could you be specific about which version you’re using?

I’m going through Practical Deep Learning for Coders v3; yes, I believe this is fastai v1.

Yes, you’d use an inference learner. It’s covered in the second lesson; Jeremy shows an example, but the key steps are:

  1. learn.export('mymodelname.pkl')
  2. learn = load_learner('mymodelname.pkl')
  3. learn.predict('SomeFileName.jpg')

This presumes an image model.


Thank you! Last question: in this lesson (image segmentation) you can see the prediction for the single image you want, and during training you can see the metrics. On a new test set, using the loaded model, is it possible to determine the accuracy of the prediction for a new image? I can visualize the metrics, but what is the process for determining its specific accuracy?
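One common way to score a segmentation prediction against a ground-truth mask is per-pixel accuracy. A minimal sketch in plain PyTorch, assuming you have both masks as integer tensors of class indices of the same shape (the output of the learner’s predict call would need to be unpacked into such a tensor first):

```python
import torch

def pixel_accuracy(pred_mask: torch.Tensor, true_mask: torch.Tensor) -> float:
    """Fraction of pixels whose predicted class matches the ground truth."""
    return (pred_mask == true_mask).float().mean().item()

pred = torch.tensor([[0, 1], [1, 1]])
true = torch.tensor([[0, 1], [0, 1]])
print(pixel_accuracy(pred, true))  # 0.75 — 3 of 4 pixels match
```

Averaging this over every image in a held-out test set gives you an overall accuracy figure for the new data, comparable to the metric shown during training.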

How do I get these images to show up in the notebooks? Thanks! :slight_smile:

Here’s an example from 00_notebook_tutorial.ipynb:

I was also not able to get them to show up in Colab. Have you tried specifying the full path instead of a relative one? Like /content/images/notebook_tutorial/add.png

Also, try replacing it with the html tag:

<img src="/content/images/notebook_tutorial/add.png" alt="add">

Hi deep-learner, hope all is well!

Not sure how you loaded the notebook, but image directories often don’t get loaded automatically.
Have you checked that the images are actually in that location using the !ls -al command?
On Colab you can drag images into the correct directory and double-click them to view them and confirm they are there.
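If you’d rather check from Python than shell out to `ls`, here’s a small pathlib sketch (the directory path is just the example from the posts above, and `list_images` is a made-up helper name):

```python
from pathlib import Path

def list_images(folder, pattern="*.png"):
    """Return sorted image file names in folder, or [] if the folder is missing."""
    d = Path(folder)
    return sorted(p.name for p in d.glob(pattern)) if d.exists() else []

print(list_images("/content/images/notebook_tutorial"))
```

An empty list tells you immediately whether the problem is the path in the markdown cell or the files themselves never being uploaded.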

cheers mrfabulous1 :smiley: :smiley:

It’s better to first change the directory using %cd /content/drive/My Drive and then download the data. (Note that in a notebook %cd persists across cells, whereas !cd only changes the directory of its own subshell.)


Did you get the solution? I’m facing the same issue.

Hi! I have an “error” when running Lesson 2’s code.

I’ve copied and pasted the code into a new document, and everything is fine except when I run the code to find the learning rate.

I run: learn.lr_find() and I get: image
It’s just a bunch of #na#'s

The graph looks normal: image

Is there something wrong with the code? Everything seems to be working fine except for the #na# part.

Thanks! :slight_smile:

Hi joegold, hope you’re having a jolly day!

It is my understanding that the validation loss is not calculated during lr_find(), hence the #na#.

Cheers mrfabulous1 :smiley: :smiley:
