Platform: Colab ✅

I’m experiencing the same problem. Did you find a solution to it?

I’ve been looking for a solution but have not found one yet :cry:

I have also tried manually uninstalling fastprogress 0.1.18 and installing 0.1.19, but it didn’t work either.

Yes, I am getting the same error regarding the version of fastprogress.

Same thing here. I tried updating fastprogress to 0.1.19, but it didn’t help.

fastprogress 0.1.19 is now included in the dist. Run the install script again and it should be OK; if not, restart your instance and run it again.

edit: not my doing, someone in dev fixed it
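If the install script still picks up the old version, pinning the package manually in a cell should work too (a minimal sketch; 0.1.19 is the version mentioned above):

!pip install fastprogress==0.1.19

Restart the runtime afterwards so the freshly installed version actually gets imported.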


Hello. I am having an issue that could be related to Google Colaboratory. Any help would be appreciated. Thanks.
https://forums.fast.ai/t/fastai-imagedownloader-widget-class-chromedriver-issue-lesson-1-2-downloader

!cp train_v2.csv content/gdrive/My Drive/fastai-v3/data/planet

Whenever I try to copy a file to My Drive, the following error occurs:

cp: target ‘Drive/fastai-v3/data/planet’ is not a directory

How do I resolve this issue?

“My Drive” has a space in it; you need to escape the space or quote the whole path.

/content/gdrive/My\ Drive/
or
"/content/gdrive/My Drive/"

I tried it, but it displays an error: cannot create file, no such file or directory.

Make sure you use the full path and that the directories exist; create them with !mkdir if needed.

!cp train_v2.csv content/gdrive/My Drive/fastai-v3/data/planet

should be something like

!cp /content/data/planet/train_v2.csv /content/gdrive/My\ Drive/fastai-v3/data/planet/
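If the target directory doesn’t exist yet, create it first (a sketch, assuming the source file really is at /content/data/planet/):

!mkdir -p /content/gdrive/My\ Drive/fastai-v3/data/planet
!cp /content/data/planet/train_v2.csv /content/gdrive/My\ Drive/fastai-v3/data/planet/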

Here’s my notebook for reference. You may copy it to your Drive and run it.

I’ve just added two cells of code and made a minor change in the untar_data function call.

NOTE: untar_data (and all other functions dealing with the data) will take much longer to execute than if the data were stored locally on your Colaboratory instance. Hence, there’s a tradeoff between speed and permanence.
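For example, something like this keeps the dataset on Drive so it survives the instance being recycled (a minimal sketch for fastai v1; URLs.PETS and the Drive layout are illustrative assumptions, not necessarily what the notebook uses):

from fastai.vision import *  # fastai v1, as in the course notebooks

# Pointing dest at Drive persists the data between Colab sessions,
# at the cost of slower reads than local instance storage.
drive_data = Path('/content/gdrive/My Drive/fastai-v3/data')
path = untar_data(URLs.PETS, dest=drive_data)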

Has anyone run into this issue? The AWD_LSTM pretrained model is not downloaded when I try to run lesson3-imdb.ipynb.

It helped, thanks!

path
PosixPath('/content/gdrive/My Drive/fastai-v3/data/planet')

Whenever I try the command
! kaggle competitions download -c planet-understanding-the-amazon-from-space -f train-jpg.tar.7z -p {path}
instead of downloading into the path, it creates a folder literally named {path} and downloads into that.
I also tried ! kaggle competitions download -c planet-understanding-the-amazon-from-space -f train-jpg.tar.7z -p {'/content/gdrive/My Drive/fastai-v3/data/planet'}
but that creates a folder named content and downloads into it.
How do I download directly into My Drive from Colab?

Could be the space in “My Drive”; try escaping it.

In the second example you don’t need the curly brackets or quotation marks, just escape the space.
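For example, the same command with the space escaped:

! kaggle competitions download -c planet-understanding-the-amazon-from-space -f train-jpg.tar.7z -p /content/gdrive/My\ Drive/fastai-v3/data/planet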

Hi. I guess this is a bug in the notebook. It should be learn = language_model_learner(data_lm, "AWD_LSTM", drop_mult=0.3), with quotation marks around AWD_LSTM.


UPDATE:

Thanks to @jbuzza for reminding me that the latest fastai library already fixes the bug, so there’s no need to use the quotes anymore.
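For reference, the unquoted form that works on a current fastai v1 install (data_lm being the language-model DataBunch built earlier in the notebook):

from fastai.text import *  # provides language_model_learner and AWD_LSTM

# AWD_LSTM is passed as a class, not a string, on recent fastai versions
learn = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.3)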

I’m getting the following error when I try to run the image segmentation code from lesson 3:
RuntimeError: The size of tensor a (11520) must match the size of tensor b (172800) at non-singleton dimension 1
I tried reducing bs to 4, but the same error pops up.

It’s happening because ../My Drive/.. has a space between the words.
Wrapping the interpolated path in quotes fixes it. Try this bit of code:

! kaggle competitions download -c planet-understanding-the-amazon-from-space -f train-jpg.tar.7z -p "{path}"  
! kaggle competitions download -c planet-understanding-the-amazon-from-space -f train_v2.csv -p "{path}"
! unzip -q -n "{path}"/train_v2.csv.zip -d "{path}"

The quotes should not be needed; the syntax text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5) works for me. Perhaps the latest fastai version was not loaded?

I ran into an issue with the Lesson 2 notebook, since Colab doesn’t support the ImageCleaner widget. I hacked up an alternative that saves the top n highest-loss images to a folder called top_losses so they can be cleaned out-of-band instead. Passing it along in case someone else finds it useful:

(copy/paste into a new cell just after the ImageCleaner intro blurb)

import ntpath

# Gather the top-loss indices for both the validation and training sets.
cleaner_tuples = []
for ds_type in [DatasetType.Valid, DatasetType.Train]:
  ds, idxs = DatasetFormatter().from_toplosses(learn, n_imgs=30, ds_type=ds_type)
  cleaner_tuples.append((ds_type, ds, idxs))

# Save each top-loss image into top_losses/, prefixed with its label so
# mislabeled images are easy to spot when cleaning by hand.
top_loss_path = path/'top_losses'
top_loss_path.mkdir(parents=True, exist_ok=True)
top_loss_filenames = set()

for ds_type, ds, idxs in cleaner_tuples:
  for idx in idxs:
    full_filepath = ds.x.items[idx]
    _, filename = ntpath.split(full_filepath)

    image_data = ds[idx][0]
    image_category = ds[idx][1]
    image_fullpath = top_loss_path/f'{image_category}_{filename}'
    image_data.save(image_fullpath)
    top_loss_filenames.add(image_fullpath)

print(f'{len(top_loss_filenames)} top loss images saved to {top_loss_path}')
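If you want to pull the folder down for the out-of-band cleaning, one option is to zip it and use Colab’s download helper (a sketch; google.colab.files is Colab-specific):

from google.colab import files

# Zip the saved images and trigger a browser download.
!zip -r -q top_losses.zip "{top_loss_path}"
files.download('top_losses.zip')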