Part 2 Lesson 8 wiki

I guess accessing images over the internet to train a model would be very slow, as we would add network latency. I would prefer to download the data. Further, you may have to train and test multiple times, and accessing the data over the internet in that case would not be advisable. Furthermore, if there is a network issue during training, I believe the process will fail abruptly, so we would have to handle those cases as well.

I am having an issue while running ImageClassifierData.from_csv.
These are the lines of code I have:

PATH='data/pascal-localisation'
JPEGS='JPEGImages'
CSV='data/pascal-localisation/tmp/lrg.csv'
tfms = tfms_from_model(f_model, sz, aug_tfms=transforms_side_on, crop_type=CropType.NO)
md = ImageClassifierData.from_csv(PATH, JPEGS, CSV, tfms=tfms, bs=bs)

With error stack trace:

IndexError Traceback (most recent call last)
in ()
3 CSV='…/data/pascal-localisation/tmp/lrg.csv'
4 tfms = tfms_from_model(f_model, sz, aug_tfms=transforms_side_on, crop_type=CropType.NO)
----> 5 md = ImageClassifierData.from_csv(PATH, JPEGS, CSV, tfms=tfms, bs=bs)

~/anaconda3/lib/python3.6/site-packages/fastai/dataset.py in from_csv(cls, path, folder, csv_fname, bs, tfms, val_idxs, suffix, test_name, continuous, skip_header, num_workers)
351 """
352 fnames,y,classes = csv_source(folder, csv_fname, skip_header, suffix, continuous=continuous)
--> 353 ((val_fnames,trn_fnames),(val_y,trn_y)) = split_by_idx(val_idxs, np.array(fnames), y)
354
355 test_fnames = read_dir(path, test_name) if test_name else None

~/anaconda3/lib/python3.6/site-packages/fastai/dataset.py in split_by_idx(idxs, *a)
364 def split_by_idx(idxs, *a):
365 mask = np.zeros(len(a[0]),dtype=bool)
--> 366 mask[np.array(idxs)] = True
367 return [(o[mask],o[~mask]) for o in a]

IndexError: arrays used as indices must be of integer (or boolean) type

I felt it was due to the wrong folder location. I ran the code from GitHub pointing to the correct folder, and it worked. Later, I moved my ipynb to the same folder and ran it one more time; it ran. But when I run it from the location I actually intend to run it from, it fails with the above error. I have been stuck with this error for a couple of days. Any help would be appreciated.

Looking into the VOC documentation, I got the impression the bbox was represented like this:
[x_min, y_min, x_max, y_max]
[155, 96, 196, 174] <- car bbox

How did you know that the last two items in the bounding box list represented width and height?
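For what it's worth, here is a minimal sketch of the two conventions and a conversion between them, assuming the JSON annotations store boxes as [x, y, width, height] (COCO style) while the VOC XML files use [x_min, y_min, x_max, y_max]. The function names and the -1 offsets (inclusive pixel coordinates) are my own convention for illustration, not necessarily the notebook's:

import numpy as np

# Sketch only: converting between [x, y, width, height] and
# [x_min, y_min, x_max, y_max]; the -1 assumes inclusive pixel coordinates.
def xywh_to_minmax(bb):
    x, y, w, h = bb
    return np.array([x, y, x + w - 1, y + h - 1])

def minmax_to_xywh(bb):
    x_min, y_min, x_max, y_max = bb
    return np.array([x_min, y_min, x_max - x_min + 1, y_max - y_min + 1])

In the [x, y, width, height] form the last two items are the box size, while in the VOC XML form they are the far corner, so which one you have can only be read off the documentation of the specific annotation file.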

Hi Vijay,

Followed these exact steps and I’m still not able to use symbols. I extracted to C:\Program Files\Microsoft VS Code\bin, which is in my PATH when I run the set command at the terminal, and I still can’t search for something like open_img. Any thoughts?

I hope you have selected the interpreter for the fastai environment (the environment.yml that comes with the downloaded code).
Once the interpreter is set, you should be able to navigate.

[ EDIT ]: it works :slight_smile: The problem came from the selection of the Python interpreter (ctrl+shift+p: Python: Select Interpreter).
The default path to my fastai environment is set up correctly in my user settings in Visual Studio Code ("python.pythonPath": "C:\\Users\\username\\Anaconda3\\envs\\fastai\\python.exe",) but I have to select it (ctrl+shift+p) again after each restart of Visual Studio Code. Any advice on how to avoid that?


Hi @Vijay and @Patrick, I cannot get “Go to symbol (ctrl+t)” to work in Visual Studio Code.

  • I’m using Windows 10 and Visual Studio Code is working.
  • I did open my fastai folder and select the python interpreter of my fastai virtual environment (I’m using an NVIDIA GPU on Windows).
  • I downloaded ctags (universal ctags, and I also tried exuberant ctags) and unzipped it into a folder called ctags inside my fastai folder: C:\Users\username\fastai\ctags\ctags.exe
  • I updated my Windows PATH with the path to ctags.exe
  • I updated my user parameters in Visual Studio Code with : "python.workspaceSymbols.ctagsPath": "C:\\Users\\username\\fastai\\ctags\\ctags.exe",

What else can I do? Thanks.

Hi, my notes on lesson 8. Hope they can help new fastai fellows :slight_smile:

Hey guys, check out my new blog on Introduction to Object Detection. Hope you enjoy it and feel free to comment in case of any queries or suggestions.

Hi,

I’m trying to extend the bounding boxes in lesson 8 to ‘rotated bounding boxes’. I’m doing this by passing in four coordinates: top-right (x,y), bottom-right (x,y), bottom-left (x,y) and top-left (x,y).

When using the same code, only the first four values get passed through the different datasets and loaders:

tfms = tfms_from_model(f_model, sz, crop_type=CropType.NO, tfm_y=tfm_y, aug_tfms=augs)
md = ImageClassifierData.from_csv(PATH, JPEGS, BB_CSV, tfms=tfms, continuous=True, bs=4)
x,y=next(iter(md.aug_dl))
bbox = to_np(y[1])
print(bbox)
[194. 368. 217. 400. 0. 0. 0. 0.]

How come? I’m trying to understand it but I can’t figure it out. Hints are much appreciated, thanks in advance!

Hey @ramon,

I started with lecture 8 today, and I guess it only needs 2 × (x, y) = 4 coordinates for the box to be sufficiently defined.

Do you use training data with rotated bounding boxes?
With the Pascal dataset I would guess that it will learn the “aligned” bounding boxes.

Best regards
Michael

Hi @MicPie, thanks for your reply. Yes, I know the bounding boxes in the lecture only use two coordinates (top-left and bottom-right). But I’m trying to extend this to a different dataset that uses rotated bounding boxes. They are defined by 8 values; the csv looks like ImageId, y1 x1 y2 x2 y3 x3 y4 x4
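To make the format concrete, here is a hypothetical sketch of writing such a CSV in the same space-separated single-column style as BB_CSV in lesson 8; the file name, column names and corner values below are all made up:

import pandas as pd

# Hypothetical example: one space-separated 'y1 x1 y2 x2 y3 x3 y4 x4'
# string per image, packed into a single column like BB_CSV.
rot_bbs = {'img_001.jpg': [96, 196, 174, 196, 174, 155, 96, 155]}  # made-up corners
df = pd.DataFrame({'fn': list(rot_bbs.keys()),
                   'bbox': [' '.join(str(v) for v in bb) for bb in rot_bbs.values()]},
                  columns=['fn', 'bbox'])
df.to_csv('rot_bb.csv', index=False)

With continuous=True, from_csv should then read all eight values back as floats per image, which matches the length-8 array printed earlier in the thread.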

I am currently watching lesson 9, where Jeremy talks about the transformations in the “Bbox only” section of the jupyter notebook. As you can see in the transformed/rotated pictures of the woman, the bounding box is not rotated but only resized (and still aligned vertically and horizontally).

I guess to rotate the bounding box you have to adapt the class “CoordTransform”.
In addition, you have to use “tfm_y=TfmType.COORD” to transform the coordinates of the bounding box too.

However, maybe the pixel transformation with TfmType.PIXEL can be of use (see the class “TfmType”; it should be covered in later lessons on image segmentation).
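If I remember the fastai transforms correctly, CoordTransform splits the dependent variable into groups of four and treats each group as an axis-aligned box: the box is rasterised into a mask, the mask is transformed like an image, and an axis-aligned box is read back out of the result. The snippet below is only an illustration of that idea under that assumption, not the actual fastai source:

import numpy as np

# Illustration only: transform box coordinates by rasterising them into a
# mask, applying the same transform as the image, and reading an
# axis-aligned box back out of the transformed mask.
def coords_to_mask(bb, shape):
    # bb = [y_min, x_min, y_max, x_max] in pixel coordinates
    mask = np.zeros(shape)
    bb = np.array(bb).astype(int)
    mask[bb[0]:bb[2], bb[1]:bb[3]] = 1.
    return mask

def mask_to_coords(mask):
    rows, cols = np.any(mask, axis=1), np.any(mask, axis=0)
    if not rows.any():                       # empty mask ->
        return np.zeros(4)                   # coordinates come back as zeros
    y_min, y_max = np.where(rows)[0][[0, -1]]
    x_min, x_max = np.where(cols)[0][[0, -1]]
    return np.array([y_min, x_min, y_max, x_max])

def transform_coords(bb, img_shape, do_transform):
    # do_transform: whatever transform is applied to the image itself
    return mask_to_coords(do_transform(coords_to_mask(bb, img_shape)))

If a group of four happens to describe an 'upside-down' box (its first corner below or to the right of its second), the mask slice is empty and the whole group comes back as zeros, which would explain the trailing 0. 0. 0. 0. in the output above.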

Hi @MicPie, yes indeed, the class “CoordTransform” is the part of the code that is replacing the values with zeros. That’s what I was looking for, thanks for your help. I’m trying to add a new TfmType to handle rotated boxes. Maybe in the end it’s better to use the PIXEL type, but this way I’m getting to understand the fast.ai framework better.

Fixed it! If anyone is interested, let me know :wink:

Nice work, it looks great!

Were you also able to train the neural network?
I would be interested to know whether you use the coordinates of the four corners, or maybe two corners with an angle.

Best regards
Michael

Hi @MicPie,

Not yet, but I’ll continue next weekend. I’ll keep you updated.
By the way, I’m ultimately trying to solve the Kaggle Airbus challenge (https://www.kaggle.com/c/airbus-ship-detection); interested in teaming up?

Kind regards,
Ramon

I am interested

Hi @tcapelle,

Thanks for your response. Interested in the code and/or teaming up?

Regards,
Ramon

Would be interested in teaming up too!

Great, I added you to the conversation. Did you get that message?