I got dining table too; the highest two probabilities were dining table (5.9479e-01) and chair (3.5335e-01).
Seconded! Really great book.
I tried multiple times, but couldn’t make out the name of the debugger one of the students (Elsa I think) mentioned in the video https://youtu.be/Z0ssNAbe81M?t=6551. Did anyone manage to capture the name of the debugger he uses?
I think he said something like:
from IPython.core.debugger import Tracer
Is that what you are looking for?
Indeed, Tracer takes colors as a parameter, so it makes sense in the context of what’s mentioned in the video. Thank you!
I guess accessing images from the internet to train a model would be very slow because of the added network latency, so I would prefer to download them. Furthermore, you may have to run training and testing multiple times, and fetching the data over the network each time would not be advisable. Also, if there is a network issue during training, the process will fail abruptly, so those cases would have to be handled.
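To illustrate the last point, here is the kind of retry wrapper I have in mind, as a rough sketch (the helper name and parameters are mine, not from any library):

```python
import time

def with_retry(fn, retries=3, backoff=1.0):
    """Call fn(), retrying on transient network errors (OSError covers
    socket and connection failures) instead of failing the whole run."""
    for attempt in range(retries):
        try:
            return fn()
        except OSError:
            if attempt == retries - 1:
                raise  # out of retries: let the caller handle it
            time.sleep(backoff * (attempt + 1))  # simple linear backoff

# usage sketch: wrap each image fetch so one flaky request
# doesn't abort hours of training, e.g.
# img_bytes = with_retry(lambda: urllib.request.urlopen(url).read())
```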
I am having an issue while running ImageClassifierData.from_csv.
These are the lines of code I have:
tfms = tfms_from_model(f_model, sz, aug_tfms=transforms_side_on, crop_type=CropType.NO)
md = ImageClassifierData.from_csv(PATH, JPEGS, CSV, tfms=tfms, bs=bs)
With error stack trace:
IndexError Traceback (most recent call last)
4 tfms = tfms_from_model(f_model, sz, aug_tfms=transforms_side_on, crop_type=CropType.NO)
----> 5 md = ImageClassifierData.from_csv(PATH, JPEGS, CSV, tfms=tfms, bs=bs)
~/anaconda3/lib/python3.6/site-packages/fastai/dataset.py in from_csv(cls, path, folder, csv_fname, bs, tfms, val_idxs, suffix, test_name, continuous, skip_header, num_workers)
352 fnames,y,classes = csv_source(folder, csv_fname, skip_header, suffix, continuous=continuous)
--> 353 ((val_fnames,trn_fnames),(val_y,trn_y)) = split_by_idx(val_idxs, np.array(fnames), y)
355 test_fnames = read_dir(path, test_name) if test_name else None
~/anaconda3/lib/python3.6/site-packages/fastai/dataset.py in split_by_idx(idxs, *a)
364 def split_by_idx(idxs, *a):
365 mask = np.zeros(len(a),dtype=bool)
--> 366 mask[np.array(idxs)] = True
367 return [(o[mask],o[~mask]) for o in a]
IndexError: arrays used as indices must be of integer (or boolean) type
I felt it was due to the wrong folder location. I ran the code from GitHub pointing to the correct folder, and it worked. Later, I moved my ipynb to the same folder and ran it one more time; it ran. But when I run it from the location I actually intend to run it from, it fails with the above error. I have been stuck with this error for a couple of days. Any help would be appreciated.
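In case it helps anyone hitting the same thing: the IndexError happens inside split_by_idx when the validation indices are not an integer (or boolean) array. A minimal standalone reproduction of the mechanism (simplified from the traceback, not the actual fastai source), which suggests checking what val_idxs ends up being in your setup:

```python
import numpy as np

def split_by_idx(idxs, *a):
    # simplified version of the function in the traceback:
    # mask marks which rows go to the validation split
    mask = np.zeros(len(a[0]), dtype=bool)
    mask[np.array(idxs)] = True
    return [(o[mask], o[~mask]) for o in a]

fnames = np.array(['a.jpg', 'b.jpg', 'c.jpg', 'd.jpg'])
y = np.array([0, 1, 0, 1])

# integer indices work fine:
(val_f, trn_f), (val_y, trn_y) = split_by_idx([0, 2], fnames, y)

# non-integer indices raise the exact error from the traceback:
try:
    split_by_idx([0.0, 2.0], fnames, y)
except IndexError as e:
    print(e)  # arrays used as indices must be of integer (or boolean) type
```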
Looking into the VOC documentation, I got the impression the bbox is represented as:
[x_min, y_min, x_max, y_max]
[155, 96, 196, 174] <- car bbox
How did you know that the last two items in the bounding box list represented width and height?
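For anyone comparing the two conventions: VOC annotations store [x_min, y_min, x_max, y_max], while some other formats (e.g. COCO) store [x, y, width, height]. Converting between them is just a subtraction (the helper name is mine):

```python
def voc_to_xywh(box):
    # [x_min, y_min, x_max, y_max] -> [x_min, y_min, width, height]
    x_min, y_min, x_max, y_max = box
    return [x_min, y_min, x_max - x_min, y_max - y_min]

voc_to_xywh([155, 96, 196, 174])  # -> [155, 96, 41, 78]
```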
I followed these exact steps and I’m still not able to use symbols. I extracted it to C:\Program Files\Microsoft VS Code\bin, which is in my PATH when I run the set command at the terminal, and I still can’t search for something like open_img. Any thoughts?
I hope you have selected the interpreter with the fastai environment (environment.yml) that ships with the downloaded code. Once the interpreter is set, you should be able to navigate.
[EDIT]: it works. The problem came from the selection of the Python interpreter (Python: Select Interpreter). The default path to my fastai environment is set up correctly in my user settings in Visual Studio Code ("python.pythonPath": "C:\\Users\\username\\Anaconda3\\envs\\fastai\\python.exe"), but I have to select it (Ctrl+Shift+P) after each restart of Visual Studio Code. Any advice on how to avoid that?
- I’m using Windows 10 and Visual Studio Code is working.
- I opened my fastai folder and selected the Python interpreter of my fastai virtual environment (I’m using an NVIDIA GPU on Windows).
- I downloaded ctags (Universal Ctags, and I tried Exuberant Ctags as well) and unzipped it into a folder called ctags inside my fastai folder.
- I updated my Windows PATH with the path to the ctags folder.
- I updated my user parameters in Visual Studio Code with:
What else can I do ? Thanks.
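For reference, these are the user settings I would double-check (they are from older versions of the Microsoft Python extension for VS Code, so the setting names may have changed since; the paths below are placeholders):

```json
{
    "python.pythonPath": "C:\\Users\\username\\Anaconda3\\envs\\fastai\\python.exe",
    "python.workspaceSymbols.enabled": true,
    "python.workspaceSymbols.ctagsPath": "C:\\path\\to\\ctags\\ctags.exe",
    "python.workspaceSymbols.rebuildOnStart": true
}
```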
Hey guys, check out my new blog post introducing object detection. I hope you enjoy it; feel free to comment if you have any queries or suggestions.
I’m trying to extend the bounding boxes in lesson 8 to rotated bounding boxes. I’m doing this by passing in four coordinates: top-right (x, y), bottom-right (x, y), bottom-left (x, y) and top-left (x, y).
Using the same code, only the first four values get passed through the different datasets and loaders:
tfms = tfms_from_model(f_model, sz, crop_type=CropType.NO, tfm_y=tfm_y, aug_tfms=augs)
md = ImageClassifierData.from_csv(PATH, JPEGS, BB_CSV, tfms=tfms, continuous=True, bs=4)
bbox = to_np(y)
[194. 368. 217. 400. 0. 0. 0. 0.]
How come? I’m trying to understand but I can’t figure it out. Hints are very appreciated, thanks in advance!
I was starting lecture 8 today, and I guess it only needs 2 × (x, y) = 4 coordinates for the box to be sufficiently defined.
Do you use training data with rotated bounding boxes?
With the Pascal dataset I would guess that it will learn the “aligned” bounding boxes.
Hi @MicPie, thanks for your reply. Yes, I know the bounding boxes in the lecture only use two coordinates (top-left and bottom-right), but I’m trying to extend this to a different dataset that uses rotated bounding boxes. Each box is defined by 8 values; the CSV looks like: ImageId, y1 x1 y2 x2 y3 x3 y4 x4
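For what it’s worth, a row in that layout can be parsed into a (4, 2) array of corners like this (a sketch assuming exactly the column order described; the sample values are made up):

```python
import numpy as np

row = "img_001,96 155 96 196 174 196 174 155"  # ImageId, y1 x1 y2 x2 y3 x3 y4 x4
image_id, coord_str = row.split(',')
corners_yx = np.array(coord_str.split(), dtype=np.float32).reshape(4, 2)
corners_xy = corners_yx[:, ::-1]  # flip to (x, y) order if a plotting API expects that
```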
I am currently watching lesson 9 and there Jeremy talks about the transformations in the “Bbox only” section in the jupyter notebook. As you can see at the transformed/rotated pictures of the woman the bounding box is not rotated and is only resized (and still aligned vertically and horizontally).
I guess to rotate the bounding box you have to adapt the class “CoordTransform”.
In addition, you have to use “tfm_y=TfmType.COORD” to transform the coordinates of the bounding box too.
However, maybe the pixel transformation can be of use with TfmType.PIXEL (see class “TfmType”, should be covered in later lessons on image segmentation).
Hi @MicPie, yes indeed, the class "CoordTransform" is the part of the code that is replacing the values with zeros. That’s what I was looking for, thanks for your help. I’m trying to add a new TfmType to handle rotated boxes. Maybe in the end it’s better to use the PIXEL type, but this way I’m getting to understand the fast.ai framework.
Nice work, it looks very nice!
Were you also able to train the neural network?
I would be interested to know whether you use the coordinates of the four corners, or maybe two corners with an angle.
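On that last point: if the corner order is known, a (center, width, height, angle) parameterization can be derived from the four corners. A sketch (the function name and the top-left, top-right, bottom-right, bottom-left ordering are my assumptions):

```python
import math

def corners_to_box(corners):
    # corners: four (x, y) points in order top-left, top-right,
    # bottom-right, bottom-left
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = corners
    cx = (x0 + x1 + x2 + x3) / 4          # center = mean of the corners
    cy = (y0 + y1 + y2 + y3) / 4
    w = math.hypot(x1 - x0, y1 - y0)      # length of the top edge
    h = math.hypot(x3 - x0, y3 - y0)      # length of the left edge
    angle = math.atan2(y1 - y0, x1 - x0)  # rotation of the top edge, radians
    return cx, cy, w, h, angle

# an axis-aligned 4x2 box comes back with angle 0:
corners_to_box([(0, 0), (4, 0), (4, 2), (0, 2)])  # -> (2.0, 1.0, 4.0, 2.0, 0.0)
```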