A walk with fastai2 - Vision - Study Group and Online Lectures Megathread

absolutely, thank you :slight_smile:

[Lesson6 - CustomUnet]

Hi! I have a question about your custom UNet. After adding the new head to the model, you initialize the layers with apply_init(nn.Sequential(layers[3], layers[-2]), init).

Why do you pass in this new Sequential model, nn.Sequential(layers[3], layers[-2])? I am a bit confused by the indices. Which layers do they address?

[Lesson6 - RetinaNet] - Inference

Hi! I walked through your inference functions in the GitHub repo, but I am not quite sure what is stored in the output parameter of your functions, or where it comes from.

def process_output(output, i, scales, ratios, detect_thresh=0.25):

def show_preds(img, output, idx, scales, ratios, detect_thresh=0.25, classes=None):

def get_predictions(output, idx, detect_thresh=0.05):

Which function should I use for inference: get_predictions or process_output?

That’s taken from the fastai source code. The simplest way to debug this is to add a line in that __init__ function that prints them out :wink:
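
For example, a minimal sketch of that debugging idea (assuming layers is the list of modules built in the custom UNet before apply_init is called):

for i, layer in enumerate(layers):
    print(i, layer.__class__.__name__)  # shows which modules layers[3] and layers[-2] actually are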

Output comes from the model, and if you notice, get_predictions calls process_output, so use get_predictions.
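
A rough sketch of that flow (xb as a batch of images and the threshold value are my assumptions; check the notebook for the exact return values of get_predictions):

import torch

learn.model.eval()
with torch.no_grad():
    output = learn.model(xb)  # raw RetinaNet output for the batch
preds = get_predictions(output, idx=0, detect_thresh=0.25)  # decoded boxes for image 0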


Thanks for your answer! Have you tried this yourself? When I do learner.predict I get an error:

samples = [(s[0], *clip_remove_empty(*s[1:])) for s in samples]
TypeError: clip_remove_empty() missing 2 required positional arguments: 'bbox' and 'label'

learn.predict will not work. predict is not set up for object detection yet, which is why we are doing this manually. You can read more in this discussion: Object detection using fastai v2


Thanks a lot!

This is a regression problem, so using accuracy as the metric is not ideal. Say for a particular input the predictions were 0.4, 0.3, 0.1 over three epochs: the loss decreases but the accuracy remains the same (the threshold being 0.5). You can take this off your to-do list @muellerzr :slight_smile: (I should have realised that accuracy and regression don’t go together.)
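
A tiny numeric sketch of that effect (hypothetical target of 0.0; MSE stands in for whatever loss is actually used):

import torch
import torch.nn.functional as F

target = torch.tensor([0.0])
for pred in (0.4, 0.3, 0.1):                     # predictions over three epochs
    p = torch.tensor([pred])
    loss = F.mse_loss(p, target)                 # keeps decreasing
    acc = ((p > 0.5).float() == target).float()  # stays at 1.0 the whole time
    print(f"pred={pred}  loss={loss:.3f}  acc={acc.item()}")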


That’s okay, it’s easy to think the opposite (I didn’t even realize it while reading what was right in front of me!). Great job investigating :slight_smile:


Just a heads up, I’ve added a snippet to the Segmentation notebook discussing weighted loss functions :slight_smile:
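
For anyone curious before opening it, the general idea is along these lines (the weight values here are hypothetical; the actual snippet is in the notebook):

import torch
from fastai2.vision.all import CrossEntropyLossFlat

weights = torch.tensor([1.0, 5.0])  # hypothetical per-class weights, upweighting the rarer class
loss_func = CrossEntropyLossFlat(axis=1, weight=weights)  # pass to Learner(..., loss_func=loss_func)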


What is the reason for specifying size=224 for most of the datasets? Any particular study showing why it’s effective?

I’d go watch last year’s course (first two lessons IIRC); Jeremy goes over it and why it ‘just works’.


I agree it just works. In the recent BengaliAI Kaggle competition, a lot of people used that image size to get over the 0.98 mark on the public LB.

In the fastai book it’s explained as follows:

Why 224 pixels? This is the standard size for historical reasons (old pretrained models require this size exactly), but you can pass pretty much anything. If you increase the size, you’ll often get a model with better results (since it will be able to focus on more details) but at the price of speed and memory consumption; or vice versa if you decrease the size.
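
In fastai2 terms that size is just the item resize transform, so it’s easy to experiment with (a sketch, assuming path points at an image folder):

from fastai2.vision.all import *

dls = ImageDataLoaders.from_folder(path, item_tfms=Resize(224))  # try 448 for more detail, 128 for speed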


Stuck with the multimodal notebook because I’m not able to download/unzip the data.
The data seems to be downloaded, but it throws an error when I try to unzip it.


Thank you.


That’s not actually how to do it :wink: Try to follow the steps in this notebook https://github.com/muellerzr/Practical-Deep-Learning-for-Coders-2.0/blob/master/Tabular%20Notebooks/02_Regression_and_Permutation_Importance.ipynb (specifically the video linked), and if that doesn’t work, download the zip and upload it to Google Drive.

Let me try this, thanks @muellerzr.
Downloading and uploading will take forever, I guess, haha.

Hi!

Trying to apply 03_Cross_validation to the SIIM pneumothorax Kaggle dataset for classification (not segmentation). Following the medical imaging tutorial from the fastai2 notebooks, I generate a dataframe like this:

However, I am stuck at the generation of the Datasets: I do not know how to add a transform that gets the labels from the df. My code so far:

items = get_dicom_files(pneumothorax_source/f"dicom-images-train/")
start_val = len(items) - int(len(items)*.2)
idxs = list(range(start_val, len(items)))
splits = IndexSplitter(idxs)
split = splits(items)
split_list = [split[0], split[1]]
dsrc = Datasets(items, tfms=[[PILDicom.create], [???, Categorize]],
                splits=split_list)

Any ideas?

Hi Joan!

Why are you using Datasets instead of DataBlock?

I think if you use DataBlock with the getters get_x and get_y, you can read the labels from a df.
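
Something like this sketch, assuming your df has hypothetical fname and label columns:

from fastai2.vision.all import *
from fastai2.medical.imaging import *

pneumothorax = DataBlock(
    blocks=(ImageBlock(cls=PILDicom), CategoryBlock),
    get_x=ColReader('fname'),   # hypothetical column holding the DICOM paths
    get_y=ColReader('label'),   # hypothetical column holding the labels
    splitter=IndexSplitter(idxs))
dls = pneumothorax.dataloaders(df)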


Also, Cross Validation can be done in the DataBlock like so:

dblock = DataBlock(blocks=(ImageBlock, CategoryBlock),
                   get_items=get_image_files,
                   get_y=parent_label,
                   splitter=IndexSplitter(val_idx))

(if looking at the CV code)
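
For context, a hedged sketch of how that splitter fits into a fold loop (sklearn’s KFold is my assumption here; the notebook’s own CV code may differ):

from sklearn.model_selection import KFold
from fastai2.vision.all import *

items = get_image_files(path)
for _, val_idx in KFold(n_splits=5, shuffle=True).split(items):
    dblock = DataBlock(blocks=(ImageBlock, CategoryBlock),
                       get_items=get_image_files,
                       get_y=parent_label,
                       splitter=IndexSplitter(list(val_idx)))
    dls = dblock.dataloaders(path)
    # ... train a fresh Learner on this fold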
