Fastai_v1, adding features

I hope to add GAN support to fastai by the time we get to the mid-point of the course, including CycleGAN. Ideally we’ll add it as a loss function, although I haven’t fully thought it through yet.


I think we should also have a dataclass which supports something like pascal multi (I couldn’t find such a notebook in dev_nb). To be precise, it should take a csv file with one image_id bbox cls row per bounding box (an image with multiple bounding boxes spans multiple rows).

In the previous version this was done via ConcatLibDataset, but that requires pre-processing the csv files. I believe image_id bbox cls is a standard format, and it would be nice to support it directly.
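For illustration, here is a rough sketch of parsing that format (the column names and helper are mine, not from any fastai notebook): each row holds one box, and rows are grouped by image_id.

```python
import csv
import io
from collections import defaultdict

# Hypothetical sample in the image_id bbox cls format described above:
# one row per bounding box, so img_001.jpg spans two rows.
sample = """image_id,bbox,cls
img_001.jpg,10 20 50 80,person
img_001.jpg,60 30 90 100,dog
img_002.jpg,5 5 40 40,cat
"""

def parse_bbox_csv(text):
    """Group rows by image_id -> list of (bbox, cls) pairs."""
    annotations = defaultdict(list)
    for row in csv.DictReader(io.StringIO(text)):
        bbox = [int(v) for v in row["bbox"].split()]
        annotations[row["image_id"]].append((bbox, row["cls"]))
    return dict(annotations)

anns = parse_bbox_csv(sample)
print(anns["img_001.jpg"])  # two (bbox, cls) pairs for this image
```

A dataset class built on top of this would then yield one image with its full list of boxes and classes per item.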

We’ll be adding that after Tuesday. Some initial steps for it are already in place.


Not exactly related to fastai, but to fastprogress. Not a crucial thing, but currently the output looks like this:
[screenshot: in-progress training output with misaligned columns]

It would be great if the values were in a more tabular format, with each value aligned under its column header (the training loss value directly below the training loss header, and so on). As it stands it’s slightly tough on the eye to decipher the values.

Though at the end it does become tabular:
[screenshot: final tabular training output]


I’ve tried my best, but the widget is rendered as HTML, so the spaces all get collapsed. Since it’s only during training and the final output is clean, we decided it wasn’t worth spending more time on for now.

Fair point.

Can you wrap each line in a <pre> or <code> block?

<pre> didn’t work. I’ll try <code> to see if it does, thanks for the suggestion!
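For what it’s worth, here’s a minimal sketch of the idea (the column widths and helper function are mine, not fastprogress internals): whitespace alignment collapses in normal HTML flow but is preserved inside a <pre> block.

```python
# Toy illustration: build whitespace-aligned rows and wrap them in <pre>
# so the browser keeps the alignment. Purely a sketch of the suggestion,
# not how fastprogress actually renders its widget.
def to_pre_html(lines):
    body = "\n".join(lines)
    return f"<pre>{body}</pre>"

rows = [
    f"{'epoch':>8}{'train_loss':>12}{'valid_loss':>12}",
    f"{1:>8}{0.234:>12.3f}{0.198:>12.3f}",
]
html = to_pre_html(rows)
print(html)
```

Whether this works in practice depends on how the notebook widget sanitizes its HTML, which is presumably why <pre> didn’t help there.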

Another interesting addition could be a good debugger. I see there is a debugger defined here: https://github.com/fastai/fastai_pytorch/blob/feae73216b36e27effffd649664c533dc628385f/fastai/layers.py#L69. However, it seems to just call set_trace in the forward pass.
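That pattern is roughly the following (my own reconstruction of the idea, not the linked source): an identity module whose forward pauses execution so you can inspect activations mid-network.

```python
import torch
import torch.nn as nn

# Sketch of a "debugger layer": passes its input through unchanged,
# but gives you a hook point to drop into pdb during a real run.
# set_trace is commented out here so the sketch runs non-interactively.
class Debugger(nn.Module):
    def forward(self, x):
        # import pdb; pdb.set_trace()  # uncomment to pause here
        return x

# Insert it anywhere in a Sequential to inspect intermediate shapes/values.
model = nn.Sequential(nn.Linear(4, 8), Debugger(), nn.ReLU())
out = model(torch.randn(2, 4))
print(out.shape)  # torch.Size([2, 8])
```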

This post https://medium.com/@keeper6928/how-to-unit-test-machine-learning-code-57cf6fd81765 outlines how to unit test deep learning modules. While many of the problems it describes would be easy to find with a good IDE, quite a few bugs still go unnoticed: vanishing/exploding gradients, models not being initialized properly, the loss becoming negative, etc.
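Checks along those lines might look like this (the model and assertions are illustrative, not from that post or any library): verify that a training step actually updates parameters and that gradients stay finite.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy model and one SGD step, just to exercise the checks below.
model = nn.Sequential(nn.Linear(10, 10), nn.ReLU(), nn.Linear(10, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

before = [p.detach().clone() for p in model.parameters()]
loss = model(torch.randn(8, 10)).pow(2).mean()
loss.backward()
opt.step()

# Check 1: at least one parameter actually changed after the step.
changed = any(
    not torch.equal(p.detach(), b) for p, b in zip(model.parameters(), before)
)
assert changed, "no parameter was updated by the optimizer step"

# Check 2: every gradient exists and is finite (no inf/nan blow-ups).
for p in model.parameters():
    assert p.grad is not None, "a parameter received no gradient"
    assert torch.isfinite(p.grad).all(), "gradient contains inf/nan"

print("all checks passed")
```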

Does the fastai library have place for VAEs/Autoencoders and autoregressive model implementations?

Autoencoders are just a custom head. Nothing to support variational models yet. For autoregressive models you can use an RNN.
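In plain PyTorch terms (this is a generic sketch, not a fastai API), that means treating the encoder as the backbone and the decoder as the custom head:

```python
import torch
import torch.nn as nn

# Minimal autoencoder sketch: the encoder plays the role of the backbone,
# and the decoder is "just a custom head" mapping the bottleneck back
# to the input size. Sizes here are arbitrary.
encoder = nn.Sequential(nn.Linear(784, 64), nn.ReLU())
decoder = nn.Linear(64, 784)
autoencoder = nn.Sequential(encoder, decoder)

x = torch.randn(4, 784)
recon = autoencoder(x)
print(recon.shape)  # torch.Size([4, 784])
```

Training it is then the usual loop with a reconstruction loss such as MSE between recon and x.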

Thank you for the wonderful courses, Jeremy, Rachel, and Sylvain!

I am very interested in applying what I have learned from the fast.ai courses to medical imaging. In such applications, I often face inputs with an arbitrary number of channels: for example, 1-channel grayscale images, N-channel 3D datasets, M-channel multi-parametric maps, K-channel multi-contrast images, or L-channel real and complex parts of data acquired from multiple receivers (where N, M, K, L, … can be any integer, up to 128 or even 256).

I was wondering whether the fast.ai library could support input data with an arbitrary number of channels?

Thanks a lot!


Sure. As you try it out, let us know if you find places where it’s making assumptions about # channels. (Other than model definitions, of course).


Yes, I will, Jeremy.

What were your initial thoughts on how to implement TTA for segmentation?

To apply TTA to segmentation, you need to keep the transformation parameters and have a way of applying the inverse of each transformation. Note that some transformations are not invertible.
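The idea can be sketched like this (a toy numpy example with an identity function standing in for the segmentation model; everything here is illustrative): apply each invertible transform, predict, map the prediction back with the inverse, and average the aligned predictions.

```python
import numpy as np

# Each entry pairs a forward transform with its inverse. A horizontal
# flip is its own inverse; transforms without an inverse (e.g. crops)
# would have to be excluded from segmentation TTA.
transforms = [
    (lambda x: x, lambda y: y),                                    # identity
    (lambda x: np.flip(x, axis=1), lambda y: np.flip(y, axis=1)),  # h-flip
]

def model(x):
    return x  # stand-in for a real per-pixel segmentation network

def tta_predict(x):
    # Predict on each transformed input, invert, then average.
    preds = [inv(model(fwd(x))) for fwd, inv in transforms]
    return np.mean(preds, axis=0)

img = np.arange(12.0).reshape(3, 4)
print(np.allclose(tta_predict(img), img))  # True: inverses realign predictions
```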

So that would entail adding a separate set of invertible-only transforms for the test dl instead of reusing the validation ones, modifying the test dl to capture the transform parameters, and then having the tta function invert the transforms on the predictions?

Here was another thread on the topic:

And also, not all of them would need to be inverted, only those that change pixel locations. Crops probably wouldn’t work.

Maybe there is an easier way, but that was what I thought was needed when I had this problem. If you want, we could work together on trying to solve this.

Here is another discussion on the subject: https://github.com/fastai/fastai/issues/646

I’d be up for that. Shoot me an email with a good time to connect live. cakeb@calebeverett.io.

This is all sounding about right to me. @sgugger is likely to be working on some related stuff next week BTW so he might have some useful updates. The basic pieces are already started here: https://github.com/fastai/fastai_docs/blob/master/dev_nb/x_006a_coords.ipynb