Lesson 1 In-Class Discussion ✅

I attended a DL meetup focused on audio.
Here you can find the slides, with a nice overview: https://github.com/vdlm/meetups/tree/master/Meetups/Meetup_21

Overall, some very interesting approaches to sequence models.

Best,
Michael


I think you should post this query in the unofficial setup guide thread.

ok. Thank you

We will be providing a tutorial on this tomorrow.


What’s the paper for fit_one_cycle that Jeremy mentions during the lecture?

Here are the links if others are interested as well: paper
Sylvain Gugger’s blog post on 1 cycle policy
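To get an intuition for what the 1cycle policy does to the learning rate, here is a toy sketch (a hypothetical helper, not fastai's actual `fit_one_cycle` implementation, which also anneals momentum and uses cosine-shaped segments in later versions): ramp the LR linearly up to a peak over the first half of training, then linearly back down.

```python
# Toy sketch of the 1cycle learning-rate schedule (hypothetical helper,
# not fastai's implementation): linear warmup to lr_max, then linear cooldown.

def one_cycle_lr(step, total_steps, lr_max, lr_start_frac=0.1):
    """Return the learning rate for `step` out of `total_steps`."""
    lr_start = lr_max * lr_start_frac
    half = total_steps / 2
    if step <= half:
        frac = step / half                   # 0 -> 1 during warmup
    else:
        frac = (total_steps - step) / half   # 1 -> 0 during cooldown
    return lr_start + (lr_max - lr_start) * frac

schedule = [one_cycle_lr(s, 100, 1e-2) for s in range(101)]
print(max(schedule))  # peak LR is reached at the midpoint of training
```

The paper's argument is that the high-LR middle portion acts as a regularizer, while the low-LR start and end keep training stable.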


Here’s another forum where you have a lot of people interested in applying NNs to music.
https://groups.google.com/a/tensorflow.org/forum/#!forum/magenta-discuss


My medium post

I just wrote a post on Medium briefly explaining the regular expression Jeremy used in the notebook to extract labels from filenames. I only figured out how these work a while ago and decided it would be a good idea to share what I found. It is my first post on Medium, and I'd appreciate criticism of any sort.
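For anyone who wants a quick illustration before reading the post: the lesson-1 pets notebook pulls the breed label out of the image path with a pattern along these lines (the exact regex in the notebook may differ slightly; the filename below is just an example from the pets dataset layout).

```python
import re

# Filenames look like 'images/german_shorthaired_105.jpg':
# the label is everything after the last '/' and before the final '_<digits>.jpg'.
pat = re.compile(r'/([^/]+)_\d+\.jpg$')

fname = 'images/german_shorthaired_105.jpg'
label = pat.search(fname).group(1)
print(label)  # german_shorthaired
```

`[^/]+` is greedy, so it swallows the internal underscores and only the trailing `_105` is consumed by `_\d+`.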


What is the official fast.ai v1 way to measure the runtime, on CPU, of a prediction on a custom image with a trained model?
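I don't know of an official fastai helper for this, but as a sketch you can time any prediction callable with `time.perf_counter`; in fastai v1 the callable would be something like `lambda: learn.predict(img)`, assuming you already have a loaded `Learner` and an opened image (those names are from your setup, not shown here).

```python
import time

def time_prediction(predict_fn, n_runs=10):
    """Average the wall-clock time of `predict_fn` over several runs.
    In fastai v1, `predict_fn` could be e.g. `lambda: learn.predict(img)`
    (assumption: Learner and image are already loaded)."""
    predict_fn()  # warm-up call, excluded from timing
    start = time.perf_counter()
    for _ in range(n_runs):
        predict_fn()
    return (time.perf_counter() - start) / n_runs

# usage with a stand-in workload:
avg = time_prediction(lambda: sum(range(10000)))
print(f"{avg * 1e6:.1f} µs per call")
```

The warm-up call matters: the first prediction often pays one-time costs (caching, lazy initialization) that you usually don't want in the average.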

There is a Medium article and a short PDF presentation on understanding the terminology used in the residual block (Figure 2) of the original ResNet paper. All feedback is welcome. Thanks.

That would be super interesting. I hope I am not too late; please include me as well.
Regards

Yes, I am facing a similar issue while running it on Colab.
Has the issue been resolved for you?

Why do you want to use version 0.7 for v3?

It would help if you showed us exactly what the error is.

I am trying to download a dataset from Kaggle using untar_data:
path = untar_data('https://www.kaggle.com/alxmamaev/flowers-recognition/downloads/flowers-recognition.zip/2')
but I am getting:
ReadError: not a gzip file

Looks like untar_data uses tarfile under the hood to decompress files*, but that module does not handle zip files.

* https://github.com/fastai/fastai/blob/master/fastai/datasets.py#L106

How can I download it to a custom directory?

Hey, can you share your notebook? I'm running into a dataloader issue.

I don’t know how it’s done within fastai. However, assuming that you want to do it programmatically, you could write a function, or a pair of functions, using the urllib/requests and zipfile modules, to download and unzip the file.
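Along those lines, here is a minimal sketch using `urllib.request` and `zipfile` (the function name is mine; note this has no error handling, and Kaggle download URLs additionally require authentication, so for Kaggle you'd normally use their API instead):

```python
import zipfile
import urllib.request
from pathlib import Path

def download_and_unzip(url, dest_dir):
    """Download a .zip archive to `dest_dir` and extract it there.
    Sketch only: no error handling, no auth (which Kaggle requires)."""
    dest_dir = Path(dest_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    archive = dest_dir / Path(url).name
    urllib.request.urlretrieve(url, archive)  # also accepts file:// URLs
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(dest_dir)
    return dest_dir
```

This also answers the custom-directory question: everything lands under whatever `dest_dir` you pass in.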

@sgugger I believe that when you said "for the whole dataset," you meant that for each feature we normalize using that feature's mean and std. Correct me if I am wrong.
Regards,
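To make the per-feature reading concrete, here is a small NumPy sketch (toy data, just for illustration): each column (feature) is standardized with its own mean and std, computed across all samples.

```python
import numpy as np

# Rows are samples, columns are features. Each feature is normalized
# with *its own* mean and std, not a single dataset-wide scalar.
x = np.array([[1.0, 100.0],
              [2.0, 200.0],
              [3.0, 300.0]])
mean = x.mean(axis=0)   # one mean per feature (column)
std = x.std(axis=0)     # one std per feature
x_norm = (x - mean) / std
print(x_norm.mean(axis=0))  # approximately [0, 0]
print(x_norm.std(axis=0))   # approximately [1, 1]
```

If you instead used one scalar mean/std over the flattened array, the second feature (values around 100 to 300) would dominate and the first would barely change.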

[batch size] is typically chosen between 1 and a few hundreds, e.g. [batch size] = 32 is a good default value, with values above 10 taking advantage of the speedup of matrix-matrix products over matrix-vector products.

Here is a paper: Revisiting Small Batch Training for Deep Neural Networks (https://arxiv.org/abs/1804.07612)
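The speedup the quote mentions comes from replacing many matrix-vector products (one per example) with a single matrix-matrix product over the whole batch; the results are numerically identical, as this NumPy sketch (toy shapes chosen for illustration) shows:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 128))      # layer weights: 128 inputs -> 64 outputs
batch = rng.standard_normal((32, 128))  # batch of 32 examples

# 32 separate matrix-vector products, one per example:
one_by_one = np.stack([W @ x for x in batch])
# one matrix-matrix product over the whole batch:
all_at_once = batch @ W.T

print(np.allclose(one_by_one, all_at_once))  # True
```

The batched form lets optimized BLAS routines reuse the weights loaded into cache across all 32 examples, which is where the practical speedup comes from.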
