Lesson 1 - Official topic

What should we “do for homework”?

2 Likes

https://github.com/tzutalin/labelImg is also useful for labeling quickly. If you’re doing box labels, you can move through them VERY quickly with a keyboard and mouse. I think when I used it over the summer I was getting around 1000 labeled frames/images per hour.

2 Likes

For Windows, you have to install it using the .exe available online and then add the directory where it got installed to your PATH environment variable as well.

Sending everyone a hermetically sealed hug!

1 Like

Thank you all for the wonderful lesson :slight_smile:

1 Like

You can install all the necessary packages listed in the fastbook repo to have everything in order:
pip install -r requirements.txt
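
If you want to double-check that the install worked, here’s a quick sanity check (the versions you see will depend on when you install):

```python
# Quick check that the core packages from requirements.txt are importable
import fastai
import torch

print(fastai.__version__)  # whatever fastai version requirements.txt pulled in
print(torch.__version__)   # the PyTorch version it depends on
```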

3 Likes

I suppose just ensure your GPU setup is up and running, and that you can run the code from the first notebook.
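
Something like this is a quick way to confirm PyTorch can actually see your GPU (a minimal sketch, not specific to any platform):

```python
import torch

# Confirm the GPU is visible to PyTorch before running the first notebook
if torch.cuda.is_available():
    print("GPU found:", torch.cuda.get_device_name(0))
else:
    print("No GPU found - check your drivers or instance type")
```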

1 Like

Thanks everyone, see you next week! (and before that on whichever threads I read)

I’ve been meaning to mention since that podcast that a Snorkel study group would be interesting.

1 Like

Is there a separate thread for debugging questions for Lesson 1?

Cheers Jeremy, Rachel, Sylvain, and team for putting together a fantastic first lecture and these course materials. Looking forward to learning with you all in the coming weeks… :smile:

3 Likes

I think it is fine to just create your own thread for debugging questions. It’s best not to have one giant thread with all the questions, so that others reading can follow what is going on.

Thank you!

Here are some passages from the book you pointed to:

While our neural network gives impressive performance, that performance is somewhat mysterious. The weights and biases in the network were discovered automatically. And that means we don’t immediately have an explanation of how the network does what it does.

And while that passage is followed by a lot more detail, it is just descriptive and does not quite add up to “an explanation”.

Admittedly, this subsequent passage is more than mere description:

It does this through a series of many layers, with early layers answering very simple and specific questions about the input image, and later layers building up a hierarchy of ever more complex and abstract concepts.

But that only explains how a fully trained network functions; it does not even attempt to explain the processes (i.e., the training phase) that took us from the untrained network to the trained network.

Later there is a direct reference to the training (“learning”) phase processes,

… use learning algorithms so that the network can automatically learn the weights and biases - and thus, the hierarchy of concepts - from training data.

There is that word, “automatically”, again… Very descriptive, but very decidedly not an explanation of the training processes. In this context it appears that “automatically” is a synonym for “mysteriously”.

Is there really so little – any? – theoretical progress being made that sheds any light on the effectiveness of deep learning techniques?

I realize this course is focused on the “practical”, but it’s good to be up front about the question, “Is there – or is there not – a generally accepted theory or explanation behind all of this?”

It appears that the correct takeaway is either “There is still no generally accepted explanation for it” or “It remains a mystery”?

Please contribute pointers to “the latest” on this subject if you are aware of anything. Thanks.

‘o’ in command mode

So the training loss and error rate are calculated on the training data, and the validation loss on the 20% of the data set aside. Since the validation loss isn’t used in updating the weights, calculating it on every forward pass won’t lead to overfitting, is that right?

I think this is more a matter of how the class/book is meant to teach. These concepts are deliberately not covered in great detail yet, because they are the ones that require math, and they are touched on later. Effectively this is an optimization problem, where we use calculus plus matrix multiplications and non-linearities to make a prediction. Updating the network is done by finding the derivative (the gradient) of the loss with respect to the weights. As you can see, the explanation gets mathy very quickly, which is why it is avoided at this point. Jeremy wants you to understand why we use neural nets before we go into too much detail. I think the next few lessons will go further into the details that you seek.

Jeremy will probably use pictorial examples to explain this without math, before going into the math.
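
In the meantime, here’s a minimal plain-PyTorch sketch of why computing the validation loss can’t cause overfitting (the tensors x_train, y_train, x_valid, y_valid are made-up data, and this isn’t fastai’s actual training loop, just an illustration):

```python
import torch
from torch import nn

# Hypothetical data: 100 training and 25 validation examples with 10 features
x_train, y_train = torch.randn(100, 10), torch.randn(100, 1)
x_valid, y_valid = torch.randn(25, 10), torch.randn(25, 1)

model = nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

for epoch in range(5):
    # Training: this loss IS used to update the weights
    train_loss = loss_fn(model(x_train), y_train)
    train_loss.backward()   # gradients come only from the training loss
    opt.step()
    opt.zero_grad()

    # Validation: forward pass only - no gradients, no weight update -
    # so computing it every epoch can't make the model fit the validation set
    with torch.no_grad():
        valid_loss = loss_fn(model(x_valid), y_valid)

    print(f"epoch {epoch}: train {train_loss.item():.3f}, valid {valid_loss.item():.3f}")
```

The weights only ever move in the direction of the training gradient; the validation numbers are just read off for monitoring.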

Sorry if this is a dumb question – but what’s the best way to ask a troubleshooting question for course 1?

Adding a new thread in “Part 1” seems “wasteful” :slight_smile:

Is there a preferred way?

Thanks

There should be platform-specific threads already created. Feel free to ask there.

Such as: https://forums.fast.ai/t/platform-colab-free-10-month-pro/65525/94

Also follow what harish3110 said below.

For general questions this thread was created: https://forums.fast.ai/t/setup-help/65529

For server setup, there’s a specific thread. As Jeremy said, search whether your issue has been addressed before, and if not, go ahead and post it. There’s no such thing as a dumb question, and this community is the best at helping out as much as possible to get your issue resolved ASAP! :slight_smile:

4 Likes

Jeremy mentioned how an Italian hospital used 3D printing to save lives; here is a link to the story:


He also mentioned them being sued by a patent troll; here is the link to that story:

2 Likes