Unofficial release of part 1 v2

Hi Jeremy, I am almost certainly more of a beginner at this topic than most. I am working on setting up a server at home, as I have a GeForce GTX 970 GPU.
I have installed Ubuntu to dual-boot with Windows, but I am struggling with the Paperspace setup script. When I run the
"curl http://files.fast.ai/setup/paperspace | bash" command, it returns
"rm: cannot remove ‘/etc/apt/apt.conf.d/.’: no such file or directory".

When I instead run the script by pasting each line one by one, as soon as I get to
"sudo rm /etc/apt/apt.conf.d/."
the terminal just closes. Any idea what could be going on or how to fix it? Thanks.

Hey Seth, in case you haven’t yet, I’d recommend either (a) familiarizing yourself with the very basics of what these setup commands do, if you want to run your own server, or (b) skipping that and going with the Paperspace setup, to jump straight into the course.
If you prefer to (or must) go with (a): does the directory the script is trying to remove actually exist on your system?
What happens when you skip that one line and execute the next commands?

Linux Basics: https://www.digitalocean.com/community/tutorials/an-introduction-to-linux-basics

We had a group complete part 1 in Brisbane just before Christmas. It would be great if we could access the updated part 2 materials while you are teaching in person.

A few are, or will be, starting on the current version of part 2. We’re building a solid group here.

Hi Jeremy
I have the same issue with the Paperspace setup script ("curl http://files.fast.ai/setup/paperspace2 | bash") on my local computer, not the Paperspace cloud. In apt.conf.d I actually don’t have any file like ‘.’ to delete. In that case, maybe this command simply has to be skipped?
If I am right, please confirm.

@Seperthar
If you don’t have any files in that folder, it is OK to just comment out the command.
But even if you don’t comment it out, the script should work fine.

Here’s a first draft of the Video Timelines for Part 1 v2:

Last year I followed both the part 1 and part 2 MOOCs, and I have been all-in on deep learning ever since (MILA on-site course, Kaggle, Coursera, CS231n, books, tons of papers, …).

The top-down approach used in the fast.ai course is unique and definitely caught my attention from the beginning in 2016. It was the only viable option for me back then to start learning about this incredible subject. Thanks again, by the way, to @jeremy and @rachel. Unfortunately, I didn’t have enough time to register for P1 V2 as an international fellow, even though I wanted to follow it.

I followed lessons 1 and 2 of the part 1 v2 videos, and my first impression coming out is that you should rebrand your course fast(er).ai instead of fast.ai! With a borrowed reference to Fast(er) R-CNN :wink:

Seriously, even though I didn’t explicitly set up on AWS, Paperspace, or Crestle myself, the initial working setup looks faster than last year, even though last year’s was already fast.

From a top-level view, the fast.ai library looks pretty clean and allows an even higher-level API compared to the teaching wrapper over Keras used in V1.

On the technical side, I’ll remember some nice, well-implemented ideas from lesson 2: the learning rate finder, differential learning rates, and a personal favorite, progressive training from lower to higher resolution to overcome overfitting at high res. Very clever ideas. I can’t wait to try this last trick on medical images …
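For future readers, here is a minimal sketch of what those three ideas look like in code, assuming the course-era fastai library (the 0.7-style API used in the v2 notebooks); the dataset path, architecture, and image sizes are made-up placeholders:

```python
from fastai.conv_learner import *  # course-era fastai imports (also brings in np)

PATH = 'data/dogscats/'  # hypothetical dataset path
arch = resnet34

# Start at a lower resolution (224 px)
data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, 224))
learn = ConvLearner.pretrained(arch, data, precompute=True)

# 1. Learning rate finder: increase the LR every mini-batch and plot loss vs. LR
learn.lr_find()
learn.sched.plot()

# 2. Differential learning rates: smaller LRs for the earlier layer groups
learn.precompute = False
learn.unfreeze()
lrs = np.array([1e-4, 1e-3, 1e-2])
learn.fit(lrs, 3, cycle_len=1)

# 3. Progressive resizing: keep training the same model at a higher resolution
learn.set_data(ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, 299)))
learn.fit(lrs, 3, cycle_len=1)
```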

Kudos for part 1 v2! And you have my vote for rebranding to fast(er).ai! Too bad the URL is already taken …


Where can we get the assignments and readings for part 1 v2,
like there were for part 1 v1 (http://wiki.fast.ai/index.php/Lesson_2)?
I’m pretty sure the tasks have changed only a little from v1 to v2,
but I assume the readings have been extended a lot.

Slack channel?

Extra knowledge is never harmful…

I am now planning to do part 1 v1…


Hi,

I completed Part 1 v1 and started Part 2. I didn’t know there was a v2 of Part 1. Should I continue watching Part 2 and come back to Part 1 v2 later, or should I do Part 1 v2 first and then Part 2? What is the recommended way?

(I am already familiar with most of the basic concepts; I completed 4 out of the 5 courses in Andrew Ng’s Coursera specialization. But I am not familiar with PyTorch yet.)

Jamsheer

I was so super excited about finally grokking GRUs, thanks to the lesson 6 lecture, that I wrote a tweet with a link to the lesson 6 video… But several seconds after pressing the tweet button I remembered that Jeremy asked us not to share the links! So I took the tweet down.

I then checked Jeremy’s profile and found that he had already tweeted out a link to one of the videos. So I tweeted again. But then I realized that maybe that was still not such a great idea, as I bet Jeremy’s request not to publish this outside a close group of friends / work still holds, so I took the tweet down again :slight_smile:

@jeremy - I am very sorry for not thinking this through. The tweet existed for such a short period that I am hoping it didn’t do any harm. I will refrain from posting links to this until the official public launch! :slight_smile:

And I had better get back to studying those GRUs / LSTMs - once you start grokking something that seemed initially impenetrable, you feel unstoppable :slight_smile: And the key ingredient to understanding this was the couple of minutes of video where Jeremy explains the GRU diagram.
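For anyone else wrestling with that diagram: here is a minimal, from-scratch GRU cell in PyTorch following the standard update-gate / reset-gate equations. It is just an illustrative sketch (the layer layout and names are mine), not the lesson’s code:

```python
import torch
import torch.nn as nn

class MinimalGRUCell(nn.Module):
    """One GRU step: two gates decide how much of the old hidden state to keep."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.update_gate = nn.Linear(input_size + hidden_size, hidden_size)
        self.reset_gate = nn.Linear(input_size + hidden_size, hidden_size)
        self.candidate = nn.Linear(input_size + hidden_size, hidden_size)

    def forward(self, x, h):
        xh = torch.cat([x, h], dim=1)
        z = torch.sigmoid(self.update_gate(xh))  # how much to overwrite
        r = torch.sigmoid(self.reset_gate(xh))   # how much past state to expose
        h_tilde = torch.tanh(self.candidate(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde         # blend old state with candidate

# quick smoke test
cell = MinimalGRUCell(input_size=10, hidden_size=20)
h = cell(torch.randn(1, 10), torch.zeros(1, 20))
```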


Hi everyone,

thanks a lot to Jeremy for the great course; it’s helping me a lot.

I tried to use the ideas from the lesson 1 notebook to participate in the IEEE Camera Model Identification challenge on Kaggle. I get a validation accuracy of around 50%, which I think is not too bad. However, when I make and submit predictions on the test set, I seem to produce random noise and get only around 10% on the leaderboard.

Does anyone have any idea what I am doing wrong?

Thanks a lot!!

There are two things that can be happening: either you messed up in creating your submission, or, more likely, your validation set is not representative of the test set.
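On the first possibility: a classic way to mess up a submission is letting the class order or the file order drift out of sync with what the library produced. Here is a hedged sketch, assuming the course-era fastai API (`learn.predict(is_test=True)`, `data.classes`, `data.test_ds.fnames`) and the competition’s `fname,camera` submission format; treat the details as assumptions:

```python
import numpy as np
import pandas as pd

# Log-probabilities for the test set, rows ordered like data.test_ds.fnames
log_preds = learn.predict(is_test=True)
preds = np.argmax(log_preds, axis=1)

# Class names MUST come from data.classes (derived from the training folders),
# not from a hand-typed list - mixing up the order looks exactly like random noise.
sub = pd.DataFrame({
    'fname': [f.split('/')[-1] for f in data.test_ds.fnames],
    'camera': [data.classes[i] for i in preds],
})
sub.to_csv('submission.csv', index=False)
```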

From what I hear about the competition, it is likely the latter, but don’t take my word for it :slight_smile:

Here is the ultimate resource, IMO, on learning about train / validation / test splits: http://www.fast.ai/2017/11/13/validation-sets/
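To make the second point concrete: since the test images reportedly come from devices not in the training set, a random row-wise split will overestimate your score. One way to build a more representative validation set is to hold out whole groups. A sketch with scikit-learn (the CSV file and the `device_id` column are hypothetical - you would have to derive a grouping yourself):

```python
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

df = pd.read_csv('train_labels.csv')  # assumed columns: fname, camera, device_id

# Hold out entire devices, so validation images come from devices the model
# never saw in training - mirroring how the test set was collected.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, val_idx = next(splitter.split(df, groups=df['device_id']))
train_df, val_df = df.iloc[train_idx], df.iloc[val_idx]
```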

I don’t mean to sound like a dumb ass (saying this normally precedes people sounding like a dumb ass, so please accept my apologies for indeed being one :wink: ), but in the future would you be so kind as to create a separate thread, or find a thread which already discusses what you want to ask about? I am happy to help, but as it stands, what we are discussing here has nothing to do with the original intent of this thread, and it will make using this forum much harder and more annoying for people who come here in the future.

Many thanks, and once again please accept my apologies! :slight_smile: I hate to be the thread police, but maybe with slightly better housekeeping we can all make this forum even more useful to others :slight_smile:


Hi Radek,

thank you very much for your answer. This was my very first post here, and I was not quite sure whether I should really open a new thread for this or just ask here, since I was using the lesson 1 notebook. Thanks for explaining what to do in the future. I have now found a thread by someone with the same problem, and I think you are right in your assumption.

Many thanks to you!


It would do you a world of good to start watching Part 1 v2. It has many new things; you would learn techniques that aren’t taught in the Coursera course…
Hope it helps.

Fantastic! I just started doing the course because I love the approach to teaching you set out in the intro.

I just signed in to the forums for the first time because I couldn’t get the Keras VGG16 to train as well as the Part 1 v1 VGG16 version, but now I’m super excited to see how version 2 turns out.

Hi everyone, hi Jeremy!
Wishing Jeremy a good year full of health.

Hey Nad,

Great that you are participating in this Kaggle competition! There is a post in the discussion section that says:

Test data is different from train in 3 big ways (then there are smaller issues):

  1. 50% of it is manipulated as per the instructions
  2. Taken from a different device (BIG issue)
  3. Center-cropped, fixed res.

(source: https://www.kaggle.com/c/sp-society-camera-model-identification/discussion/47896)
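On point 3, one cheap thing to try is applying the same center crop to your local validation images, so validation accuracy tracks the leaderboard more closely. A minimal sketch with PIL; the 512 px size is my assumption about the test images:

```python
from PIL import Image

def center_crop(path, size=512):
    # Center-crop an image to size x size, like the test images appear to be
    img = Image.open(path)
    w, h = img.size
    left, top = (w - size) // 2, (h - size) // 2
    return img.crop((left, top, left + size, top + size))

cropped = center_crop('train/iPhone-6/sample.jpg')  # hypothetical path
```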

I am also participating. Let me know if you want to exchange ideas. I am currently working with something based on the code in this thread: https://www.kaggle.com/c/sp-society-camera-model-identification/discussion/47896

Good luck!

Hey fabsta,

thanks so much for your reply. Your code looks great; I’m far from producing anything like it :wink: I will definitely have a deeper look!