How has your journey been so far, learners?

@danaludwig Your post on medium is what brought me here! I’ve always said I’ve never met another man named Dana who wasn’t a jerk. Maybe at last I’ve found one. :wink:

Hi Everyone!

I’m Kris. Super excited to dive into the course. I have looked at it before, but wasn’t able to give it a serious go until now.

I considered just waiting for the next iteration (it starts in October, right?), but I was impatient, and the course seemed so good that I decided to jump in. Does anybody (@jeremy) know if the next round is going to be substantially different?

So far I have watched the image classification lectures and thought I would apply what I learned to some automatically downloaded Google images (of 10 different Harry Potter characters). Here is a blog post about it, along with the Jupyter notebook, if anyone is interested.


@bluepapaya, Glad you made it! I’m starting session 9 (part 2, second session) and it just gets better and better! At the beginning, Jeremy warned us that we are big kids now and we are going to have to figure things out on our own, with many failures for each success. Then he proceeded to spoon-feed us advanced debugging techniques and state-of-the-art code editors. The explanations are even more detailed than in Part 1! For me this is just fantastic; it seems like there is nothing I can’t do with this knowledge. At the start of session 9, he even summarized the skills we should have mastered by now. This is by far my most efficient path to learning, and I’m going to stay focused on the class until I complete it. It is like a “moment in history” that we have to grab while it is still available and timely.

Hi, same here. I have been wanting to do it for a long time but am only just starting. In fact, I started the first lecture today, even though I worry that I am too late to the party and/or a new version of the course will be released soon. Anyway, you are ahead of me. It would be useful to know how strongly @jeremy or others would recommend that we wait until a new iteration is released, or do the course now and rewatch the videos once the newer version comes out.

Greetings from México :). I’m a Computer Engineering student.

Hello from the SF Bay Area. I’m wrapping up Springboard, an online Data Science program. This course was recommended by my mentor. I want to develop new techniques in Deep Learning.

I am Bala from Singapore. I am a bit late to this course, but I am excited to start now.

I am quite serious about transitioning my career from software development towards AI and ML. I have done some courses in machine learning and some small projects, and I am lucky to have found this one. It is a continuous source of motivation for me, thanks to a very good instructor and a big, exciting community.
I hope it proves a great learning and career booster for me.

Hello everyone!

I am an English Education major currently finishing up my undergrad degree at a CSU. I’m really excited about the technology and can’t wait to see where it goes.
I started my path to ML/DL less than a year ago, because the more I looked into the field, the more exciting it became, and I knew I wanted to be a part of it.

Hi everyone

My name is Mpho and I am from Cape Town, South Africa. I am a Python developer on the team building the Square Kilometer Array, the largest radio telescope in the Southern Hemisphere.
I am here to expand my knowledge and hopefully learn and give back.

To put it in haiku block format:
Hopefully also

My personal background:
I am a student while working full time during the day. It has been about 10 years since I got my MS in Environmental Systems. I have had short professional roles as a GIS tech, sysadmin, web developer, and programmer; mostly I have been a business analyst and administrator for enterprise applications. I am seeking to align with my company’s transition from an old-school system of intranet servers and expensive customized enterprise applications to the next generation of cloud enterprise solutions. This path has led me to Python, AWS, and Deep Learning, each of which I have been learning for the first time over the last 3+ months.

My experience:
Environment setup proved surprisingly difficult despite all of the resources provided. Each environment posed a different challenge, because the online instructions were nebulous and evolving, and I am inexperienced:

  • Google Colab — I initially got everything working on Google Colab with the help of Clouderizer; however, the session times out regularly, and I couldn’t figure out persistent storage, which makes it hard to download datasets and work on them over several days. I got it to work for Lesson 1, but Lesson 2 seemed like a no-go, considering that Lesson 1 processing took more time than I expected. I think it was a couple of hours, start to finish.

  • Paperspace VM — A Paperspace VM built on the template should have been a quick and easy transition to Lesson 2, but for some reason I had trouble getting CUDA to be recognized consistently by the system while running through The Unofficial Setup Thread for Part 1 v3. Around that time a friend recommended I set up my home desktop with Ubuntu and do it all at home for free. Great idea, I thought.

  • Home Linux Box — My buddy sold me his GTX 1070 video card for a couple hundred bucks, and we went to work to get me set up with Ubuntu and fastai. It took us half a day to get the video card installed and running and to create a separate partition and dual boot for Ubuntu alongside my Windows 10. I don’t know if I simply misread, or if there is misinformation on the forum, but somehow I thought I should be using Ubuntu 18.04. I marched ahead and got blocked by drivers not loading correctly. After a day I stopped working on it and decided to try another option and watch more videos.

  • Paperspace Gradient — I went back to Paperspace and thought I would try out Gradient. Everything looked like it was going to work perfectly. However, when it came to downloading the Kaggle data, there was no way to install 7zip on a Gradient notebook instance, so there was no way to extract the files on the Gradient VM. I couldn’t extract them on my own machine and upload them to my Gradient VM because uploads were limited to 15 MB files. And I couldn’t establish a VPN/SSH connection because I could not retrieve a password for my Gradient console. Also, every time I stopped and restarted my notebook instance, Gradient would create another notebook instance and tell me I had too many notebooks! I contacted support about both of these issues; they were not able to fix either one or guide me to a solution. Unable to extract the 7zip files, I moved on.

  • AWS — AWS is the most appealing option to me. Yes, it is going to be expensive, but that’s the cost of education, right? I got it up and running quickly, though I couldn’t determine which AMI to use. Forums recommended versions 15.0 and 16.0; I guessed on Deep Learning AMI (Ubuntu) Version 17.0 (ami-0b63040ee445728bf) as the latest and greatest. After waiting hours for my Kaggle data to upload, I found the upload had failed due to space limitations after downloading 75% of the data. I started getting nervous about cost and went back to Paperspace.

  • Paperspace VM Take 2 — Frustrated by my experiences I returned to my Paperspace VM, created a fresh VM built on the FastAI template, tested jupyter notebooks, downloaded my Kaggle Data, and was finally off to the races!
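On the Colab persistent-storage problem above: the usual workaround (an assumption on my part, not something from this thread) is to mount Google Drive into the notebook so datasets survive session timeouts. A minimal sketch, written so it degrades gracefully when run outside Colab:

```python
def mount_drive(mountpoint="/content/drive"):
    """Mount Google Drive for persistent storage inside Colab.

    Returns True if the mount was attempted, or False when not running
    in Colab (the google.colab package only exists there).
    """
    try:
        from google.colab import drive  # only importable inside Colab
    except ImportError:
        return False
    drive.mount(mountpoint)  # prompts for authorization on first use
    return True
```

Anything saved under the mountpoint (e.g. downloaded datasets) then persists across Colab sessions.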
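On the Gradient 7zip dead-end above: one possible escape hatch, assuming pip installs are still allowed on the instance even when system packages are not, is the pure-Python py7zr library (`pip install py7zr`), which extracts .7z archives without the 7zip binary. A hedged sketch:

```python
def extract_7z(archive_path, dest="."):
    """Extract a .7z archive without the system 7zip binary.

    Relies on py7zr (pip install py7zr); returns False if the
    library is not installed, True after a successful extraction.
    """
    try:
        import py7zr
    except ImportError:
        return False
    with py7zr.SevenZipFile(archive_path, mode="r") as archive:
        archive.extractall(path=dest)
    return True
```

I have not verified this on Gradient specifically; it is just a way around needing `apt install p7zip`.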

… I just wasn’t expecting the races to be so slow…

I ran through the Lesson 2 notebook this week and found that it took over 10 hours to run just one step (the second fit, right after setting the learning rates as lrs = np.array([lr/9,lr/3,lr])).
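For anyone else puzzling over that line: as I understand it, the array assigns differential learning rates, one per layer group, so the pretrained early layers get smaller updates than the newly added head. A tiny sketch of the values (the base `lr` of 1e-2 here is my own illustrative choice, not from the notebook):

```python
import numpy as np

lr = 1e-2  # base learning rate, e.g. picked from the LR finder plot
# Differential learning rates: earliest layer group gets lr/9, the
# middle group lr/3, and the final group (the head) the full lr.
lrs = np.array([lr / 9, lr / 3, lr])
print(lrs)  # three rates, smallest first
```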

Should I expect the DL process to always take this long? If it takes 15+ hours just to complete the second assignment, does that mean we are expected to spend 30 hours completing both the sample run-through and one experiment of our own? Or should the homework not take that long? Should we expect the runtime to keep increasing, or is this an anomaly specific to this lesson? Or am I doing it wrong?

To make a long story short (too late!), this class has taken much more time and effort than I expected, and I expected to work my tail off. I know that my background is not optimal, but I was not expecting to feel behind when I started preparing over a month before the course began, by taking Python and AWS prep courses. I feel like I needed a six-month focused introductory regimen to prepare for this course. I have done my best to prepare and use the resources provided, but I constantly find myself discovering critical pieces of information a day or more after I needed them.

My buddy that was helping me with my Linux home box works for a local ML dataset development group, and when I told him about what I am going through he called it “the height of technical complexity.”

I am excited by the difficulty, though I may be a masochist. I just hope I haven’t gotten myself in too deep. I have already learned a great deal. If complexity and expectations don’t increase exponentially then I still hope to be successful. And I appreciate any suggestions on how to make the rest of my journey easier. Thank you!


Oh, that sounds awful. I’m sorry you had such a bad experience setting up the cloud servers. For me, Paperspace was a one-click thing, and AWS runs like a charm, so maybe I can help you somehow.

And are you sure your Paperspace machine has a GPU? Lesson 2 took maybe 20 minutes or so on my Paperspace machine; 15 hours seems far too long.


Hello all,

I currently work in the finance industry as a developer. I have been programming for the past 10 years, and I want to pick up ML/DL to explore the area further.

I am not sure if the problem is on my end or not, but I am finding it a bit frustrating to practice the fast.ai lessons. I am on a Windows machine (XPS 15 with a 1050 Ti graphics card). After a day I managed to set up a 0.7 environment on my machine (as PyTorch 1.0 is not available on Windows yet). However, I keep hitting problems here and there; for example, on Lesson 4 (IMDB), when I tried to fit the model it threw a CUDA illegal memory access exception.

I am not sure whether it is a setup problem or not. Do I really need to move to a cloud solution?

It’s good to have your own box. However, for now at least, things work more smoothly under Linux, where you can also use the new fastai v1.


Thank you for the response, empathy, and offer to help. I would love to share my setup with you and see if I am doing something wrong; knowing that it should not take 15 hours is useful validation. Yes, I am using a GPU+ hourly machine configured in the CA1 region (I’m in Oakland, CA): standard 30 GB RAM, 8 CPUs, and an 8 GB GPU. I may try creating another machine to determine whether I somehow botched my configuration along the way. I will DM you to see if there is some way we can link up. I would love to take you up on your offer to help, but I will attempt a new VM first.

Hey there techjoey, I have the course working on Ubuntu 18.04 via nvidia-docker. I went this way because it isolates the entire fastai course environment while still giving full-speed access to the GPU. This assumes you already have the NVIDIA driver installed (I’m using v396.54).

You need about 3 more things :slight_smile:

Install docker:

And nvidia-docker runtime:

And the paperspace fastai container:

I ran the CUDA9.2 image with this command:

docker run --runtime=nvidia -d -p 8888:8888 paperspace/fastai:1.0-CUDA9.2

Find the CONTAINER ID with: docker container ls
And get a prompt within the container: docker container exec -it <ID> bash

Now you can get your jupyter notebook URL:
root@ca6147ed79f4:/notebooks# jupyter notebook list
Currently running servers: :: /notebooks

Good luck!


Oh, guess what… :slight_smile: Another fastai learner has a great blog post about this:


Thank you @jasond! I have things up and running pretty well on my home computer (other than a little hiccup with the LNP widget). I haven’t spent much time on VMs lately, though I did go back and confirm that my GPU does not seem to be playing nicely with NVIDIA in my Paperspace VM (python -c 'import torch; print(torch.cuda.device_count())' returns 0).
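For anyone else running that one-liner: a slightly friendlier version (my own sketch, not from fastai) distinguishes "torch is missing" from "torch can’t see the GPU", which are different failures to fix:

```python
def cuda_status():
    """Report whether PyTorch can see a CUDA GPU.

    Returns -1 if torch is not installed, otherwise the number of
    visible CUDA devices (0 usually means a driver/toolkit problem,
    exactly the symptom described above).
    """
    try:
        import torch
    except ImportError:
        return -1
    return torch.cuda.device_count()

print(cuda_status())
```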

I’ve been hearing a lot about Docker lately, but now that I have my local system up and running, I’m not giving it much more thought.

If I find the need to instantiate a cloud VM again, I will probably explore Salamander. My colleague says he got up and running with Salamander in no time, and it looks like it costs about half as much as Paperspace. I would love to know more about your choice to use Docker and the benefits Docker adds over VMs and local environments. Thanks!

Thank you for your help! Let me know if you have any difficulty with your configuration.

Hey techjoey, my previous posts weren’t explicit, but this is for using nvidia-docker on your local machine. It’s pretty useful if you run Linux, have a recent NVIDIA GPU, and happen to be studying DL/AI, as it lets you isolate different learning environments.

As you’re already up and running locally it’s not worth the bother, but if you run into trouble later on with clashing dependencies, have another look at this :wink:

I’ve published a Docker image that runs a Jupyter notebook while saving data and code changes for later use. It was mainly an exercise in Docker for me, but from now on I’ll be running fewer DL experiments outside a container.

Would love to hear your feedback: