How has your journey been so far, learners?

To put it in haiku block format:
Expensive
Hopefully also
Rewarding

My personal background:
I am a student on top of working full time during the day. It has been about 10 years since I got my MS in Environmental Systems. I have had short professional roles as a GIS Tech, Sys Admin, Web Developer, and Programmer, but mostly I have been a business analyst and administrator for enterprise applications. I am trying to stay in step with my company’s transition from an old-school system of intranet servers and expensive customized enterprise applications to the next generation of cloud enterprise solutions. That path has led me to Python, AWS, and deep learning, and I have been learning each of them for the first time over the last 3+ months.

My experience:
Environment setup proved surprisingly difficult despite all of the resources provided. Each environment had a different challenge, because online instructions were nebulous and evolving, and I am inexperienced:

  • Google Colab — I initially got everything working on Google Colab with the help of Clouderizer; however, the session times out regularly and I couldn’t figure out persistent storage, which makes it hard to download datasets and work on them over several days (a possible workaround is sketched after this list). I got it to work for Lesson 1, but Lesson 2 seemed like a no-go, considering that the Lesson 1 learn.fit step alone took longer than I expected; I think it was a couple of hours, start to finish.

  • Paperspace VM — A Paperspace VM built on the Fast.ai template should have been the quick and easy transition to Lesson 2, but for some reason I had trouble getting CUDA to be recognized consistently by the system when running through The Unofficial Setup Thread for Part 1 v3. Around that time a friend recommended I set up my home desktop with Ubuntu and do it all at home for free. Great idea, I thought.

  • Home Linux Box — My buddy sold me his GTX 1070 video card for a couple hundred bucks, and we went to work getting me set up with Ubuntu and FastAI. It took us half a day to get the video card installed and running and to create a separate partition and dual boot for Ubuntu alongside my Windows 10. I don’t know if I simply misread, or if there is misinformation on the forum, but somehow I thought I should be using Ubuntu 18.04. I marched ahead and got blocked by drivers not loading correctly. After a day I stopped working on it and decided I would try another option and watch more videos.

  • Paperspace Gradient — I went back to Paperspace and thought I would try out Gradient. Everything looked like it was going to work perfectly. However, when it came to downloading the Kaggle data, there was no way to install 7zip on a Gradient notebook instance, so there was no way to extract the files on the Gradient VM. I couldn’t extract them on my own machine and upload them to my Gradient VM either, because uploads were limited to 15 MB files. And I couldn’t establish a VPN/SSH connection because I could not retrieve a password for my Gradient console. Also, every time I stopped and restarted my notebook instance, Gradient would create another notebook instance and tell me I had too many notebooks! I contacted support about both of these issues, and they were not able to fix either one or guide me to a solution. Unable to extract the 7zip archives, I moved on (a possible pure-Python workaround is sketched after this list).

  • AWS — AWS is the most appealing option to me. Yes, it is going to be expensive, but that’s the cost of education, right? I got it up and running quickly, though I couldn’t determine which AMI I should use. Forums recommended versions 15.0 and 16.0, so I guessed on Deep Learning AMI (Ubuntu) Version 17.0 (ami-0b63040ee445728bf) as the latest and greatest. After waiting hours for my Kaggle data to transfer onto the instance, I found it had failed due to space limitations at about 75% of the data. I started getting nervous about cost and went back to Paperspace.

  • Paperspace VM Take 2 — Frustrated by my experiences, I returned to my Paperspace VM, created a fresh VM built on the FastAI template, tested Jupyter notebooks, downloaded my Kaggle data, and was finally off to the races!
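
For anyone who hits the same Colab persistence wall I did: the workaround I’ve since seen suggested is mounting Google Drive from inside the notebook so downloaded datasets survive session resets. I haven’t gone back to verify it myself, and the data folder below is just a placeholder, but the idea is roughly:

```python
# Minimal sketch: keep course data on Google Drive so it persists
# across Colab sessions. The fastai/data folder is a made-up example.
from google.colab import drive

drive.mount('/content/drive')  # prompts for authorization once per session
DATA_DIR = '/content/drive/My Drive/fastai/data'
```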

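Likewise, in hindsight the Gradient 7zip dead end might have been avoidable with a pure-Python extractor, assuming pip installs are allowed on the notebook instance (I never tested this there, and the archive name below is only a placeholder):

```python
# Minimal sketch: extract a 7z archive without system 7zip.
# Requires "pip install py7zr"; dogbreeds.7z is a hypothetical file name.
import py7zr

with py7zr.SevenZipFile('dogbreeds.7z', mode='r') as archive:
    archive.extractall(path='data/dogbreeds')
```
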
… I just wasn’t expecting the races to be so slow…

I ran through the Lesson 2 notebook this week and found that it took over 10 hours to run just one step (the second learn.fit call, after setting the differential learning rates with lrs = np.array([lr/9, lr/3, lr])).

Should I expect the DL process to always take this long? If it takes 15+ hours just to complete the second assignment, does that mean we are expected to spend 30 hours to complete both the sample run-through and one experiment of our own? Or should the homework not take that long? Should we expect the runtime to keep increasing, or is this an anomaly related to this lesson? Or am I doing it wrong?
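
One sanity check I plan to run before assuming multi-hour fits are normal is confirming that PyTorch actually sees the GPU, since (as I understand it) silently falling back to the CPU would explain training times like these. These are just standard PyTorch calls, nothing fastai-specific:

```python
# Quick check that training is really running on the GPU.
import torch

print(torch.cuda.is_available())        # should print True on a GPU machine
print(torch.backends.cudnn.enabled)     # cuDNN should also be enabled
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # name of the GPU PyTorch will use
```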

To make a long story short (too late!), this class has taken much more time and effort than I expected, and I expected to work my tail off. I know that my background is not optimal, but I was not expecting to feel behind after I started preparing over a month before the course began by taking Python and AWS prep courses. I feel like I needed a six-month focused introductory regimen to prepare for this course. I have done my best to prepare and to use the resources provided, but I constantly find myself discovering critical pieces of information a day or more after I needed them.

My buddy who was helping me with my Linux home box works for a local ML dataset development group, and when I told him about what I am going through, he called it “the height of technical complexity.”

I am excited by the difficulty, though I may be a masochist. I just hope I haven’t gotten myself in too deep. I have already learned a great deal. If complexity and expectations don’t increase exponentially then I still hope to be successful. And I appreciate any suggestions on how to make the rest of my journey easier. Thank you!
