My take on why I am doing this course and what I achieved at the end of Lesson 1

I think it is useful to express in my own words why we discussed something, along with a summary of what we discussed; it helps reinforce things better. So here goes my take. I am not expecting anyone to read this, but if someone does and can add something to my understanding, that would be great.

My motivation for this course

I am taking it because I am interested in ML. Going through Part 1 of the Deep Learning course by Jeremy, I was not always able to follow the math in the reading materials. The same thing happened when I had to read sklearn's documentation: I was able to use the library, but not always able to make the best decision about which algorithm to choose. I realized that this field requires some understanding of math.

One option was to read the math piece by piece, but that wasn't working out so well for me; I was not able to connect the dots. This course's syllabus has topics like eigenvalues, QR decomposition, etc., which were mentioned in many of the reading materials. I think taking a comprehensive look at some (hopefully the majority) of the related math will help me connect those dots and lay down a good foundation of basics on which I can build a career path as a data scientist. I am not that much into research; I want to be able to make good decisions related to Data Science, ML, and DL, and to apply them well.

What I understood/achieved at the end of Lesson 1 (notebooks 0 and 1)

I now better understand how heavily science and engineering in the last century were shaped by matrix decomposition algorithms. These mathematical concepts have practical applications, including in the fields of ML and DL. I had seen some of this in Part 1 of the DL course, but here I understood that it is everywhere.

  • I saw some of the applications of linear algebra.
  • I read about floating-point arithmetic. I knew it was discrete, but I did not know that it isn't evenly spaced (see the first sketch after this list).
  • I understood that there is always a possible error between the number I have in mind and what the machine can actually represent. That gap (bounded by machine epsilon) is not very large, but it can grow quickly as errors compound across many operations; that is what we need to be aware of (see the second sketch after this list). Hopefully we will cover more on that later in the course.
  • Rachel went through some history of computers. I ignored that part; it is probably just there because every course gets some history.
  • I came to know that sometimes approximations are good enough. When that was mentioned, I remembered how Google uses approximations to speed up its online TensorFlow model training: the TPUs they are building are basically approximation machines, trading precision for speed. So that connected here.
  • Some terms that I had seen before but did not understand came up again: Bloom filters, BLAS. Maybe we will cover them later in this course?
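First sketch: to see the uneven spacing for myself, here is a minimal check using NumPy's np.spacing, which returns the distance from a number to the next representable float. The specific values shown assume standard IEEE 754 64-bit doubles.

```python
import numpy as np

# Floats are not evenly spaced: the gap to the next representable
# number grows with the magnitude of the number.
for x in [1.0, 1e3, 1e6, 1e9, 1e12]:
    print(f"spacing at {x:>8.0e}: {np.spacing(x):.3e}")

# Output (IEEE 754 64-bit doubles):
# spacing at    1e+00: 2.220e-16
# spacing at    1e+03: 1.137e-13
# spacing at    1e+06: 1.164e-10
# spacing at    1e+09: 1.192e-07
# spacing at    1e+12: 1.221e-04
```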
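Second sketch: machine epsilon itself is tiny, but errors can accumulate. A classic illustration (my own, not from the notebooks) is repeatedly adding 0.1, which has no exact binary representation, so a small rounding error sneaks into every step; again this assumes 64-bit floats.

```python
import numpy as np

# Machine epsilon: the gap between 1.0 and the next representable float.
eps = np.finfo(np.float64).eps
print(eps)  # ~2.22e-16

# 0.1 is not exactly representable in binary, so a tiny error
# enters every addition and accumulates over a million of them.
total = 0.0
for _ in range(1_000_000):
    total += 0.1
print(total)                   # something like 100000.00000133, not 100000.0
print(abs(total - 100_000.0))  # accumulated error, vastly larger than eps
```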