I believe that trying to explain something deepens your own understanding. As I am doing this course alone, I am writing after every video so that I can arrange my thoughts. Perhaps this will turn out to be useful to others later too.

Video 2 took one example of matrix factorization (topic modeling) and explored it using two techniques: SVD and NMF. It was not about understanding how SVD and NMF work from a very detailed Maths perspective, but an overview showing how to use them.

The newsgroups dataset was used throughout this video to demonstrate that. This was my first time seeing clustering happen via code. So that is the first use case of matrix factorization that I have seen, and I think it has many applications. It would be interesting to come back to this at the end of the course to see if I can actually use it for a real problem.
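To make the idea concrete for myself, here is a minimal sketch of what the video demonstrated (this is not the actual course code, and the tiny hand-made term-document matrix is entirely my own invention): factor a documents-by-terms count matrix with SVD and with a bare-bones NMF (Lee–Seung multiplicative updates), then read off "topics" from the factor rows.

```python
import numpy as np

rng = np.random.default_rng(0)

vocab = ["ball", "goal", "team", "vote", "law", "court"]
# Rows = documents, columns = vocabulary terms (word counts).
# Docs 0-2 are "sports"-flavoured, docs 3-5 are "politics"-flavoured.
A = np.array([
    [3, 2, 4, 0, 0, 0],
    [2, 3, 3, 0, 1, 0],
    [4, 1, 2, 0, 0, 0],
    [0, 0, 0, 3, 2, 4],
    [0, 1, 0, 2, 4, 3],
    [0, 0, 0, 4, 3, 2],
], dtype=float)

# --- SVD: A = U @ diag(s) @ Vt; rows of Vt act like "topics" ---
U, s, Vt = np.linalg.svd(A, full_matrices=False)

def top_words(topic_row, n=3):
    # Largest-magnitude entries of a topic vector = its key words.
    return [vocab[i] for i in np.argsort(-np.abs(topic_row))[:n]]

print("SVD topic 1:", top_words(Vt[0]))
print("SVD topic 2:", top_words(Vt[1]))

# --- NMF: A ~= W @ H with W, H >= 0 (multiplicative updates) ---
k = 2
W = rng.random((A.shape[0], k)) + 0.1
H = rng.random((k, A.shape[1])) + 0.1
for _ in range(500):
    H *= (W.T @ A) / (W.T @ W @ H + 1e-9)
    W *= (A @ H.T) / (W @ H @ H.T + 1e-9)

print("NMF topic 1:", top_words(H[0]))
print("NMF topic 2:", top_words(H[1]))
```

On real text you would build `A` with something like scikit-learn's `CountVectorizer` and use its `NMF` class instead of hand-rolled updates; the toy version just makes the shapes and the idea visible.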

I found some interesting things that I can probably do:

- Text Analysis with Topic Models for the Humanities and Social Sciences: https://de.dariah.eu/tatom/index.html
- Face dataset decomposition: http://scikit-learn.org/stable/auto_examples/decomposition/plot_faces_decomposition.html#sphx-glr-auto-examples-decomposition-plot-faces-decomposition-py
- Collaborative filtering: http://www.quuxlabs.com/blog/2010/09/matrix-factorization-a-simple-tutorial-and-implementation-in-python/

The first one will take some time, so I will do it later, but I will try the other two as they do not seem like they will take long. The face dataset would be interesting, as it is something I had looked at earlier but did not completely understand. I also heard eigenvalues mentioned (maybe that was in Video 1), which I want to understand, as I have come across the term in too many other places by now.
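One connection between eigenvalues and this video's material that I can verify with a quick numpy check (the matrix below is just a made-up example): the singular values of a real matrix A are the square roots of the eigenvalues of AᵀA.

```python
import numpy as np

A = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 0.0]])

# Singular values of A, largest first.
s = np.linalg.svd(A, compute_uv=False)

# Eigenvalues of the symmetric matrix A.T @ A, sorted descending.
eigvals = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]

# A is 2x3, so A.T @ A has one zero eigenvalue; the nonzero ones
# should equal the squared singular values.
print(np.round(s**2, 6))
print(np.round(eigvals[:2], 6))
```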

The parts where Rachel mentioned that SVD is characterized by making the factors orthogonal, while NMF is characterized by the non-negativity constraint, were helpful. If I look up any Maths article on Wikipedia, it is usually way too complicated to understand. There are so many variations of an algorithm that what characterizes a family of algorithms becomes hard to find in all the equations. That is the one thing I have found off-putting about going through the Maths while trying to learn ML/DL.
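That characterization is easy to check in code. The sketch below (a random matrix of my own choosing) confirms that the factors numpy's SVD returns are orthonormal, which is exactly what `U.T @ U == I` encodes; NMF, by contrast, makes no such promise, its only constraint being that both factors are non-negative.

```python
import numpy as np

rng = np.random.default_rng(42)
A = rng.random((5, 4))

# Thin SVD: U is 5x4, Vt is 4x4.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

print(np.allclose(U.T @ U, np.eye(4)))    # columns of U are orthonormal
print(np.allclose(Vt @ Vt.T, np.eye(4)))  # rows of Vt are orthonormal
```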

I am not 100% sure what each row/column of the decomposed matrices means exactly right now. I probably need to read up a bit more and play around with that.
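My current reading of the shapes, which may well be incomplete (the numbers below are arbitrary): for a documents × terms matrix A factored as A ≈ W·H with k topics, each row of H is one topic expressed as weights over the vocabulary, and each row of W is one document expressed as a mix of topics.

```python
import numpy as np

n_docs, n_terms, k = 6, 8, 2
rng = np.random.default_rng(1)
A = rng.random((n_docs, n_terms))

# Build a rank-k factorization from the truncated SVD:
U, s, Vt = np.linalg.svd(A, full_matrices=False)
W = U[:, :k] * s[:k]   # documents x topics
H = Vt[:k, :]          # topics x terms

print(W.shape)  # (6, 2): one row per document, one column per topic
print(H.shape)  # (2, 8): one row per topic, one column per term
```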

By now I have watched Video 1 completely more than once. I am going to try not to get hung up on things I don't understand 100%, unless that is stopping me from getting through the rest of the material. Having gone through half of Jeremy's course, I have understood that this is how I can expect fast.ai courses to be on a per-video basis: go through the next videos and I will probably understand. So this time I am going about it a different way. I am keeping a spreadsheet of things that I do not understand and deciding when I should come back to them at a later point in time.