I want to get better at implementing papers without seeing the code first. Does anyone have recommendations for “easier” papers to practice on, or general math resources for doing so? (Doesn’t have to be deep-learning related, just math!) I’ve considered digging out a calc 1 or 2 textbook and writing code for its exercises (or maybe some discrete structures problems).
Implementing stuff from entry-level textbooks (calc 1/2, numerics, …) is probably a good place to start.
Do you have a specific paper in mind that gave you trouble? I have a math background, maybe we can look over it together?
It could be worth compiling such math-programming exercises into a repository, to help other developers who face the same difficulties.
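To give a flavor of what a textbook-to-code exercise might look like, here is a sketch of a classic calc 2 topic, the trapezoidal rule for numerical integration (the function name and toy integrand are my own choices, just for illustration):

```python
# Trapezoidal rule: integral of f from a to b is approximately
#   h * (f(a)/2 + f(a+h) + f(a+2h) + ... + f(b)/2)
# where h = (b - a) / n.

def trapezoid(f, a, b, n=1000):
    h = (b - a) / n
    total = (f(a) + f(b)) / 2.0   # endpoints get weight 1/2
    for i in range(1, n):
        total += f(a + i * h)     # interior points get weight 1
    return total * h

# Toy check against a known antiderivative: integral of x^2 on [0, 1] is 1/3.
print(trapezoid(lambda x: x * x, 0.0, 1.0))
```

Comparing the result against the exact answer from the textbook is a nice way to check that you translated the formula correctly.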
I was thinking along the lines of such a repository too for these exercises (in notebooks, of course!). No particular paper in mind, but, for example, with all these optimization techniques I look at the math and just get lost.
So would a good start be dragging out the calc 1 textbook and then moving on to calc 2? Or what are your thoughts (since you have the background)?
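On the optimization point: those papers often boil down to a two-line update rule once you strip the notation. As a hedged example (not from any specific paper discussed here), gradient descent with momentum is usually written as v ← μ·v − η·∇f(x), x ← x + v, which maps to code almost verbatim:

```python
# Sketch: gradient descent with momentum on the toy function f(x) = (x - 3)^2.
# Update equations:  v <- mu * v - lr * grad(x),  x <- x + v.
# Names (momentum_descent, mu, lr) are my own, chosen for readability.

def grad(x):
    return 2.0 * (x - 3.0)   # derivative of (x - 3)^2

def momentum_descent(x0, lr=0.1, mu=0.9, steps=200):
    x, v = x0, 0.0
    for _ in range(steps):
        v = mu * v - lr * grad(x)   # velocity accumulates past gradients
        x = x + v                   # move by the velocity, not the raw gradient
    return x

print(momentum_descent(0.0))  # converges near the minimum at x = 3
```

Once the plain version works, the paper-specific variant is usually a small edit to those two lines, which makes the math much less intimidating.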
might be helpful -
Listing of the mathematical notations used in the Mathematical Functions http://functions.wolfram.com/Notations/
This is from personal experience: you can try implementing regularizations and initializations first. Usually they only require (in fastai terminology) very minimal changes, often just a Callback.
You can check out my repo where I’ve implemented 3 papers that each only required a Callback.
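To illustrate why these are good first targets (in plain NumPy rather than fastai, so this is a generic sketch, not the repo’s code): both an initialization scheme and a regularizer typically reduce to a one-line change in the training loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Initialization example: Glorot/Xavier draws weights with
# variance 2 / (fan_in + fan_out) so activations keep a stable scale.
fan_in, fan_out = 64, 32
w = rng.normal(0.0, np.sqrt(2.0 / (fan_in + fan_out)), size=(fan_in, fan_out))

# Regularization example: L2 weight decay just adds wd * w to the gradient
# before the usual SGD step. (sgd_step and wd are illustrative names.)
def sgd_step(w, grad, lr=0.01, wd=1e-4):
    return w - lr * (grad + wd * w)

w = sgd_step(w, np.zeros_like(w))
print(w.shape)  # (64, 32)
```

Because each tweak touches a single, well-isolated point in training (weight creation, or the gradient step), they slot naturally into a callback hook without rewriting the rest of the loop.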
This sounds like a fabulous idea.
I know there are many people like me who took anything other than calculus at uni.
This means that when we look at quite a few papers, the maths can put us off going any further.
I’d personally pay for a book that converted Wolfram’s and similar equations to code.