What should be done in case someone is not able to understand the research papers?

I have been trying to follow along with the suggested reading materials for lessons like this one on Exploring the Limits of Language Modeling. But quite often I come across things I have no idea about. Quoting from the mentioned paper:

Count-based approaches (based on statistics of N-grams) typically add smoothing which account for unseen (yet possible) sequences, and have been quite successful.

I have no clue what “count-based approaches”, “statistics of N-grams”, or “add smoothing” mean. I understand that the simplest thing to do is to google the terms. But papers reference many other papers and many more such terms, and if I search for each one it quickly becomes recursive. That seems like a bad way to fill the gaps in my knowledge.

Is it just me, or do other people face this issue too? What do you do when you run into it? Any advice on how I could proceed? I understand that fast.ai takes a top-down approach and we shouldn’t get bogged down in details. But at some point you have to, right? Otherwise my knowledge will be very superficial. No?

You can certainly ask about any term you don’t understand here on the forums. In Part 2 of the Deep Learning course here on fast.ai you get to implement a number of current research papers, and doing that will teach you a lot about how to approach this kind of task.
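
To take your example: a “count-based approach” just means estimating probabilities directly from N-gram counts in a corpus, and “smoothing” reserves some probability mass for sequences the corpus never contained. Here is a toy Python sketch of a bigram model with add-one (Laplace) smoothing — my own illustration of the general idea, not code from the paper:

```python
from collections import Counter

# Toy corpus; a real model would be trained on millions of words.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Statistics of N-grams": count how often each bigram (word pair) occurs.
bigram_counts = Counter(zip(corpus, corpus[1:]))
unigram_counts = Counter(corpus)
vocab_size = len(set(corpus))

def bigram_prob(prev, word):
    # Add-one smoothing: pretend every possible bigram was seen once more
    # than it was, so unseen (yet possible) sequences get probability > 0.
    return (bigram_counts[(prev, word)] + 1) / (unigram_counts[prev] + vocab_size)

print(bigram_prob("the", "cat"))  # seen bigram: relatively high probability
print(bigram_prob("dog", "mat"))  # unseen bigram: small but non-zero
```

Without the +1 terms, the unseen bigram would get probability zero, which is exactly the problem the sentence you quoted is talking about.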

Join a couple of machine learning feeds on Reddit. Many papers are discussed there, and people often offer a tl;dr.

One thing I thought of: if related terms keep appearing, maybe just find the original source or a survey of the research and go through that. I was seeing “vector space models” a lot, so I looked it up and found this paper, which I’m currently reading. I guess I can try to read the more basic papers first, and if something is still unclear I can come and ask about it here on the forums.
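
As far as I can tell so far, the basic idea of a vector space model is just representing texts as vectors of term counts and comparing them with something like cosine similarity. A toy sketch of my current understanding (just the core idea, nothing from the paper itself):

```python
import math
from collections import Counter

docs = ["the cat sat on the mat",
        "the dog sat on the rug",
        "stock markets fell sharply today"]

# Represent each document as a vector of term counts.
vectors = [Counter(doc.split()) for doc in docs]

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = lambda v: math.sqrt(sum(c * c for c in v.values()))
    return dot / (norm(a) * norm(b))

print(cosine(vectors[0], vectors[1]))  # related sentences: higher score
print(cosine(vectors[0], vectors[2]))  # unrelated sentences: lower score
```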