I have written a “notation-sympathetic” version of the [babi network implementation](https://github.com/madaan/pe/blob/master/babi-memn2n-withpaper.ipynb). The idea is to use variable names and operations that are as close as possible to the description in the paper. I hope that reading the paper alongside a notebook that uses familiar variable names will cut down the time it takes to understand it.
So if you:
a) Have not started looking into the babi-memN2N networks and want to catch up quickly,
b) Are unable to map the code to the paper, or
c) Want to refresh some NLP + DL concepts,
you may find the notebook useful.
The notebook starts by describing the input and the preprocessing steps taken (standard tokenization, word-to-id mapping, and so on). Since the primary goal of this implementation is pedagogical, terseness has given way to verbosity wherever needed.
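To make the preprocessing concrete, here is a minimal sketch of the kind of tokenization and word-to-id mapping involved (the function names and the reserved padding id are my assumptions for illustration, not taken from the notebook):

```python
import re

def tokenize(sentence):
    # Split into words and sentence-final punctuation; lowercase for a consistent vocabulary.
    return [tok.lower() for tok in re.findall(r"[\w']+|[.,?!]", sentence)]

def build_vocab(sentences):
    # Assign each word an integer id; id 0 is reserved here for padding (an assumption).
    vocab = {"<pad>": 0}
    for sent in sentences:
        for tok in tokenize(sent):
            if tok not in vocab:
                vocab[tok] = len(vocab)
    return vocab

sentences = ["Mary moved to the bathroom.", "Where is Mary?"]
vocab = build_vocab(sentences)
question_ids = [vocab[t] for t in tokenize("Where is Mary?")]
```

After this step, every story sentence and question becomes a list of integer ids, which is what the embedding layers of the model consume.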
If you are familiar with standard NLP preprocessing, you may prefer to skip directly to the model section, where the code is presented alongside the relevant snippet from the paper.
Thanks @jeremy, a lot of what’s in this notebook was derived from the one you provided.
EDIT: The above is only for single-hop networks.
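For readers who have not yet opened the paper, the single-hop forward pass can be sketched in a few lines of NumPy. This is only an illustrative toy (random weights, made-up dimensions, sentences summed into bag-of-words vectors), not the notebook's actual code:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy dimensions: vocabulary size V, embedding dim d (both assumptions).
V, d = 20, 8
rng = np.random.default_rng(0)
A = rng.normal(size=(V, d))  # input memory embedding
B = rng.normal(size=(V, d))  # question embedding
C = rng.normal(size=(V, d))  # output memory embedding
W = rng.normal(size=(d, V))  # final projection to vocabulary

story = [[1, 2, 3], [4, 5], [6, 2, 7]]  # sentences as word-id lists
question = [4, 3]

m = np.stack([A[s].sum(axis=0) for s in story])  # memory vectors m_i
c = np.stack([C[s].sum(axis=0) for s in story])  # output vectors c_i
u = B[question].sum(axis=0)                      # question vector u

p = softmax(m @ u)            # attention over memories: p_i = softmax(u^T m_i)
o = p @ c                     # response vector: o = sum_i p_i c_i
a_hat = softmax((o + u) @ W)  # predicted answer distribution: softmax(W(o + u))
```

Multi-hop versions stack this block, feeding o + u of one hop in as the question vector of the next; the notebook covers only the single hop above.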