Lesson 4 official topic

Thank you very much for this!

I like that explanation a lot. If I look at the distribution of predictions after training with the above loss function, I see

If I now tweak the loss function to something else (complete nonsense, of course!)

def mnist_loss(predictions, targets):
    predictions = predictions.sigmoid()
    # mind the change of the second argument from 1-predictions to 1+predictions
    return torch.where(targets==1, 1+predictions, predictions).mean() 

I see a different distribution:

As expected, changing the loss function will lead to the predictions being "optimized" differently.
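For reference, the loss function I started from is the one from the chapter (quoted from memory, so treat it as a sketch rather than an exact copy):

def mnist_loss(predictions, targets):
    predictions = predictions.sigmoid()
    # distance from 1 when the target is 1, distance from 0 otherwise
    return torch.where(targets==1, 1-predictions, predictions).mean()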


Hey gang!
I published my summary and quiz responses for lesson 4 on my blog.
This post includes another persistent animal for your enjoyment :grin:

This is in the lesson 4 notebook. This is probably a Python newbie question (definitely a pandas newbie question), but I'm trying to fundamentally understand how pandas modifies all records in this assignment and adds a new column:

import pandas as pd
df = pd.read_csv('train.csv')
print(df.head())
df['input'] = 'TEXT1: ' + df.context + ' TEXT2: ' + df.target + ' TEXT3: ' + df.anchor
print(df.head())

With what looks like a single concatenated string assignment, pandas has created a new column for all rows and filled it according to the logic in the string concatenation. I'm coming to Python from other languages, so is this a Pythonic thing? Or has pandas overridden object property assignment, and this expression is a shortcut to modify all rows? In another language you'd just end up with df['input'] as a property holding a single string value, or it might throw an error because e.g. df.context isn't a variable that can be concatenated.

I've had a look at the pandas docs with no luck, and poked around regarding Python overriding assignment operators. I found that it is possible to do this for object properties, but I didn't immediately see that in pandas or an explanation of how it works. I just need a pointer on what to look up to understand this behavior.

Thanks in advance,

Mark.

Python is weird!

You can override operators using what are called "dunder" methods. In this case the pandas DataFrame defines __add__, which overrides the + operator.

Here's the API: pandas.DataFrame.__add__ — pandas 2.1.3 documentation
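As a toy illustration of the mechanism (a minimal sketch, not pandas's actual implementation), any class can define __add__ so that + does whatever element-wise work you like:

class Wrapper:
    def __init__(self, values):
        self.values = values

    def __add__(self, other):
        # called for `wrapper + other`; broadcast the scalar over every element
        return Wrapper([v + other for v in self.values])

w = Wrapper(['a', 'b'])
print((w + '!').values)  # ['a!', 'b!']

pandas also implements the reflected version (__radd__), which is what Python falls back to when the plain string sits on the left, as in 'TEXT1: ' + df.context.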

Thanks very much Zander, that's incredibly helpful.


I don't have a sufficiently concise answer to elaborate on what zander posted, but here are some concepts that may help:

  • You can access columns in a DataFrame either by indexing (df['input']) or as its attribute (df.context).
  • pandas is built on top of NumPy, and the pandas Series is built on NumPy's ndarray.
  • From the NumPy docs:

At the core of the NumPy package, is the ndarray object. This encapsulates n-dimensional arrays of homogeneous data types, with many operations being performed in compiled code for performance.

  • NumPy has the ufunc (universal function) which is:

a function that operates on ndarrays in an element-by-element fashion, supporting array broadcasting, type casting, and several other standard features. That is, a ufunc is a "vectorized" wrapper for a function that takes a fixed number of specific inputs and produces a fixed number of specific outputs.

  • Here are their docs on ufunc basics, API reference, NumPy C Code Explanation and a guide on how to write your own ufunc (the introductory text is somewhat helpful, the rest of it I don't understand).
    • As an example, here is the source code for the add method for character arrays, which returns the numpy.add function, which is a ufunc. I can't spell out exactly how this translates to the line of code you are referencing, but conceptually it's related.
  • Here is NumPy's description of broadcasting, which comes up a lot when working with pandas (and also PyTorch). The single strings 'TEXT1: ', 'TEXT2: ' and 'TEXT3: ' are "broadcast" to all elements in the columns df.context, df.target and df.anchor when they are concatenated with the + operator (see the sketch after this list).
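Here is a minimal sketch of that broadcasting behaviour with a tiny made-up DataFrame (the column values are invented purely for illustration):

import pandas as pd

df = pd.DataFrame({'context': ['c1', 'c2'], 'target': ['t1', 't2'], 'anchor': ['a1', 'a2']})

# the scalar strings are broadcast across every row; the result is a new Series,
# and assigning it to df['input'] creates the column in one shot
df['input'] = 'TEXT1: ' + df.context + ' TEXT2: ' + df.target + ' TEXT3: ' + df.anchor
print(df['input'].tolist())
# ['TEXT1: c1 TEXT2: t1 TEXT3: a1', 'TEXT1: c2 TEXT2: t2 TEXT3: a2']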

I also prompted ChatGPT with some questions around this topic and reading the responses may spark further inquiry.

I have a question regarding the MNIST deep learning model that was built in Chapter 4. As I understand it, our intent was to classify the input into one of two categories, threes and sevens. Why, then, did we decide to use a linear model for that purpose? Personally, the first idea that came to my mind was a classification model such as logistic regression.

I attempted to run the chapter 10 notebook on Paperspace using one of the free machine configurations including GPU. It failed on the model tuning step, complaining about running out of GPU memory:

My question isn't primarily about the advice the error text provided (although I would not turn down advice on the advice), but about the claim that PyTorch reserved 5.37 GiB of the 7.79 GiB GPU memory capacity. Is that usual, and if so, doesn't that define a floor on GPU requirements that makes free resources not so useful?
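For what it's worth, here is how I have been inspecting the numbers (a sketch assuming a CUDA-enabled PyTorch install). "Reserved" is memory the caching allocator has grabbed from the driver, which includes the "allocated" live tensors plus cached blocks kept for reuse:

import torch

if torch.cuda.is_available():
    total = torch.cuda.get_device_properties(0).total_memory
    reserved = torch.cuda.memory_reserved(0)    # held by PyTorch's caching allocator
    allocated = torch.cuda.memory_allocated(0)  # actually occupied by live tensors
    print(f"total     {total/2**30:.2f} GiB")
    print(f"reserved  {reserved/2**30:.2f} GiB")
    print(f"allocated {allocated/2**30:.2f} GiB")

Lowering the batch size when building the DataLoaders is usually the first lever to pull, since most of the reserved memory scales with how much a single batch needs.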

NLP to unmask Satoshi Nakamoto?

I had this half-baked idea after seeing a news piece on bitcoin. Could NLP help to identify Satoshi Nakamoto, the author of the original bitcoin whitepaper? Satoshi Nakamoto is a pseudonym; the real identity of the author remains a mystery.

Could NLP help identify the author's real name? A couple of half-baked approaches:

Use the abstract as the known, the author as the unknown, and the Journal of Cryptography up to the publication date of the paper as the dataset. The test dataset is just the single bitcoin paper. This relies on the assumptions that the author published in the Journal of Cryptography prior to the bitcoin paper, and that abstracts are distinctive enough to tell authors apart. Multiple authors on a paper would be a sticky point: treat them as one author/token in the vocabulary? Use ULMFit due to the size/number of tokens required for abstracts. Getting all of the abstracts and authors quickly and easily is another sticky spot - does the Journal of Cryptography offer an API?

Or, use the authors in the References as the known and the paper's author as the unknown, again using the Journal of Cryptography. This assumes that researchers tend to reference certain other researchers more frequently. The same challenge applies here of getting all of the authors and references.
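A rough sketch of how the first approach might be framed with fastai (the CSV, its column names, and the bitcoin_abstract variable are all hypothetical, and I'm skipping the ULMFit language-model fine-tuning step for brevity):

import pandas as pd
from fastai.text.all import *

# hypothetical dataset: one row per paper, with its abstract and a single author label
df = pd.read_csv('crypto_abstracts.csv')  # columns: 'abstract', 'author'

dls = TextDataLoaders.from_df(df, text_col='abstract', label_col='author', valid_pct=0.2)
learn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy)
learn.fine_tune(4, 1e-2)

# then ask the classifier who wrote the bitcoin whitepaper's abstract
learn.predict(bitcoin_abstract)  # bitcoin_abstract: the whitepaper abstract as a plain string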


No such file or directory: '/root/.fastai/data/imdb/models/finetuned.pth'

I'm not sure if this line: learn.save_encoder('finetuned')
is working properly, because later in the notebook this line: learn = learn.load_encoder('finetuned')
throws the error shown above.

Does anyone have an idea of what is going on, where my mistake may be?
Thanks,
Chris

Hi all, this is a silly question but I've not been able to resolve it on my own. How do I save/load the encoder while in Kaggle?

I am trying out the ULMFit method on a dataset using Kaggle, and I'm having issues saving and loading the fine-tuned encoder. I'm quite certain it's a directory issue, because the error shows it's trying to pull from fastai's data location. I tried to set my model directory to /kaggle/working using the code below.

code snippet:
learn_lm.model_dir = '/kaggle/working/'
learn_lm.save_encoder('finetuned')

When I run the line below:
learn_lm = learn_lm.load_encoder('finetuned')

I get the following error:
FileNotFoundError: [Errno 2] No such file or directory: '/root/.fastai/data/imdb_sample/models/finetuned.pth'
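In case it helps with diagnosis, my understanding is that both save_encoder and load_encoder resolve the file as learn.path/learn.model_dir/<name>.pth, so the learner that loads has to point at the same place as the one that saved. A sketch of what I was expecting to work (learn_clas is hypothetical here, standing in for the downstream classifier learner):

from pathlib import Path

learn_lm.path = Path('/kaggle/working')  # base directory the learner writes under
learn_lm.model_dir = '.'                 # so the file lands directly in /kaggle/working
learn_lm.save_encoder('finetuned')       # -> /kaggle/working/finetuned.pth

# the classifier learner must point at the same location before loading
learn_clas.path = Path('/kaggle/working')
learn_clas.model_dir = '.'
learn_clas = learn_clas.load_encoder('finetuned')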

link to my notebook:
IMDB ULMFit

Thank you in advance!

Thanks. I just made sure I copied Jeremy's notebook (which is linked to the patent competition already).

I guess you mean 0.2?

In the video Jeremy says that ULMfit might work better (cheaper on compute costs) when the documents are relatively large.