Lesson 14 in-class

Looking forward to your questions!

Remote participants, are you able to see and hear the live stream?

Yes, it's good for me.

For me, too.

It's good.

What is the name/email of the guy from the SF Data Institute?

@benediktschifferer David Uminsky (duminsky@usfca.edu).
Mindi Mysliwiec (mmysliwiec@usfca.edu) is also a useful contact.

Would you liken the use of embeddings (from a neural network) to extraction of implicit features? Or can we think of it more like what a PCA would do (i.e. dimensionality reduction)?
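
For reference, here's a minimal sketch of what an entity embedding layer looks like (PyTorch; the category and sizes are illustrative, not from the lesson notebook). Because it is trained jointly with the supervised loss, it acts more like learned, task-specific implicit features than an unsupervised projection such as PCA.

```python
import torch
import torch.nn as nn

n_stores, emb_dim = 1115, 10            # illustrative sizes (Rossmann has 1115 stores)
store_emb = nn.Embedding(n_stores, emb_dim)

store_ids = torch.tensor([0, 3, 42])    # integer-encoded categorical values
store_vecs = store_emb(store_ids)       # (3, 10) dense vectors, trained with the rest of the net
print(store_vecs.shape)                 # torch.Size([3, 10])
```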

In this particular example, do you think the granularity of the data matters, as in per day vs. per week vs. per month? Is one better than the other?

What is the test set? Is it from some time after the training data?

Do you know if there’s any work that compares (for structured data) supervised embeddings like these ones to embeddings that come from an unsupervised paradigm (e.g. autoencoder)? It seems like you’d get more useful-for-prediction embeddings with the former case, but if you wanted “general purpose” embeddings, you might prefer the latter.

@thunderingtyphoons yes, the test set is from after the training set
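
A minimal sketch of that kind of time-based split in pandas (the column names and cutoff are made up for illustration):

```python
import pandas as pd

# Toy frame standing in for the training data
df = pd.DataFrame({
    "Date": pd.date_range("2015-01-01", periods=12, freq="W"),
    "Sales": range(12),
})

# Hold out the most recent period rather than a random sample,
# mirroring a test set that comes after the training data in time
cutoff = df["Date"].max() - pd.Timedelta(weeks=4)
train = df[df["Date"] <= cutoff]
valid = df[df["Date"] > cutoff]
```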

.ix is deprecated; use .loc
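
For example (toy DataFrame, just to show the replacement):

```python
import pandas as pd

df = pd.DataFrame({"Sales": [100, 200]}, index=["a", "b"])

# old (deprecated, removed in recent pandas):
# df.ix["a", "Sales"]

df.loc["a", "Sales"]    # label-based indexing
df.iloc[0, 0]           # purely positional indexing
```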

When you use embeddings from a supervised model in another model, do you have to worry about data leakage?

How are the googletrend and weather datasets obtained? I don't see them in the Kaggle data.

OK, I got it from one of the discussion posts: https://www.kaggle.com/c/rossmann-store-sales/discussion/17229

@renjithmadhavan I was just searching for that link but you beat me to it!

Is this similar to a windowing function?

Is there a reason to think that the current approach would be problematic with sparse data?

For the features that are “time until” an event, how do you deal with the fact that you might not know when the last event in the data is?
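
One way to handle that (an illustrative sketch, not necessarily what the lesson notebook does): find the date of the next event with a backward fill, and explicitly decide how to fill the rows that fall after the last known event.

```python
import pandas as pd

# Toy daily frame with an event flag (e.g. a promo day)
df = pd.DataFrame({
    "Date": pd.date_range("2015-01-01", periods=8, freq="D"),
    "Promo": [0, 0, 1, 0, 0, 0, 1, 0],
})

# Date of the next event for each row: keep only event dates, then back-fill
next_event = df["Date"].where(df["Promo"] == 1).bfill()
df["DaysUntilPromo"] = (next_event - df["Date"]).dt.days

# Rows after the last known event have no "next" date and end up NaN;
# pick a policy explicitly, e.g. a sentinel value or the column maximum
df["DaysUntilPromo"] = df["DaysUntilPromo"].fillna(df["DaysUntilPromo"].max())
print(df)
```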