Another treat! Early access to Intro To Machine Learning videos


(Aditya) #374

proc_df always fills missing values with the median…

And I guess what you suggest isn’t actually done…
(Jeremy explained this later)

Just think: what your model learned about a particular year (say, the split point) in the training set will be completely different from what it is validated on…

That would do nothing beneficial and might even make the model collapse… (especially if the sizes of the splits around that year are, say, in a 9:1 ratio and our model gives wrong predictions)

Thanks…


(Jeremy Howard) #375

Your concern is quite right, in a strict mathematical sense. For most real-world datasets (including this one) this won’t be an issue. If you do it at a more granular level however it can become an issue, and my friends Nina Zumel and John Mount have written an excellent paper and library about how to handle that situation if you’re interested: https://arxiv.org/abs/1611.09477

It’s always possible to do smarter feature engineering, but the trick is to know when it’s helpful and worth the investment of time. In this case, as you’ll see later in the course, creating a time difference variable doesn’t generally improve the predictive accuracy of a random forest, but can help with interpretation.
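
For anyone wondering what such a variable looks like in practice: a time difference feature might be the machine’s age at sale. A minimal pandas sketch (saledate and YearMade are columns from the Bulldozers dataset used in the course; age_at_sale is just an illustrative name, and the data rows are made up):

import pandas as pd

# Illustrative only: derive a time difference feature (machine age at sale)
# from Bulldozers-style columns.
df = pd.DataFrame({'saledate': pd.to_datetime(['2006-11-16', '2004-03-26']),
                   'YearMade': [2004, 1996]})
df['age_at_sale'] = df['saledate'].dt.year - df['YearMade']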


(Axel Straminsky) #376

Jeremy, in Lecture 7, approximately at minute 17:20, you talk about what to do when you have an unbalanced dataset, and you refer to a paper that found that oversampling the less common class was the best approach. Do you remember which paper it was?


(Jeremy Howard) #377

No, I’m afraid not. If anyone digs it up, let me know! It’s probably in my Twitter favorites or retweets, so that would be a good place to search.


(Axel Straminsky) #378

I think I found the paper: https://arxiv.org/pdf/1710.05381.pdf
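
For reference, here is a minimal sketch of oversampling the minority class with plain scikit-learn (the array names and the 90/10 split are made up for illustration):

import numpy as np
from sklearn.utils import resample

X = np.random.randn(100, 5)
y = np.array([0] * 90 + [1] * 10)   # 90/10 class imbalance

# Resample the minority class with replacement until it matches the majority.
X_min_up, y_min_up = resample(X[y == 1], y[y == 1], replace=True,
                              n_samples=int((y == 0).sum()), random_state=42)
X_bal = np.vstack([X[y == 0], X_min_up])
y_bal = np.concatenate([y[y == 0], y_min_up])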


(Jeremy Howard) #379

Yes, that’s it! Nice search-fu 🙂


(vinay varma) #380

Hi Jeremy, lesson 2 of this playlist is not working.


#381

Saw this and thought some of you might like it. Not sure of a better place to post it.


(An Vuong) #382

Hi everyone, I just got started on this ML course and I’m currently stuck at the Subsampling cell. Does anyone know how to resolve this error?

This error appears both on my Google VM and my local desktop .__.


(Alex L) #383

Looking at the API of the library, I think you have to change it to

df_trn, y_trn, _ = proc_df(df_raw, 'SalePrice')


(Aditya) #384

What is regularised target encoding?
Any tips on this?
@radek @jamesrequa @alessa (sorry all)

Is it like subtracting the min value of each column?
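
For what it’s worth: target encoding replaces a category with a statistic of the target (usually its mean), and the "regularised" part means smoothing or cross-validating that statistic so rare categories don’t simply memorise the target. This is the situation the Zumel & Mount paper linked above addresses. A minimal smoothing sketch (all names and the smoothing constant are illustrative):

import pandas as pd

# Sketch of mean target encoding with smoothing ("regularisation").
df = pd.DataFrame({'cat': list('aabbbcc'), 'y': [1, 0, 1, 1, 1, 0, 0]})

global_mean = df['y'].mean()
stats = df.groupby('cat')['y'].agg(['mean', 'count'])
alpha = 5.0  # higher alpha pulls rare categories towards the global mean
stats['enc'] = ((stats['count'] * stats['mean'] + alpha * global_mean)
                / (stats['count'] + alpha))
df['cat_enc'] = df['cat'].map(stats['enc'])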


(An Vuong) #385

I already tried that myself; the cell passes OK, but later cells yield different results compared to the videos 🙁

Furthermore, the error stated that it expected 2 arguments, so it’s really confusing.


(Aditya) #386

What exactly do you want to accomplish?

Have a look here


(Alex L) #387

Try

data = proc_df(df_raw, 'SalePrice')  # proc_df returns three values: df, y, nas
df_trn, y_trn = data[0], data[1]     # take the dataframe and target; ignore nas

(An Vuong) #388

Hey, this works! In fact, df_trn, y_trn, _ = proc_df(df_raw, 'SalePrice') works too after I restarted my PC @_@. Thanks a lot.


(swetha Godi) #389

Hi everyone, I just started this ML course. Can someone help me understand the parameters passed in the functions below?

1. def fit(self, X, y, sample_weight=None): What are X and y here?

2. m.fit(df, y): What are df and y?

3. def print_score(m):
       res = [rmse(m.predict(X_train), y_train), rmse(m.predict(X_valid), y_valid),
              m.score(X_train, y_train), m.score(X_valid, y_valid)]
       if hasattr(m, 'oob_score_'): res.append(m.oob_score_)
       print(res)

   What are X_train and y_train here?

Thanks in advance!


(Aditya) #390

I guess you should dig into some books a little, as these are standard notations used everywhere in ML… (don’t take it otherwise)

  • df is the DataFrame
  • X_train etc. are the training, validation, and test sets
  • y is the target variable as a NumPy ndarray
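
To make those notations concrete, here is a minimal scikit-learn sketch (the data is random, purely to show the shapes and names):

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# X holds the feature columns, y the target values we want to predict.
X = np.random.randn(200, 4)
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + np.random.randn(200) * 0.1

X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=0)

m = RandomForestRegressor(n_estimators=40, oob_score=True, random_state=0)
m.fit(X_train, y_train)            # fit(X, y): learn from features and target
print(m.score(X_valid, y_valid))   # R^2 on the held-out validation set
print(m.oob_score_)                # out-of-bag score, as used in print_score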

(Jeremy Howard) #391

This is designed to be a standalone intro to machine learning - we shouldn’t be asking people to read other books to understand it! It sounds like we may need to add more information to the notebooks to help people interpret them.


(Aditya) #392

Actually I am working on collecting different shorthands fast.ai uses…

Will share once I gather enough…


(Rishaan S Patel) #393

Is the low accuracy when you one-hot encode (OHE) all the variables because each tree selects a random subset of the features? Each tree would then have less information to learn from.
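
One way to explore this hypothesis is to compare the same forest on integer-coded vs one-hot-encoded versions of the data: with max_features < 1, each split samples from many low-information dummy columns instead of one rich categorical column. A rough sketch with synthetic data (names made up; results will vary):

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Synthetic data where one high-cardinality categorical drives the target.
rng = np.random.default_rng(0)
cat = rng.integers(0, 50, size=2000)
y = np.sin(cat) + rng.normal(scale=0.1, size=2000)

X_codes = pd.DataFrame({'cat': cat})                       # integer codes
X_ohe = pd.get_dummies(X_codes['cat'].astype('category'))  # 50 dummy columns

m = RandomForestRegressor(n_estimators=40, max_features=0.5, random_state=0)
print(cross_val_score(m, X_codes, y).mean())  # every split sees the category
print(cross_val_score(m, X_ohe, y).mean())    # each split sees only some dummies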