Hey all, first post!
I’ve got some data that I’ve been toying around with, and the dependent variable is either class A, B, or C.
When I build a tabular learner:
g = (TabularList.from_df(df=df, cat_names=cat_variables, cont_names=cont_variables, procs=procs)
     .split_by_rand_pct()
     .label_from_df(cols=dep_var)  # dep_var = name of the target column
     .databunch(device=defaults.device, bs=8196))
learn = tabular_learner(g, layers=[10000, 5000], ps=[0.001, 0.01], emb_drop=0.04, metrics=accuracy)
I obviously get predictions with 4 outputs: A, B, C, and None. It’s pretty accurate too, around 90%. However, my question is as follows: is there a way to force the tabular learner to only ever pick A or B? This works if I remove all the C and None examples from the data, but I’d like to leave them in. They have value since this is a demographics-based dataset; I just specifically want to avoid the scenario of the model predicting C or None.
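For context, one thing I was imagining (purely hypothetical, nothing fastai-specific, all names made up) is keeping the 4-class model but masking its probabilities after the fact, so the argmax can only ever land on A or B:

```python
# Toy post-hoc masking: the model still scores all 4 classes, but we
# restrict the final pick to an allowed subset. All names are hypothetical.
classes = ["A", "B", "C", "None"]
allowed = {"A", "B"}

def predict_allowed(probs, classes=classes, allowed=allowed):
    """probs: per-class probabilities from the model's softmax, in `classes` order."""
    best = None
    for cls, p in zip(classes, probs):
        if cls in allowed and (best is None or p > best[1]):
            best = (cls, p)
    return best[0]

print(predict_allowed([0.1, 0.2, 0.6, 0.1]))  # model prefers C overall, but we emit "B"
```

No idea if that’s the idiomatic way to do it, hence the question.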
Edit: I’ve got a lot of learning to do, so, I’m sorry if this is a dumb question!
You do it just how you described. If you want a case for C, then you train rest vs. C as well: a “tree”-ish design built out of a bunch of tabular models. I’ve done this in my research, and sometimes it gives quite good results.
Thanks so much for confirming my hunch on training multiple models!
I’m quite fortunate the size of my dataset is so big: 13 million rows. My C’s and None’s don’t total more than 2 million.
I was just hesitant because Jeremy seems big on not altering your data if you can avoid it.
Kind of halfway related to this topic: do you know what happens to procs when you export a model and try single-row inference?
newly_created_row = some_df_values  # e.g. from a form input
learn = load_learner(path, 'some_exported_model.pkl')
edit: the relevant and not-clear-to-me part from the docs is: “As in the other applications, we just have to type learn.export() to save everything we’ll need for inference (here it includes the inner state of each processor).”
Procs are the procs of our training data, so FillMissing, Normalize, etc. are fit on our original training data.
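To see why that stored state matters, here’s a toy, hand-rolled Normalize (not fastai’s actual class, just an illustration): it remembers the training mean/std, and at inference it re-applies those same training statistics to a single new value rather than recomputing anything:

```python
# Toy stand-in for a proc: fit once on training data, then reuse that
# exact state for every new row at inference time.
class Normalize:
    def fit(self, values):
        self.mean = sum(values) / len(values)
        var = sum((v - self.mean) ** 2 for v in values) / len(values)
        self.std = var ** 0.5 or 1.0   # guard against zero std
        return self

    def __call__(self, v):             # applied to one new value at inference
        return (v - self.mean) / self.std

proc = Normalize().fit([10, 20, 30])   # "trained" on the original data
print(proc(20))  # 0.0 — the new value is scaled with the TRAINING mean/std
```

This is why exporting the learner has to include the inner state of each processor: a new row must be transformed exactly as the training rows were.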
So, I need to rerun procs on an entirely new row that hasn’t been seen before?
Nope! Just call learn.predict() with some DataFrame row. Think of procs the same as our image transforms (or processor); they’re already stored in our learner.
Damn, I was hoping you wouldn’t say that. I’m just getting really crappy predictions on things that are intuitively true, and I was hoping it was related to procs.
How similar is your test data to your training data? Maybe there’s data missing in columns that isn’t missing in your training data? A few things to try.
Also, dicts are not the same as DataFrames; learn.predict wants an actual DataFrame row (a Series), not a dict.
Keep being awesome, dude @muellerzr .