Hyperparameter Optimization for binary classification on tabular data

Hi guys,

I am currently training a neural network for binary classification on a tabular dataset, using the tabular learner. Now I want to tune the hyperparameters, like the number of layers, the number of nodes per layer, the dropout of each layer, etc.

Does somebody know an easy way to do this?
I thought about Optuna, but I can’t find any examples/tutorials for it.
Any help would be really appreciated.

Thanks a lot!
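For context on what such tuners do: Optuna, bayes_opt, and similar libraries automate a search loop over hyperparameter settings. Here is that loop sketched as plain random search, with a made-up score() standing in for "train the tabular learner and return validation accuracy" (every name in this snippet is hypothetical, not fastai or Optuna API):

```python
import random

# Hypothetical stand-in for "build and fit the model, return validation accuracy".
# A real version would construct the tabular learner with these settings.
def score(n_layers, layer_size, dropout):
    return 1.0 - abs(n_layers - 2) * 0.1 - abs(layer_size - 200) / 1000 - abs(dropout - 0.1)

def random_search(n_trials=50, seed=0):
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        # Sample one candidate configuration from the search space.
        params = {
            "n_layers": rng.randint(1, 3),
            "layer_size": rng.choice([50, 100, 200, 400]),
            "dropout": rng.uniform(0.0, 0.5),
        }
        result = score(**params)
        # Keep the best-scoring configuration seen so far.
        if best is None or result > best[0]:
            best = (result, params)
    return best

best_score, best_params = random_search()
print(best_score, best_params)
```

Bayesian optimization replaces the blind random sampling with a model of the score surface, but the overall shape (propose params, evaluate, keep the best) is the same.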

If you’re using v1, here: https://github.com/muellerzr/Practical-Deep-Learning-For-Coders/blob/master/03b_Baysian.ipynb

If using v2, here: https://github.com/muellerzr/Practical-Deep-Learning-for-Coders-2.0/blob/master/Tabular%20Notebooks/02_Bayesian_Optimization.ipynb

Both are Bayesian Optimization with Tabular models.


Thank you very much! That looks really good.
Actually I don’t know whether I use v1 or v2, and I can’t figure out how to check.

Do you have a hint for me on that as well?

You use v1 if you call from fastai import x

You use v2 if you call from fastai2.tabular import x :wink:
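Besides looking at the imports, you can ask Python which packages are installed and at what version. A small stdlib-only sketch (during its development, fastai v2 shipped as the separate fastai2 package, which is why both names are checked):

```python
# Report the installed versions of the fastai packages, if present.
from importlib.metadata import version, PackageNotFoundError

def installed_version(pkg):
    """Return the installed version string for pkg, or None if not installed."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return None

for pkg in ("fastai", "fastai2"):
    v = installed_version(pkg)
    print(f"{pkg}: {v if v else 'not installed'}")
```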

Perfect, thank you :slight_smile:
I have just been reading an article about tabular data analysis where the author gives credit to you :smiley:



I have another question: I used your v2 tutorial now, but I get the error "AttributeError: 'tuple' object has no attribute 'type_tfms'" when I try to run this cell:
to = TabularPandas(df, procs=procs, cat_names=cat_names, cont_names=cont_names, y_names=y_names, y_block=y_block, splits=splits)

Is it possible that something changed with y_block in fastai v2?

Thank you very much!

I don’t see an issue on my end. What version of fastai2 and fastcore are you using? (pip show fastai2 fastcore). Also what environment?

So fastai2 is version 0.0.17 and fastcore is version 0.1.18.

I’m on Win10, using a Jupyter notebook with conda.

I haven’t tried fastai2 on Windows (or the subsystem); I use Google Colaboratory (so native Linux) and don’t see that issue.

I solved it: after moving everything to Colab I still had the same error. Then I found out that I had accidentally added a comma after "y_block = CategoryBlock()", like this: "y_block = CategoryBlock(),".
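For anyone hitting the same thing: a trailing comma after any Python expression wraps it in a one-element tuple, which is exactly why TabularPandas received a tuple instead of a CategoryBlock. A self-contained illustration, with a dummy class standing in for the real fastai one:

```python
# A trailing comma turns an assignment into a one-element tuple.
class CategoryBlock:  # dummy stand-in for the fastai class
    pass

y_block = CategoryBlock()        # the intended value
y_block_oops = CategoryBlock(),  # trailing comma -> a tuple!

print(type(y_block).__name__)       # CategoryBlock
print(type(y_block_oops).__name__)  # tuple
```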

Anyway, do I see it right that fastai v1 does not have an equivalent of y_block? I am very surprised: when I use v1 I get bad accuracy with the default parameters, around 10%, while I get 64% with the defaults from v2.
Where does this big difference come from?


If I had to guess, perhaps it thought you were doing regression instead? (just a possibility).

Probably, yes. However, I can’t find the option to choose between regression and classification in v1; if I’m right, there is no y_block parameter.
Where can I specify classification then?

I used your v2 version now for Bayesian optimization. After training, I want to use just the best model. How can I create a new model from the best params to make some predictions? @muellerzr

Take the best options (returned from optim.max), build a model with them, and train it just like the optimization loop did.

Yeah, I just tried it like that:
opt = optim.max["params"]
layers = [opt["layer_1"], opt["layer_2"], opt["layer_3"]]
and then:
TabularLearner(dls, TabularModel(layers=layers), lr=opt["lr"])
However, I then get this error: "TypeError: __init__() missing 3 required positional arguments: 'emb_szs', 'n_cont', and 'out_sz'"

However, I never specified 'emb_szs', 'n_cont', and 'out_sz' before, so I don’t know what to put there.

And another question: the optimum for n_layers was around 1.9. Since the code applies int(n_layers), that should mean just 1 layer, so why does it still give me recommendations for layer_2 and layer_3?
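On that last point: the search space is declared up front, so the optimizer proposes a value for every parameter on every trial; the fit function then simply ignores the layer sizes beyond int(n_layers). A stdlib mimic of that selection logic (build_layers and the parameter names are illustrative, not fastai or bayes_opt API):

```python
# Every parameter in the search space gets a value each trial, but only the
# first int(n_layers) layer sizes are actually used to build the model.
def build_layers(params):
    n = int(params["n_layers"])  # e.g. 1.9 -> 1
    all_sizes = [params["layer_1"], params["layer_2"], params["layer_3"]]
    return [int(s) for s in all_sizes[:n]]

proposal = {"n_layers": 1.9, "layer_1": 210.4, "layer_2": 97.2, "layer_3": 55.8}
print(build_layers(proposal))  # [210] -- layer_2/layer_3 were sampled but unused
```

So the "recommendations" for layer_2 and layer_3 in the best-params dict are just sampled values that never influenced the score when int(n_layers) was 1.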

You should do:

learn = tabular_learner(dls, layers=layers)

(and if there are any specific hyper-parameters such as wd, those should be passed into a tabular_config)
And then fit via:


Does this also work for regression tasks? The target remains the same for each iteration and I get the following error for the last iteration:

StopIteration: Queue is empty, no more objects to retrieve.

During handling of the above exception, another exception occurred:

ValueError: array must not contain infs or NaNs

@muellerzr Hi Zachary, I went over your super useful Bayesian_Optimization.ipynb and I have a question:
In the hps you specify the number of layers to be between 1 and 3, but in the fit_with function you apply int() to the randomly chosen (float) number of layers. That means int(2.02) = 2 hidden layers, but also int(2.9) = 2 hidden layers, so you can practically never get 3 hidden layers. Wouldn’t it be better to use round() instead of int()?
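To make the truncation concrete: int() truncates toward zero, so with floats drawn from [1, 3] the value 3 is essentially never reached, while round() gives each integer a fair share of the interval:

```python
# int() truncation vs round() on floats sampled from the [1, 3] range.
samples = [1.2, 1.6, 2.02, 2.49, 2.5, 2.9]

truncated = [int(x) for x in samples]
rounded = [round(x) for x in samples]  # note: Python 3 round() uses
                                       # banker's rounding, so round(2.5) == 2

print(truncated)  # [1, 1, 2, 2, 2, 2] -- 3 never appears
print(rounded)    # [1, 2, 2, 2, 2, 3]
```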



I do agree, I think that would be a better alternative and is what should be used :slight_smile:

I’ll update that notebook/lesson later today.