How does one edit the layers of a Tabular Learner? Specifically, I am doing multi-output regression, but I want the activations to sum to 1, because in my case the targets are fractions that add up to 1. By default (or at least when you specify a y_range), the Tabular Learner uses SigmoidRange as its final activation, but I would like to change this to a softmax so that the outputs sum to 1 (model output shown below). How can I change this final layer to a softmax? Alternatively, could I skip specifying a y_range (and therefore not get a sigmoid at the end) and then append a softmax as the final layer?
y_labels = ['fraction1', 'fraction2', 'fraction3', 'fraction4', 'fraction5']
from fastai.tabular.all import *

procs = [Normalize]
to = TabularPandas(df,
                   procs=procs,
                   cat_names=[],
                   cont_names=feature_labels,
                   y_names=y_labels,
                   splits=splits,
                   y_block=RegressionBlock(n_out=len(y_labels)))
dls = to.dataloaders(bs=512, num_workers=0)
learn = tabular_learner(dls, layers=[500, 250],
                        metrics=[mse, rmse, mae],
                        y_range=(0.0, 1.0),
                        n_out=len(y_labels))
learn.model
TabularModel(
  (embeds): ModuleList()
  (emb_drop): Dropout(p=0.0, inplace=False)
  (bn_cont): BatchNorm1d(125, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (layers): Sequential(
    (0): LinBnDrop(
      (0): BatchNorm1d(125, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (1): Linear(in_features=125, out_features=500, bias=False)
      (2): ReLU(inplace=True)
    )
    (1): LinBnDrop(
      (0): BatchNorm1d(500, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (1): Linear(in_features=500, out_features=250, bias=False)
      (2): ReLU(inplace=True)
    )
    (2): LinBnDrop(
      (0): Linear(in_features=250, out_features=5, bias=True)
    )
    (3): SigmoidRange(low=0.0, high=1.0)
  )
)
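
In case it clarifies what I mean, here is an untested sketch of the two approaches I have in mind. It assumes learn.model.layers is a plain nn.Sequential that supports item assignment, which is what the printout above suggests:

import torch.nn as nn

# Option 1: swap the SigmoidRange at index 3 for a softmax.
# dim=1 makes each row of activations (one per sample) sum to 1.
learn.model.layers[3] = nn.Softmax(dim=1)

# Option 2: build the learner without y_range (so no SigmoidRange is
# appended), then attach a softmax as a new final layer.
learn = tabular_learner(dls, layers=[500, 250],
                        metrics=[mse, rmse, mae],
                        n_out=len(y_labels))
learn.model.layers = nn.Sequential(*learn.model.layers, nn.Softmax(dim=1))

Would either of these work, or is there a more idiomatic fastai way to do it?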