I'm about to start Part 2 of the course, so I'm trying to come to grips with the step up in complexity as quickly as I can. When I create my model, this is what I get:
```
TabularModel(
  (embeds): ModuleList()
  (emb_drop): Dropout(p=0.0)
  (bn_cont): BatchNorm1d(55, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (layers): Sequential(
    (0): Linear(in_features=55, out_features=221, bias=True)
    (1): ReLU(inplace)
    (2): BatchNorm1d(221, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (3): Linear(in_features=221, out_features=1500, bias=True)
    (4): ReLU(inplace)
    (5): BatchNorm1d(1500, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (6): Linear(in_features=1500, out_features=1500, bias=True)
    (7): ReLU(inplace)
    (8): BatchNorm1d(1500, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (9): Linear(in_features=1500, out_features=1500, bias=True)
    (10): ReLU(inplace)
    (11): BatchNorm1d(1500, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (12): Linear(in_features=1500, out_features=221, bias=True)
    (13): ReLU(inplace)
    (14): BatchNorm1d(221, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (15): Linear(in_features=221, out_features=55, bias=True)
  )
)
```
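If it helps, my understanding is that the printout above corresponds roughly to a direct construction like this (a sketch based on the printout, not my exact code — no categorical variables, 55 continuous inputs, and an output the same size as the input):

```python
from fastai.tabular import *  # fastai v1

# Sketch: empty emb_szs gives the empty ModuleList, and the hidden
# sizes match the Linear layers shown above (55 -> ... -> 55)
model = TabularModel(emb_szs=[], n_cont=55, out_sz=55,
                     layers=[221, 1500, 1500, 1500, 221])
```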
To add BatchSwapNoise to this generated model, would I need to call model.layers.add_module()? Also, what does the p in the parameters refer to?
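For concreteness, here is my rough sketch of what I think such a module would look like, assuming swap noise means each element is replaced, with probability p, by the same column's value from a random row of the batch (the class details are my guess, not from the lesson):

```python
import torch
import torch.nn as nn

class BatchSwapNoise(nn.Module):
    """With probability p, replace each element with the value from the
    same column in a randomly chosen row of the current batch."""
    def __init__(self, p):
        super().__init__()
        self.p = p  # per-element swap probability

    def forward(self, x):
        if not self.training:
            return x  # only corrupt inputs while training
        # For every element, pick a random donor row within the batch
        donor_rows = torch.randint(0, x.size(0), x.shape, device=x.device)
        swapped = x.gather(0, donor_rows)  # swapped[i, j] = x[donor_rows[i, j], j]
        mask = torch.rand_like(x) < self.p  # which elements actually get swapped
        return torch.where(mask, swapped, x)
```

And if add_module() is the wrong approach (since it would append the noise after the final Linear layer rather than applying it to the inputs), would prepending it be the right way, e.g. model.layers = nn.Sequential(BatchSwapNoise(0.15), *model.layers)?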
Thanks!