But it should show: Conv3d 1 x 64 x 5 x 304 x 160 1,792 True
I have hacked module_summary(learn,*xb) in a personal notebook to show the missing size. But there are so many undocumented operations and dependencies in the code that I’d surely break something by offering a fix.
Not sure what we are looking at (or expecting to see) here… Do you have a self-contained code snippet or gist that can show more details? (e.g. what are all those numbers supposed to be…)
Sorry for the confusion. It is one line printed by learn.summary(). It shows that when a layer's activations are 5-dimensional, there is not enough room in the Output Shape column to display the full size.
Here is the complete output of learn.summary():
NetworkR (Input shape: ['1 x 1 x 5 x 608 x 320'])
================================================================
Layer (type) Output Shape Param # Trainable
================================================================
ReplicationPad3d 1 x 1 x 7 x 610 x 3 0 False
________________________________________________________________
Conv3d 1 x 64 x 5 x 304 x 1,792 True
________________________________________________________________
FixedBN3d 1 x 64 x 5 x 304 x 128 True
________________________________________________________________
Conv3d 1 x 128 x 5 x 304 x 221,312 True
________________________________________________________________
FixedBN3d 1 x 128 x 5 x 304 x 256 True
________________________________________________________________
Conv3d 1 x 128 x 5 x 304 x 442,496 True
________________________________________________________________
FixedBN3d 1 x 128 x 5 x 304 x 256 True
________________________________________________________________
Conv3d 1 x 256 x 5 x 152 x 884,992 True
________________________________________________________________
FixedBN3d 1 x 256 x 5 x 152 x 512 True
________________________________________________________________
Conv3d 1 x 256 x 5 x 152 x 1,769,728 True
________________________________________________________________
FixedBN3d 1 x 256 x 5 x 152 x 512 True
________________________________________________________________
Conv3d 1 x 256 x 5 x 152 x 1,769,728 True
________________________________________________________________
FixedBN3d 1 x 256 x 5 x 152 x 512 True
________________________________________________________________
Conv3d 1 x 256 x 5 x 152 x 1,769,728 True
________________________________________________________________
FixedBN3d 1 x 256 x 5 x 152 x 512 True
________________________________________________________________
Conv3d 1 x 256 x 5 x 152 x 1,769,728 True
________________________________________________________________
FixedBN3d 1 x 256 x 5 x 152 x 512 True
________________________________________________________________
Conv3d 1 x 128 x 5 x 304 x 884,864 True
________________________________________________________________
FixedBN3d 1 x 128 x 5 x 304 x 256 True
________________________________________________________________
Conv3d 1 x 64 x 5 x 304 x 221,248 True
________________________________________________________________
FixedBN3d 1 x 64 x 5 x 304 x 128 True
________________________________________________________________
Conv3d 1 x 64 x 5 x 304 x 110,656 True
________________________________________________________________
FixedBN3d 1 x 64 x 5 x 304 x 128 True
________________________________________________________________
Conv3d 1 x 16 x 5 x 608 x 27,664 True
________________________________________________________________
FixedBN3d 1 x 16 x 5 x 608 x 32 True
________________________________________________________________
Conv3d 1 x 1 x 5 x 608 x 3 433 True
________________________________________________________________
Total params: 9,878,113
Total trainable params: 9,878,113
Total non-trainable params: 0
Optimizer used: <function Adam at 0x7efb16983320>
Loss function: MSELoss()
Callbacks:
- TrainEvalCallback
- Recorder
- ProgressCallback
The notebook is too complex to post, but the problem can be reproduced with any 5D input image and a Conv3d layer in the model.
Ah. I see what you mean now. Thanks for the more detailed snippet.
I had a quick look at the code, in 15_callback.hook.ipynb. It seems like the code that prints the summary() table truncates the Output Shape column to 19 chars, within a column width of 20 chars (I guess leaving one space as a separator after the shape output). The code I see is:
res += f'\n{"":<20} {_print_shapes(sz, bs)[:19]:<20}'
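To make the truncation concrete, here is a minimal sketch (plain Python, not the actual fastai source) of what that f-string does to a 5D shape string:

```python
# A minimal sketch of why long shapes are cut off: the slice [:19] drops
# everything past 19 characters before padding to the 20-char column, so a
# 5D shape string such as "1 x 64 x 5 x 304 x 160" (22 chars) loses its
# last field.
shape_str = "1 x 64 x 5 x 304 x 160"
cell = f'{shape_str[:19]:<20}'
print(cell)        # the trailing "160" is gone
print(len(cell))   # 20: the fixed column width
```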
How did you make your modifications in your personal notebook to get it to not truncate?
(I think a quick solution would be to just increase the column width of Output Shape, to allow printing more of the shape output?)
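One way to sketch that idea (illustrative only; `fit_column` is a hypothetical helper, not a fastai function) is to compute the column width from the longest shape string instead of hard-coding 20:

```python
# Hypothetical sketch of widening the Output Shape column: derive the
# width from the longest shape string rather than using a fixed 20 chars.
def fit_column(shapes):
    width = max(len(s) for s in shapes) + 1   # +1 keeps a space separator
    return [f'{s:<{width}}' for s in shapes]

# Shape strings taken from the summary output above
cols = fit_column(["1 x 64 x 5 x 304 x 160", "1 x 1 x 5 x 608 x 320"])
for c in cols:
    print(repr(c))   # nothing is truncated; every cell is equally wide
```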
If you make your changes, run the tests, and the tests all pass, I think it will be fine? Then you can create a PR, and see if the maintainers agree to merge it, or they might suggest alternative/better ways of fixing this?