Sorry for the confusion. It is one line printed by learn.summary(). It shows that when a layer outputs 5d activations, there is not enough room in the Output Shape column to display the full size, so the last dimension gets cut off.

Here is the complete output of learn.summary():

```
NetworkR (Input shape: ['1 x 1 x 5 x 608 x 320'])
================================================================
Layer (type) Output Shape Param # Trainable
================================================================
ReplicationPad3d 1 x 1 x 7 x 610 x 3 0 False
________________________________________________________________
Conv3d 1 x 64 x 5 x 304 x 1,792 True
________________________________________________________________
FixedBN3d 1 x 64 x 5 x 304 x 128 True
________________________________________________________________
Conv3d 1 x 128 x 5 x 304 x 221,312 True
________________________________________________________________
FixedBN3d 1 x 128 x 5 x 304 x 256 True
________________________________________________________________
Conv3d 1 x 128 x 5 x 304 x 442,496 True
________________________________________________________________
FixedBN3d 1 x 128 x 5 x 304 x 256 True
________________________________________________________________
Conv3d 1 x 256 x 5 x 152 x 884,992 True
________________________________________________________________
FixedBN3d 1 x 256 x 5 x 152 x 512 True
________________________________________________________________
Conv3d 1 x 256 x 5 x 152 x 1,769,728 True
________________________________________________________________
FixedBN3d 1 x 256 x 5 x 152 x 512 True
________________________________________________________________
Conv3d 1 x 256 x 5 x 152 x 1,769,728 True
________________________________________________________________
FixedBN3d 1 x 256 x 5 x 152 x 512 True
________________________________________________________________
Conv3d 1 x 256 x 5 x 152 x 1,769,728 True
________________________________________________________________
FixedBN3d 1 x 256 x 5 x 152 x 512 True
________________________________________________________________
Conv3d 1 x 256 x 5 x 152 x 1,769,728 True
________________________________________________________________
FixedBN3d 1 x 256 x 5 x 152 x 512 True
________________________________________________________________
Conv3d 1 x 128 x 5 x 304 x 884,864 True
________________________________________________________________
FixedBN3d 1 x 128 x 5 x 304 x 256 True
________________________________________________________________
Conv3d 1 x 64 x 5 x 304 x 221,248 True
________________________________________________________________
FixedBN3d 1 x 64 x 5 x 304 x 128 True
________________________________________________________________
Conv3d 1 x 64 x 5 x 304 x 110,656 True
________________________________________________________________
FixedBN3d 1 x 64 x 5 x 304 x 128 True
________________________________________________________________
Conv3d 1 x 16 x 5 x 608 x 27,664 True
________________________________________________________________
FixedBN3d 1 x 16 x 5 x 608 x 32 True
________________________________________________________________
Conv3d 1 x 1 x 5 x 608 x 3 433 True
________________________________________________________________
Total params: 9,878,113
Total trainable params: 9,878,113
Total non-trainable params: 0
Optimizer used: <function Adam at 0x7efb16983320>
Loss function: MSELoss()
Callbacks:
- TrainEvalCallback
- Recorder
- ProgressCallback
```

The notebook is too complex to post, but the problem can be reproduced with any 5d input and a Conv3d layer in the model.
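For reference, here is a minimal sketch in plain PyTorch showing why the column overflows. The kernel size, padding, and stride are my guesses (not the exact notebook), chosen so the first layer's output matches the summary above; the point is just that the 5d shape string is longer than the ~20 characters the Output Shape column allows:

```python
import torch
from torch import nn

# 5d input (batch, channels, depth, height, width), same as the summary above
x = torch.randn(1, 1, 5, 608, 320)

# Guessed layer config that halves H and W, matching the first Conv3d row
conv = nn.Conv3d(1, 64, kernel_size=3, padding=1, stride=(1, 2, 2))
y = conv(x)

# Format the shape the way learn.summary() does
shape_str = " x ".join(str(d) for d in y.shape)
print(shape_str)            # 1 x 64 x 5 x 304 x 160
print(len(shape_str))       # 22 -- too wide for the column, so "160" is cut off
```

The truncated row `1 x 64 x 5 x 304 x` in the summary is exactly this string with the trailing `160` dropped.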