Gridlike segmentation on CT scan slices

Hey all,

I’ve been getting strange results while training a U-Net to segment CT scan slices (.png format). First, the camvid-style accuracy metric described in lesson 3 returns “nan”. Second, the loss initially goes down, but rather than plateauing it then increases exponentially, resulting in segmentations like this:

Does anyone know what might be causing this? I suspect it has something to do with my training data, but I’m not sure what. Some slices contain only void pixels (labelled as ‘0’); perhaps this is confusing the model? I’ve also posted about this in more detail in an earlier thread.
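For context, my understanding is that the lesson-3-style accuracy only counts non-void pixels. A simplified sketch of that idea (not the actual fastai code, and `masked_accuracy` is my own name for it):

```python
import math

VOID = 0  # assumed label code for void pixels

def masked_accuracy(pred, target, void_code=VOID):
    """Pixel accuracy over non-void target pixels only.

    pred and target are flat lists of integer class labels.
    Returns nan when every target pixel is void, because there is
    nothing left to average over.
    """
    kept = [(p, t) for p, t in zip(pred, target) if t != void_code]
    if not kept:
        return float("nan")
    correct = sum(p == t for p, t in kept)
    return correct / len(kept)
```

If a metric like this is evaluated on a slice that contains only void pixels, the mask leaves zero pixels to average over, which would explain a nan showing up.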

Hi Marc,

It’s not entirely clear what the problem is from the information you’ve provided. Are you using batchnorm in your architecture? Are you using the one-cycle policy, and what is the initial learning rate? Those factors tend to influence training stability quite significantly.
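For reference, the one-cycle policy ramps the learning rate up and then back down over training. A rough sketch of the schedule’s shape with linear ramps (fastai’s actual implementation differs in detail, e.g. annealing shape and momentum cycling; `one_cycle_lr` is a hypothetical helper):

```python
def one_cycle_lr(step, total_steps, max_lr, pct_start=0.3, div=25.0):
    """Learning rate at `step` under a simplified one-cycle schedule.

    Ramps linearly from max_lr/div up to max_lr over the first
    pct_start fraction of training, then linearly back down.
    """
    warmup = pct_start * total_steps
    low = max_lr / div
    if step < warmup:
        return low + (max_lr - low) * step / warmup
    frac = (step - warmup) / (total_steps - warmup)
    return max_lr - (max_lr - low) * frac
```

The point is that the peak learning rate is reached mid-training, so a peak set too high can destabilize a run that initially looked fine.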

Also, did you try running the learning rate finder to see what would be an optimal learning rate?

It might be useful to share your notebook so the entire context could be provided.



I really appreciate your reply. I have uploaded the notebook to GitHub, which can be found here:

I believe I am using batchnorm, as indicated by model.summary. I am using the fit_one_cycle method and have tried a number of learning rates; I’m currently using 2e-5, as indicated by the steepest slope on the learning rate finder.

Any ideas as to what might be causing this?

I’m not entirely sure how the fit_one_cycle function works internally, but it might have to do with the fact that you are wrapping a single lr value in slice. That gives you a slice object, slice(None, lr, None), rather than a plain number. Maybe try removing the slice on the learning rate?
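For what it’s worth, slice(lr) is just a plain Python slice object, so you can inspect it directly:

```python
lr = 2e-5
s = slice(lr)  # a slice with only a stop value set
print(s.start, s.stop, s.step)  # → None 2e-05 None
```

Whether that slice form is what you want here depends on how fastai unpacks it across layer groups, which is why passing the bare lr is worth a try.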

I found the solution to the problem! Setting size to 244 (rather than src_size//2) fixed it. The masks were of different sizes, so I think that was causing the issue.
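In case it helps anyone else: you can sanity-check mask dimensions without fully decoding the images by reading each PNG header, since the IHDR chunk must come first per the PNG spec. A stdlib-only sketch (`png_size` is a hypothetical helper, not from fastai):

```python
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def png_size(data):
    """Return (width, height) from raw PNG bytes.

    Layout: 8-byte signature, 4-byte chunk length, b'IHDR',
    then width and height as big-endian 32-bit ints.
    """
    if not data.startswith(PNG_SIG) or data[12:16] != b"IHDR":
        raise ValueError("not a PNG")
    return struct.unpack(">II", data[16:24])
```

Running this over every mask file and asserting a single unique size would have flagged the mismatch early.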
