Has anyone tried the structural_similarity() function from the skimage package?

It basically takes two image variables (I used matplotlib image arrays) and calculates their SSIM.

But when I pass that function to unet_learner as metrics=, I get an error that the types don't fit. Could it be that the unet passes tensors (not images) to the metrics?

How can I convert them wisely just for the metric calculations every epoch?

thanks

I have not tried this package. You can write your own metric functions, which it sounds like you've done; I've provided a pseudo-code example below of what that might look like. By default these metrics run on the GPU so the computation is faster, but since you're using a 3rd-party library you may have to run them on the CPU in your case. If so, you should be able to convert the tensors to NumPy arrays and then do your calculations. A few things to keep in mind:

- This will be relatively slow as the computation will not be done on the GPU
- You may need to multiply by 255 to convert the input/target tensors from the 0-1 range to the 0-255 range of a standard RGB image. This may already be done for you, so you'll need to check.
- You may need to convert the result (return value) back to a torch tensor. I don't remember off the top of my head what types the metrics function is expected to return; I've only written custom metrics using tensor computation.
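On the second point, a quick way to check and rescale is something like this (a sketch using a NumPy array in place of the detached tensor; `img` is a hypothetical batch in the 0-1 float range):

```python
import numpy as np

# Hypothetical example batch in the 0-1 float range (batch x channels x h x w).
img = np.random.rand(4, 3, 64, 64).astype(np.float32)

# If values look like they're in 0-1, rescale to 0-255 and cast to uint8
# before handing them to a library that expects standard 8-bit RGB images.
if img.max() <= 1.0:
    img = (img * 255).astype(np.uint8)

print(img.dtype, img.min() >= 0, img.max() <= 255)
```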

Here’s a link to some custom metrics I implemented on another project which happened to be unet based: Multi-GPU w/ PyTorch? - #5 by matdmiller

Hopefully this helps get you pointed in the right direction.

```
import numpy as np
from fastai.torch_core import to_np  # https://docs.fast.ai/torch_core.html#to_np
from skimage.metrics import structural_similarity

def my_custom_metric(input, target):
    # input and target are batch x n dimensional tensors (probably located on your GPU)
    input_np, target_np = to_np(input), to_np(target)
    results = []
    for i in range(input_np.shape[0]):
        results.append(structural_similarity(input_np[i], target_np[i]))
    # you may need to wrap this in a torch tensor depending on what fastai expects back
    return np.mean(np.array(results))
```
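If you want to sanity-check the batching-and-averaging logic separately from skimage, you can swap in a stand-in similarity function (purely illustrative, not a real SSIM):

```python
import numpy as np

def batch_mean_metric(input_np, target_np, per_image_fn):
    # Apply a per-image similarity function across the batch dimension and average.
    results = [per_image_fn(input_np[i], target_np[i]) for i in range(input_np.shape[0])]
    return float(np.mean(results))

# Stand-in for structural_similarity: 1 minus mean absolute difference.
fake_ssim = lambda a, b: 1.0 - np.abs(a - b).mean()

a = np.zeros((2, 8, 8), dtype=np.float32)
b = np.zeros((2, 8, 8), dtype=np.float32)
print(batch_mean_metric(a, b, fake_ssim))  # identical images -> 1.0
```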