I am trying to use fastai image segmentation on ultrasound data. The data have a range between, say, -0.5 and +0.5, with high precision.
When using SegmentationDataLoaders, all the values are truncated to 0. My assumption is that the images are converted to uint8 when they are loaded.
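A quick way to confirm the uint8 hypothesis outside of fastai is to cast a small array of values in that range yourself: casting a float to uint8 truncates toward zero, so everything between 0 and 1 becomes 0. (The values below are made up for illustration.)

```python
import numpy as np

# Synthetic "ultrasound-like" values, all below 1.0 (illustrative only)
pixels = np.array([0.0123, 0.25, 0.4999], dtype=np.float32)

# float -> uint8 truncates toward zero, so every value below 1.0 collapses to 0
as_uint8 = pixels.astype(np.uint8)
print(as_uint8)  # [0 0 0]
```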
In my case, scaling the data to the 0–255 range won't help either, as it would still lose too much precision, so I am wondering whether it is possible to load and use the data as float32?
So far I have been using SegmentationDataLoaders; I tried creating a custom DataBlock and img_cls, but I am not sure how to change the way the data are loaded.
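One way to sidestep the uint8 conversion is to bypass PIL entirely and load the arrays as float32. This is only a sketch under assumptions: it supposes the images are stored as `.npy` arrays, and `load_float_image` is a hypothetical loader you would wire into a fastai `TransformBlock` as a type transform (the fastai wiring itself is not shown here).

```python
import os
import tempfile
import numpy as np

def load_float_image(path):
    """Load a .npy array as float32, bypassing PIL's uint8 conversion entirely.

    Hypothetical loader: in a fastai DataBlock you could pass it as a type
    transform, e.g. TransformBlock(type_tfms=load_float_image), instead of
    using ImageBlock, so PIL never touches the data.
    """
    return np.load(path).astype(np.float32)

# Minimal round-trip check on a synthetic array
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "sample.npy")
    np.save(path, np.array([[0.01234567, -0.25]], dtype=np.float32))
    img = load_float_image(path)
    print(img.dtype)  # float32
```

The key point is that the precision of the stored values survives the load, since no integer cast ever happens.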
Apart from the fact that your ultrasound data come in such a range, I suspect the true number of grey levels available in ultrasound is not millions, and likely not even thousands. Clinical ultrasound, as far as I know, is limited to 256 grey levels. Unless you have some particular image source, this is, more or less, the number of levels you can actually distinguish; the rest is noise. Thus, I would convert the range to integers, after having checked the dynamic range of the device.
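If you do go the integer route, the conversion is a simple affine rescale. A minimal sketch, assuming the data really live in [-0.5, +0.5] (the range bounds here are taken from the original post, not from any device spec):

```python
import numpy as np

def to_grey256(x, lo=-0.5, hi=0.5):
    """Rescale floats in [lo, hi] to 256 grey levels (uint8)."""
    x = np.clip(x, lo, hi)               # guard against out-of-range samples
    return np.round((x - lo) / (hi - lo) * 255).astype(np.uint8)

print(to_grey256(np.array([-0.5, 0.0, 0.5])))  # [  0 128 255]
```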
Thank you very much for your quick reply @VDM.
To give a little more context: the data are raw sensor data that can be represented as images; one data point could look like 0.01234567, for instance. I do agree that it could be scaled to 256 grey levels, and that losing a bit of precision may not have a huge impact on performance, but just to be sure I would like to run this experiment and see whether we can change the way the data are loaded; I think it is quite interesting.
I can understand, but if the sensor's specifications are available, you can work out how many of those digits are really significant (so not necessarily 256 levels, but "the right amount"). In imaging you might have, in principle, a full well capacity of, say, 20000 e-, but if the readout noise is 10 e-, the number of available levels is 2000.
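The arithmetic behind that estimate is just full well capacity divided by readout noise; as a quick sketch (using the numbers from the example above, not any real sensor):

```python
import math

full_well = 20000   # full well capacity in electrons (e-)
read_noise = 10     # readout noise in electrons (e-)

levels = full_well // read_noise   # distinguishable grey levels
bits = math.log2(levels)           # bit depth needed to represent them
print(levels, round(bits, 1))      # 2000 11.0
```

So about 11 bits of genuinely usable signal, well short of the ~24 bits a float32 image would suggest.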