Keras ImageDataGenerator() always outputs float64?

In keras.json you can set a floatx parameter, which you can also query with keras.backend.floatx().
I set it to ‘float16’ for testing, because my RAM sometimes runs out when I load large datasets.
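For reference, a minimal sketch of the programmatic way to set it (assuming a standard install where keras.json lives in ~/.keras/):

import keras.backend as K

K.set_floatx('float16')    # same effect as the "floatx" entry in ~/.keras/keras.json
print(K.floatx())          # -> 'float16'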
Now, a few things I found puzzling:

  1. Keras ImageDataGenerator always returns float64, no matter what I set in keras.json
  2. In the keras source code on GitHub the function img_to_array (which in turn gets called by gen.next()) appears to always return float32, no matter what. But I still get float64 out.
  3. I did set the vgg_preprocess function from the course as the preprocessing function in ImageDataGenerator, because I couldn’t use a Lambda layer in the keras built-in VGG16 model. I tried adding return x.astype(keras.backend.floatx()) at the end of it, but I still keep getting float64 arrays whenever I run my_batch.next() (see the sketch right below this list).
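Roughly what my setup looks like (a simplified sketch; vgg_preprocess here is a stripped-down version of the course’s function, and the paths are just placeholders):

import numpy as np
from keras.preprocessing.image import ImageDataGenerator
import keras.backend as K

vgg_mean = np.array([123.68, 116.779, 103.939]).reshape((3, 1, 1))  # ImageNet channel means, 'th' ordering

def vgg_preprocess(x):
    x = x - vgg_mean               # the subtraction alone already promotes the image to float64
    return x.astype(K.floatx())    # cast back down before returning

gen = ImageDataGenerator(preprocessing_function=vgg_preprocess)
my_batch = gen.flow_from_directory('data/train', target_size=(224, 224), batch_size=64)
print(my_batch.next()[0].dtype)    # still prints float64 for me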

It bothers me because float64 takes WAY too much memory for what are otherwise 8-bit images, and it also slows down the computation.
Any ideas? Anyone with similar observations?

I believe I can shed some light on it myself: The default precision in numpy is float64. So if I go

img = train_batch.next()[0]

the next() method returns float32, but it immediately gets cast to float64.
The bad news is that, whenever I use ImageDataGenerator(), I can’t control the bit-depth: it’ll always return float32, no matter what floatx says.
The good news is that, from the looks of it, keras uses float32 internally.
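A minimal numpy sketch of the kind of thing that would produce this (just my assumption about what the generator does internally, i.e. that it allocates its batch buffer with a plain np.zeros and no dtype):

import numpy as np

print(np.zeros((2, 2)).dtype)                    # float64 -- numpy's default precision

img = np.ones((3, 224, 224), dtype=np.float32)   # a float32 image, like img_to_array hands back
batch = np.zeros((1, 3, 224, 224))               # no dtype given, so the buffer is float64
batch[0] = img                                   # copying the float32 image in...
print(batch.dtype)                               # ...still float64: the buffer's dtype wins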
Now, when you use get_data() from utils.py on my system, you also get float64 returned. So I use it with

np.concatenate(...).astype('float32') # float16 will save lots of memory

There are papers out there proposing float16, float10, and even a crazy idea of going all the way down to a single bit.

@geniusgeek, could you post links and elaborate on why they propose different precisions? Computation speed? Memory conservation?
At any rate: if someone figures out an end-to-end low-precision modeling scenario with keras, I’d appreciate a hint. Right now I just store the arrays in float16 to save memory, but the computations are all done in float32.
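For reference, the pattern I’ve settled on looks roughly like this (dummy data standing in for whatever get_data() or the generator hands back):

import numpy as np

# dummy float64 data standing in for what get_data() returns on my system
trn_data = np.random.rand(64, 3, 224, 224)
print(trn_data.nbytes // 2**20, 'MB as float64')

trn_16 = trn_data.astype('float16')      # half precision for storage on disk / in RAM
print(trn_16.nbytes // 2**20, 'MB as float16')
np.save('trn_data.npy', trn_16)

# cast back up to float32 only when it's time to feed the model
trn_32 = np.load('trn_data.npy').astype('float32')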

Better precision and computation speed, including but not limited to faster learning, with an improved convergence rate for gradient descent. NVIDIA currently supports 16-bit floating point.


http://www.jmlr.org/proceedings/papers/v37/gupta15.pdf (Gupta et al., “Deep Learning with Limited Numerical Precision”)

Then there’s the crazy idea of going all the way down to just two values (1 and 0).

They claim that their approach can significantly reduce memory usage and improve computational efficiency while achieving good classification accuracy, thus representing a reasonable trade-off between model size and performance.

Thanks, @geniusgeek. Did you see any implementation using keras and float16 so far?

From the papers I gave you, you can search for their implementations on GitHub.

Just as an aside, non-professional NVIDIA cards do not all run 16-bit FP ops very fast. Check the web to see whether your GPU does or does not.

@MPJ, yes, that’s true. I think the Pascal GTX units aren’t particularly well suited for FP16 performance. This is something NVIDIA keeps for its pro line, mostly the Teslas, really.