Getting worse results on gcloud vs local training

EDIT: After a bunch of digging, I opened an issue on the TensorFlow repo:

EDIT-END

Not sure if anybody here uses Google Cloud's ml-engine for training, but I'm getting ramped up on it and trying to understand why my local results are better than the gcloud results.

The only thing I've found on the internet so far is this SO question with no answer: https://stackoverflow.com/questions/51047041/results-of-training-a-keras-model-different-on-google-cloud

Locally, I run a job like this:

gcloud ml-engine local train --module-name trainer.task --package-path trainer -- --vocabulary-file trainer/data/vocab.txt --class-files $CLASS_FILES --job-dir trainer/lr0001 --num-epochs 5000 --learning-rate 0.0001 --train-batch-size 4 --eval-batch-size 64 --export-format CSV

And for gcloud I run

gcloud ml-engine jobs submit training $JOBNAME --job-dir gs://.../lr0001 --module-name trainer.task --package-path trainer --region us-west1 --runtime-version 1.10 -- --vocabulary-file gs://.../vocab.txt --class-files $GS_CLASS_FILES --num-epochs 5000 --learning-rate 0.0001 --train-batch-size 4 --eval-batch-size 64 --export-format CSV

I've fixed the seed, run it multiple times, and checked Python 2 vs. Python 3, but the gcloud results are still worse than my local run.
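For reference, the seed fixing is along these lines (a minimal stdlib-only sketch of the idea, not my exact trainer code; the real thing passes the same value to numpy.random.seed and tf.set_random_seed as well):

```python
import random

# Pin randomness so runs are reproducible. Names here are illustrative.
SEED = 42

def shuffled_indices(n, seed=SEED):
    """Reproducible shuffle of example indices for batching."""
    rng = random.Random(seed)  # local RNG, so other code can't perturb it
    idx = list(range(n))
    rng.shuffle(idx)
    return idx

# Same seed => identical order on every run, local or cloud:
print(shuffled_indices(8) == shuffled_indices(8))  # True
```

So as far as I can tell, run-to-run nondeterminism shouldn't explain the gap.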

One last bit of clue that I’ve found is that local logs look like this:

INFO:tensorflow:loss = 0.63639945, step = 401 (0.170 sec)
INFO:tensorflow:global_step/sec: 485.821
INFO:tensorflow:loss = 0.61793035, step = 501 (0.206 sec)
INFO:tensorflow:global_step/sec: 490.795
INFO:tensorflow:loss = 0.5869169, step = 601 (0.204 sec)
INFO:tensorflow:global_step/sec: 619.825
INFO:tensorflow:loss = 0.5738391, step = 701 (0.161 sec)
INFO:tensorflow:global_step/sec: 605.698
INFO:tensorflow:loss = 0.51589084, step = 801 (0.165 sec)

whereas the gcloud logs look like the steps are doubling up or something:

I  master-replica-0 loss = 0.40115586, step = 2202 (0.367 sec) master-replica-0 
I  master-replica-0 global_step/sec: 555.434 master-replica-0 
I  master-replica-0 global_step/sec: 498.601 master-replica-0 
I  master-replica-0 loss = 0.4367655, step = 2402 (0.470 sec) master-replica-0 
I  master-replica-0 global_step/sec: 366.906 master-replica-0 
I  master-replica-0 global_step/sec: 408.556 master-replica-0 
I  master-replica-0 loss = 0.41198668, step = 2602 (0.492 sec) master-replica-0 
I  master-replica-0 global_step/sec: 388.73 master-replica-0 
I  master-replica-0 global_step/sec: 380.982 master-replica-0 
I  master-replica-0 loss = 0.35386887, step = 2802 (0.522 sec) master-replica-0 
I  master-replica-0 global_step/sec: 401.002 master-replica-0 
I  master-replica-0 global_step/sec: 465.647 master-replica-0 
I  master-replica-0 loss = 0.4420835, step = 3002 (0.417 sec) master-replica-0 
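To quantify the "doubling", here's a quick sketch that pulls the step numbers out of the two pasted logs and prints the deltas between consecutive loss lines (the regex is illustrative; adjust it if your log format differs):

```python
import re

# Consecutive loss lines from the two logs above.
local_log = """
INFO:tensorflow:loss = 0.63639945, step = 401 (0.170 sec)
INFO:tensorflow:loss = 0.61793035, step = 501 (0.206 sec)
INFO:tensorflow:loss = 0.5869169, step = 601 (0.204 sec)
"""
cloud_log = """
I  master-replica-0 loss = 0.40115586, step = 2202 (0.367 sec) master-replica-0
I  master-replica-0 loss = 0.4367655, step = 2402 (0.470 sec) master-replica-0
I  master-replica-0 loss = 0.41198668, step = 2602 (0.492 sec) master-replica-0
"""

def step_deltas(log):
    """Differences between consecutive 'step = N' values in a log."""
    steps = [int(m) for m in re.findall(r"step = (\d+)", log)]
    return [b - a for a, b in zip(steps, steps[1:])]

print(step_deltas(local_log))  # [100, 100]
print(step_deltas(cloud_log))  # [200, 200]
```

So locally the loss is logged every 100 steps, but on gcloud every 200, with two global_step/sec lines in between each loss line, as if every step were counted twice.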

Any pointers would be appreciated!