Learning rate decay and the global step

I am following this tutorial:

and I am rewriting some of the code on Google Colab.

They use the following for learning rate decay:

initial_lr = 0.096505
learning_decay_rate = 0.7

lr_schedule = tf.compat.v1.train.exponential_decay(
    learning_rate=initial_lr,
    global_step=tf.compat.v1.train.get_global_step(),
    decay_steps=checkpoint_steps,
    decay_rate=learning_decay_rate,
    staircase=True)
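
For reference, my own reading of the docs (this is not from the tutorial): with staircase=True, exponential_decay drops the rate by a factor of decay_rate every decay_steps steps, i.e. lr = initial_lr * decay_rate ** floor(global_step / decay_steps). A plain-Python sketch:

# My own illustration of the staircase schedule; decay_steps=1000 is a
# placeholder, since I don't know what checkpoint_steps is set to here.
def staircase_lr(global_step, initial_lr=0.096505, decay_rate=0.7, decay_steps=1000):
    return initial_lr * decay_rate ** (global_step // decay_steps)

print(staircase_lr(0))     # 0.096505
print(staircase_lr(1000))  # 0.0675535 (one decay step applied)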

…and the following model is what I need to rebuild:

estimator = tf.estimator.DNNRegressor(
    feature_columns=dnn_features,
    hidden_units=[128, 64, 32, 16],
    config=tf.estimator.RunConfig(
        save_checkpoints_steps=checkpoint_steps),
    model_dir=model_dir,
    batch_norm=True,
    dropout=0.843251,
    optimizer=tfa.optimizers.ProximalAdagrad(
        learning_rate=lr_schedule,
        l1_regularization_strength=0.0026019,
        l2_regularization_strength=0.0107146))

tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)

I cannot run the model like this, because I get a

ValueError: None values not supported.
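
As far as I can tell, this happens because no global-step variable exists yet at the point where I build the schedule; the Estimator only creates one later, inside its own graph. So get_global_step() returns None, and exponential_decay fails when it tries to convert that into a tensor. A quick check (my own, not tutorial code):

import tensorflow as tf

# Outside any Estimator graph, no global step variable has been created,
# so this prints None:
print(tf.compat.v1.train.get_global_step())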

…and the reason is the get_global_step call. My results are much worse than theirs when I instead use, for example:

lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(...)
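
Concretely, my own (possibly wrong) mapping of the tutorial's parameters onto the Keras schedule looks like this:

lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=initial_lr,
    decay_steps=checkpoint_steps,
    decay_rate=learning_decay_rate,
    staircase=True)

Note that this Keras schedule is driven by the optimizer's own iteration counter rather than a global_step tensor I pass in, which is why I suspect something is getting lost in translation.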

My questions are:

  1. What exactly is the global_step?
  2. Is it crucial for the model to train well?
  3. If I do need it: how can I make it work in this setup? (See my untested sketch below.)
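
Regarding question 3, the tf.estimator.DNNRegressor docs say the optimizer argument may also be a callable. My untested sketch below defers building the optimizer until the Estimator invokes that callable inside its own graph, where the global step should already exist:

def make_optimizer():
    # Runs inside the Estimator's graph, so get_global_step() can find
    # the global step variable the Estimator has created by then.
    lr_schedule = tf.compat.v1.train.exponential_decay(
        learning_rate=initial_lr,
        global_step=tf.compat.v1.train.get_global_step(),
        decay_steps=checkpoint_steps,
        decay_rate=learning_decay_rate,
        staircase=True)
    return tfa.optimizers.ProximalAdagrad(
        learning_rate=lr_schedule,
        l1_regularization_strength=0.0026019,
        l2_regularization_strength=0.0107146)

estimator = tf.estimator.DNNRegressor(
    feature_columns=dnn_features,
    hidden_units=[128, 64, 32, 16],
    config=tf.estimator.RunConfig(
        save_checkpoints_steps=checkpoint_steps),
    model_dir=model_dir,
    batch_norm=True,
    dropout=0.843251,
    optimizer=make_optimizer)  # note: the callable itself, not an instance

Is this the right way to wire up the schedule, or am I missing something?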