Restoring a model trained with tf.estimator and feeding input through feed_dict

I trained a ResNet with tf.estimator, and the model was saved during the training process. The saved files consist of .data, .index, and .meta files. I'd like to load this model back and get predictions for new images. During training, data was fed to the model using tf.data.Dataset. I have closely followed the ResNet implementation given here.

I would like to restore the model and feed inputs to the nodes using a feed_dict.

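My understanding is that the .meta file can be used to re-import the graph without rebuilding it, roughly as below (a sketch of what I have in mind; the tensor names are guesses on my part, since I don't know what names the estimator actually gave the input and output nodes):

import tensorflow as tf

with tf.Session() as sess:
  ckpt = tf.train.get_checkpoint_state(r'./model')
  # Re-import the graph structure stored in the .meta file,
  # then restore the trained weights into it
  saver = tf.train.import_meta_graph(ckpt.model_checkpoint_path + '.meta')
  saver.restore(sess, ckpt.model_checkpoint_path)
  graph = tf.get_default_graph()
  # Hypothetical tensor names; the real ones depend on how the graph was built
  input_tensor = graph.get_tensor_by_name('input_images:0')
  output_tensor = graph.get_tensor_by_name('softmax_tensor:0')
  # my_images: a preprocessed numpy batch, shape [N, H, W, 3]
  print(sess.run(output_tensor, feed_dict={input_tensor: my_images}))

But since the training graph was built around the Dataset iterator rather than a placeholder, I'm not sure a feedable input tensor even exists in it.
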
The following was my attempt:

# Rebuild the input pipeline (input_fn expects is_training as its first argument)
images, labels = input_fn(False, data_dir, batch_size=32, num_epochs=1)

# Rebuild the graph in EVAL mode and take the prediction op
prediction = imagenet_model_fn(
    images, labels,
    {'batch_size': 32, 'data_format': 'channels_first', 'resnet_size': 18},
    mode=tf.estimator.ModeKeys.EVAL).predictions

saver = tf.train.Saver()
with tf.Session() as sess:
  ckpt = tf.train.get_checkpoint_state(r'./model')
  saver.restore(sess, ckpt.model_checkpoint_path)
  # Run the pipeline until it is exhausted
  while True:
    try:
      pred, im = sess.run([prediction, images])
      print(pred)
    except tf.errors.OutOfRangeError:
      break
I fed in a dataset that evaluates correctly on the same model via classifier.evaluate, but the method above gives wrong predictions: the model returns the same class, with probability 1.0, for every image.

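One sanity check I can think of is to confirm that the restore actually loaded trained weights instead of leaving the graph at its initial values, e.g. by printing the norm of some variable right after saver.restore (a sketch, meant to run inside the session above):

# Inside the session, after saver.restore(...)
first_var = tf.global_variables()[0]
print(first_var.name, sess.run(tf.norm(first_var)))
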
The code I used for training and building the model is below.
Specification for parsing the dataset:

def parse_record(raw_record, is_training):
  # Feature spec for the serialized TFRecord examples
  keys_to_features = {
      'image/encoded':
          tf.FixedLenFeature((), tf.string, default_value=''),
      'image/class/label':
          tf.FixedLenFeature([], dtype=tf.int64, default_value=-1),
  }
  parsed = tf.parse_single_example(raw_record, keys_to_features)
  # Decode the image bytes and scale pixel values to [0, 1]
  image = tf.image.decode_image(
      tf.reshape(parsed['image/encoded'], shape=[]), 3)
  image = tf.image.convert_image_dtype(image, dtype=tf.float32)
  label = tf.cast(
      tf.reshape(parsed['image/class/label'], shape=[]),
      dtype=tf.int32)
  # Two classes: return the image with a one-hot label
  return image, tf.one_hot(label, 2)

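If I feed images through a feed_dict, I assume I have to reproduce the same preprocessing outside the graph. For a single image I have something like this in mind (a sketch; the resize step and target size are my own additions, and the decode/scale steps mirror parse_record above):

import numpy as np
from PIL import Image

def load_image(path, height=224, width=224):
  # Mirror parse_record: decode to RGB and scale pixels to [0, 1],
  # which is what tf.image.convert_image_dtype does for float32
  img = Image.open(path).convert('RGB').resize((width, height))
  return np.asarray(img, dtype=np.float32) / 255.0

# Stack several images into a batch of shape [N, height, width, 3]
# batch = np.stack([load_image(p) for p in image_paths])
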
The following function parses the data and creates batches for training:

def input_fn(is_training, data_dir, batch_size, num_epochs=1):
  # Build a dataset of TFRecord file names
  dataset = tf.data.Dataset.from_tensor_slices(
      filenames(is_training, data_dir))
  if is_training:
    dataset = dataset.shuffle(buffer_size=_FILE_SHUFFLE_BUFFER)
  # Read the records and parse them into (image, label) pairs
  dataset = dataset.flat_map(tf.data.TFRecordDataset)
  dataset = dataset.map(lambda value: parse_record(value, is_training),
                        num_parallel_calls=5)
  dataset = dataset.prefetch(batch_size)
  if is_training:
    dataset = dataset.shuffle(buffer_size=_SHUFFLE_BUFFER)
  dataset = dataset.repeat(num_epochs)
  dataset = dataset.batch(batch_size)

  iterator = dataset.make_one_shot_iterator()
  images, labels = iterator.get_next()
  return images, labels

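For completeness: I know the estimator itself can run inference if I wrap numpy arrays in an input function, along the lines of the sketch below (classifier is the tf.estimator.Estimator used for training, and images_np is assumed to be a preprocessed float32 batch). But this still goes through an input pipeline rather than a plain feed_dict, which is what I want to avoid:

import numpy as np
import tensorflow as tf

# images_np: preprocessed batch, shape [N, H, W, 3], dtype float32
pred_input_fn = tf.estimator.inputs.numpy_input_fn(
    x=images_np, num_epochs=1, shuffle=False)

for pred in classifier.predict(input_fn=pred_input_fn):
  print(pred)
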
What is the right way to restore a saved model and perform inference on it? I want to feed images using a feed_dict, i.e. without using tf.data.Dataset.
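
To make the goal concrete, the pattern I'm after looks roughly like this: rebuild the same graph around a placeholder instead of the Dataset iterator, restore the weights from the checkpoint, and feed numpy images directly (a sketch; the placeholder shape, the use of PREDICT mode, and passing None for labels are assumptions on my part):

import tensorflow as tf

# Hypothetical input shape; it must match the preprocessing used in training
image_ph = tf.placeholder(tf.float32, shape=[None, 224, 224, 3], name='image_input')

prediction = imagenet_model_fn(
    image_ph, None,
    {'batch_size': 32, 'data_format': 'channels_first', 'resnet_size': 18},
    mode=tf.estimator.ModeKeys.PREDICT).predictions

saver = tf.train.Saver()
with tf.Session() as sess:
  saver.restore(sess, tf.train.latest_checkpoint(r'./model'))
  # my_images: preprocessed numpy batch matching the placeholder shape
  print(sess.run(prediction, feed_dict={image_ph: my_images}))

Since the same model_fn builds the graph, I'd expect the variable names to match the checkpoint so tf.train.Saver can restore them, but I'm not sure this is the intended pattern.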