I am trying to load a previously trained TensorFlow model from checkpoint files. These checkpoint files have op variables in them, so to load the graph I first have to import the graph_def from the *.ckpt.meta file:
import os
import tensorflow as tf

graph = tf.Graph()
sess = tf.InteractiveSession(graph=graph)
# Recreate the graph structure from the exported meta graph.
saver = tf.train.import_meta_graph('/data/model_cache/model.ckpt-39.meta')
# Look up the checkpoint to restore the variables from.
ckpt = tf.train.get_checkpoint_state(FLAGS.checkpoint_dir)
if ckpt and ckpt.model_checkpoint_path:
    if os.path.isabs(ckpt.model_checkpoint_path):
        saver.restore(sess, ckpt.model_checkpoint_path)
After I have loaded the model, I have a method that uses it for inference to implement deep dream. The problem is that when I call eval() with the default session, I get the error below:
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 555, in eval
return _eval_using_default_session(self, feed_dict, self.graph, session)File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework /ops.py", line 3495, in _eval_using_default_session
raise ValueError("Cannot use the given session to evaluate tensor: "
ValueError: Cannot use the given session to evaluate tensor: the tensor's graph is different from the session's graph.
I have confirmed that tf.get_default_graph() and sess.graph point to the same memory address. There has to be something very basic I am missing.
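For reference, the failing call in the deep-dream method is roughly of this form; the tensor names below are only placeholders for the real ops in my graph:

import numpy as np

# Hypothetical tensor names, only to illustrate the call that raises the error.
layer = graph.get_tensor_by_name('mixed4c:0')
input_tensor = graph.get_tensor_by_name('input:0')
image = np.random.rand(1, 224, 224, 3)  # placeholder input batch
# Tensor.eval() looks up the default session and verifies that the tensor's
# graph matches the session's graph; the ValueError above is raised there.
activations = layer.eval(feed_dict={input_tensor: image})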
It is very likely that the meta graph you are importing, i.e. /data/model_cache/model.ckpt-39.meta, is different from the one belonging to the checkpoint that tf.train.get_checkpoint_state(FLAGS.checkpoint_dir) found. The usual practice is to make the get_checkpoint_state() call (or tf.train.latest_checkpoint(FLAGS.checkpoint_dir)) first, use its output in the import_meta_graph() call, and then, with the same checkpoint name (and the returned saver), restore the variables in the session. This, of course, can only be done if a meta graph is saved with each checkpoint.