This function takes a MetaGraphDef protocol buffer as input. If
the argument is a file containing a MetaGraphDef protocol buffer,
it constructs a protocol buffer from the file content. The function
then adds all the nodes from the graph_def field to the
current graph, recreates all the collections, and returns a saver
constructed from the saver_def field.
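For instance, both accepted input forms can be exercised as follows (a minimal sketch, not from the original documentation; the /tmp paths are hypothetical):

    import tensorflow.compat.v1 as tf
    tf.disable_eager_execution()

    # Build a small graph and export its MetaGraphDef, both as a returned
    # proto and as a file on disk.
    with tf.Graph().as_default():
      v = tf.Variable(1.0, name='v')
      meta_graph_def = tf.train.export_meta_graph(filename='/tmp/example.meta')

    # Form 1: pass the filename of the serialized MetaGraphDef.
    with tf.Graph().as_default():
      saver = tf.train.import_meta_graph('/tmp/example.meta')

    # Form 2: pass the MetaGraphDef protocol buffer directly.
    with tf.Graph().as_default():
      saver = tf.train.import_meta_graph(meta_graph_def)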
In combination with export_meta_graph(), this function can be used to

- Serialize a graph along with other Python objects such as QueueRunner,
  Variable into a MetaGraphDef.
- Restart training from a saved graph and checkpoints.
- Run inference from a saved graph and checkpoints.
    ...
    # Create a saver.
    saver = tf.compat.v1.train.Saver(...variables...)
    # Remember the training_op we want to run by adding it to a collection.
    tf.compat.v1.add_to_collection('train_op', train_op)
    sess = tf.compat.v1.Session()
    for step in range(1000000):
      sess.run(train_op)
      if step % 1000 == 0:
        # Saves checkpoint, which by default also exports a meta_graph
        # named 'my-model-global_step.meta'.
        saver.save(sess, 'my-model', global_step=step)
Later we can continue training from this saved meta_graph without building
the model from scratch.
    with tf.Session() as sess:
      new_saver = tf.train.import_meta_graph('my-save-dir/my-model-10000.meta')
      new_saver.restore(sess, 'my-save-dir/my-model-10000')
      # tf.get_collection() returns a list. In this example we only want
      # the first one.
      train_op = tf.get_collection('train_op')[0]
      for step in range(1000000):
        sess.run(train_op)
Note: Restarting training from a saved meta_graph only works if the device
assignments have not changed.

Example:
Variables, placeholders, and independent operations can also be stored, as
shown in the following example.
    # Saving contents and operations.
    v1 = tf.placeholder(tf.float32, name="v1")
    v2 = tf.placeholder(tf.float32, name="v2")
    v3 = tf.math.multiply(v1, v2)
    vx = tf.Variable(10.0, name="vx")
    v4 = tf.add(v3, vx, name="v4")
    saver = tf.train.Saver([vx])
    sess = tf.Session()
    sess.run(tf.global_variables_initializer())
    sess.run(vx.assign(tf.add(vx, vx)))
    result = sess.run(v4, feed_dict={v1: 12.0, v2: 3.3})
    print(result)
    saver.save(sess, "./model_ex1")
Later, this model can be restored and its contents loaded.
    # Restoring variables and running operations.
    saver = tf.train.import_meta_graph("./model_ex1.meta")
    sess = tf.Session()
    saver.restore(sess, "./model_ex1")
    result = sess.run("v4:0", feed_dict={"v1:0": 12.0, "v2:0": 3.3})
    print(result)
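As an aside (a minimal sketch, not part of the original example), the restored tensors can also be fetched as graph handles instead of passing name strings to sess.run(), reusing sess from the block above:

    # Look up the restored tensors by name on the default graph.
    graph = tf.get_default_graph()
    v4 = graph.get_tensor_by_name("v4:0")
    v1 = graph.get_tensor_by_name("v1:0")
    v2 = graph.get_tensor_by_name("v2:0")
    result = sess.run(v4, feed_dict={v1: 12.0, v2: 3.3})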
Args

meta_graph_or_file: MetaGraphDef protocol buffer or filename (including
  the path) containing a MetaGraphDef.
clear_devices: Whether or not to clear the device field for an Operation
  or Tensor during import.
import_scope: Optional string. Name scope to add. Only used when
  initializing from a protocol buffer (see the sketch after this list).
**kwargs: Optional keyword arguments.
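To illustrate clear_devices and import_scope (a minimal sketch, not from the original documentation; the checkpoint path and the 'train_op' node name are hypothetical):

    with tf.Graph().as_default() as g:
      saver = tf.train.import_meta_graph(
          '/tmp/my-model.meta',       # hypothetical path
          clear_devices=True,         # drop device placements saved in the graph
          import_scope='restored')    # imported nodes are prefixed 'restored/'
      # A node that was exported as 'train_op' is now named 'restored/train_op'.
      train_op = g.get_operation_by_name('restored/train_op')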
Returns

A saver constructed from saver_def in MetaGraphDef, or None. A None value
is returned if no variables exist in the MetaGraphDef (i.e., there are no
variables to restore).
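Because of this, callers that may load a graph without variables can guard on the return value (a sketch; the paths are hypothetical):

    with tf.Graph().as_default():
      saver = tf.train.import_meta_graph('/tmp/graph-only.meta')  # hypothetical path
      if saver is not None:
        with tf.Session() as sess:
          saver.restore(sess, '/tmp/graph-only')  # hypothetical checkpoint prefix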
Raises

RuntimeError: If called with eager execution enabled.
eager compatibility
Exporting/importing meta graphs is not supported. No graph exists when eager
execution is enabled.
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2023-10-06 UTC."],[],[],null,["# tf.compat.v1.train.import_meta_graph\n\n\u003cbr /\u003e\n\n|--------------------------------------------------------------------------------------------------------------------------------|\n| [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.13.1/tensorflow/python/training/saver.py#L1473-L1585) |\n\nRecreates a Graph saved in a `MetaGraphDef` proto. \n\n tf.compat.v1.train.import_meta_graph(\n meta_graph_or_file, clear_devices=False, import_scope=None, **kwargs\n )\n\nThis function takes a `MetaGraphDef` protocol buffer as input. If\nthe argument is a file containing a `MetaGraphDef` protocol buffer ,\nit constructs a protocol buffer from the file content. The function\nthen adds all the nodes from the `graph_def` field to the\ncurrent graph, recreates all the collections, and returns a saver\nconstructed from the `saver_def` field.\n\nIn combination with `export_meta_graph()`, this function can be used to\n\n- Serialize a graph along with other Python objects such as `QueueRunner`,\n `Variable` into a `MetaGraphDef`.\n\n- Restart training from a saved graph and checkpoints.\n\n- Run inference from a saved graph and checkpoints.\n\n ...\n # Create a saver.\n saver = tf.compat.v1.train.Saver(...variables...)\n # Remember the training_op we want to run by adding it to a collection.\n tf.compat.v1.add_to_collection('train_op', train_op)\n sess = tf.compat.v1.Session()\n for step in range(1000000):\n sess.run(train_op)\n if step % 1000 == 0:\n # Saves checkpoint, which by default also exports a meta_graph\n # named 'my-model-global_step.meta'.\n saver.save(sess, 'my-model', global_step=step)\n\nLater we can continue training from this saved `meta_graph` without building\nthe model from scratch. \n\n with tf.Session() as sess:\n new_saver =\n tf.train.import_meta_graph('my-save-dir/my-model-10000.meta')\n new_saver.restore(sess, 'my-save-dir/my-model-10000')\n # tf.get_collection() returns a list. In this example we only want\n # the first one.\n train_op = tf.get_collection('train_op')[0]\n for step in range(1000000):\n sess.run(train_op)\n\n| **Note:** Restarting training from saved `meta_graph` only works if the device assignments have not changed.\n\n#### Example:\n\nVariables, placeholders, and independent operations can also be stored, as\nshown in the following example. \n\n # Saving contents and operations.\n v1 = tf.placeholder(tf.float32, name=\"v1\")\n v2 = tf.placeholder(tf.float32, name=\"v2\")\n v3 = tf.math.multiply(v1, v2)\n vx = tf.Variable(10.0, name=\"vx\")\n v4 = tf.add(v3, vx, name=\"v4\")\n saver = tf.train.Saver([vx])\n sess = tf.Session()\n sess.run(tf.global_variables_initializer())\n sess.run(vx.assign(tf.add(vx, vx)))\n result = sess.run(v4, feed_dict={v1:12.0, v2:3.3})\n print(result)\n saver.save(sess, \"./model_ex1\")\n\nLater this model can be restored and contents loaded. 
\n\n # Restoring variables and running operations.\n saver = tf.train.import_meta_graph(\"./model_ex1.meta\")\n sess = tf.Session()\n saver.restore(sess, \"./model_ex1\")\n result = sess.run(\"v4:0\", feed_dict={\"v1:0\": 12.0, \"v2:0\": 3.3})\n print(result)\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Args ---- ||\n|----------------------|----------------------------------------------------------------------------------------------|\n| `meta_graph_or_file` | `MetaGraphDef` protocol buffer or filename (including the path) containing a `MetaGraphDef`. |\n| `clear_devices` | Whether or not to clear the device field for an `Operation` or `Tensor` during import. |\n| `import_scope` | Optional `string`. Name scope to add. Only used when initializing from protocol buffer. |\n| `**kwargs` | Optional keyed arguments. |\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Returns ------- ||\n|---|---|\n| A saver constructed from `saver_def` in `MetaGraphDef` or None. \u003cbr /\u003e A None value is returned if no variables exist in the `MetaGraphDef` (i.e., there are no variables to restore). ||\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Raises ------ ||\n|----------------|-----------------------------------------|\n| `RuntimeError` | If called with eager execution enabled. |\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\neager compatibility\n-------------------\n\n\u003cbr /\u003e\n\nExporting/importing meta graphs is not supported. No graph exists when eager\nexecution is enabled.\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e"]]