TensorFlow, combine two checkpoint values into one and restore

I have two models (A and B) with the same architecture; both A and B have the same variable names and model settings, for example
['A1\B1\C1', 'A2\B2\C2', 'A3\B3\C3']
I have got checkpoint files for A and B, and I want to combine ['A1\B1\C1', 'A2\B2\C2'] from A with 'A3\B3\C3' from B into one checkpoint file and restore it to model A. How can I do that with saver.restore()?

You can do it with init_from_checkpoint. After defining the current model, create an assignment map.
dir = 'path_to_A_and_B_checkpoint_files'
vars_to_load = [i[0] for i in tf.train.list_variables(dir)]
assignment_map = {variable.op.name: variable for variable in tf.global_variables() if variable.op.name in vars_to_load}
This creates a dict whose keys are the variable names found in the checkpoint and whose values are the corresponding variables in the current graph.
tf.train.init_from_checkpoint(dir, assignment_map)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # do_usual_stuff
init_from_checkpoint is called before the session is created and takes the place of saver.restore().
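For the question above, you can call init_from_checkpoint once per checkpoint with disjoint assignment maps. A minimal sketch, assuming ckpt_A and ckpt_B are hypothetical paths to the two checkpoint files and that the variables to take from B all live under the 'A3' scope:
ckpt_A = 'path_to_A_checkpoint'  # hypothetical paths
ckpt_B = 'path_to_B_checkpoint'
# Everything outside the 'A3' scope comes from A ...
tf.train.init_from_checkpoint(ckpt_A,
                              {v.op.name: v for v in tf.global_variables()
                               if not v.op.name.startswith('A3')})
# ... and the 'A3' scope comes from B.
tf.train.init_from_checkpoint(ckpt_B,
                              {v.op.name: v for v in tf.global_variables()
                               if v.op.name.startswith('A3')})
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())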

Answering my question on my own.
import tensorflow as tf
from tensorflow.python import pywrap_tensorflow

def load_weights(ckpt_path, prefix_list):
    vars_weights = {}
    reader = pywrap_tensorflow.NewCheckpointReader(ckpt_path)
    var_to_shape_map = reader.get_variable_to_shape_map()
    for key in sorted(var_to_shape_map):
        for _pref in prefix_list:
            if key.startswith(_pref):
                vars_weights[key + ':0'] = reader.get_tensor(key)
    return vars_weights

# Build model
...
# Init variables
sess.run(tf.global_variables_initializer())
# Restore model A
saver.restore(sess, load_dir_A)
prefix = ['A3\B3\C3']
# Get weights from ckpt of B
B_weights = load_weights(load_dir_B, prefix)
# Assign weights from B to A
assign_ops = [tf.assign(tf.get_default_graph().get_tensor_by_name(_name), _value)
              for _name, _value in B_weights.items()]
sess.run(assign_ops)
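If you also want the merged weights written out as a single checkpoint file (as the question asks), a plain save after the assignments is enough; the output path below is hypothetical:
# Model A now holds the combined weights, so saving it
# produces the merged checkpoint (path is hypothetical).
saver.save(sess, 'path_to_combined_checkpoint/model.ckpt')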

Related

In Tensorflow, is it possible to append some summaries to already-merged summary_op?

Let's say, some built-in function returns train_op and summary_op where summary_op is defined by tf.summary.merge(summaries, name='summary_op'), and I cannot touch the function.
Also, let's say, I am going to use the built-in slim.learning.train which takes train_op and summary_op as input arguments.
# -- typical
train_op, summary_op = model_fn(image)
slim.learning.train(train_op, summary_op=summary_op)
# -- my question
train_op, summary_op = model_fn(image)
some_other_summary_list = some_another_function()
summary_op_ = ... # is it possible to append some_other_summary_list to summary_op?
slim.learning.train(train_op, summary_op=summary_op_)
How I can combine summaries in already-merged summary_op and newly-collected summaries some_other_summary_list?
-- If I do tf.summary.merge_all(tf.GraphKeys.SUMMARIES), there will actually be too many summaries, since model_fn() deliberately collects only the useful and necessary ones.
-- I can think of defining a separate summary_op2 and a train_step_fn as in:
from tensorflow.contrib.slim.python.slim.learning import train_step

def train_step_fn(...):
    ... = train_step(...)
    if iteration % 100 == 0:
        summaries = session.run(summary_op2)
        summary_writer.add_summary(summaries, iteration)

slim.learning.train(train_op, summary_op=summary_op, train_step_fn=train_step_fn)
However, this seems too much if I can simply somehow append new summaries to summary_op. Is it possible?
If both "summary_op and newly-collected summaries some_other_summary_list" are created by tf.summary.merge, you can simply merge them again by tf.summary.merge([summary_op, summaries some_other_summary_list]), as demonstrated by this code:
import tensorflow as tf
a = tf.summary.scalar('a', tf.constant(0))
b = tf.summary.scalar('b', tf.constant(1))
c = tf.summary.scalar('c', tf.constant(2))
d = tf.summary.scalar('d', tf.constant(3))
ab = tf.summary.merge([a, b])
cd = tf.summary.merge([c, d])
abcd = tf.summary.merge([ab, cd])
with tf.Session() as sess:
    writer = tf.summary.FileWriter('.', sess.graph)
    summary = sess.run(abcd)
    writer.add_summary(summary)
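Applied to the question's setup, this would look something like the following sketch (assuming some_other_summary_list is a list of summary tensors):
train_op, summary_op = model_fn(image)
some_other_summary_list = some_another_function()
# merge the already-merged op with the extra summaries
summary_op_ = tf.summary.merge([summary_op] + some_other_summary_list)
slim.learning.train(train_op, summary_op=summary_op_)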

How to only restore variables in the checkpoint in Tensorflow?

In Tensorflow, my model is based on a pre-trained model, and I added a few more variables and removed some from the pre-trained model. When I restore the variables from the checkpoint file, I have to explicitly list all the variables I added to the graph so they can be excluded. For example, I did
exclude = # explicitly list all variables to exclude
variables_to_restore = slim.get_variables_to_restore(exclude=exclude)
saver = tf.train.Saver(variables_to_restore)
Is there a simpler way to do this? Namely, as long as a variable is not in the checkpoint, don't try to restore it.
You should first find all the variables that are useful to you (meaning present in your graph as well), i.e. take the intersection of the variables in the checkpoint and those in the graph, and restore only that set rather than everything in the checkpoint. Note that tf.train.list_variables returns (name, shape) pairs, so the intersection has to be taken over variable names, not over the Variable objects themselves:
ckpt_var_names = {name for name, _ in tf.train.list_variables(checkpoint_dir)}
variables_can_be_restored = [v for v in tf.global_variables() if v.op.name in ckpt_var_names]
Then restore them after defining a saver, like this:
temp_saver = tf.train.Saver(variables_can_be_restored)
ckpt_state = tf.train.get_checkpoint_state(checkpoint_dir, latest_filename)
print('Loading checkpoint %s' % ckpt_state.model_checkpoint_path)
temp_saver.restore(sess, ckpt_state.model_checkpoint_path)
The only thing you can do is first build the same model as in the checkpoint and restore the checkpoint values into it. After restoring the variables for that model, you can add new layers, delete existing layers or change the weights of the layers.
But there is an important point you need to be careful about: after adding new layers you need to initialize them. If you use tf.global_variables_initializer(), you will lose the values of the reloaded layers. So you should only initialize the uninitialized weights; you can use the following function for this.
def initialize_uninitialized(sess):
    global_vars = tf.global_variables()
    is_not_initialized = sess.run([tf.is_variable_initialized(var) for var in global_vars])
    not_initialized_vars = [v for (v, f) in zip(global_vars, is_not_initialized) if not f]
    # for i in not_initialized_vars:  # only for testing
    #     print(i.name)
    if len(not_initialized_vars):
        sess.run(tf.variables_initializer(not_initialized_vars))
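Used together with a saver, the intended order is roughly the following sketch (saver and checkpoint_path are assumed to exist):
saver.restore(sess, checkpoint_path)  # reload the pre-trained weights
initialize_uninitialized(sess)        # initialize only the newly added variables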
This is a fuller answer that works for a non-distributed setting:
from tensorflow.contrib.framework.python.framework import checkpoint_utils

slim = tf.contrib.slim

def scan_checkpoint_for_vars(checkpoint_path, vars_to_check):
    check_var_list = checkpoint_utils.list_variables(checkpoint_path)
    check_var_list = [x[0] for x in check_var_list]
    check_var_set = set(check_var_list)
    vars_in_checkpoint = [x for x in vars_to_check if x.name[:x.name.index(":")] in check_var_set]
    vars_not_in_checkpoint = [x for x in vars_to_check if x.name[:x.name.index(":")] not in check_var_set]
    return vars_in_checkpoint, vars_not_in_checkpoint

def create_easy_going_scaffold(vars_in_checkpoint, vars_not_in_checkpoint):
    model_ready_for_local_init_op = tf.report_uninitialized_variables(var_list=vars_in_checkpoint)
    model_init_vars_not_in_checkpoint = tf.variables_initializer(vars_not_in_checkpoint)
    restoration_saver = tf.train.Saver(vars_in_checkpoint)
    eg_scaffold = tf.train.Scaffold(saver=restoration_saver,
                                    ready_for_local_init_op=model_ready_for_local_init_op,
                                    local_init_op=model_init_vars_not_in_checkpoint)
    return eg_scaffold

all_vars = slim.get_variables()
ckpoint_file = tf.train.latest_checkpoint(output_chkpt_dir)
vars_in_checkpoint, vars_not_in_checkpoint = scan_checkpoint_for_vars(ckpoint_file, all_vars)
is_checkpoint_complete = len(vars_not_in_checkpoint) == 0

# Create a session that can handle the current checkpoint
if is_checkpoint_complete:
    # Checkpoint is full - all variables can be found there
    print('Using normal session')
    sess = tf.train.MonitoredTrainingSession(checkpoint_dir=output_chkpt_dir,
                                             save_checkpoint_secs=save_checkpoint_secs,
                                             save_summaries_secs=save_summaries_secs)
else:
    # Checkpoint is partial - some variables need to be initialized
    print('Using easy going session')
    eg_scaffold = create_easy_going_scaffold(vars_in_checkpoint, vars_not_in_checkpoint)
    # Save all variables to the next checkpoint
    saver = tf.train.Saver()
    hooks = [tf.train.CheckpointSaverHook(checkpoint_dir=output_chkpt_dir,
                                          save_secs=save_checkpoint_secs,
                                          saver=saver)]
    # Such a session is a little slower during the first iteration
    sess = tf.train.MonitoredTrainingSession(checkpoint_dir=output_chkpt_dir,
                                             scaffold=eg_scaffold,
                                             hooks=hooks,
                                             save_summaries_secs=save_summaries_secs,
                                             save_checkpoint_secs=None)

with sess:
    .....

Tensorflow - saving the checkpoint files as .pb, but with no output node names

I have the following files:
model.ckpt-2400.data-00000-of-00001
model.ckpt-2400.index
model.ckpt-2400.meta
And I would like to save them in the form of a .pb with the following function:
def freeze_graph(model_dir, output_node_names):
    """Extract the sub graph defined by the output nodes and convert all its
    variables into constants.

    Args:
        model_dir: the root folder containing the checkpoint state file
        output_node_names: a string containing all the output node names,
            comma separated
    """
    if not tf.gfile.Exists(model_dir):
        raise AssertionError(
            "Export directory doesn't exist. Please specify an export "
            "directory: %s" % model_dir)
    if not output_node_names:
        print("You need to supply the name of a node to --output_node_names.")
        return -1

    # We retrieve our checkpoint fullpath
    checkpoint = tf.train.get_checkpoint_state(model_dir)
    input_checkpoint = checkpoint.model_checkpoint_path

    # We specify the full name of our frozen graph file
    absolute_model_dir = "/".join(input_checkpoint.split('/')[:-1])
    output_graph = absolute_model_dir + "/frozen_model.pb"

    # We clear devices to allow TensorFlow to control on which device it will load operations
    clear_devices = True

    # We start a session using a temporary fresh Graph
    with tf.Session(graph=tf.Graph()) as sess:
        # We import the meta graph in the current default Graph
        saver = tf.train.import_meta_graph(input_checkpoint + '.meta', clear_devices=clear_devices)
        # We restore the weights
        saver.restore(sess, input_checkpoint)
        # We use a built-in TF helper to export variables to constants
        output_graph_def = tf.graph_util.convert_variables_to_constants(
            sess,  # The session is used to retrieve the weights
            tf.get_default_graph().as_graph_def(),  # The graph_def is used to retrieve the nodes
            output_node_names.split(",")  # The output node names are used to select the useful nodes
        )
        # Finally we serialize and dump the output graph to the filesystem
        with tf.gfile.GFile(output_graph, "wb") as f:
            f.write(output_graph_def.SerializeToString())
        print("%d ops in the final graph." % len(output_graph_def.node))
    return output_graph_def
The problem is that when I use tf.get_default_graph().as_graph_def().node, it returns [], an empty array. There are no output node names I can use for this.
So how else can I save them as .pb? Should I just refer to the tf.python.tools.freeze_graph.freeze_graph() function?
Turns out all I needed to do was supply the name of the output node: the node that I, in another part of my code, had designated as the node to log in order to check the results.
predictions = {
    # Generate predictions (for PREDICT and EVAL mode)
    "classes": tf.argmax(input=logits, axis=1),
    # Add `softmax_tensor` to the graph. It is used for PREDICT and by the
    # `logging_hook`.
    "probabilities": tf.nn.softmax(logits, name="softmax_tensor")  # This one
}
In my case it's softmax_tensor.
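With that, calling the function from the question is enough. A minimal sketch, assuming model_dir is the directory holding the model.ckpt-2400.* files:
freeze_graph(model_dir, "softmax_tensor")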

tensorflow serving uninitialized

Hello, I want to initialize the variable named result in the code below.
I tried to initialize it with this code when serving:
sess.run(tf.global_variables_initializer(),
         feed_dict={userLat: 0, userLon: 0})
I just want to initialize the variable.
The reason for using a variable is to be able to pass validate_shape=False.
The reason for using this option is to resolve the error 'Outer dimension for outputs must be unknown, outer dimension of 'Variable:0' is 1' when deploying the model version to the Google Cloud ML Engine.
With this initialization, attempting a prediction just outputs the value that was computed for the feed of 0.
Is there a way to simply initialize the value of result?
Or is it possible to store the list of stored tensor values as a comma-separated string, without a shape?
It's a very basic question, I'm sorry. I am a beginner with TensorFlow. I need help. Thank you for reading.
import tensorflow as tf
import sys, os

# define filename queue
filenameQueue = tf.train.string_input_producer(['./data.csv'],
                                               shuffle=False, name='filename_queue')
# define reader
reader = tf.TextLineReader()
key, value = reader.read(filenameQueue)

# define decoder
recordDefaults = [["null"], [0.0], [0.0]]
sId, lat, lng = tf.decode_csv(
    value, record_defaults=recordDefaults, field_delim=',')

taxiData = []
with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    for i in range(18):
        data = sess.run([sId, lat, lng])
        tmpTaxiData = []
        tmpTaxiData.append(data[0])
        tmpTaxiData.append(data[1])
        tmpTaxiData.append(data[2])
        taxiData.append(tmpTaxiData)
    coord.request_stop()
    coord.join(threads)

from math import sin, cos, acos, sqrt, atan2, radians

# server input data
userLat = tf.placeholder(tf.float32, shape=[])
userLon = tf.placeholder(tf.float32, shape=[])

R = 6373.0
radian = 0.017453292519943295

distanceList = []
for i in taxiData:
    taxiId = tf.constant(i[0], dtype=tf.string, shape=[])
    taxiLat = tf.constant(i[1], dtype=tf.float32, shape=[])
    taxiLon = tf.constant(i[2], dtype=tf.float32, shape=[])
    distanceValue = 6371*tf.acos(tf.cos(radian*userLat)*
        tf.cos(radian*taxiLat)*tf.cos(radian*taxiLon-
        radian*126.8943311)+tf.sin(radian*37.4685225)*tf.sin(radian*taxiLat))
    tmpDistance = []
    tmpDistance.append(taxiId)
    tmpDistance.append(distanceValue)
    distanceList.append(tmpDistance)

# result sort
sId, distances = zip(*distanceList)
indices = tf.nn.top_k(distances, k=len(distances)).indices
gather = tf.gather(sId, indices[::-1])[0:5]
result = tf.Variable(gather, validate_shape=False)
print('Done training!')

# serving
import os
from tensorflow.python.util import compat

model_version = 1
path = os.path.join("Taximodel", str(model_version))
builder = tf.saved_model.builder.SavedModelBuilder(path)

with tf.Session() as sess:
    builder.add_meta_graph_and_variables(
        sess,
        [tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            "serving_default":
                tf.saved_model.signature_def_utils.predict_signature_def(
                    inputs={"userLat": userLat, "userLon": userLon},
                    outputs={"result": result})
        })
    builder.save()
print('Done exporting')
You can try to define the graph so that the output tensor preserves the shape (outer dimension) of the input tensor.
For example, something like:
# server input data
userLoc = tf.placeholder(tf.float32, shape=[None, 2])

def calculate_dist(user_loc):
    distanceList = []
    for i in taxiData:
        taxiId = tf.constant(i[0], dtype=tf.string, shape=[])
        taxiLat = tf.constant(i[1], dtype=tf.float32, shape=[])
        taxiLon = tf.constant(i[2], dtype=tf.float32, shape=[])
        distanceValue = 6371*tf.acos(tf.cos(radian*user_loc[0])*
            tf.cos(radian*taxiLat)*tf.cos(radian*taxiLon-
            radian*126.8943311)+tf.sin(radian*37.4685225)*tf.sin(radian*taxiLat))
        tmpDistance = []
        tmpDistance.append(taxiId)
        tmpDistance.append(distanceValue)
        distanceList.append(tmpDistance)
    # result sort
    sId, distances = zip(*distanceList)
    indices = tf.nn.top_k(distances, k=len(distances)).indices
    return tf.gather(sId, indices[::-1])[0:5]

# dtype must be given explicitly because the mapped function returns
# tf.string while the input elements are tf.float32
result = tf.map_fn(calculate_dist, userLoc, dtype=tf.string)
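With this definition, result keeps the outer (batch) dimension of userLoc. A minimal prediction sketch with a made-up coordinate pair:
with tf.Session() as sess:
    # one user location per row; the pair below is a made-up (lat, lon)
    top5 = sess.run(result, feed_dict={userLoc: [[37.4685225, 126.8943311]]})
    print(top5)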

Permanently Inject Constant into Tensorflow Graph for Inference

I train a model with a placeholder for is_training:
is_training_ph = tf.placeholder(tf.bool)
However, once training and validation are done, I would like to permanently inject a constant of False for this value and then "re-optimize" the graph (i.e. using optimize_for_inference). Is there something along the lines of freeze_graph that will do this?
One possibility is to use the tf.import_graph_def() function and its input_map argument to rewrite the value of that tensor in the graph. For example, you could structure your program as follows:
with tf.Graph().as_default() as training_graph:
    # Build model.
    is_training_ph = tf.placeholder(tf.bool, name="is_training")
    # ...
    training_graph_def = training_graph.as_graph_def()

with tf.Graph().as_default() as temp_graph:
    tf.import_graph_def(training_graph_def,
                        input_map={is_training_ph.name: tf.constant(False)})
    temp_graph_def = temp_graph.as_graph_def()
After building temp_graph_def, you can use it as the input to freeze_graph.
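For example, you could serialize it to disk and point the freeze_graph script at the file (a sketch; the path and filename are hypothetical):
# Write the rewritten GraphDef somewhere the freeze_graph script can read it.
tf.train.write_graph(temp_graph_def, '/tmp', 'temp_graph.pb', as_text=False)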
An alternative, which might be more compatible with the freeze_graph and optimize_for_inference scripts (which make assumptions about variable names and checkpoint keys), would be to modify TensorFlow's graph_util.convert_variables_to_constants() function so that it converts placeholders instead:
import numpy as np  # needed for np.asarray below

def convert_placeholders_to_constants(input_graph_def,
                                      placeholder_to_value_map):
    """Replaces placeholders in the given tf.GraphDef with constant values.

    Args:
        input_graph_def: GraphDef object holding the network.
        placeholder_to_value_map: A map from the names of placeholder tensors in
            `input_graph_def` to constant values.

    Returns:
        GraphDef containing a simplified version of the original.
    """
    output_graph_def = tf.GraphDef()
    for node in input_graph_def.node:
        output_node = tf.NodeDef()
        if node.op == "Placeholder" and node.name in placeholder_to_value_map:
            output_node.op = "Const"
            output_node.name = node.name
            dtype = node.attr["dtype"].type
            data = np.asarray(placeholder_to_value_map[node.name],
                              dtype=tf.as_dtype(dtype).as_numpy_dtype)
            output_node.attr["dtype"].type = dtype
            output_node.attr["value"].CopyFrom(tf.AttrValue(
                tensor=tf.contrib.util.make_tensor_proto(data,
                                                         dtype=dtype,
                                                         shape=data.shape)))
        else:
            output_node.CopyFrom(node)
        output_graph_def.node.extend([output_node])
    return output_graph_def
...then you could build training_graph_def as above, and write:
temp_graph_def = convert_placeholders_to_constants(training_graph_def,
                                                   {is_training_ph.op.name: False})