I'm trying to benchmark the inference performance of my Keras model built with the TensorFlow backend. I was thinking that the TensorFlow Benchmark tool was the proper way to go.
I've managed to build and run the example on Desktop with the tensorflow_inception_graph.pb and everything seems to work fine.
What I can't seem to figure out is how to save the Keras model as a proper .pb model. I'm able to get the TensorFlow graph from the Keras model as follows:
import keras.backend as K
K.set_learning_phase(0)
trained_model = function_that_returns_compiled_model()
sess = K.get_session()
sess.graph # This works
# Get the input tensor name for TF Benchmark
trained_model.input
> <tf.Tensor 'input_1:0' shape=(?, 360, 480, 3) dtype=float32>
# Get the output tensor name for TF Benchmark
trained_model.output
> <tf.Tensor 'reshape_2/Reshape:0' shape=(?, 360, 480, 12) dtype=float32>
I've now been trying to save the model in a couple of different ways.
import tensorflow as tf
from tensorflow.contrib.session_bundle import exporter
model = trained_model
export_path = "path/to/folder" # where to save the exported graph
export_version = 1 # version number (integer)
saver = tf.train.Saver(sharded=True)
model_exporter = exporter.Exporter(saver)
signature = exporter.classification_signature(input_tensor=model.input, scores_tensor=model.output)
model_exporter.init(sess.graph.as_graph_def(), default_graph_signature=signature)
model_exporter.export(export_path, tf.constant(export_version), sess)
This produces a folder with some files I don't know what to do with.
I would then run the Benchmark tool with something like this:
bazel-bin/tensorflow/tools/benchmark/benchmark_model \
--graph=tensorflow/tools/benchmark/what_file.pb \
--input_layer="input_1:0" \
--input_layer_shape="1,360,480,3" \
--input_layer_type="float" \
--output_layer="reshape_2/Reshape:0"
But no matter which file I use as what_file.pb, I get: Error during inference: Invalid argument: Session was not created with a graph before Run()!
So I got this to work. I just needed to convert all variables in the TensorFlow graph to constants and then save the graph definition.
Here's a small example:
import tensorflow as tf
from keras import backend as K
from tensorflow.python.framework import graph_util
K.set_learning_phase(0)
model = function_that_returns_your_keras_model()
sess = K.get_session()
output_node_name = "my_output_node" # Name of your output node
with sess as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    graph_def = sess.graph.as_graph_def()
    output_graph_def = graph_util.convert_variables_to_constants(
        sess,
        graph_def,
        output_node_name.split(","))
    tf.train.write_graph(output_graph_def,
                         logdir="my_dir",
                         name="my_model.pb",
                         as_text=False)
Now just call the TensorFlow Benchmark tool with my_model.pb as the graph.
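For completeness, the invocation would then look something like the following sketch, reusing the input name and shape from the question; my_output_node is just the placeholder output name from the snippet above, so substitute your model's actual layer names:
bazel-bin/tensorflow/tools/benchmark/benchmark_model \
  --graph=my_dir/my_model.pb \
  --input_layer="input_1:0" \
  --input_layer_shape="1,360,480,3" \
  --input_layer_type="float" \
  --output_layer="my_output_node:0"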
You're saving the parameters of this model and not the graph definition; to save that use tf.get_default_graph().as_graph_def().SerializeToString() and then save that to a file.
That said I don't think the benchmark tool will work since it has no way to initialize the variables your model depends on.
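A minimal sketch of that suggestion (graph_def.pb is a hypothetical output filename):
import tensorflow as tf

# Serialize the default graph's GraphDef to a binary protobuf file
graph_def_bytes = tf.get_default_graph().as_graph_def().SerializeToString()
with open("graph_def.pb", "wb") as f:
    f.write(graph_def_bytes)
Note that this still saves only the graph structure, not the variable values, which is why the comment above doubts the benchmark tool will run it as-is.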
Related
Problem Description
I'm using the TensorFlow Estimator API and have encountered a weird phenomenon.
I'm passing the exact same input_fn to both training and evaluation, and for some reason the images which are provided to the network are not identical.
They seem similar, but after taking a closer look, it seems that evaluation images are ok, but train images are somewhat distorted.
After loading them both, I noticed that for some reason the training images go through some kind of ReLU. I confirmed it with this code, which operates on mat_eval and mat_train, the tensors that input_fn provides in evaluation and train mode:
special_relu = lambda mat: ((mat - 0.5) / 0.5) * ((mat - 0.5) / 0.5 > 0)
np.allclose(mat_train, special_relu(mat_eval))
>>> True
What I thought and tried
My initial thought was that it is some form of BatchNormalization. But BatchNormalization is supposed to happen within the network, not as a preprocessing step, isn't it?
What I recorded (using tf.summary.image) was the features['image'] object, passed to my model_fn. And if I understand correctly, the features object is passed to model_fn by the input_fn called by the Estimator object.
Regardless, I tried to remove the parts of the code that are supposed to call BatchNormalization. This had no effect. Of course, I might not have done that the right way, but as I said, I don't really think it is BatchNormalization.
Code
from datetime import datetime
from pathlib import Path
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.python.platform import tf_logging as logging
from dcnn import modeling
from dcnn.dv_constants import BATCH_SIZE, BATCHES_PER_EPOCH
from dcnn.variant_io import get_input_fn, num_variants_in_ds
logging.set_verbosity(logging.INFO)
new_checkpoint_name = lambda: f'./train_dir/' \
                              f'{datetime.now().strftime("%d-%m %H:%M:%S")}'

if __name__ == '__main__':
    model_name = 'small_inception'
    start_from_checkpoint = ''
    # start_from_checkpoint = '/home/yonatan/Desktop/yonas_code/dcnn/train_dir' \
    #                         '/2111132905/model.ckpt-256'
    model_dir = str(Path(start_from_checkpoint).parent) if \
        start_from_checkpoint else new_checkpoint_name()
    test = False
    train = True
    predict = False
    epochs = 1
    train_dataset_name = 'same_example'
    val_dataset_name = 'same_example'
    test_dataset_name = 'same_example'
    predict_dataset_name = 'same_example'
    model = modeling.get_model(model_name=model_name)
    estimator = model.make_estimator(
        batch_size=BATCH_SIZE,
        model_dir=model_dir,
        params=dict(batches_per_epoch=BATCHES_PER_EPOCH),
        use_tpu=False,
        master='',
        # The target of the TensorFlow standard server to use. Can be the
        # empty string to run locally using an inprocess server.
        start_from_checkpoint=start_from_checkpoint)
    if train:
        train_input_fn = get_input_fn(train_dataset_name, repeat=True)
        val_input_fn = get_input_fn(val_dataset_name, repeat=False)
        steps = (epochs * num_variants_in_ds(train_dataset_name)) / \
            BATCH_SIZE
        train_spec = tf.estimator.TrainSpec(input_fn=val_input_fn,
                                            max_steps=steps)
        eval_spec = tf.estimator.EvalSpec(input_fn=val_input_fn,
                                          throttle_secs=1)
        metrics = tf.estimator.train_and_evaluate(estimator, train_spec,
                                                  eval_spec)
        print(metrics)
I have plenty more code to share, but I tried to be concise. If anyone has any idea why this behavior happens, or needs more information, let me know.
I followed the website: https://leimao.github.io/blog/Save-Load-Inference-From-TF2-Frozen-Graph/
However, I still do not know how to run inference with frozen_func (see my code below).
Please advise how to run inference using a .pb file in TensorFlow 2.2. Thanks.
import tensorflow as tf
def wrap_frozen_graph(graph_def, inputs, outputs, print_graph=False):
    def _imports_graph_def():
        tf.compat.v1.import_graph_def(graph_def, name="")

    wrapped_import = tf.compat.v1.wrap_function(_imports_graph_def, [])
    import_graph = wrapped_import.graph

    print("-" * 50)
    print("Frozen model layers: ")
    layers = [op.name for op in import_graph.get_operations()]
    if print_graph == True:
        for layer in layers:
            print(layer)
    print("-" * 50)

    return wrapped_import.prune(
        tf.nest.map_structure(import_graph.as_graph_element, inputs),
        tf.nest.map_structure(import_graph.as_graph_element, outputs))

# Load frozen graph using TensorFlow 1.x functions
with tf.io.gfile.GFile("/content/drive/My Drive/Model_file/froze_graph.pb", "rb") as f:
    graph_def = tf.compat.v1.GraphDef()
    loaded = graph_def.ParseFromString(f.read())

# Wrap frozen graph to ConcreteFunctions
frozen_func = wrap_frozen_graph(graph_def=graph_def,
                                inputs=["wav_data:0"],
                                outputs=["labels_softmax:0"],
                                print_graph=True)
You can use tf.graph_util.import_graph_def inside a tf.function to do that. For example, suppose you make a test GraphDef file my_func.pb like this:
import tensorflow as tf
# Test function to make into a GraphDef file
@tf.function
def my_func(x):
    return tf.square(x, name='y')
# Get graph
g = my_func.get_concrete_function(tf.TensorSpec(None, tf.float32)).graph
# Write to file
tf.io.write_graph(g, '.', 'my_func.pb', as_text=False)
You can then load it and use it like this:
import tensorflow as tf
from tensorflow.core.framework.graph_pb2 import GraphDef
# Load GraphDef
with open('my_func.pb', 'rb') as f:
    gd = GraphDef()
    gd.ParseFromString(f.read())

@tf.function
def my_func2(x):
    # Ensure the input is a tensor of the right type
    x = tf.convert_to_tensor(x, tf.float32)
    # Import the graph giving x as input and getting the output y
    y = tf.graph_util.import_graph_def(
        gd, input_map={'x:0': x}, return_elements=['y:0'])[0]
    return y
tf.print(my_func2(2))
# 4
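The same pattern should carry over to the frozen graph loaded in the earlier question. A sketch, assuming the wav_data:0 and labels_softmax:0 tensor names shown there and the graph_def parsed from the .pb file:
@tf.function
def run_frozen_inference(wav_data):
    # Feed the graph's wav_data input and fetch its softmax output
    return tf.graph_util.import_graph_def(
        graph_def, input_map={'wav_data:0': wav_data},
        return_elements=['labels_softmax:0'])[0]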
I have downloaded checkpoints along with the model for MobileNet v3. After extracting the rar file, I get two folders and two other files. The directory looks like the following:
Main Folder
  ema (folder)
    checkpoint
    model-x.data-00000-of-00001
    model-x.index
    model-x.meta
  pristine (folder)
    model.ckpt-y.data-00000-of-00001
    model.ckpt-y.index
    model.ckpt-y.meta
  .pb
  .tflite
I have tried many code snippets, a few of which are below.
import tensorflow as tf
from tensorflow.python.platform import gfile
model_path = "./weights/v3-large-minimalistic_224_1.0_uint8/model.ckpt-3868848"
detection_graph = tf.Graph()
with tf.Session(graph=detection_graph) as sess:
    # Load the graph with the trained states
    loader = tf.train.import_meta_graph(model_path + '.meta')
    loader.restore(sess, model_path)
The above code results in the following error:
Node {{node batch_processing/distort_image/switch_case/indexed_case}} of type Case has '_lower_using_switch_merge' attr set but it does not support lowering.
I tried the following code:
import tensorflow as tf
import sys
sys.path.insert(0, 'models/research/slim')
from nets.mobilenet import mobilenet_v3
tf.reset_default_graph()
file_input = tf.placeholder(tf.string, ())
image = tf.image.decode_jpeg(tf.read_file('test.jpg'))
images = tf.expand_dims(image, 0)
images = tf.cast(images, tf.float32) / 128. - 1
images.set_shape((None, None, None, 3))
images = tf.image.resize_images(images, (224, 224))
model = mobilenet_v3.wrapped_partial(mobilenet_v3.mobilenet,
                                     new_defaults={'scope': 'MobilenetEdgeTPU'},
                                     conv_defs=mobilenet_v3.V3_LARGE_MINIMALISTIC,
                                     depth_multiplier=1.0)
with tf.contrib.slim.arg_scope(mobilenet_v3.training_scope(is_training=False)):
    logits, endpoints = model(images)
ema = tf.train.ExponentialMovingAverage(0.999)
vars = ema.variables_to_restore()
print(vars)
with tf.Session() as sess:
    tf.train.Saver(vars).restore(sess, './weights/v3-large-minimalistic_224_1.0_uint8/saved_model.pb')
    tf.train.Saver().save(sess, './weights/v3-large-minimalistic_224_1.0_uint8/pristine/model.ckpt')
The above code generates the following error:
Unable to open table file ./weights/v3-large-minimalistic_224_1.0_uint8/saved_model.pb: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
[[node save/RestoreV2 (defined at <ipython-input-11-1531bbfd84bb>:29) ]]
How can I load Mobilenet v3 model along with the checkpoints and use it for my data?
Try this:
with tf.contrib.slim.arg_scope(mobilenet_v3.training_scope(is_training=False)):
    logits, endpoints = mobilenet_v3.large_minimalistic(images)
instead of
model = mobilenet_v3.wrapped_partial(mobilenet_v3.mobilenet,
                                     new_defaults={'scope': 'MobilenetEdgeTPU'},
                                     conv_defs=mobilenet_v3.V3_LARGE_MINIMALISTIC,
                                     depth_multiplier=1.0)
with tf.contrib.slim.arg_scope(mobilenet_v3.training_scope(is_training=False)):
    logits, endpoints = model(images)
Right now we are successfully able to serve models using TensorFlow Serving. We have used the following method to export the model and host it with TensorFlow Serving.
For exporting:
from tensorflow.contrib.session_bundle import exporter
K.set_learning_phase(0)
export_path = ... # where to save the exported graph
export_version = ... # version number (integer)
saver = tf.train.Saver(sharded=True)
model_exporter = exporter.Exporter(saver)
signature = exporter.classification_signature(input_tensor=model.input,
scores_tensor=model.output)
model_exporter.init(sess.graph.as_graph_def(),
default_graph_signature=signature)
model_exporter.export(export_path, tf.constant(export_version), sess)
For hosting:
bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9000 --model_name=default --model_base_path=/serving/models
However, our issue is that we want Keras to be integrated with TensorFlow Serving; that is, we would like to serve the model through TensorFlow Serving using Keras.
The reason is that in our architecture we follow a couple of different ways to train our model, such as deeplearning4j + Keras and
TensorFlow + Keras, but for serving we would like to use only one serving engine, TensorFlow Serving. We don't see any straightforward way to achieve that. Any comments?
Thank you.
Very recently TensorFlow changed the way it exports models, so the majority of the tutorials available on the web are outdated. I honestly don't know how deeplearning4j works, but I use Keras quite often. I managed to create a simple example that I already posted on this issue in the TensorFlow Serving GitHub.
I'm not sure whether this will help you, but I'd like to share how I did it; maybe it will give you some insights. My first trial, prior to creating my custom model, was to use a trained model available in Keras such as VGG19. I did this as follows.
Model creation
import keras.backend as K
from keras.applications import VGG19
from keras.models import Model
# very important to do this as a first thing
K.set_learning_phase(0)
model = VGG19(include_top=True, weights='imagenet')
# The creation of a new model might be optional depending on the goal
config = model.get_config()
weights = model.get_weights()
new_model = Model.from_config(config)
new_model.set_weights(weights)
Exporting the model
from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import utils
from tensorflow.python.saved_model import tag_constants, signature_constants
from tensorflow.python.saved_model.signature_def_utils_impl import build_signature_def, predict_signature_def
from tensorflow.contrib.session_bundle import exporter
export_path = 'folder_to_export'
builder = saved_model_builder.SavedModelBuilder(export_path)
signature = predict_signature_def(inputs={'images': new_model.input},
                                  outputs={'scores': new_model.output})

with K.get_session() as sess:
    builder.add_meta_graph_and_variables(sess=sess,
                                         tags=[tag_constants.SERVING],
                                         signature_def_map={'predict': signature})
    builder.save()
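As a quick sanity check of the export (not part of the original steps), the saved_model_cli tool that ships with TensorFlow can list the tags and signatures that were written:
saved_model_cli show --dir folder_to_export --all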
Some side notes
It can vary depending on the Keras, TensorFlow, and TensorFlow Serving versions. I used the latest ones.
Beware of the names of the signatures, since they should be used in the client as well.
When creating the client, all preprocessing steps that the model needs (preprocess_input() for example) must be executed. I didn't try to add such a step to the graph itself, as the Inception client example does.
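As an illustration only, client-side preprocessing for the VGG19 example above might look like this (the random array is just a stand-in for a real decoded image batch):
import numpy as np
from keras.applications.vgg19 import preprocess_input

# Stand-in for a real decoded image batch of shape (1, 224, 224, 3)
image = np.random.randint(0, 255, size=(1, 224, 224, 3)).astype('float32')
# Apply the same preprocessing the model expects before sending the request
image = preprocess_input(image)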
With respect to serving different models within the same server, I think that something similar to the creation of a model_config_file might help you. To do so, you can create a config file similar to this:
model_config_list: {
  config: {
    name: "my_model_1",
    base_path: "/tmp/model_1",
    model_platform: "tensorflow"
  },
  config: {
    name: "my_model_2",
    base_path: "/tmp/model_2",
    model_platform: "tensorflow"
  }
}
Finally, you can run the server like this:
bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9000 --config_file=model_config.conf
Try this script I wrote; it converts Keras models into TensorFlow frozen graphs (I saw that some models exhibit strange behaviour when you export them without freezing the variables).
import sys
from keras.models import load_model
import tensorflow as tf
from keras import backend as K
from tensorflow.python.framework import graph_util
from tensorflow.python.framework import graph_io
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import tag_constants
K.set_learning_phase(0)
K.set_image_data_format('channels_last')
INPUT_MODEL = sys.argv[1]
NUMBER_OF_OUTPUTS = 1
OUTPUT_NODE_PREFIX = 'output_node'
OUTPUT_FOLDER= 'frozen'
OUTPUT_GRAPH = 'frozen_model.pb'
OUTPUT_SERVABLE_FOLDER = sys.argv[2]
INPUT_TENSOR = sys.argv[3]
try:
    model = load_model(INPUT_MODEL)
except ValueError as err:
    print('Please check the input saved model file')
    raise err

output = [None] * NUMBER_OF_OUTPUTS
output_node_names = [None] * NUMBER_OF_OUTPUTS
for i in range(NUMBER_OF_OUTPUTS):
    output_node_names[i] = OUTPUT_NODE_PREFIX + str(i)
    output[i] = tf.identity(model.outputs[i], name=output_node_names[i])
print('Output Tensor names: ', output_node_names)

sess = K.get_session()
try:
    frozen_graph = graph_util.convert_variables_to_constants(sess, sess.graph.as_graph_def(), output_node_names)
    graph_io.write_graph(frozen_graph, OUTPUT_FOLDER, OUTPUT_GRAPH, as_text=False)
    print(f'Frozen graph ready for inference/serving at {OUTPUT_FOLDER}/{OUTPUT_GRAPH}')
except:
    print('Error occurred')

builder = tf.saved_model.builder.SavedModelBuilder(OUTPUT_SERVABLE_FOLDER)
with tf.gfile.GFile(f'{OUTPUT_FOLDER}/{OUTPUT_GRAPH}', "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

sigs = {}
OUTPUT_TENSOR = output_node_names
with tf.Session(graph=tf.Graph()) as sess:
    tf.import_graph_def(graph_def, name="")
    g = tf.get_default_graph()
    inp = g.get_tensor_by_name(INPUT_TENSOR)
    out = g.get_tensor_by_name(OUTPUT_TENSOR[0] + ':0')
    sigs[signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY] = \
        tf.saved_model.signature_def_utils.predict_signature_def(
            {"input": inp}, {"output": out})
    builder.add_meta_graph_and_variables(sess,
                                         [tag_constants.SERVING],
                                         signature_def_map=sigs)
try:
    builder.save()
    print(f'Model ready for deployment at {OUTPUT_SERVABLE_FOLDER}/saved_model.pb')
    print('Prediction signature : ')
    print(sigs['serving_default'])
except:
    print('Error occurred, please check the frozen graph')
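The script is driven by positional arguments. A hypothetical invocation (keras_to_servable.py is whatever you name the file, and input_1:0 is your model's input tensor name):
python keras_to_servable.py my_model.h5 exported_servable input_1:0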
I have recently added this blogpost that explains how to save a Keras model and serve it with TensorFlow Serving.
TL;DR:
Saving an InceptionV3 pretrained model:
import os
import keras
import tensorflow as tf

### Load a pretrained inception_v3
inception_model = keras.applications.inception_v3.InceptionV3(weights='imagenet')
# Define a destination path for the model
MODEL_EXPORT_DIR = '/tmp/inception_v3'
MODEL_VERSION = 1
MODEL_EXPORT_PATH = os.path.join(MODEL_EXPORT_DIR, str(MODEL_VERSION))
# We'll need to create an input mapping, and name each of the input tensors.
# In the inception_v3 Keras model, there is only a single input and we'll name it 'image'
input_names = ['image']
name_to_input = {name: t_input for name, t_input in zip(input_names, inception_model.inputs)}
# Save the model to the MODEL_EXPORT_PATH
# Note using 'name_to_input' mapping, the names defined here will also be used for querying the service later
tf.saved_model.simple_save(
keras.backend.get_session(),
MODEL_EXPORT_PATH,
inputs=name_to_input,
outputs={t.name: t for t in inception_model.outputs})
And then starting a TF Serving Docker container:
Copy the saved model to the host's specified directory (source=/tmp/inception_v3 in this example).
Run the Docker container:
docker run -d -p 8501:8501 --name keras_inception_v3 --mount type=bind,source=/tmp/inception_v3,target=/models/inception_v3 -e MODEL_NAME=inception_v3 -t tensorflow/serving
Verify that there's network access to the TensorFlow service. To get the local Docker IP (172.*.*.*) for testing, run:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' keras_inception_v3
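Once the container is up, predictions can be requested over the REST API on port 8501. A sketch only: 'image' is the input name defined during export above, and the zero array is a stand-in for a real preprocessed 299x299x3 image:
import json
import numpy as np
import requests

# Stand-in input; replace with a real preprocessed image
payload = {"instances": [{"image": np.zeros((299, 299, 3)).tolist()}]}
response = requests.post("http://localhost:8501/v1/models/inception_v3:predict",
                         data=json.dumps(payload))
print(response.json())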
(1) I'm trying to fine-tune a VGG-16 network using TF-Slim by loading pretrained weights into all layers except the fc8 layer. I achieved this by using the TF-Slim function as follows:
import tensorflow as tf
import tensorflow.contrib.slim as slim
import tensorflow.contrib.slim.nets as nets
vgg = nets.vgg
# Specify where the Model, trained on ImageNet, was saved.
model_path = 'path/to/vgg_16.ckpt'
# Specify where the new model will live:
log_dir = 'path/to/log/'
images = tf.placeholder(tf.float32, [None, 224, 224, 3])
predictions = vgg.vgg_16(images)
variables_to_restore = slim.get_variables_to_restore(exclude=['fc8'])
restorer = tf.train.Saver(variables_to_restore)
init = tf.initialize_all_variables()
with tf.Session() as sess:
    sess.run(init)
    restorer.restore(sess, model_path)
    print "model restored"
This works fine as long as I do not change num_classes for the VGG16 model. What I would like to do is change num_classes from 1000 to 200. I was under the impression that if I made this modification by defining a new vgg16-modified class that replaces fc8 to produce 200 outputs (along with variables_to_restore = slim.get_variables_to_restore(exclude=['fc8'])), everything would be fine and dandy. However, TensorFlow complains of a dimension mismatch:
InvalidArgumentError (see above for traceback): Assign requires shapes of both tensors to match. lhs shape= [1,1,4096,200] rhs shape= [1,1,4096,1000]
So, how does one really go about doing this? The documentation for TF-Slim is really patchy, and there are several versions scattered on GitHub, so I'm not getting much help there.
You can try using slim's way of restoring — slim.assign_from_checkpoint.
There is related documentation in the slim sources:
https://github.com/tensorflow/tensorflow/blob/129665119ea60640f7ed921f36db9b5c23455224/tensorflow/contrib/slim/python/slim/learning.py
Corresponding part:
*************************************************
* Fine-Tuning Part of a model from a checkpoint *
*************************************************
Rather than initializing all of the weights of a given model, we sometimes
only want to restore some of the weights from a checkpoint. To do this, one
need only filter those variables to initialize as follows:
...
# Create the train_op
train_op = slim.learning.create_train_op(total_loss, optimizer)
checkpoint_path = '/path/to/old_model_checkpoint'
# Specify the variables to restore via a list of inclusion or exclusion
# patterns:
variables_to_restore = slim.get_variables_to_restore(
    include=["conv"], exclude=["fc8", "fc9"])
# or
variables_to_restore = slim.get_variables_to_restore(exclude=["conv"])
init_assign_op, init_feed_dict = slim.assign_from_checkpoint(
checkpoint_path, variables_to_restore)
# Create an initial assignment function.
def InitAssignFn(sess):
    sess.run(init_assign_op, init_feed_dict)
# Run training.
slim.learning.train(train_op, my_log_dir, init_fn=InitAssignFn)
Update
I tried the following:
import tensorflow as tf
import tensorflow.contrib.slim as slim
import tensorflow.contrib.slim.nets as nets
images = tf.placeholder(tf.float32, [None, 224, 224, 3])
predictions = nets.vgg.vgg_16(images)
print [v.name for v in slim.get_variables_to_restore(exclude=['fc8']) ]
And got this output (shortened):
[u'vgg_16/conv1/conv1_1/weights:0',
u'vgg_16/conv1/conv1_1/biases:0',
…
u'vgg_16/fc6/weights:0',
u'vgg_16/fc6/biases:0',
u'vgg_16/fc7/weights:0',
u'vgg_16/fc7/biases:0',
u'vgg_16/fc8/weights:0',
u'vgg_16/fc8/biases:0']
So it looks like you should prefix the scope with vgg_16:
print [v.name for v in slim.get_variables_to_restore(exclude=['vgg_16/fc8']) ]
gives (shortened):
[u'vgg_16/conv1/conv1_1/weights:0',
u'vgg_16/conv1/conv1_1/biases:0',
…
u'vgg_16/fc6/weights:0',
u'vgg_16/fc6/biases:0',
u'vgg_16/fc7/weights:0',
u'vgg_16/fc7/biases:0']
Update 2
Complete example that executes without errors (on my system).
import tensorflow as tf
import tensorflow.contrib.slim as slim
import tensorflow.contrib.slim.nets as nets
s = tf.Session(config=tf.ConfigProto(gpu_options={'allow_growth':True}))
images = tf.placeholder(tf.float32, [None, 224, 224, 3])
predictions = nets.vgg.vgg_16(images, 200)
variables_to_restore = slim.get_variables_to_restore(exclude=['vgg_16/fc8'])
init_assign_op, init_feed_dict = slim.assign_from_checkpoint('./vgg16.ckpt', variables_to_restore)
s.run(init_assign_op, init_feed_dict)
In the example above, vgg16.ckpt is a checkpoint saved by tf.train.Saver for the 1000-class VGG16 model.
Using this checkpoint with all variables of the 200-class model (including fc8) gives the following error:
init_assign_op, init_feed_dict = slim.assign_from_checkpoint('./vgg16.ckpt', slim.get_variables_to_restore())
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
1 init_assign_op, init_feed_dict = slim.assign_from_checkpoint(
----> 2 './vgg16.ckpt', slim.get_variables_to_restore())
/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/framework/python/ops/variables.pyc in assign_from_checkpoint(model_path, var_list)
527 assign_ops.append(var.assign(placeholder_value))
528
--> 529 feed_dict[placeholder_value] = var_value.reshape(var.get_shape())
530
531 assign_op = control_flow_ops.group(*assign_ops)
ValueError: total size of new array must be unchanged
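One follow-up detail not shown above (a sketch only): since fc8 is excluded from the restore in the working example, its variables still need to be initialized separately before training, for example:
# Initialize only the freshly created fc8 variables; the rest were restored
fc8_variables = slim.get_variables('vgg_16/fc8')
s.run(tf.variables_initializer(fc8_variables))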