With reference to this post:
Using pre-trained inception_resnet_v2 with Tensorflow
I am trying to use the inception_resnet_v2 model to get predictions for images as well. So I looked at the snippet and tried to get it running, but it fails with "input_tensor" is not defined. Is anything missing in the code, or can anyone give me a hint on how to get it running / how to define the input_tensor variable?
Here is the snippet again:
import tensorflow as tf
slim = tf.contrib.slim
from PIL import Image
from inception_resnet_v2 import *
import numpy as np
checkpoint_file = 'inception_resnet_v2_2016_08_30.ckpt'
sample_images = ['dog.jpg', 'panda.jpg']
#Load the model
sess = tf.Session()
arg_scope = inception_resnet_v2_arg_scope()
with slim.arg_scope(arg_scope):
    logits, end_points = inception_resnet_v2(input_tensor, is_training=False)
saver = tf.train.Saver()
saver.restore(sess, checkpoint_file)
for image in sample_images:
    im = Image.open(image).resize((299,299))
    im = np.array(im)
    im = im.reshape(-1,299,299,3)
    predict_values, logit_values = sess.run([end_points['Predictions'], logits], feed_dict={input_tensor: im})
    print(np.max(predict_values), np.max(logit_values))
    print(np.argmax(predict_values), np.argmax(logit_values))
Thanks
The code snippet appears to lack any definition for input_tensor. Looking at the definition of the inception_resnet_v2() function, the fact that the tensor is used in a feed_dict, and the fact that the size of your image is 299 x 299, you could define input_tensor as follows:
input_tensor = tf.placeholder(tf.float32, [None, 299, 299, 3])
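Putting it together, a minimal sketch with the placeholder defined before the network is built:
import tensorflow as tf
slim = tf.contrib.slim
from inception_resnet_v2 import *

# Define the input placeholder first, then build the network on top of it.
input_tensor = tf.placeholder(tf.float32, [None, 299, 299, 3])
with slim.arg_scope(inception_resnet_v2_arg_scope()):
    logits, end_points = inception_resnet_v2(input_tensor, is_training=False)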
Related
I would like to run a TFLite model that requires me to produce a 3D output (the sample code below is a minimal example that reproduces the error). Is there a TensorFlow equivalent to gather_nd that does not reduce the dimension by one?
I've looked through the documentation for related functions I can think of and haven't found a good option.
import tensorflow.compat.v1 as tf
import numpy as np
tf.disable_v2_behavior()
initial_input = tf.placeholder(dtype=tf.float32, shape=(None,5,1024))
cap_i = tf.gather_nd(initial_input, [[0,1]]) #[0,2],[0,3],[0,4],[0,5]
cap_i_broadcast = tf.broadcast_to(cap_i, [1,5,1024])
cap_iT = tf.transpose(cap_i_broadcast, perm=[0,2,1])
sess = tf.Session()
sess.run(tf.global_variables_initializer())
tf.io.write_graph(sess.graph_def, '', 'train.pbtxt')
converter = tf.lite.TFLiteConverter.from_session(sess, [initial_input], [cap_iT])
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
open('converted_model.tflite', "wb").write(tflite_model)
sess.close()
Some of the operators in the model are not supported by the standard TensorFlow Lite runtime and are not recognized by TensorFlow. If you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: GATHER_ND, TRANSPOSE. Here is a list of operators for which you will need custom implementations: BroadcastTo.
The following code solves the problem by using strided_slice with a shrink_axis_mask to reduce the dimension, and then a reshape to get back to the correct dimensions.
import tensorflow.compat.v1 as tf
import numpy as np
tf.disable_v2_behavior()
initial_input = tf.placeholder(dtype=tf.float32, shape=(None,5,1024))
cap_i = tf.strided_slice(initial_input, [0,0,0], [0,5,1024], [1,1,1],
                         shrink_axis_mask=1)
cap_i_reshaped = tf.reshape(cap_i, [1,5,1024])
cap_iT = tf.transpose(cap_i_reshaped, perm=[0,2,1])
sess = tf.Session()
sess.run(tf.global_variables_initializer())
tf.io.write_graph(sess.graph_def, '', 'train.pbtxt')
converter = tf.lite.TFLiteConverter.from_session(sess, [initial_input],
                                                 [cap_iT])
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
open('converted_model.tflite', "wb").write(tflite_model)
sess.close()
I previously thought slice was supported in TFLite, but only strided_slice is.
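As a quick sanity check, here is a sketch of running the converted model with the TFLite interpreter (the input values are arbitrary, and the batch dimension is assumed to have resolved to 1 during conversion):
import numpy as np
import tensorflow.compat.v1 as tf

# Load the converted model and run one inference pass.
interpreter = tf.lite.Interpreter(model_path='converted_model.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Arbitrary test input matching the (1, 5, 1024) input shape.
test_input = np.random.rand(1, 5, 1024).astype(np.float32)
interpreter.set_tensor(input_details[0]['index'], test_input)
interpreter.invoke()
output = interpreter.get_tensor(output_details[0]['index'])
print(output.shape)  # expected (1, 1024, 5) after the transpose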
I am trying to create an image classifier that uses the pre-trained Inception-ResNet v2 model provided in the slim documentation.
Here is the code so far:
import tensorflow as tf
slim = tf.contrib.slim
from PIL import Image
from inception_resnet_v2 import *
import numpy as np
checkpoint_file = 'inception_resnet_v2_2016_08_30.ckpt'
sample_images = ['carrot.jpg']
input_tensor = tf.placeholder(tf.float32, shape=(None,299,299,3), name='input_image')
scaled_input_tensor = tf.scalar_mul((1.0/255), input_tensor)
scaled_input_tensor = tf.subtract(scaled_input_tensor, 0.5)
scaled_input_tensor = tf.multiply(scaled_input_tensor, 2.0)
variables_to_restore = slim.get_model_variables()
print(variables_to_restore)
init_fn = slim.assign_from_checkpoint_fn(
checkpoint_file,
slim.get_model_variables('InceptionResnetV2'))
sess = tf.Session()
init_fn(sess)
arg_scope = inception_resnet_v2_arg_scope()
with slim.arg_scope(arg_scope):
    logits, end_points = inception_resnet_v2(scaled_input_tensor, is_training=False)
for image in sample_images:
    im = Image.open(image).resize((299,299))
    im = np.array(im)
    im = im.reshape(-1,299,299,3)
    predict_values, logit_values = sess.run([end_points['Predictions'], logits], feed_dict={input_tensor: im})
    print(np.max(predict_values), np.max(logit_values))
    print(np.argmax(predict_values), np.argmax(logit_values))
The problem is I keep getting this error:
Traceback (most recent call last):
  File "./classify.py", line 21, in <module>
    slim.get_model_variables('InceptionResnetV2'))
  File "/home/ubuntu/tensorflow/local/lib/python2.7/site-packages/tensorflow/contrib/framework/python/ops/variables.py", line 584, in assign_from_checkpoint_fn
    saver = tf_saver.Saver(var_list, reshape=reshape_variables)
  File "/home/ubuntu/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1040, in __init__
    self.build()
  File "/home/ubuntu/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1061, in build
    raise ValueError("No variables to save")
ValueError: No variables to save
So it seems TF/Slim is unable to find any variables, which becomes clear when I call:
variables_to_restore = slim.get_model_variables()
print(variables_to_restore)
which outputs an empty list.
How can I go about using the pre-trained model?
This happens because you haven't constructed the model in your graph yet, so there are no variables with names starting with "InceptionResnetV2" for the saver to capture and restore.
You should construct the model before calling slim.get_model_variables().
For instance:
with slim.arg_scope(arg_scope):
    logits, end_points = inception_resnet_v2(scaled_input_tensor, is_training=False)
variables_to_restore = slim.get_model_variables()
This way, the model variables will be constructed, and you should see that variables_to_restore is no longer empty.
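A sketch of the full corrected ordering, using the code from the question (build the graph, collect the variables, then restore):
sess = tf.Session()

# 1. Construct the model first so its variables exist in the graph.
arg_scope = inception_resnet_v2_arg_scope()
with slim.arg_scope(arg_scope):
    logits, end_points = inception_resnet_v2(scaled_input_tensor, is_training=False)

# 2. Now the model variables can be collected ...
variables_to_restore = slim.get_model_variables('InceptionResnetV2')

# 3. ... and restored from the checkpoint.
init_fn = slim.assign_from_checkpoint_fn(checkpoint_file, variables_to_restore)
init_fn(sess)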
You need to manually add the model variables.
Try this
with slim.arg_scope(arg_scope):
    logits, end_points = inception_resnet_v2(scaled_input_tensor, is_training=False)

# Add model variables (note the variable scope is 'InceptionResnetV2')
for var in tf.global_variables(scope='InceptionResnetV2'):
    slim.add_model_variable(var)
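After this, slim.get_model_variables() should no longer return an empty list, so the restore from the question should work; a quick check (sketch):
variables_to_restore = slim.get_model_variables('InceptionResnetV2')
assert variables_to_restore, 'model variables should now be registered'
init_fn = slim.assign_from_checkpoint_fn(checkpoint_file, variables_to_restore)
init_fn(sess)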
TensorBoard provides embedding visualization of TensorFlow variables by using tf.train.Saver(). The following is a working example (from this answer):
import os
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
from tensorflow.contrib.tensorboard.plugins import projector
LOG_DIR = '/tmp/emb_logs/'
metadata = os.path.join(LOG_DIR, 'metadata.tsv')
mnist = input_data.read_data_sets('MNIST_data')
#Variables
images = tf.Variable(mnist.test.images, name='images')
weights=tf.Variable(tf.random_normal([3,3,1,16]))
biases=tf.Variable(tf.zeros([16]))
#Tensor from the variables
x = tf.reshape(images, shape=[-1, 28, 28, 1])
conv_layer=tf.nn.conv2d(x, weights, [1,1,1,1], padding="SAME")
conv_layer=tf.add(conv_layer, biases)
y = tf.reshape(conv_layer, shape=[-1, 28*28*16])
with open(metadata, 'w') as metadata_file:
    for row in mnist.test.labels:
        metadata_file.write('%d\n' % row)
with tf.Session() as sess:
    saver = tf.train.Saver([images])
    sess.run(images.initializer)
    saver.save(sess, os.path.join(LOG_DIR, 'images.ckpt'))

    config = projector.ProjectorConfig()
    # One can add multiple embeddings.
    embedding = config.embeddings.add()
    embedding.tensor_name = images.name
    # Link this tensor to its metadata file (e.g. labels).
    embedding.metadata_path = metadata
    # Saves a config file that TensorBoard will read during startup.
    projector.visualize_embeddings(tf.summary.FileWriter(LOG_DIR), config)
How can I visualize embeddings from a tensorflow tensor, like y in the code above?
Simply replacing
saver = tf.train.Saver([images])
with
saver = tf.train.Saver([y])
doesn't work, because of the following error:
474 var = ops.convert_to_tensor(var, as_ref=True)
475 if not BaseSaverBuilder._IsVariable(var):
476 raise TypeError("Variable to save is not a Variable: %s" % var)
477 name = var.op.name
478 if name in names_to_saveables:
TypeError: Variable to save is not a Variable: Tensor("Reshape_11:0", shape=(10000, 12544), dtype=float32)
Does anyone know of an alternative to generate tensorboard embedding visualizations of a tf.tensor?
You could create a new variable and assign the value of y to it:
# A real tf.Variable with the same fully-defined shape as y
# (10000 x 28*28*16 here); a Saver can only save Variables, not Tensors.
y_var = tf.Variable(tf.zeros([10000, 28*28*16]), name='y_var')
saver = tf.train.Saver([y_var])
assign_op = y_var.assign(y)
...
# Training
sess.run((loss, assign_op))
saver.save(...)
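Then point the projector at the saved variable, mirroring the images setup above (a sketch; y_var is the variable defined here):
# Configure the projector to visualize y_var instead of images.
config = projector.ProjectorConfig()
embedding = config.embeddings.add()
embedding.tensor_name = y_var.name
embedding.metadata_path = metadata  # reuse the same label metadata
projector.visualize_embeddings(tf.summary.FileWriter(LOG_DIR), config)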
tf.summary.tensor_summary will save a summary for an arbitrary Tensor.
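For instance, a minimal sketch (this writes a generic tensor summary, which is separate from the projector's embedding visualization):
# Serialize the raw tensor y as a summary proto and write it to the log dir.
summary_op = tf.summary.tensor_summary('y_summary', y)
writer = tf.summary.FileWriter(LOG_DIR)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    writer.add_summary(sess.run(summary_op))
writer.close()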
I'm trying to fine-tune a VGG-16 network using TF-Slim by loading pretrained weights into all layers except the fc8 layer. I achieved this by using the TF-Slim function as follows:
import tensorflow as tf
import tensorflow.contrib.slim as slim
import tensorflow.contrib.slim.nets as nets
vgg = nets.vgg
# Specify where the Model, trained on ImageNet, was saved.
model_path = 'path/to/vgg_16.ckpt'
# Specify where the new model will live:
log_dir = 'path/to/log/'
images = tf.placeholder(tf.float32, [None, 224, 224, 3])
predictions = vgg.vgg_16(images)
variables_to_restore = slim.get_variables_to_restore(exclude=['fc8'])
restorer = tf.train.Saver(variables_to_restore)
init = tf.initialize_all_variables()
with tf.Session() as sess:
    sess.run(init)
    restorer.restore(sess, model_path)
    print("model restored")
This works fine as long as I do not change num_classes for the VGG16 model. What I would like to do is change num_classes from 1000 to 200. I was under the impression that if I made this modification by defining a new vgg16-modified class that replaces fc8 to produce 200 outputs (along with variables_to_restore = slim.get_variables_to_restore(exclude=['fc8'])), everything would be fine and dandy. However, tensorflow complains of a dimension mismatch:
InvalidArgumentError (see above for traceback): Assign requires shapes of both tensors to match. lhs shape= [1,1,4096,200] rhs shape= [1,1,4096,1000]
So, how does one really go about doing this ? The documentation for TFSlim is really patchy and there are several versions scattered on Github - so not getting much help there.
You can try slim's way of restoring: slim.assign_from_checkpoint.
There is related documentation in the slim sources:
https://github.com/tensorflow/tensorflow/blob/129665119ea60640f7ed921f36db9b5c23455224/tensorflow/contrib/slim/python/slim/learning.py
Corresponding part:
*************************************************
* Fine-Tuning Part of a model from a checkpoint *
*************************************************
Rather than initializing all of the weights of a given model, we sometimes
only want to restore some of the weights from a checkpoint. To do this, one
need only filter those variables to initialize as follows:
...
# Create the train_op
train_op = slim.learning.create_train_op(total_loss, optimizer)
checkpoint_path = '/path/to/old_model_checkpoint'
# Specify the variables to restore via a list of inclusion or exclusion
# patterns:
variables_to_restore = slim.get_variables_to_restore(
    include=["conv"], exclude=["fc8", "fc9"])
# or
variables_to_restore = slim.get_variables_to_restore(exclude=["conv"])
init_assign_op, init_feed_dict = slim.assign_from_checkpoint(
    checkpoint_path, variables_to_restore)

# Create an initial assignment function.
def InitAssignFn(sess):
    sess.run(init_assign_op, init_feed_dict)
# Run training.
slim.learning.train(train_op, my_log_dir, init_fn=InitAssignFn)
Update
I tried the following:
import tensorflow as tf
import tensorflow.contrib.slim as slim
import tensorflow.contrib.slim.nets as nets
images = tf.placeholder(tf.float32, [None, 224, 224, 3])
predictions = nets.vgg.vgg_16(images)
print [v.name for v in slim.get_variables_to_restore(exclude=['fc8']) ]
And got this output (shortened):
[u'vgg_16/conv1/conv1_1/weights:0',
u'vgg_16/conv1/conv1_1/biases:0',
…
u'vgg_16/fc6/weights:0',
u'vgg_16/fc6/biases:0',
u'vgg_16/fc7/weights:0',
u'vgg_16/fc7/biases:0',
u'vgg_16/fc8/weights:0',
u'vgg_16/fc8/biases:0']
So it looks like you should prefix scope with vgg_16:
print [v.name for v in slim.get_variables_to_restore(exclude=['vgg_16/fc8']) ]
gives (shortened):
[u'vgg_16/conv1/conv1_1/weights:0',
u'vgg_16/conv1/conv1_1/biases:0',
…
u'vgg_16/fc6/weights:0',
u'vgg_16/fc6/biases:0',
u'vgg_16/fc7/weights:0',
u'vgg_16/fc7/biases:0']
Update 2
A complete example that executes without errors (on my system):
import tensorflow as tf
import tensorflow.contrib.slim as slim
import tensorflow.contrib.slim.nets as nets
s = tf.Session(config=tf.ConfigProto(gpu_options={'allow_growth':True}))
images = tf.placeholder(tf.float32, [None, 224, 224, 3])
predictions = nets.vgg.vgg_16(images, 200)
variables_to_restore = slim.get_variables_to_restore(exclude=['vgg_16/fc8'])
init_assign_op, init_feed_dict = slim.assign_from_checkpoint('./vgg16.ckpt', variables_to_restore)
s.run(init_assign_op, init_feed_dict)
In the example above vgg16.ckpt is a checkpoint saved by tf.train.Saver for 1000 classes VGG16 model.
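For reference, a purely illustrative sketch of how a checkpoint with this structure could be produced (with freshly initialized, untrained weights, in a fresh graph; a real vgg16.ckpt would come from training or from the released pretrained weights):
import tensorflow as tf
import tensorflow.contrib.slim as slim
import tensorflow.contrib.slim.nets as nets

images = tf.placeholder(tf.float32, [None, 224, 224, 3])
predictions = nets.vgg.vgg_16(images)  # default num_classes=1000
saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.save(sess, './vgg16.ckpt')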
Using this checkpoint to restore all variables of the 200-class model (including fc8) gives the following error:
init_assign_op, init_feed_dict = slim.assign_from_checkpoint('./vgg16.ckpt', slim.get_variables_to_restore())
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
1 init_assign_op, init_feed_dict = slim.assign_from_checkpoint(
----> 2 './vgg16.ckpt', slim.get_variables_to_restore())
/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/framework/python/ops/variables.pyc in assign_from_checkpoint(model_path, var_list)
527 assign_ops.append(var.assign(placeholder_value))
528
--> 529 feed_dict[placeholder_value] = var_value.reshape(var.get_shape())
530
531 assign_op = control_flow_ops.group(*assign_ops)
ValueError: total size of new array must be unchanged
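One detail the complete example leaves implicit: since the fc8 variables were excluded from the restore, they still need to be initialized before the model can run. A sketch, reusing the session s from above:
# fc8 was excluded from the checkpoint restore, so initialize it fresh.
fc8_variables = slim.get_variables('vgg_16/fc8')
s.run(tf.variables_initializer(fc8_variables))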
I am trying to run the Inception v3 TensorFlow model with the architecture and the checkpoint provided by Google here.
My issue is that my script crashes on saver.restore(sess, "./inception_v3.ckpt") with the following error:
tensorflow.python.framework.errors.NotFoundError: Tensor name "InceptionV3/Mixed_5b/Branch_1/Conv2d_0b_5x5/biases" not found in checkpoint files ./inception_v3.ckpt
Here is my code:
import tensorflow as tf
import inception_v3
with tf.Session() as sess:
    image = tf.read_file('./file.jpg')
    # code to decode, crop, convert jpeg
    eval_inputs = tf.pack([image])
    logits, _ = inception_v3.inception_v3(eval_inputs, num_classes=1001, is_training=False)
    sess.run(tf.initialize_all_variables())
    saver = tf.train.Saver()
    saver.restore(sess, "./inception_v3.ckpt")
I get the same errors with the other checkpoint/model combinations so this must be an issue with my code. Not sure what I am doing wrong though.
Thank you
Indeed the checkpoint file does not contain this tensor. Can you file a bug on github?
You need to call inception_v3() within the arg_scope() returned by inception_v3_arg_scope() like this:
import numpy as np
import tensorflow as tf
import tensorflow.contrib.slim as slim
from nets.inception_v3 import inception_v3, inception_v3_arg_scope
height = 299
width = 299
channels = 3
# Create graph
X = tf.placeholder(tf.float32, shape=[None, height, width, channels])
with slim.arg_scope(inception_v3_arg_scope()):
    logits, end_points = inception_v3(X, num_classes=1001,
                                      is_training=False)
    predictions = end_points["Predictions"]
saver = tf.train.Saver()
X_test = ... # your images, shape [batch_size, 299, 299, 3]
# Execute graph
with tf.Session() as sess:
    saver.restore(sess, "./inception_v3.ckpt")
    predictions_val = predictions.eval(feed_dict={X: X_test})
    predicted_classes = np.argmax(predictions_val, axis=1)
I recommend clearly separating the construction phase and the execution phase. I just tested this on a random photo from the web, and it worked fine. :)
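If it helps, here is a sketch of preparing X_test from a JPEG file (the file name is a placeholder; the scaling to [-1, 1] matches the preprocessing used in the earlier question above):
import numpy as np
from PIL import Image

# Load a test image and scale pixel values from [0, 255] to [-1, 1].
im = Image.open('some_image.jpg').resize((299, 299))  # placeholder file name
im = np.array(im).astype(np.float32)
im = (im / 255.0 - 0.5) * 2.0
X_test = im.reshape(1, 299, 299, 3)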