How to use the GPU for object detection with pre-trained models in TensorFlow?

I'm using TensorFlow to detect objects based on this tutorial. The reason it runs so slowly is this line: output_dict = sess.run(tensor_dict, feed_dict={image_tensor: np.expand_dims(image, 0)}). Below is the whole function:
def run_inference_for_single_image(image, graph):
    with graph.as_default():
        with tf.device('/gpu:0'):
            print('GPU is using')
            with tf.Session() as sess:
                time0 = datetime.datetime.now()
                # Get handles to input and output tensors
                ops = tf.get_default_graph().get_operations()
                all_tensor_names = {output.name for op in ops for output in op.outputs}
                tensor_dict = {}
                for key in [
                        'num_detections', 'detection_boxes', 'detection_scores',
                        'detection_classes', 'detection_masks'
                ]:
                    tensor_name = key + ':0'
                    if tensor_name in all_tensor_names:
                        tensor_dict[key] = tf.get_default_graph().get_tensor_by_name(
                            tensor_name)
                time1 = datetime.datetime.now()
                if 'detection_masks' in tensor_dict:
                    # The following processing is only for a single image
                    detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0])
                    detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0])
                    # Reframing is required to translate the masks from box coordinates to image coordinates and fit the image size.
                    real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32)
                    detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1])
                    detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1])
                    detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
                        detection_masks, detection_boxes, image.shape[0], image.shape[1])
                    detection_masks_reframed = tf.cast(
                        tf.greater(detection_masks_reframed, 0.5), tf.uint8)
                    # Follow the convention by adding back the batch dimension
                    tensor_dict['detection_masks'] = tf.expand_dims(
                        detection_masks_reframed, 0)
                image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')
                time2 = datetime.datetime.now()
                # Run inference
                output_dict = sess.run(tensor_dict,
                                       feed_dict={image_tensor: np.expand_dims(image, 0)})
                time3 = datetime.datetime.now()
                # All outputs are float32 numpy arrays, so convert types as appropriate
                output_dict['num_detections'] = int(output_dict['num_detections'][0])
                output_dict['detection_classes'] = output_dict[
                    'detection_classes'][0].astype(np.uint8)
                output_dict['detection_boxes'] = output_dict['detection_boxes'][0]
                output_dict['detection_scores'] = output_dict['detection_scores'][0]
                if 'detection_masks' in output_dict:
                    output_dict['detection_masks'] = output_dict['detection_masks'][0]
                time4 = datetime.datetime.now()
                print(time1 - time0, time2 - time1, time3 - time2, time4 - time3)
                return output_dict
I don't know how to make tf.Session().run() use the GPU. Can anyone show me how to run session.run() on the GPU?
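For illustration (this is not from the original post), a minimal sketch of how GPU usage is usually checked in TF 1.x: pass a ConfigProto with device-placement logging when creating the session, so the console shows which device each op runs on. The name detection_graph below is an assumption for the frozen graph loaded earlier in the tutorial; tensor_dict and image_tensor are the handles built in the function above.
# Hypothetical sketch: verify device placement instead of forcing tf.device('/gpu:0').
config = tf.ConfigProto(log_device_placement=True)  # prints the op -> device mapping
config.gpu_options.allow_growth = True              # grab GPU memory on demand

with detection_graph.as_default():                  # detection_graph: assumed, loaded from the frozen model
    with tf.Session(config=config) as sess:
        # If TensorFlow was installed with GPU support, ops are placed on /gpu:0
        # automatically; sess.run() itself needs no special GPU argument.
        output_dict = sess.run(tensor_dict,
                               feed_dict={image_tensor: np.expand_dims(image, 0)})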


Recurrent neural network time-series prediction with the newer TensorFlow 1.14

How do I use the new tf.keras API with a recurrent neural network? I have checked the documentation, but there is no example for this situation.
There is the great book Hands-On Machine Learning from 2017. Since then the TensorFlow API has evolved, and I am trying to rewrite a recurrent neural network for time-series prediction using version 1.14.
The code from the book uses the older tf.nn.dynamic_rnn and tf.nn.rnn_cell.BasicRNNCell:
n_steps = 20
n_inputs = 1
n_neurons = 100
n_outputs = 1
learning_rate = 0.001
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_steps, n_outputs])
cell = tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons, activation=tf.nn.relu)
rnn_outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)
stacked_rnn_outputs = tf.reshape(rnn_outputs, [-1, n_neurons])
stacked_outputs = tf.layers.dense(stacked_rnn_outputs, n_outputs)
outputs = tf.reshape(stacked_outputs, [-1, n_steps, n_outputs])
loss = tf.reduce_mean(tf.square(outputs - y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_iterations = 500
batch_size = 50
with tf.Session() as sess:
    init.run()
    for iteration in range(n_iterations):
        X_batch, y_batch = next_batch(batch_size, n_steps)
        sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
        if iteration % 100 == 0:
            mse = loss.eval(feed_dict={X: X_batch, y: y_batch})
            print(iteration, "\tMSE:", mse)
    X_new = time_series(np.array(t_instance[:-1].reshape(-1, n_steps, n_inputs)))
    y_pred = sess.run(outputs, feed_dict={X: X_new})
This code works just fine (except that it throws deprecation warnings left and right). I wanted to use the tf.keras API as the warnings suggest. My code is the same except for:
cell = tf.keras.layers.SimpleRNNCell(units=n_neurons, activation=tf.nn.relu)
rnn_outputs = tf.keras.layers.RNN(cell,dtype=tf.float32, name="hidden1")(X)
But this raises the following exception:
InvalidArgumentError: Input to reshape is a tensor with 50 values, but the requested shape requires a multiple of 20
[[node Reshape_1 (defined at <ipython-input-9-879361be49dd>:3) ]]
So I understand that the problematic line is:
outputs = tf.reshape(stacked_outputs, [-1, n_steps, n_outputs])
After checking and comparing the documentation for both (https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn and
https://www.tensorflow.org/api_docs/python/tf/keras/layers/RNN), I can't find the culprit.
What is the difference between these two cells? How do I use the tf.keras API with time series?
Full old code: https://github.com/ageron/handson-ml/blob/master/14_recurrent_neural_networks.ipynb
My full code:
import numpy as np
import tensorflow as tf
from datetime import datetime
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
import pandas as pd
from utils import shuffle_batch, variable_summaries
import os
dir_path = os.getcwd()
now = datetime.utcnow().strftime("%Y%m%d%H%M%S")
root_logdir = "tf_logs"
logdir = "{}/run-{}/".format(root_logdir, now)
print(dir_path)
t_min, t_max = -5, 5
section_start = (t_max + t_min) / 2
resolution = 0.1
n_steps = 20
def time_series(t):
    return np.sin(t)

def next_batch(batch_size, n_steps):
    t0 = np.random.rand(batch_size, 1) * (t_max - t_min - n_steps * resolution)
    Ts = t0 + np.arange(0., n_steps + 1) * resolution
    ys = time_series(Ts)
    return ys[:, :-1].reshape(-1, n_steps, 1), ys[:, 1:].reshape(-1, n_steps, 1)
t = np.linspace(t_min, t_max, int((t_max - t_min) / resolution))
t_instance = np.linspace(start = section_start, stop = section_start + resolution * (n_steps + 1),num = n_steps + 1)
plt.figure(figsize=(11,4))
plt.subplot(121)
plt.title("A time series (generated)", fontsize=14)
plt.plot(t, time_series(t), label=r"original")
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "b-", linewidth=3, label="A training instance")
plt.legend(loc="lower left", fontsize=14)
#plt.axis([-10, 10, -17, 13])
plt.xlabel("Time")
plt.ylabel("Value")
plt.subplot(122)
plt.title("A training instance", fontsize=14)
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "bo", markersize=10, label="instance")
plt.plot(t_instance[1:], time_series(t_instance[1:]), "c*", markersize=10, label="target")
plt.legend(loc="upper left")
plt.xlabel("Time")
# In[6]:
n_steps = 20
n_inputs = 1
n_neurons = 100
n_outputs = 1
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_steps, n_outputs])
# In[7]:
cell = tf.keras.layers.SimpleRNNCell(units=n_neurons, activation=tf.nn.relu)
rnn_outputs = tf.keras.layers.RNN(cell,dtype=tf.float32, name="hidden1")(X)
print(rnn_outputs.get_shape())
stacked_rnn_outputs = tf.reshape(rnn_outputs, [-1, n_neurons], name='reshape1')
stacked_outputs = tf.keras.layers.Dense(n_outputs,name="hidden2")(stacked_rnn_outputs)
outputs = tf.reshape(stacked_outputs, [-1, n_steps, n_outputs], name='reshape2')
learning_rate = 0.001
loss = tf.reduce_mean(tf.square(outputs - y)) # MSE
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_iterations = 1500
batch_size = 50
save_path =os.path.join(dir_path,"model","recurrent_sinus_model")
with tf.Session() as sess:
    init.run()
    for iteration in range(n_iterations):
        X_batch, y_batch = next_batch(batch_size, n_steps)
        sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
        if iteration % 100 == 0:
            mse = loss.eval(feed_dict={X: X_batch, y: y_batch})
            print(iteration, "\tMSE:", mse)
    saver.save(sess, save_path)

with tf.Session() as sess:
    saver.restore(sess, save_path)
    X_new = time_series(np.array(t_instance[:-1].reshape(-1, n_steps, n_inputs)))
    y_pred = sess.run(outputs, feed_dict={X: X_new})
plt.title("Testing the model", fontsize=14)
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "bo", markersize=10, label="instance")
plt.plot(t_instance[1:], time_series(t_instance[1:]), "w*", markersize=10, label="target")
plt.plot(t_instance[1:], y_pred[0,:,0], "r.", markersize=10, label="prediction")
plt.legend(loc="upper left")
plt.xlabel("Time")
plt.show()
# In[ ]:
with tf.Session() as sess:
    saver.restore(sess, save_path)
    X_new = time_series(np.array(t.reshape(-1, n_steps, n_inputs)))
    y_pred = sess.run(outputs, feed_dict={X: X_new})
plt.title("A time series (generated)", fontsize=14)
plt.plot(t, time_series(t), label=r"original",linewidth=5,c='r')
plt.plot(t[:-1], time_series(t[:-1]), "b-", linewidth=3, label="A training instance")
plt.legend(loc="lower left", fontsize=14)
plt.xlabel("Time")
plt.ylabel("Value")
So the answer is:
rnn_outputs, rnn_states = tf.keras.layers.RNN(cell,dtype=tf.float32, name="hidden1", return_state=True, return_sequences=True)(X)
instead of
rnn_outputs = tf.keras.layers.RNN(cell,dtype=tf.float32, name="hidden1")(X)
The parameter return_sequences=True makes the RNN return the output at every time step (the full sequence) rather than only the last one, which is exactly what the subsequent reshape expects. That is the point.
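For illustration, a minimal sketch (not part of the original answer) of what the flag changes, using the same n_steps, n_neurons, and X as above:
# Two hypothetical cells just to compare output shapes.
cell_last = tf.keras.layers.SimpleRNNCell(units=n_neurons, activation=tf.nn.relu)
cell_seq = tf.keras.layers.SimpleRNNCell(units=n_neurons, activation=tf.nn.relu)

last_only = tf.keras.layers.RNN(cell_last)(X)
# shape (batch_size, n_neurons): only the final step, so reshaping to
# (-1, n_steps, n_outputs) fails with the "multiple of 20" error above.

full_sequence = tf.keras.layers.RNN(cell_seq, return_sequences=True)(X)
# shape (batch_size, n_steps, n_neurons): one output per step, matching what
# tf.nn.dynamic_rnn returned and what the final reshape expects.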

Is there a version of the inference example of the TensorFlow Object Detection API that can run on batches of images simultaneously?

I have trained a Faster R-CNN model using the TensorFlow Object Detection API and am using this inference script with my frozen graph:
https://github.com/tensorflow/models/blob/master/research/object_detection/object_detection_tutorial.ipynb
I intend to use it for object tracking in videos, but inference with this script is very slow because it processes only one image at a time instead of a batch. Is there any way to run inference on a batch of images at once? The relevant inference function is below; I am wondering how to modify it to work with a stack of images.
def run_inference_for_single_image(image, graph):
    with graph.as_default():
        with tf.Session() as sess:
            # Get handles to input and output tensors
            ops = tf.get_default_graph().get_operations()
            all_tensor_names = {output.name for op in ops for output in op.outputs}
            tensor_dict = {}
            for key in ['num_detections', 'detection_boxes', 'detection_scores', 'detection_classes', 'detection_masks']:
                tensor_name = key + ':0'
                if tensor_name in all_tensor_names:
                    tensor_dict[key] = tf.get_default_graph().get_tensor_by_name(tensor_name)
            if 'detection_masks' in tensor_dict:
                # The following processing is only for a single image
                detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0])
                detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0])
                # Reframing is required to translate the masks from box coordinates to image coordinates and fit the image size.
                real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32)
                detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1])
                detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1])
                detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(detection_masks, detection_boxes, image.shape[0], image.shape[1])
                detection_masks_reframed = tf.cast(tf.greater(detection_masks_reframed, 0.5), tf.uint8)
                # Follow the convention by adding back the batch dimension
                tensor_dict['detection_masks'] = tf.expand_dims(detection_masks_reframed, 0)
            image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')
            # Run inference
            output_dict = sess.run(tensor_dict, feed_dict={image_tensor: np.expand_dims(image, 0)})
            # All outputs are float32 numpy arrays, so convert types as appropriate
            output_dict['num_detections'] = int(output_dict['num_detections'][0])
            output_dict['detection_classes'] = output_dict['detection_classes'][0].astype(np.uint8)
            output_dict['detection_boxes'] = output_dict['detection_boxes'][0]
            output_dict['detection_scores'] = output_dict['detection_scores'][0]
            if 'detection_masks' in output_dict:
                output_dict['detection_masks'] = output_dict['detection_masks'][0]
            return output_dict
Instead of passing just one numpy array of size (1, image_width, image_height, 3), you can pass a numpy array containing your image batch of size (batch_size, image_width, image_height, 3) to the sess.run command:
output_dict = sess.run(tensor_dict, feed_dict={image_tensor: image_batch})
The output_dict will be slightly different than before; I still haven't figured out exactly how. Maybe someone can help further?
Edit
It seems that the output_dict gains another index that corresponds to the image's position in your batch. So you'll find the boxes for a given image in:
output_dict['detection_boxes'][image_counter]
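For illustration, a minimal sketch (my addition, not part of the answer) of consuming the batched output, assuming image_batch has shape (batch_size, height, width, 3) and applying the same per-image type conversions as the original function:
output_dict = sess.run(tensor_dict, feed_dict={image_tensor: image_batch})

for i in range(image_batch.shape[0]):
    num = int(output_dict['num_detections'][i])
    boxes = output_dict['detection_boxes'][i][:num]                     # (num, 4), normalized coordinates
    scores = output_dict['detection_scores'][i][:num]
    classes = output_dict['detection_classes'][i][:num].astype(np.uint8)
    # ... use boxes / scores / classes for image i (e.g. for tracking)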
Edit2
For some reason this won't work with Mask RCNN...
If you run export_inference_graph.py, you should be able to input batches of images by default as it sets the image_tensor shape to [None, None, None, 3].
python object_detection/export_inference_graph.py \
--input_type image_tensor \
--pipeline_config_path ${PIPELINE_CONFIG_PATH} \
--trained_checkpoint_prefix ${TRAIN_PATH} \
--output_directory output_inference_graph.pb
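For illustration, a rough sketch (my own, under the assumption that the exporter writes frozen_inference_graph.pb inside the output directory, as it usually does) of loading the re-exported graph and feeding it a batch of equally sized images:
# Hypothetical sketch: load the exported frozen graph and run a whole batch.
detection_graph = tf.Graph()
with detection_graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile('output_inference_graph.pb/frozen_inference_graph.pb', 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

with tf.Session(graph=detection_graph) as sess:
    image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
    boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
    scores = detection_graph.get_tensor_by_name('detection_scores:0')
    # image_batch: uint8 array of shape (batch_size, height, width, 3)
    boxes_val, scores_val = sess.run([boxes, scores],
                                     feed_dict={image_tensor: image_batch})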

TensorFlow: reading a TFRecord into the graph

I tried to write a well-structured neural network model with TensorFlow, but I ran into a problem feeding the data from a TFRecord file into the graph. The code is below; it hangs at the following function. How can I make it work?
images, labels = network.load_tfrecord_data(1)
This function cannot get the features (images) and labels from my .tfrecords data file. Any ideas would be appreciated.
from __future__ import division
from __future__ import print_function
import datetime
import numpy as np
import tensorflow as tf
layers = tf.contrib.layers
losses = tf.contrib.losses
metrics = tf.contrib.metrics
LABELS = 10
WIDTH = 28
HEIGHT = 28
HIDDEN = 100
def read_and_decode_single_example(filename):
    filename_queue = tf.train.string_input_producer([filename], num_epochs=None)
    reader = tf.TFRecordReader()
    _, serialized_example = reader.read(filename_queue)
    features = tf.parse_single_example(
        serialized_example,
        features={
            'label': tf.FixedLenFeature([], tf.int64),
            'image': tf.FixedLenFeature([50176], tf.int64)
        })
    label = features['label']
    image = features['image']
    image = tf.reshape(image, [-1, 224, 224, 1])
    label = tf.one_hot(label - 1, 11, dtype=tf.int64)
    return label, image
class Network:
    def __init__(self, logdir, experiment, threads):
        # Construct the graph
        with tf.name_scope("inputs"):
            self.images = tf.placeholder(tf.float32, [None, WIDTH, HEIGHT, 1], name="images")
            self.labels = tf.placeholder(tf.int64, [None], name="labels")
            # self.keep_prob = keep_prob
            self.keep_prob = tf.placeholder(tf.float32, name="keep_prob")
        flattened_images = layers.flatten(self.images)
        hidden_layer = layers.fully_connected(flattened_images, num_outputs=HIDDEN, activation_fn=tf.nn.relu, scope="hidden_layer")
        output_layer = layers.fully_connected(hidden_layer, num_outputs=LABELS, activation_fn=None, scope="output_layer")
        loss = losses.sparse_softmax_cross_entropy(labels=self.labels, logits=output_layer, scope="loss")
        self.training = layers.optimize_loss(loss, None, None, tf.train.AdamOptimizer(), summaries=['loss', 'gradients', 'gradient_norm'], name='training')
        with tf.name_scope("accuracy"):
            predictions = tf.argmax(output_layer, 1, name="predictions")
            accuracy = metrics.accuracy(predictions, self.labels)
            tf.summary.scalar("training/accuracy", accuracy)
            self.accuracy = metrics.accuracy(predictions, self.labels)
        with tf.name_scope("confusion_matrix"):
            confusion_matrix = metrics.confusion_matrix(predictions, self.labels, weights=tf.not_equal(predictions, self.labels), dtype=tf.float32)
            confusion_image = tf.reshape(confusion_matrix, [1, LABELS, LABELS, 1])
        # Summaries
        self.summaries = {'training': tf.summary.merge_all()}
        for dataset in ["dev", "test"]:
            self.summaries[dataset] = tf.summary.scalar(dataset + "/loss", loss)
            self.summaries[dataset] = tf.summary.scalar(dataset + "/accuracy", accuracy)
            self.summaries[dataset] = tf.summary.image(dataset + "/confusion_matrix", confusion_image)
        # Create the session
        self.session = tf.Session(config=tf.ConfigProto(inter_op_parallelism_threads=threads,
                                                        intra_op_parallelism_threads=threads))
        self.session.run(tf.global_variables_initializer())
        timestamp = datetime.datetime.now().strftime("%Y-%m-%d_%H%M%S")
        self.summary_writer = tf.summary.FileWriter("{}/{}-{}".format(logdir, timestamp, experiment), graph=self.session.graph, flush_secs=10)
        self.steps = 0

    def train(self, images, labels, keep_prob):
        self.steps += 1
        feed_dict = {self.images: self.session.run(images), self.labels: self.session.run(labels), self.keep_prob: keep_prob}
        if self.steps == 1:
            metadata = tf.RunMetadata()
            self.session.run(self.training, feed_dict, options=tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE), run_metadata=metadata)
            self.summary_writer.add_run_metadata(metadata, 'step1')
        elif self.steps % 100 == 0:
            _, summary = self.session.run([self.training, self.summaries['training']], feed_dict)
            self.summary_writer.add_summary(summary, self.steps)
        else:
            self.session.run(self.training, feed_dict)

    def evaluate(self, dataset, images, labels):
        feed_dict = {self.images: images, self.labels: labels, self.keep_prob: 1}
        summary = self.summaries[dataset].eval({self.images: images, self.labels: labels, self.keep_prob: 1}, self.session)
        self.summary_writer.add_summary(summary, self.steps)

    def load_tfrecord_data(self, training):
        training = training
        if training:
            label, image = read_and_decode_single_example("mhad_Op_train.tfrecords")
            # print(self.session.run(image))
        else:
            label, image = read_and_decode_single_example("mhad_Op_test.tfrecords")
        # image = tf.cast(image, tf.float32) / 255.
        images_batch, labels_batch = tf.train.shuffle_batch(
            [image, label], batch_size=50, num_threads=2,
            capacity=80,
            min_after_dequeue=30)
        return images_batch, labels_batch
if __name__ == '__main__':
    # Fix random seed
    np.random.seed(42)
    tf.set_random_seed(42)

    # Parse arguments
    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument('--batch_size', default=256, type=int, help='Batch size.')
    parser.add_argument('--epochs', default=50, type=int, help='Number of epochs.')
    parser.add_argument('--logdir', default="logs", type=str, help='Logdir name.')
    parser.add_argument('--exp', default="mnist-final-confusion_matrix_customized_loss", type=str, help='Experiment name.')
    parser.add_argument('--threads', default=1, type=int, help='Maximum number of threads to use.')
    args = parser.parse_args()

    # Load the data
    keep_prob = 1

    # Construct the network
    network = Network(logdir=args.logdir, experiment=args.exp, threads=args.threads)

    # Train
    for i in range(args.epochs):
        images, labels = network.load_tfrecord_data(1)
        network.train(images, labels, keep_prob)
        print('current epoch', i)
You need to start the queue runners before using images and labels in your model.
with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    images, labels = network.load_tfrecord_data(1)
    ...
    coord.request_stop()
    coord.join(threads)
Check this tutorial for a full example
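For illustration, a minimal sketch (my adaptation, not from the answer) of how this could fit the asker's main loop, assuming the input-pipeline ops are built once before the queue runners are started:
# Hypothetical adaptation using the names from the question's code.
network = Network(logdir=args.logdir, experiment=args.exp, threads=args.threads)
images, labels = network.load_tfrecord_data(1)        # build the pipeline ops once

coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=network.session, coord=coord)

for i in range(args.epochs):
    network.train(images, labels, keep_prob)           # train() pulls batches via session.run(images/labels)
    print('current epoch', i)

coord.request_stop()
coord.join(threads)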

How to pass images to the model for classification in TensorFlow

I have created a model using the code below:
# Deep Learning
# In[25]:
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range
# In[37]:
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
    save = pickle.load(f)
    train_dataset = save['train_dataset']
    train_labels = save['train_labels']
    valid_dataset = save['valid_dataset']
    valid_labels = save['valid_labels']
    test_dataset = save['test_dataset']
    test_labels = save['test_labels']
    del save  # hint to help gc free up memory
    print('Training set', train_dataset.shape, train_labels.shape)
    print('Validation set', valid_dataset.shape, valid_labels.shape)
    print('Test set', test_dataset.shape, test_labels.shape)
    print(test_labels)
# Reformat into a TensorFlow-friendly shape:
# - convolutions need the image data formatted as a cube (width by height by #channels)
# - labels as float 1-hot encodings.
# In[38]:
image_size = 28
num_labels = 10
num_channels = 1 # grayscale
import numpy as np
def reformat(dataset, labels):
    dataset = dataset.reshape(
        (-1, image_size, image_size, num_channels)).astype(np.float32)
    # print(np.arange(num_labels))
    labels = (np.arange(num_labels) == labels[:, None]).astype(np.float32)
    # print(labels[0,:])
    print(labels[0])
    return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
#print(labels[0])
# In[39]:
def accuracy(predictions, labels):
    return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
            / predictions.shape[0])
# Let's build a small network with two convolutional layers, followed by one fully connected layer. Convolutional networks are more expensive computationally, so we'll limit its depth and number of fully connected nodes.
# In[47]:
batch_size = 16
patch_size = 5
depth = 16
num_hidden = 64
graph = tf.Graph()
with graph.as_default():
    # Input data.
    tf_train_dataset = tf.placeholder(
        tf.float32, shape=(batch_size, image_size, image_size, num_channels))
    tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
    tf_valid_dataset = tf.constant(valid_dataset)
    tf_test_dataset = tf.constant(test_dataset)

    # Variables.
    layer1_weights = tf.Variable(tf.truncated_normal(
        [patch_size, patch_size, num_channels, depth], stddev=0.1), name="layer1_weights")
    layer1_biases = tf.Variable(tf.zeros([depth]), name="layer1_biases")
    layer2_weights = tf.Variable(tf.truncated_normal(
        [patch_size, patch_size, depth, depth], stddev=0.1), name="layer2_weights")
    layer2_biases = tf.Variable(tf.constant(1.0, shape=[depth]), name="layer2_biases")
    layer3_weights = tf.Variable(tf.truncated_normal(
        [image_size // 4 * image_size // 4 * depth, num_hidden], stddev=0.1), name="layer3_biases")
    layer3_biases = tf.Variable(tf.constant(1.0, shape=[num_hidden]), name="layer3_biases")
    layer4_weights = tf.Variable(tf.truncated_normal(
        [num_hidden, num_labels], stddev=0.1), name="layer4_weights")
    layer4_biases = tf.Variable(tf.constant(1.0, shape=[num_labels]), name="layer4_biases")

    # Model.
    def model(data):
        conv = tf.nn.conv2d(data, layer1_weights, [1, 2, 2, 1], padding='SAME')
        hidden = tf.nn.relu(conv + layer1_biases)
        conv = tf.nn.conv2d(hidden, layer2_weights, [1, 2, 2, 1], padding='SAME')
        hidden = tf.nn.relu(conv + layer2_biases)
        shape = hidden.get_shape().as_list()
        reshape = tf.reshape(hidden, [shape[0], shape[1] * shape[2] * shape[3]])
        hidden = tf.nn.relu(tf.matmul(reshape, layer3_weights) + layer3_biases)
        return tf.matmul(hidden, layer4_weights) + layer4_biases

    # Training computation.
    logits = model(tf_train_dataset)
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))

    # Optimizer.
    optimizer = tf.train.GradientDescentOptimizer(0.05).minimize(loss)

    # Predictions for the training, validation, and test data.
    train_prediction = tf.nn.softmax(logits)
    valid_prediction = tf.nn.softmax(model(tf_valid_dataset))
    test_prediction = tf.nn.softmax(model(tf_test_dataset))
# In[48]:
num_steps = 1001
#saver = tf.train.Saver()
with tf.Session(graph=graph) as session:
    tf.initialize_all_variables().run()
    print('Initialized')
    for step in range(num_steps):
        offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
        batch_data = train_dataset[offset:(offset + batch_size), :, :, :]
        batch_labels = train_labels[offset:(offset + batch_size), :]
        feed_dict = {tf_train_dataset: batch_data, tf_train_labels: batch_labels}
        _, l, predictions = session.run(
            [optimizer, loss, train_prediction], feed_dict=feed_dict)
        if (step % 50 == 0):
            print('Minibatch loss at step %d: %f' % (step, l))
            print('Minibatch accuracy: %.1f%%' % accuracy(predictions, batch_labels))
            print('Validation accuracy: %.1f%%' % accuracy(
                valid_prediction.eval(), valid_labels))
    print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))
    save_path = tf.train.Saver().save(session, "/tmp/model.ckpt")
    print("Model saved in file: %s" % save_path)
I have saved the model and written another Python program where I am trying to restore it and use it to classify my images, but I am not able to create the 4D tensor of the image that I have to pass as input to the model.
The code of the Python file is as follows:
# In[8]:
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range
from scipy import ndimage
# In[9]:
image_size = 28
num_labels = 10
num_channels = 1 # grayscale
import numpy as np
# In[10]:
def accuracy(predictions, labels):
    return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
            / predictions.shape[0])
# In[15]:
batch_size = 16
patch_size = 5
depth = 16
num_hidden = 64
pixel_depth =255
graph = tf.Graph()
with graph.as_default():
    '''# Input data.
    tf_train_dataset = tf.placeholder(
        tf.float32, shape=(batch_size, image_size, image_size, num_channels))
    tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
    #tf_valid_dataset = tf.constant(valid_dataset)
    tf_test_dataset = tf.constant(test_dataset)'''
    tf_train_dataset = tf.placeholder(
        tf.float32, shape=(batch_size, image_size, image_size, num_channels))

    # Variables.
    layer1_weights = tf.Variable(tf.truncated_normal(
        [patch_size, patch_size, num_channels, depth], stddev=0.1), name="layer1_weights")
    layer1_biases = tf.Variable(tf.zeros([depth]), name="layer1_biases")
    layer2_weights = tf.Variable(tf.truncated_normal(
        [patch_size, patch_size, depth, depth], stddev=0.1), name="layer2_weights")
    layer2_biases = tf.Variable(tf.constant(1.0, shape=[depth]), name="layer2_biases")
    layer3_weights = tf.Variable(tf.truncated_normal(
        [image_size // 4 * image_size // 4 * depth, num_hidden], stddev=0.1), name="layer3_biases")
    layer3_biases = tf.Variable(tf.constant(1.0, shape=[num_hidden]), name="layer3_biases")
    layer4_weights = tf.Variable(tf.truncated_normal(
        [num_hidden, num_labels], stddev=0.1), name="layer4_weights")
    layer4_biases = tf.Variable(tf.constant(1.0, shape=[num_labels]), name="layer4_biases")

    saver = tf.train.Saver()
    tf_

    # Model.
    def model(data):
        conv = tf.nn.conv2d(data, layer1_weights, [1, 2, 2, 1], padding='SAME')
        hidden = tf.nn.relu(conv + layer1_biases)
        conv = tf.nn.conv2d(hidden, layer2_weights, [1, 2, 2, 1], padding='SAME')
        hidden = tf.nn.relu(conv + layer2_biases)
        shape = hidden.get_shape().as_list()
        reshape = tf.reshape(hidden, [shape[0], shape[1] * shape[2] * shape[3]])
        hidden = tf.nn.relu(tf.matmul(reshape, layer3_weights) + layer3_biases)
        return tf.matmul(hidden, layer4_weights) + layer4_biases

    valid_prediction = tf.nn.softmax(model(tf_valid_dataset))
    #test_prediction = tf.nn.softmax(model(tf_test_dataset))
# In[19]:
with tf.Session(graph=graph) as sess:
    # Restore variables from disk.
    saver.restore(sess, "/tmp/model.ckpt")
    print("Model restored.")
    image_data = (ndimage.imread('notMNIST_small/A/QXJyaWJhQXJyaWJhU3RkLm90Zg==.png').astype(float) -
                  pixel_depth / 2) / pixel_depth
    data = [0:,image_data:,]
    sess.run(valid_prediction, feed_dict={tf_valid_dataset: data})
    # Do some work with the model
As you can see in In[19], I have restored my model and want to pass an image to it by creating a 4D tensor. I am reading the image and then trying to convert it to a 4D tensor, but the syntax for creating it is wrong in my code, so I need help correcting it.
Assuming that image_data is a grayscale image, it should be a 2-D NumPy array. You can convert it to a 4-D array with the following:
data = image_data[np.newaxis, ..., np.newaxis]
np.newaxis adds a new dimension of size 1 in the first (batch) and last (channels) positions. It is equivalent to the following, using np.expand_dims():
data = np.expand_dims(np.expand_dims(image_data, 0), -1)
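For illustration, a tiny check (my addition) of the resulting shape, assuming a 28x28 grayscale image as in the question:
import numpy as np

image_data = np.zeros((28, 28), dtype=np.float32)   # stand-in for the loaded image
data = image_data[np.newaxis, ..., np.newaxis]
print(data.shape)                                   # (1, 28, 28, 1): (batch, height, width, channels)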
On the other hand, if you are working with RGB data, you will need to convert it to fit the model. You could for example define a placeholder for the image input:
input_placeholder = tf.placeholder(tf.float32, shape=[None, image_size, image_size, 3])
input_grayscale = tf.image.rgb_to_grayscale(input_placeholder)
prediction = tf.nn.softmax(model(input_grayscale))
image_data = ... # Load from file
data = image_data[np.newaxis, ...] # Only add a batch dimension.
prediction_val = sess.run(prediction, feed_dict={input_placeholder: data})

"No variables to save" error in TensorFlow

I am trying to save the model and then reuse it to classify my images, but unfortunately I am getting errors when restoring the model that I saved.
The code in which the model is created:
# Deep Learning
# =============
#
# Assignment 4
# ------------
# In[25]:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range
# In[37]:
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
    save = pickle.load(f)
    train_dataset = save['train_dataset']
    train_labels = save['train_labels']
    valid_dataset = save['valid_dataset']
    valid_labels = save['valid_labels']
    test_dataset = save['test_dataset']
    test_labels = save['test_labels']
    del save  # hint to help gc free up memory
    print('Training set', train_dataset.shape, train_labels.shape)
    print('Validation set', valid_dataset.shape, valid_labels.shape)
    print('Test set', test_dataset.shape, test_labels.shape)
    print(test_labels)
# Reformat into a TensorFlow-friendly shape:
# - convolutions need the image data formatted as a cube (width by height by #channels)
# - labels as float 1-hot encodings.
# In[38]:
image_size = 28
num_labels = 10
num_channels = 1 # grayscale
import numpy as np
def reformat(dataset, labels):
    dataset = dataset.reshape(
        (-1, image_size, image_size, num_channels)).astype(np.float32)
    # print(np.arange(num_labels))
    labels = (np.arange(num_labels) == labels[:, None]).astype(np.float32)
    # print(labels[0,:])
    print(labels[0])
    return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
#print(labels[0])
# In[39]:
def accuracy(predictions, labels):
    return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
            / predictions.shape[0])
# Let's build a small network with two convolutional layers, followed by one fully connected layer. Convolutional networks are more expensive computationally, so we'll limit its depth and number of fully connected nodes.
# In[47]:
batch_size = 16
patch_size = 5
depth = 16
num_hidden = 64
graph = tf.Graph()
with graph.as_default():
    # Input data.
    tf_train_dataset = tf.placeholder(
        tf.float32, shape=(batch_size, image_size, image_size, num_channels))
    tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
    tf_valid_dataset = tf.constant(valid_dataset)
    tf_test_dataset = tf.constant(test_dataset)

    # Variables.
    layer1_weights = tf.Variable(tf.truncated_normal(
        [patch_size, patch_size, num_channels, depth], stddev=0.1), name="layer1_weights")
    layer1_biases = tf.Variable(tf.zeros([depth]), name="layer1_biases")
    layer2_weights = tf.Variable(tf.truncated_normal(
        [patch_size, patch_size, depth, depth], stddev=0.1), name="layer2_weights")
    layer2_biases = tf.Variable(tf.constant(1.0, shape=[depth]), name="layer2_biases")
    layer3_weights = tf.Variable(tf.truncated_normal(
        [image_size // 4 * image_size // 4 * depth, num_hidden], stddev=0.1), name="layer3_biases")
    layer3_biases = tf.Variable(tf.constant(1.0, shape=[num_hidden]), name="layer3_biases")
    layer4_weights = tf.Variable(tf.truncated_normal(
        [num_hidden, num_labels], stddev=0.1), name="layer4_weights")
    layer4_biases = tf.Variable(tf.constant(1.0, shape=[num_labels]), name="layer4_biases")

    # Model.
    def model(data):
        conv = tf.nn.conv2d(data, layer1_weights, [1, 2, 2, 1], padding='SAME')
        hidden = tf.nn.relu(conv + layer1_biases)
        conv = tf.nn.conv2d(hidden, layer2_weights, [1, 2, 2, 1], padding='SAME')
        hidden = tf.nn.relu(conv + layer2_biases)
        shape = hidden.get_shape().as_list()
        reshape = tf.reshape(hidden, [shape[0], shape[1] * shape[2] * shape[3]])
        hidden = tf.nn.relu(tf.matmul(reshape, layer3_weights) + layer3_biases)
        return tf.matmul(hidden, layer4_weights) + layer4_biases

    # Training computation.
    logits = model(tf_train_dataset)
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))

    # Optimizer.
    optimizer = tf.train.GradientDescentOptimizer(0.05).minimize(loss)

    # Predictions for the training, validation, and test data.
    train_prediction = tf.nn.softmax(logits)
    valid_prediction = tf.nn.softmax(model(tf_valid_dataset))
    test_prediction = tf.nn.softmax(model(tf_test_dataset))
# In[48]:
num_steps = 1001
#saver = tf.train.Saver()
with tf.Session(graph=graph) as session:
    tf.initialize_all_variables().run()
    print('Initialized')
    for step in range(num_steps):
        offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
        batch_data = train_dataset[offset:(offset + batch_size), :, :, :]
        batch_labels = train_labels[offset:(offset + batch_size), :]
        feed_dict = {tf_train_dataset: batch_data, tf_train_labels: batch_labels}
        _, l, predictions = session.run(
            [optimizer, loss, train_prediction], feed_dict=feed_dict)
        if (step % 50 == 0):
            print('Minibatch loss at step %d: %f' % (step, l))
            print('Minibatch accuracy: %.1f%%' % accuracy(predictions, batch_labels))
            print('Validation accuracy: %.1f%%' % accuracy(
                valid_prediction.eval(), valid_labels))
    print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))
    save_path = tf.train.Saver().save(session, "/tmp/model.ckpt")
    print("Model saved in file: %s" % save_path)
Everything works fine and the model is stored in the respective folder.
I have created one more Python file where I have tried restoring the model, but I get an error there:
# In[1]:
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range
# In[3]:
image_size = 28
num_labels = 10
num_channels = 1 # grayscale
import numpy as np
# In[4]:
def accuracy(predictions, labels):
    return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
            / predictions.shape[0])
# In[8]:
batch_size = 16
patch_size = 5
depth = 16
num_hidden = 64
graph = tf.Graph()
with graph.as_default():
    '''# Input data.
    tf_train_dataset = tf.placeholder(
        tf.float32, shape=(batch_size, image_size, image_size, num_channels))
    tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
    tf_valid_dataset = tf.constant(valid_dataset)
    tf_test_dataset = tf.constant(test_dataset)'''

    # Variables.
    layer1_weights = tf.Variable(tf.truncated_normal(
        [patch_size, patch_size, num_channels, depth], stddev=0.1), name="layer1_weights")
    layer1_biases = tf.Variable(tf.zeros([depth]), name="layer1_biases")
    layer2_weights = tf.Variable(tf.truncated_normal(
        [patch_size, patch_size, depth, depth], stddev=0.1), name="layer2_weights")
    layer2_biases = tf.Variable(tf.constant(1.0, shape=[depth]), name="layer2_biases")
    layer3_weights = tf.Variable(tf.truncated_normal(
        [image_size // 4 * image_size // 4 * depth, num_hidden], stddev=0.1), name="layer3_biases")
    layer3_biases = tf.Variable(tf.constant(1.0, shape=[num_hidden]), name="layer3_biases")
    layer4_weights = tf.Variable(tf.truncated_normal(
        [num_hidden, num_labels], stddev=0.1), name="layer4_weights")
    layer4_biases = tf.Variable(tf.constant(1.0, shape=[num_labels]), name="layer4_biases")

    # Model.
    def model(data):
        conv = tf.nn.conv2d(data, layer1_weights, [1, 2, 2, 1], padding='SAME')
        hidden = tf.nn.relu(conv + layer1_biases)
        conv = tf.nn.conv2d(hidden, layer2_weights, [1, 2, 2, 1], padding='SAME')
        hidden = tf.nn.relu(conv + layer2_biases)
        shape = hidden.get_shape().as_list()
        reshape = tf.reshape(hidden, [shape[0], shape[1] * shape[2] * shape[3]])
        hidden = tf.nn.relu(tf.matmul(reshape, layer3_weights) + layer3_biases)
        return tf.matmul(hidden, layer4_weights) + layer4_biases

    '''# Training computation.
    logits = model(tf_train_dataset)
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
    # Optimizer.
    optimizer = tf.train.GradientDescentOptimizer(0.05).minimize(loss)'''

    # Predictions for the training, validation, and test data.
    #train_prediction = tf.nn.softmax(logits)
    #valid_prediction = tf.nn.softmax(model(tf_valid_dataset))
    #test_prediction = tf.nn.softmax(model(tf_test_dataset))
# In[17]:
#saver = tf.train.Saver()
with tf.Session() as sess:
    # Restore variables from disk.
    tf.train.Saver().restore(sess, "/tmp/model.ckpt")
    print("Model restored.")
    # Do some work with the model
The error that I am getting is:
No variables to save
Any help would be appreciated.
The error here is quite subtle. In In[8] you create a tf.Graph called graph and set it as default for the with graph.as_default(): block. This means that all of the variables are created in graph, and if you print graph.all_variables() you should see a list of your variables.
However, you exit the with block before creating (i) the tf.Session, and (ii) the tf.train.Saver. This means that the session and saver are created in a different graph (the global default tf.Graph that is used when you don't explicitly create one and set it as default), which doesn't contain any variables—or any nodes at all.
There are at least two solutions:
As Yaroslav suggests, you can write your program without using the with graph.as_default(): block, which avoids the confusion with multiple graphs. However, this can lead to name collisions between different cells in your IPython notebook, which is awkward when using the tf.train.Saver, since it uses the name property of a tf.Variable as the key in the checkpoint file.
You can create the saver inside the with graph.as_default(): block, and create the tf.Session with an explicit graph, as follows:
with graph.as_default():
    # [Variable and model creation goes here.]
    saver = tf.train.Saver()  # Gets all variables in `graph`.

with tf.Session(graph=graph) as sess:
    saver.restore(sess, "/tmp/model.ckpt")
    # Do some work with the model....
Alternatively, you can create the tf.Session inside the with graph.as_default(): block, in which case it will use graph for all of its operations.
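For illustration, a minimal sketch of that alternative, assuming the same graph and checkpoint path as above:
with graph.as_default():
    # [Variable and model creation goes here.]
    saver = tf.train.Saver()
    with tf.Session() as sess:                 # created inside the block, so it uses `graph`
        saver.restore(sess, "/tmp/model.ckpt")
        # Do some work with the model....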
You are creating a new session in In[17], which wipes your variables. Also, you don't need to use with blocks if you only have one default graph and one default session; you can instead do something like this:
sess = tf.InteractiveSession()
layer1_weights = tf.Variable(tf.truncated_normal(
    [patch_size, patch_size, num_channels, depth], stddev=0.1), name="layer1_weights")
tf.train.Saver().restore(sess, "/tmp/model.ckpt")
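To extend that idea (my addition, not part of the original answer): the variables you want to restore have to be recreated with the same names before the Saver is built, and with an InteractiveSession the restored values can then be inspected directly:
# Hypothetical continuation of the snippet above.
sess = tf.InteractiveSession()
layer1_weights = tf.Variable(tf.truncated_normal(
    [patch_size, patch_size, num_channels, depth], stddev=0.1), name="layer1_weights")
# ... recreate any other layer*_weights / layer*_biases you need the same way ...
saver = tf.train.Saver()
saver.restore(sess, "/tmp/model.ckpt")
print(layer1_weights.eval().shape)             # .eval() needs no explicit session with InteractiveSession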