I have a simple TensorFlow model that consists of LSTM layers, such as tf.contrib.rnn.LSTMBlockCell or tf.keras.layers.LSTM (I can provide the sample code if needed). I want to run the model in an iOS app. However, I have looked at several websites that say there is presently no way to convert and run a TensorFlow model that contains LSTM layers in an iOS app.
I have tried the following tools/libraries to convert the TensorFlow model to .mlmodel format (or .tflite format):
1. Swift for Tensorflow
2. Tensorflow Lite for iOS
3. tfcoreml
However, these tools do not seem to be able to convert a model with LSTM layers to .mlmodel format either. They do allow custom layers to be added, but I don't know how I can add an LSTM custom layer.
Am I wrong in saying that there is no support for running a TensorFlow LSTM model in iOS apps? If so, please guide me on how I can go about including the model in an iOS app. Is there any other tool/library that can be used to convert it to .mlmodel format? If not, are there any plans to include TensorFlow support for iOS in the future?
Model
import tensorflow as tf
from tensorflow.contrib import rnn
from tensorflow.contrib.rnn import *
# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
#Summary parameters
logs_path = "logs/"
# Training Parameters
learning_rate = 0.001
training_steps = 1000
batch_size = 128
display_step = 200
# Network Parameters
num_input = 28 # MNIST data input (img shape: 28*28)
timesteps = 28 # timesteps
num_hidden = 128 # hidden layer num of features
num_classes = 10 # MNIST total classes (0-9 digits)
# tf Graph input
X = tf.placeholder("float", [None, timesteps, num_input])
Y = tf.placeholder("float", [None, num_classes])
# Define weights
weights = {
'out': tf.Variable(tf.random_normal([num_hidden, num_classes]))
}
biases = {
'out': tf.Variable(tf.random_normal([num_classes]))
}
def RNN(x, weights, biases):
# Unstack to get a list of 'timesteps' tensors of shape (batch_size, n_input)
x = tf.unstack(x, timesteps, 1)
# Define a lstm cell with tensorflow
lstm_cell = rnn.LSTMBlockCell(num_hidden, forget_bias = 1.0)
#lstm_cell = tf.keras.layers.LSTMCell(num_hidden, unit_forget_bias=True)
# Get lstm cell output
outputs, states = rnn.static_rnn(lstm_cell, x, dtype=tf.float32)
# Linear activation, using rnn inner loop last output
return tf.matmul(outputs[-1], weights['out']) + biases['out']
logits = RNN(X, weights, biases)
with tf.name_scope('Model'):
prediction = tf.nn.softmax(logits, name = "prediction_layer")
with tf.name_scope('Loss'):
# Define loss and optimizer
loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=Y, name = "loss_layer"), name = "reduce_mean_loss")
with tf.name_scope('SGD'):
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate, name = "Gradient_Descent")
train_op = optimizer.minimize(loss_op, name = "minimize_layer")
with tf.name_scope('Accuracy'):
# Evaluate model (with test logits, for dropout to be disabled)
correct_pred = tf.equal(tf.argmax(prediction, 1), tf.argmax(Y, 1), name = "correct_pred_layer")
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name = "reduce_mean_acc_layer")
# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()
#Create a summary to monitor cost tensor
tf.summary.scalar("loss", loss_op)
#Create a summary to monitor accuracy tensor
tf.summary.scalar("accuracy", accuracy)
#Merge all summaries into a single op
merged_summary_op = tf.summary.merge_all()
saver = tf.train.Saver()
save_path = ""
model_save = "model.ckpt"
# Start training
with tf.Session() as sess:
# Run the initializer
sess.run(init)
# op to write logs to Tensorboard
summary_writer = tf.summary.FileWriter(logs_path, graph=tf.get_default_graph())
for step in range(1, training_steps+1):
total_batch = int(mnist.train.num_examples/batch_size)
batch_x, batch_y = mnist.train.next_batch(batch_size)
# Reshape data to get 28 seq of 28 elements
batch_x = batch_x.reshape((batch_size, timesteps, num_input))
# Run optimization op (backprop)
sess.run(train_op, feed_dict={X: batch_x, Y: batch_y})
if step % display_step == 0 or step == 1:
# Calculate batch loss and accuracy
loss, acc, summary = sess.run([loss_op, accuracy, merged_summary_op], feed_dict={X: batch_x,
Y: batch_y})
# Write logs at every iteration
summary_writer.add_summary(summary, step * total_batch)
print("Step " + str(step) + ", Minibatch Loss= " + \
"{:.4f}".format(loss) + ", Training Accuracy= " + \
"{:.3f}".format(acc))
saver.save(sess, model_save)
tf.train.write_graph(sess.graph_def, save_path, 'save_graph.pbtxt')
#print(sess.graph.get_operations())
print("Optimization Finished!")
print("Run the command line:\n" \
"--> tensorboard --logdir=logs/ " \
"\nThen open http://0.0.0.0:6006/ into your web browser")
# Calculate accuracy for 128 mnist test images
test_len = 128
test_data = mnist.test.images[:test_len].reshape((-1, timesteps, num_input))
test_label = mnist.test.labels[:test_len]
print("Testing Accuracy:", \
sess.run(accuracy, feed_dict={X: test_data, Y: test_label}))
Code to generate the frozen model
import tensorflow as tf
import numpy as np
from tensorflow.python.tools import freeze_graph
save_path = ""
model_name = "test_model_tf_keras_layers_lstm"
input_graph_path = save_path + "save_graph.pbtxt"
checkpoint_path = save_path + "model.ckpt"
input_saver_def_path = ""
input_binary = False
output_node_names = "Model/prediction_layer" # output node's name; should match the name used in the model code
restore_op_name = 'save/restore_all'
filename_tensor_name = 'save/const:0'
output_frozen_graph_name = save_path + 'frozen_model' + '.pb' # name of .pb file that one would like to give
clear_devices = True
freeze_graph.freeze_graph(input_graph_path, input_saver_def_path, input_binary, checkpoint_path, output_node_names, restore_op_name, filename_tensor_name, output_frozen_graph_name, clear_devices, "")
print("Model Freezed")
Conversion code to generate the .mlmodel file
import tfcoreml
coreml_model = tfcoreml.convert(tf_model_path = 'frozen_model_test_model_tf_keras_layers_lstm.pb',
mlmodel_path = 'test_model.mlmodel',
output_feature_names = ['Model/prediction_layer:0'],
add_custom_layers = True)
coreml_model.save("test_model.mlmodel")
Error message shown with
lstm_cell = rnn.BasicLSTMCell(num_hidden, name = "lstm_cell")
ValueError: Split op case not handled. Input shape = [1, 512], output shape = [1, 128]
Error message shown with
lstm_cell = rnn.LSTMBlockCell(num_hidden, name = "lstm_cell")
InvalidArgumentError (see above for traceback): No OpKernel was registered to support Op 'LSTMBlockCell' used by node rnn/lstm_cell/LSTMBlockCell (defined at /anaconda2/lib/python2.7/site-packages/tfcoreml/_tf_coreml_converter.py:153) with these attrs: [forget_bias=1, use_peephole=false, cell_clip=-1, T=DT_FLOAT]
Registered devices: [CPU]
Registered kernels:
<no registered kernels>
[[node rnn/lstm_cell/LSTMBlockCell (defined at /anaconda2/lib/python2.7/site-packages/tfcoreml/_tf_coreml_converter.py:153) ]]
I expect that the frozen tensorflow model can be converted to .mlmodel format.
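For reference, tf-coreml's custom-layer path is driven by the add_custom_layers flag together with a custom_conversion_functions dictionary that maps an unsupported op type to a Python callback, which is expected to insert a Core ML custom layer for that op (the layer itself then has to be implemented on the Swift/Obj-C side). Below is only a sketch of what that registration might look like for LSTMBlockCell, not a working converter: the callback body is a stub, and the keyword-argument names (op, nn_builder) follow tf-coreml's custom-layer example notebook and may differ between versions. The "No OpKernel was registered" error above also suggests the contrib kernels were never loaded in the converter's process; importing tensorflow.contrib.rnn first is usually what registers them.
import tensorflow as tf
import tensorflow.contrib.rnn  # importing contrib.rnn typically registers the LSTMBlockCell kernel
import tfcoreml
from coremltools.proto import NeuralNetwork_pb2

def _convert_lstm_block_cell(**kwargs):
    # Stub only: argument names are an assumption based on tf-coreml's custom
    # layer examples; the actual LSTMBlockCell conversion still has to be written.
    tf_op = kwargs["op"]
    nn_builder = kwargs["nn_builder"]
    params = NeuralNetwork_pb2.CustomLayerParams()
    params.className = "LSTMBlockCell"
    params.description = "Placeholder for an unconverted LSTMBlockCell op"
    nn_builder.add_custom(name=tf_op.name,
                          input_names=[t.name for t in tf_op.inputs],
                          output_names=[t.name for t in tf_op.outputs],
                          custom_proto_spec=params)

coreml_model = tfcoreml.convert(tf_model_path='frozen_model_test_model_tf_keras_layers_lstm.pb',
                                mlmodel_path='test_model.mlmodel',
                                output_feature_names=['Model/prediction_layer:0'],
                                add_custom_layers=True,
                                custom_conversion_functions={'LSTMBlockCell': _convert_lstm_block_cell})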
To get the gradients of the output with respect to the input, one can use
grads = tf.gradients(model.output, model.input)
where grads =
[<tf.Tensor 'gradients_81/dense/MatMul_grad/MatMul:0' shape=(?, 18) dtype=float32>]
This is a model where there are 18 continuous inputs and 1 continuous output.
I assume this is a symbolic expression and that one needs to feed a list of 18 entries to the tensor, so that it gives out the derivatives as floats.
I would use
Test =[1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0]
with tf.Session() as sess:
alpha = sess.run(grads, feed_dict = {model.input : Test})
print(alpha)
But I get the error
FailedPreconditionError (see above for traceback): Error while reading resource variable dense_2/bias from Container: localhost. This could mean that the variable was uninitialized. Not found: Container localhost does not exist. (Could not find resource: localhost/dense_2/bias)
[[Node: dense_2/BiasAdd/ReadVariableOp = ReadVariableOp[dtype=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](dense_2/bias)]]
What is wrong?
EDIT:
This is what happened before:
def build_model():
model = keras.Sequential([
...])
optimizer = ...
model.compile(loss='mse'... )
return model
model = build_model()
history= model.fit(data_train,train_labels,...)
loss, mae, mse = model.evaluate(data_eval,...)
Progress so far:
Test =[1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0]
with tf.Session() as sess:
tf.keras.backend.set_session(sess)
tf.initializers.variables(model.output)
alpha = sess.run(grads, feed_dict = {model.input : Test})
is also not working, giving the error:
TypeError: Using a `tf.Tensor` as a Python `bool` is not allowed. Use `if t is not None:` instead of `if t:` to test if a tensor is defined, and use TensorFlow ops such as tf.cond to execute subgraphs conditioned on the value of a tensor.
You're trying to use uninitialized variable. All you have to do is add
sess.run(tf.global_variables_initializer())
right after with tf.Session() as sess:
Edit:
You need to register session with Keras
with tf.Session() as sess:
tf.keras.backend.set_session(sess)
And use tf.initializers.variables(var_list) instead of tf.global_variables_initializer()
See https://blog.keras.io/keras-as-a-simplified-interface-to-tensorflow-tutorial.html
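A minimal sketch of that variant, assuming the same model and grads as above and using Keras's model.weights list as the var_list; note that initializing these variables re-randomizes any trained values, which is why the edit below saves and reloads the weights from a checkpoint:
import numpy as np
import tensorflow as tf

Test = np.ones((1, 18), dtype=np.float32)  # one sample with the 18 inputs
grads = tf.gradients(model.output, model.input)

with tf.Session() as sess:
    tf.keras.backend.set_session(sess)
    # Initialize only the model's own variables instead of every global variable.
    sess.run(tf.initializers.variables(model.weights))
    alpha = sess.run(grads, feed_dict={model.input: Test})
    print(alpha)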
Edit:
Test = np.ones((1, 18), dtype=np.float32)
inputs = layers.Input(shape=[18,])
layer = layers.Dense(10, activation='sigmoid')(inputs)
model = tf.keras.Model(inputs=inputs, outputs=layer)
model.compile(optimizer='adam', loss='mse')
checkpointer = tf.keras.callbacks.ModelCheckpoint(filepath='path/weights.hdf5')
model.fit(Test, nb_epoch=1, batch_size=1, callbacks=[checkpointer])
grads = tf.gradients(model.output, model.input)
with tf.Session() as sess:
tf.keras.backend.set_session(sess)
sess.run(tf.global_variables_initializer())
model.load_weights('path/weights.hdf5')
alpha = sess.run(grads, feed_dict={model.input: Test})
print(alpha)
This shows consistent results.
I have written a simple classification program using TensorFlow and I am getting the output; the only issue is when I try to print the shapes of the tensors for the model parameters, features and bias.
The function definitions:
import tensorflow as tf, numpy as np
from tensorflow.examples.tutorials.mnist import input_data
def get_weights(n_features, n_labels):
# Return weights
return tf.Variable( tf.truncated_normal((n_features, n_labels)) )
def get_biases(n_labels):
# Return biases
return tf.Variable( tf.zeros(n_labels))
def linear(input, w, b):
# Linear Function (xW + b)
# return np.dot(input,w) + b
return tf.add(tf.matmul(input,w), b)
def mnist_features_labels(n_labels):
"""Gets the first <n> labels from the MNIST dataset
"""
mnist_features = []
mnist_labels = []
mnist = input_data.read_data_sets('dataset/mnist', one_hot=True)
# In order to make quizzes run faster, we're only looking at 10000 images
for mnist_feature, mnist_label in zip(*mnist.train.next_batch(10000)):
# Add features and labels if it's for the first <n>th labels
if mnist_label[:n_labels].any():
mnist_features.append(mnist_feature)
mnist_labels.append(mnist_label[:n_labels])
return mnist_features, mnist_labels
The graph creation:
# Number of features (28*28 image is 784 features)
n_features = 784
# Number of labels
n_labels = 3
# Features and Labels
features = tf.placeholder(tf.float32)
labels = tf.placeholder(tf.float32)
# Weights and Biases
w = get_weights(n_features, n_labels)
b = get_biases(n_labels)
# Linear Function xW + b
logits = linear(features, w, b)
# Training data
train_features, train_labels = mnist_features_labels(n_labels)
print("Total {0} data points of Training Data, each having {1} features \n \
Total {2} number of labels,each having 1-hot encoding {3}".format(len(train_features),len(train_features[0]),\
len(train_labels),train_labels[0]
)
)
# global variables initialiser
init= tf.global_variables_initializer()
with tf.Session() as session:
session.run(init)
The problem is here:
# shapes =tf.Print ( tf.shape(features), [tf.shape(features),
# tf.shape(labels),
# tf.shape(w),
# tf.shape(b),
# tf.shape(logits)
# ], message= "The shapes are:" )
# print("Verify shapes",shapes)
logits = tf.Print(logits, [tf.shape(features),
tf.shape(labels),
tf.shape(w),
tf.shape(b),
tf.shape(logits)],
message= "The shapes are:")
print(logits)
I looked here, but didn't find much that was useful.
# Softmax
prediction = tf.nn.softmax(logits)
# Cross entropy
# This quantifies how far off the predictions were.
# You'll learn more about this in future lessons.
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
# Training loss
# You'll learn more about this in future lessons.
loss = tf.reduce_mean(cross_entropy)
# Rate at which the weights are changed
# You'll learn more about this in future lessons.
learning_rate = 0.08
# Gradient Descent
# This is the method used to train the model
# You'll learn more about this in future lessons.
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: train_features, labels: train_labels})
# Print loss
print('Loss: {}'.format(l))
The output I am getting is:
Extracting dataset/mnist/train-images-idx3-ubyte.gz
Extracting dataset/mnist/train-labels-idx1-ubyte.gz
Extracting dataset/mnist/t10k-images-idx3-ubyte.gz
Extracting dataset/mnist/t10k-labels-idx1-ubyte.gz
Total 3118 data points of Training Data, each having 784 features
Total 3118 number of labels,each having 1-hot encoding [0. 1. 0.]
Tensor("Print_22:0", shape=(?, 3), dtype=float32)
Loss: 5.339271068572998
Could anyone help me understand why I am not able to see the shapes of the tensors?
That is not how you use tf.Print. It is an op that does nothing on its own (simply returns the input) but prints the requested tensors as a side effect. You should do something like
logits = tf.Print(logits, [tf.shape(features),
tf.shape(labels),
tf.shape(w),
tf.shape(b),
tf.shape(logits)],
message= "The shapes are:")
Now, whenever logits is evaluated (as it will be for computing the loss/gradients), the shape information will be printed.
What you are doing right now is simply printing the return value of the tf.Print op, which is just its input (tf.shape(features)).
After @xdurch0's suggestion, I tried this:
shapes = tf.Print(logits, [tf.shape(features),
tf.shape(labels),
tf.shape(w),
tf.shape(b),
tf.shape(logits)],
message= "The shapes are:")
# Run optimizer and get loss
_, l, resultingShapes = session.run( [optimizer, loss, shapes],
feed_dict={features: train_features, labels: train_labels})
print('The shapes are: ', resultingShapes.shape)
and it worked partially,
Extracting dataset/mnist/train-images-idx3-ubyte.gz
Extracting dataset/mnist/train-labels-idx1-ubyte.gz
Extracting dataset/mnist/t10k-images-idx3-ubyte.gz
Extracting dataset/mnist/t10k-labels-idx1-ubyte.gz
Total 3118 data points of Training Data, each having 784 features
Total 3118 number of labels, each having 1-hot encoding [0. 1. 0.]
The shapes are: (3118, 3)
Loss: 10.223002433776855
Could @xdurch0 suggest something to get the desired results?
My DESIRED RESULTS are:
tf.shape(features): (3118, 784)
tf.shape(labels): (3118, 3)
tf.shape(w): (784, 3)
tf.shape(b): (3, 1)
tf.shape(logits): (3118, 3)
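For the numeric shapes themselves, one small sketch (reusing the session, placeholders, and training data from the question) is to fetch the tf.shape ops directly; session.run returns them as plain NumPy arrays:
# Fetch the shape ops themselves instead of relying on tf.Print's side effect.
shape_ops = [tf.shape(features), tf.shape(labels), tf.shape(w), tf.shape(b), tf.shape(logits)]
feat_s, lab_s, w_s, b_s, logits_s = session.run(
    shape_ops, feed_dict={features: train_features, labels: train_labels})
print('tf.shape(features):', feat_s, 'tf.shape(labels):', lab_s)
print('tf.shape(w):', w_s, 'tf.shape(b):', b_s, 'tf.shape(logits):', logits_s)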
These are my first TensorFlow steps, and I would like to know if somebody else has had the same issues as me and whether there is a way around them.
I am working through the MNIST tutorial and my current code snippet is:
#placeholder for input
x = tf.placeholder(tf.float32,[None,784]) # None means a dimension can be of any length
#Weights for the model: 784 pixel maps to ten results
W = tf.Variable(tf.zeros([784,10]))
#bias
b = tf.Variable( tf.zeros([10]))
#implementing the model
y = tf.matmul(x,W) + b
#implementing cross-entropy
y_ = tf.placeholder(tf.float32,[None,10])
#cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
cross_entropy = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
sess=tf.InteractiveSession()
tf.global_variables_initializer().run()
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
for _ in range(1000):
batch_xs, batch_xy64 = mnist.train.next_batch(100)
batch_xy = batch_xy64.astype(np.float32)
sess.run(train_step , feed_dict={x:batch_xs,y:batch_xy})
correct_prediction = tf.equal(tf.argmax(y,1),tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction,tf.float32))
print (sess.run(accuracy,feed_dict={x:mnist.test.images, y_:mnist.test.labels}))
First I tried the cross_entropy from the MNIST description and then the one in the provided source code, which made no difference.
Note that I explicitly try to cast batch_xy, as it is returned as float64.
This also seems to be the problem, as session.run seems to expect float32 tensors and variables.
As far as I saw while debugging the code, the labels in MNIST are returned as float64 - perhaps that explains my error:
...
File "/home/braunalx/python-workspace/LearnTensorFlow/firstSteps/MNIST_Start.py", line 40, in mnist_run
y_ = tf.placeholder(tf.float32,[None,10])
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/array_ops.py", line 1548, in placeholder
return gen_array_ops._placeholder(dtype=dtype, shape=shape, name=name)
...
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder_1' with dtype float and shape [?,10]
[[Node: Placeholder_1 = Placeholder[dtype=DT_FLOAT, shape=[?,10], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Is there any issue with the provided mnist data?
The error says that you didn't feed a value for a placeholder that is needed. Replace y with y_ on this line sess.run(train_step , feed_dict={x:batch_xs,y:batch_xy}).
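Applied to the training loop above, only the feed key changes; y is the model's output tensor, while y_ is the label placeholder that actually needs a value:
# Feed the label placeholder y_, not the output tensor y.
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_xy})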
Currently I have the following code:
init_state = tf.Variable(tf.zeros([batch_partition_length, state_size])) # -> [16, 1024].
final_state = tf.Variable(tf.zeros([batch_partition_length, state_size]))
And inside my inference method that is responsible for producing the output, I have the following:
def inference(frames):
# Note that I write the final_state as a global variable to avoid the shadowing issue, since it is referenced at the dynamic_rnn line.
global final_state
# .... Here we have some conv layers and so on...
# Now the RNN cell
with tf.variable_scope('local1') as scope:
# Move everything into depth so we can perform a single matrix multiply.
shape_d = pool3.get_shape()
shape = shape_d[1] * shape_d[2] * shape_d[3]
# tf_shape = tf.stack(shape)
tf_shape = 1024
print("shape:", shape, shape_d[1], shape_d[2], shape_d[3])
# So note that tf_shape = 1024, which means that 1024 features are fed into the network. And
# the batch size = 1024. Therefore, the aim is to divide the batch_size into num_steps so that
reshape = tf.reshape(pool3, [-1, tf_shape])
# Now we need to reshape/divide the batch_size into num_steps so that we would be feeding a sequence
rnn_inputs = tf.reshape(reshape, [batch_partition_length, step_size, tf_shape])
print('RNN inputs shape: ', rnn_inputs.get_shape()) # -> (16, 64, 1024).
cell = tf.contrib.rnn.BasicRNNCell(state_size)
# note that rnn_outputs are the outputs but not multiplied by W.
rnn_outputs, final_state = tf.nn.dynamic_rnn(cell, rnn_inputs, initial_state=init_state)
# linear Wx + b
with tf.variable_scope('softmax_linear') as scope:
weight_softmax = \
tf.Variable(
tf.truncated_normal([state_size, n_classes], stddev=1 / state_size, dtype=tf.float32, name='weight_softmax'))
bias_softmax = tf.constant(0.0, tf.float32, [n_classes], name='bias_softmax')
softmax_linear = tf.reshape(
tf.matmul(tf.reshape(rnn_outputs, [-1, state_size]), weight_softmax) + bias_softmax,
[batch_size, n_classes])
print('Output shape:', softmax_linear.get_shape())
return softmax_linear
# Here we define the loss, accuracy and the optimizer.
# now run the graph:
with tf.Session() as sess:
_, accuracy_train, loss_train, summary = \
sess.run([optimizer, accuracy, cost_scalar, merged], feed_dict={x: image_batch,
y_valence: valences,
confidence_holder: confidences})
....
Problem: How would I be able to assign initial_state the value stored in final_state? That is, how do I update one Variable's value given the other?
I have used the following:
tf.assign(init_state, final_state.eval())
inside the session, after running the sess.run command. But this throws an error:
You must feed a value for placeholder tensor 'inputs' with dtype float
Where the placeholder "inputs" is declared as follows:
x = tf.placeholder(tf.float32, [None, 112, 112, 3], name='inputs')
And the feeding is done after reading the images from the TFRecords using the following code:
example = tf.train.Example()
example.ParseFromString(string_record)
height = int(example.features.feature['height']
.int64_list
.value[0])
width = int(example.features.feature['width']
.int64_list
.value[0])
img_string = (example.features.feature['image_raw']
.bytes_list
.value[0])
img_1d = np.fromstring(img_string, dtype=np.uint8)
reconstructed_img = img_1d.reshape((height, width, -1)) # Where this is added to the image_batch list, which is fed into the placeholder.
And if I try the following:
img_1d = np.fromstring(img_string, dtype=np.float32)
This will produce the following error:
ValueError: cannot reshape array of size 9408 into shape (112,112,newaxis)
Any help is much appreciated!!
So here are the mistakes that I have made so far. After doing some revision I figured out the following:
I shouldn't create final_state as a tf.Variable. Since tf.nn.dynamic_rnn returns tensors (which sess.run evaluates to ndarrays), I should not instantiate final_state at the beginning, and I should not use the global final_state under the function definition.
In order to assign the initial state the final_state, I used:
tf.assign(init_state, final_state)
And things work out.
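A minimal sketch of that pattern, assuming the same names as in the code above (init_state, final_state, optimizer, and the feed tensors): build the assign op once at graph-construction time, after inference() has produced final_state, and run it together with the training step so the variable carries the state over to the next run.
# Built once, after the graph (and therefore final_state) exists.
update_state = tf.assign(init_state, final_state)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    _, carried_state = sess.run([optimizer, update_state],
                                feed_dict={x: image_batch,
                                           y_valence: valences,
                                           confidence_holder: confidences})
    # init_state now holds the last final_state and is used as the
    # initial state of tf.nn.dynamic_rnn on the next sess.run call.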
Note: in TensorFlow, an operation returns the data as a numpy array in Python and as tensorflow::Tensor in C and C++.
Have a look at https://www.tensorflow.org/versions/r0.10/get_started/basic_usage for more information.