TFlite: set_tensor() takes 3 positional arguments but 4 were given

I've written a simple program to calculate a quadratic equation with TensorFlow. Now I'd like to adapt the code to run on the Coral Dev Board using TensorFlow Lite.
The following code shows the generation of tflite-file:
import numpy as np
import tensorflow as tf
from tensorflow import keras

# Define and compile the neural network
model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')
# Provide the data
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)
# Train the model
model.fit(xs, ys, epochs=500)
# Generate the TFLite model
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
# Save the TFLite model
with open('mobilenet_v2_1.0_224.tflite', 'wb') as f:
    f.write(tflite_model)
This code runs on the Coral Dev Board:
import numpy as np
import tflite_runtime.interpreter as tflite

# Load TFLite model and allocate tensors.
interpreter = tflite.Interpreter(model_path="mobilenet_v2_1.0_224.tflite")
interpreter.allocate_tensors()
# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# Test model on input data.
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=np.float32)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], xs, ys)
...
The last line of code raises this error:
TypeError: set_tensor() takes 3 positional arguments but 4 were given
The output of input_details[0]:
{'name': 'serving_default_dense_input:0',
 'index': 0,
 'shape': array([1, 1], dtype=int32),
 'shape_signature': array([-1, 1], dtype=int32),
 'dtype': <class 'numpy.float32'>,
 'quantization': (0.0, 0),
 'quantization_parameters':
     {'scales': array([], dtype=float32),
      'zero_points': array([], dtype=int32),
      'quantized_dimension': 0},
 'sparsity_parameters': {}}
I don't understand the cause of the error. Does anyone have an idea?

Your error is the following: you are passing two arrays to the set_tensor method. set_tensor(tensor_index, value) takes the index of a tensor and a single value to copy into it, so counting self it takes 3 positional arguments, and your call supplies 4. That is the cause of the TypeError.
Now to fix your code. The index you are already passing, input_details[0]['index'], is correct; as the data shown by interpreter.get_input_details() displays, it is 0. What you must not do is pass xs and ys together: you only feed the model its input (xs), since ys holds the expected outputs, which are not something you set on an input tensor.
So just rewrite this line like this:
interpreter.set_tensor(input_details[0]['index'], xs)
Note that the input shape is [1, 1], so you may also need to feed one value at a time (or resize the input tensor) rather than passing all six values at once.
I hope this helps. It is usually good to also take a look at the documentation, so you understand what each method expects: https://www.tensorflow.org/api_docs/python/tf/lite/Interpreter#set_tensor
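For completeness, here is a minimal inference loop, as a sketch: it assumes the interpreter, input_details, and output_details set up in the question, and feeds one value at a time reshaped to the [1, 1] input shape.
import numpy as np

# Feed each x value individually, shaped [1, 1] to match the input tensor.
for x in xs:
    interpreter.set_tensor(input_details[0]['index'],
                           np.array([[x]], dtype=np.float32))
    interpreter.invoke()
    y_pred = interpreter.get_tensor(output_details[0]['index'])
    print(x, '->', y_pred[0][0])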

My approach was wrong. xs holds the x-values and ys the y-values (the results) of the quadratic equation. I was not aware that you cannot do training in TFLite. But thanks for the effort anyway.

Related

How to properly pass a string to a tflite model?

I have built a model for text classification (the input is a string, the output is a scalar) that I would like to quantize and deploy as a TensorFlow Lite model. I successfully converted the model to TFLite and quantized it, but I'm unable to pass a string to the model for inference.
What I'm attempting to do is create a minimal reproducible example of inference with the tflite model, which I can then provide to my company's engineering group so they can deploy it.
The following code works:
original_keras_model(tf.convert_to_tensor([testgood, testbad]))
It correctly outputs two scalars.
The following code does not work:
interpreter = tf.lite.Interpreter(model_path="./final_model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_shape = input_details[0]['shape']
input_data = tf.convert_to_tensor([testgood, testbad])
#input_data2 = tf.reshape(input_data[0], input_shape)
interpreter.set_tensor(input_details[0]['index'], input_data)
In particular, set_tensor returns this error: ValueError: Cannot set tensor: Dimension mismatch. Got 2 but expected 1 for dimension 0 of input 0.
I have tried passing only one example instead of two. In that case I get the error: ValueError: Cannot set tensor: Dimension mismatch. Got 0 but expected 1 for input 0.
The input details are:
[{'name': 'serving_default_text_vectorization_input:0',
  'index': 0,
  'shape': array([1], dtype=int32),
  'shape_signature': array([-1], dtype=int32),
  'dtype': numpy.bytes_,
  'quantization': (0.0, 0),
  'quantization_parameters': {'scales': array([], dtype=float32),
                              'zero_points': array([], dtype=int32),
                              'quantized_dimension': 0},
  'sparsity_parameters': {}}]
Any help is appreciated.
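A minimal sketch of what batched string inference could look like, assuming resize_tensor_input accepts this model's input (the shape_signature is [-1], so it should be resizable) and that testgood and testbad are the Python strings from above. The key points are resizing the input to hold both examples and passing encoded bytes rather than a tf tensor, since the input dtype is numpy.bytes_:
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="./final_model.tflite")
# The shape_signature is [-1], so resize the input to a batch of 2
# before allocating tensors.
interpreter.resize_tensor_input(0, [2])
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# dtype is numpy.bytes_: pass encoded strings, not a tf tensor.
input_data = np.array([testgood.encode('utf-8'), testbad.encode('utf-8')])
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']))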

How to update the hmmlearn learned object when we have new samples?

I have implemented a simple Hidden Markov Model with hmmlearn and it is working well. I used the fit() method, i.e. model.fit(), to learn the HMM parameters from my data. If I have more data and want to update the previously fitted model without training/fitting from scratch, what can I do?
In other words, how can I initialize a new model based on what I know so far, and keep going with the new observations/samples to fit a better model to my data?
In hmmlearn you may have noticed that once you train with model.fit(), the model parameters update:
import numpy as np
import pickle
from hmmlearn import hmm
np.random.seed(42)
# initialize model
model = hmm.GaussianHMM(n_components=3, covariance_type="full")
model.startprob_ = np.array([0.33, 0.33, 0.34])
model.transmat_ = np.array([[0.1, 0.2, 0.7],
                            [0.3, 0.5, 0.2],
                            [0.5, 0.1, 0.4]])
model.means_ = np.array([[1.0, 1.0], [2.0, 1.0], [3.0, 1.0]])
model.covars_ = np.tile(np.identity(2), (3, 1, 1))
# generate "fake" training data
emissions1, states1 = model.sample(100)
print("Transition matrix before training: \n", model.transmat_)
# train
model.fit(emissions1)
print("Transition matrix after training: \n", model.transmat_)
# save model
with open("modelname.pkl", "wb") as f: pickle.dump(model, f)
#################################
>>> Transition matrix before training:
 [[0.1 0.2 0.7]
  [0.3 0.5 0.2]
  [0.5 0.1 0.4]]
>>> Transition matrix after training:
 [[0.19065325 0.50905216 0.30029459]
  [0.41888047 0.39276483 0.18835471]
  [0.44558543 0.13767827 0.4167363 ]]
This means that if you have new training data (i.e. emissions2), you can use the same updated model to train on the new emission sequence, as shown in the sketch below. You can either save the entire model by pickling (as shown above), or you can save the numpy arrays of the transition matrix, emission parameters, etc.
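One caveat worth noting: by default, fit() re-initializes every parameter listed in the model's init_params before training, so to genuinely continue from the learned values you need to clear it. A minimal sketch, assuming the pickled model from above; emissions2 here is a stand-in for your real new data:
# Reload the trained model and continue fitting on new data.
with open("modelname.pkl", "rb") as f:
    model = pickle.load(f)
# Clear init_params so fit() keeps the learned startprob/transmat/means/covars
# instead of re-initializing them.
model.init_params = ""
emissions2, states2 = model.sample(100)  # stand-in for real new samples
model.fit(emissions2)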

What does mean_squared_error translate to in keras

While looking at TensorFlow examples online, I'm seeing this:
import numpy as np
import tensorflow as tf
from tensorflow import keras

xs = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], dtype=float)
ys = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5], dtype=float)
model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')
I was trying to rewrite this line so that it uses objects rather than string literals:
model.compile(optimizer='sgd', loss='mean_squared_error')
So far I'm able to do this:
model.compile(optimizer=keras.optimizers.SGD(), loss='mean_squared_error')
For mean_squared_error we have keras.losses.mean_squared_error(y_true, y_pred).
I'm unable to understand y_true and y_pred, and what values need to be provided for the example above.
In summary, from example above what is equivalent of
loss='mean_squared_error'
You simply need to pass:
model.compile(optimizer=tf.keras.optimizers.SGD(), loss=tf.keras.losses.MeanSquaredError())
y_true and y_pred are handled automatically by Keras.
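To make the two arguments concrete, here is a small sketch: y_true is whatever labels you pass to fit() (ys in the example above), y_pred is the model's output, and Keras calls the loss object with both during training, so you never supply them yourself.
import numpy as np
import tensorflow as tf

y_true = np.array([[1.0], [1.5]])  # labels, like ys above
y_pred = np.array([[1.1], [1.4]])  # model outputs
mse = tf.keras.losses.MeanSquaredError()
# mean((y_true - y_pred)^2) = mean([0.01, 0.01]) ~= 0.01
print(mse(y_true, y_pred).numpy())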

Can I use real probability distributions as labels for tf.nn.softmax_cross_entropy_with_logits?

In the TensorFlow manual, the description for labels is as below:
labels: Each row labels[i] must be a valid probability distribution.
Then, does it mean labels can look like below, if I have real probability distributions of the classes for each input?
[[0.1, 0.2, 0.05, 0.007 ... ]
 [0.001, 0.2, 0.5, 0.007 ... ]
 [0.01, 0.0002, 0.005, 0.7 ... ]]
And, is it more efficient than one-hot encoded labels?
Thank you in advance.
In a word, yes, you can use probabilities as labels.
The documentation for tf.nn.softmax_cross_entropy_with_logits says you can:
NOTE: While the classes are mutually exclusive, their probabilities
need not be. All that is required is that each row of labels is
a valid probability distribution. If they are not, the computation of the
gradient will be incorrect.
If using exclusive labels (wherein one and only
one class is true at a time), see sparse_softmax_cross_entropy_with_logits.
Let's have a short example to be sure it works ok:
import numpy as np
import tensorflow as tf
labels = np.array([[0.2, 0.3, 0.5], [0.1, 0.7, 0.2]])
logits = np.array([[5.0, 7.0, 8.0], [1.0, 2.0, 4.0]])
sess = tf.Session()
ce = tf.nn.softmax_cross_entropy_with_logits(
    labels=labels, logits=logits).eval(session=sess)
print(ce) # [ 1.24901222 1.86984602]
# manual check
predictions = np.exp(logits)
predictions = predictions / predictions.sum(axis=1, keepdims=True)
ce_np = (-labels * np.log(predictions)).sum(axis=1)
print(ce_np) # [ 1.24901222 1.86984602]
And if you have exclusive labels, it is better to use integer class indices with tf.nn.sparse_softmax_cross_entropy_with_logits rather than tf.nn.softmax_cross_entropy_with_logits with an explicit one-hot probability representation like [1.0, 0.0, ...]. You get a shorter representation that way.
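For illustration, a short sketch of the sparse variant, reusing the session and logits from the example above (sparse_labels is an illustrative name holding one class index per row):
# Exclusive labels given as class indices rather than probability rows.
sparse_labels = np.array([2, 1])
ce_sparse = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=sparse_labels, logits=logits).eval(session=sess)
print(ce_sparse)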

Google Cloud ml-engine fails predicting multiple inputs

Predictions are only successful when providing a single instance in instances.json.
Test 1: Contents of instances.json:
{"serving_input": [20.0, 0.0, 1.0 ... 0.16474569041197143, 0.04138248072194471], "prediction_id": 0, "keep_prob": 1.0}
Prediction (same output for local and online prediction)
gcloud ml-engine local predict --model-dir=./model_dir --json-instances=instances.json
Output:
SERVING_OUTPUT                              ARGMAX  PREDICTION_ID  SCORES      TOP_K
[-340.6920166015625, -1153.0877685546875]   0       0              [1.0, 0.0]  [1.0, 0.0]
Test 2: Contents of instances.json:
{"serving_input": [20.0, 0.0, 1.0 ... 0.16474569041197143, 0.04138248072194471], "prediction_id": 0, "keep_prob": 1.0}
{"serving_input": [21.0, 2.0, 3.0 ... 3.14159265359, 0.04138248072194471], "prediction_id": 1, "keep_prob": 1.0}
Output:
.. Incompatible shapes: [2] vs. [2,108] .. (_arg_keep_prob_0_1, Model/dropout/random_uniform)
Where 108 is the size of the first hidden layer (net_dim=[2015, 108, 2]). (The network is built with tf.nn.dropout, thus the keep_prob=1.0.)
Exporting code:
probabilities = tf.nn.softmax(self.out_layer)
top_k, _ = tf.nn.top_k(probabilities, self.network_dim[-1])
prediction_signature = (
    tf.saved_model.signature_def_utils.predict_signature_def(
        inputs={'serving_input': self.x, 'keep_prob': self.keep_prob,
                'prediction_id': self.prediction_id_in},
        outputs={'serving_output': self.out_layer,
                 'argmax': tf.argmax(self.out_layer, 1),
                 'prediction_id': self.prediction_id_out,
                 'scores': probabilities, 'top_k': top_k}))
builder.add_meta_graph_and_variables(
    sess,
    tags=[tf.saved_model.tag_constants.SERVING],
    signature_def_map={
        tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
            prediction_signature
    },
    main_op=tf.saved_model.main_op.main_op())
builder.save()
How can I format instances.json to perform a batched prediction (prediction with multiple input instances)?
The problem is not in the JSON. Check to see how you are using self.x. I think that your code is assuming it's a 1D array, when you should treat it as a tensor of shape [?, 108].
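To illustrate the suggestion, a minimal sketch of batch-friendly input definitions, assuming TF1-style placeholders and the net_dim=[2015, 108, 2] from the question (the variable names are illustrative). The leading None dimension lets the exported signature accept any number of instances at once, and keeping keep_prob a scalar lets tf.nn.dropout broadcast over the whole batch:
import tensorflow as tf

# Batch dimension first: shape [?, 2015] instead of a flat 1D vector.
x = tf.placeholder(tf.float32, shape=[None, 2015], name="serving_input")
# A scalar keep_prob broadcasts cleanly inside tf.nn.dropout.
keep_prob = tf.placeholder_with_default(1.0, shape=[], name="keep_prob")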