Problem: How can I convert a .tflite (serialized flat buffer) to .pb (frozen model)? The documentation only covers conversion in the other direction.
Use case: I have a model that was trained and converted to .tflite, but unfortunately I don't have the details of the model and I would like to inspect the graph. How can I do that?
I found the answer here.
We can use the Interpreter to analyze the model, and the code looks like the following:
import numpy as np
import tensorflow as tf
# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()
# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# Test model on random input data.
input_shape = input_details[0]['shape']
input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)
Netron is the best analysis/visualization tool I have found; it understands a lot of formats, including .tflite.
I don't think there is a way to restore a tflite model back to pb, as some information is lost after conversion. I did find an indirect way to get a glimpse of what is inside a tflite model: read back each of its tensors.
# tf.contrib.lite is the TF 1.x location; in TF 2.x use tf.lite.Interpreter instead.
interpreter = tf.contrib.lite.Interpreter(model_path=model_path)
interpreter.allocate_tensors()

# Try some arbitrary numbers to find out the number of tensors.
num_layer = 89
for i in range(num_layer):
    detail = interpreter._get_tensor_details(i)
    print(i, detail['name'], detail['shape'])
and you would see something like below. As only a limited set of operations is currently supported, it is not too difficult to reverse engineer the network architecture. I have put some tutorials on my GitHub as well.
0 MobilenetV1/Logits/AvgPool_1a/AvgPool [ 1 1 1 1024]
1 MobilenetV1/Logits/Conv2d_1c_1x1/BiasAdd [ 1 1 1 1001]
2 MobilenetV1/Logits/Conv2d_1c_1x1/Conv2D_bias [1001]
3 MobilenetV1/Logits/Conv2d_1c_1x1/weights_quant/FakeQuantWithMinMaxVars [1001 1 1 1024]
4 MobilenetV1/Logits/SpatialSqueeze [ 1 1001]
5 MobilenetV1/Logits/SpatialSqueeze_shape [2]
6 MobilenetV1/MobilenetV1/Conv2d_0/Conv2D_Fold_bias [32]
7 MobilenetV1/MobilenetV1/Conv2d_0/Relu6 [ 1 112 112 32]
8 MobilenetV1/MobilenetV1/Conv2d_0/weights_quant/FakeQuantWithMinMaxVars [32 3 3 3]
9 MobilenetV1/MobilenetV1/Conv2d_10_depthwise/Relu6 [ 1 14 14 512]
10 MobilenetV1/MobilenetV1/Conv2d_10_depthwise/depthwise_Fold_bias [512]
11 MobilenetV1/MobilenetV1/Conv2d_10_depthwise/weights_quant/FakeQuantWithMinMaxVars [ 1 3 3 512]
12 MobilenetV1/MobilenetV1/Conv2d_10_pointwise/Conv2D_Fold_bias [512]
13 MobilenetV1/MobilenetV1/Conv2d_10_pointwise/Relu6 [ 1 14 14 512]
14 MobilenetV1/MobilenetV1/Conv2d_10_pointwise/weights_quant/FakeQuantWithMinMaxVars [512 1 1 512]
15 MobilenetV1/MobilenetV1/Conv2d_11_depthwise/Relu6 [ 1 14 14 512]
16 MobilenetV1/MobilenetV1/Conv2d_11_depthwise/depthwise_Fold_bias [512]
17 MobilenetV1/MobilenetV1/Conv2d_11_depthwise/weights_quant/FakeQuantWithMinMaxVars [ 1 3 3 512]
18 MobilenetV1/MobilenetV1/Conv2d_11_pointwise/Conv2D_Fold_bias [512]
19 MobilenetV1/MobilenetV1/Conv2d_11_pointwise/Relu6 [ 1 14 14 512]
20 MobilenetV1/MobilenetV1/Conv2d_11_pointwise/weights_quant/FakeQuantWithMinMaxVars [512 1 1 512]
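As a side note, recent TensorFlow versions also expose a public get_tensor_details() method on tf.lite.Interpreter, which returns the same information for every tensor without having to guess the tensor count. A minimal sketch (the .tflite path is just a placeholder):

import tensorflow as tf

# List every tensor recorded in the .tflite flat buffer.
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()

for detail in interpreter.get_tensor_details():
    print(detail['index'], detail['name'], detail['shape'])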
I have done this with TOCO, using tf 1.12:

tensorflow_1.12/tensorflow/bazel-bin/tensorflow/contrib/lite/toco/toco \
  --output_file=coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.pb \
  --output_format=TENSORFLOW_GRAPHDEF \
  --input_format=TFLITE \
  --input_file=coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.tflite \
  --inference_type=FLOAT \
  --input_type=FLOAT \
  --input_array="" \
  --output_array="" \
  --input_shape=1,450,450,3 \
  --dump_graphviz=./

(You can remove the dump_graphviz option.)
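If the conversion succeeds, one way to inspect the recovered graph is to load the GraphDef and list its nodes. A minimal sketch, assuming the output .pb path from the command above (tf.compat.v1.GraphDef is the TF 1.13+/2.x name; on tf 1.12 it is plain tf.GraphDef):

import tensorflow as tf

# Load the frozen GraphDef produced by toco and list its operations.
graph_def = tf.compat.v1.GraphDef()
with open("coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    print(node.op, node.name)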
I'm trying to build a linear model on my own:
import numpy as np
import tensorflow as tf

# Create features
X = np.array([-7.0, -4.0, -1.0, 2.0, 5.0, 8.0, 11.0, 14.0])
# Create labels
y = np.array([3.0, 6.0, 9.0, 12.0, 15.0, 18.0, 21.0, 24.0])

model = tf.keras.Sequential([
    tf.keras.layers.Dense(50, activation="elu", input_shape=[1]),
    tf.keras.layers.Dense(1)
])
model.compile(loss="mae",
              optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
              metrics=["mae"])
model.fit(X, y, epochs=150)
When I train with the above X and y data, the loss value starts from a normal value.
experience salary
0 0 2250
1 1 2750
2 5 8000
3 8 9000
4 4 6900
5 15 20000
6 7 8500
7 3 6000
8 2 3500
9 12 15000
10 10 13000
11 14 18000
12 6 7500
13 11 14500
14 12 14900
15 3 5800
16 2 4000
But when I use a dataset like the one above, the initial loss value starts around 800 (with the same model as above, by the way).
What could be the reason for this?
Your learning rate is quite high. You should opt for a much lower initial learning rate, such as 0.0001 or 0.00001.
Other than that, you are already using 'linear' activation on the last layer (the default) and the correct loss function and metric. Also note that, when not specified explicitly, the default batch_size is 32.
UPDATE: as determined by the author of the question, underfitting was also fundamental to the problem. Adding several more layers helped solve it.
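A minimal sketch of those two suggestions combined, using the experience/salary data from the question (the number of layers, their sizes, and the exact learning rate are illustrative, not tuned values):

import numpy as np
import tensorflow as tf

# Experience/salary data from the question.
X = np.array([0, 1, 5, 8, 4, 15, 7, 3, 2, 12, 10, 14, 6, 11, 12, 3, 2], dtype=np.float32)
y = np.array([2250, 2750, 8000, 9000, 6900, 20000, 8500, 6000, 3500, 15000,
              13000, 18000, 7500, 14500, 14900, 5800, 4000], dtype=np.float32)

# A deeper model and a lower learning rate than in the question.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(50, activation="elu", input_shape=[1]),
    tf.keras.layers.Dense(50, activation="elu"),
    tf.keras.layers.Dense(50, activation="elu"),
    tf.keras.layers.Dense(1)
])
model.compile(loss="mae",
              optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
              metrics=["mae"])
model.fit(X, y, epochs=150)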
My tensor training data set structure is as follows:
[[1 3 0 99 4 ... 9 0 9],
...
[3 2 3 1 3 ... 9 9 8]]
There are 798 "rows" (individual arrays), and each array has length 9000, i.e. 9000 "columns".
The label data is like this: [ [1 0 1 0 0 ... 0 0 0 1]]. It is one array with 798 columns.
import tensorflow as tf

model = tf.keras.models.Sequential()
model.add(tf.keras.layers.LSTM(100, batch_input_shape=(798, 1000, 9000)))
model.compile(loss='mean_absolute_error', optimizer='adam', metrics=['accuracy'])
model.fit(training_data, training_labels, batch_size=150, epochs=10, validation_data=(validation_data, validation_labels))
But I kept getting the error:
ValueError: Input 0 of layer sequential_2 is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [None, 9000]
I'm not sure what I'm doing wrong. Should I specify the batch_input_shape? Should I specify the input_shape? What do samples, batch size, time step, and features mean in relation to my data?
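For reference, a Keras LSTM expects input of shape (samples, timesteps, features). Below is a minimal sketch with made-up data that treats each of the 798 arrays as one sample with 9000 timesteps and a single feature per step; this interpretation of the data is an assumption, purely for illustration:

import numpy as np
import tensorflow as tf

# Illustrative data: 798 samples, each a sequence of 9000 steps with 1 feature.
training_data = np.random.randint(0, 10, size=(798, 9000, 1)).astype(np.float32)
training_labels = np.random.randint(0, 2, size=(798, 1)).astype(np.float32)

model = tf.keras.models.Sequential([
    # input_shape is (timesteps, features); the batch dimension is left out.
    tf.keras.layers.LSTM(100, input_shape=(9000, 1)),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(training_data, training_labels, batch_size=150, epochs=1)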
From here I understand what shuffle, batch and repeat do. I'm working on Medical image data where each mini-batch has slices from one patient record. I'm looking for a way to shuffle within the minibatch while training. I cannot increase the buffer size because I don't want slices from different records to get mixed up. Could someone please explain how this can be done?
import tensorflow as tf

dataset = tf.data.Dataset.from_tensor_slices(tf.range(1, 20))
data = dataset.batch(5).shuffle(5).repeat(1)
for element in data.as_numpy_iterator():
    print(element)
Current Output:
[ 6 7 8 9 10]
[1 2 3 4 5]
[11 12 13 14 15]
[16 17 18 19]
Expected Output:
[ 6 8 9 7 10]
[3 4 1 5 2]
[15 12 11 14 13]
[18 16 19 17]
I just realized there is no need to shuffle within the mini-batch, as shuffling within the mini-batch doesn't contribute to improving training in any way. I would appreciate it if anyone has other views on this.
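That said, if you do want each batch shuffled internally, one option (a sketch, not necessarily the only or best way) is to batch first and then map tf.random.shuffle over each batch, so elements only ever move within their own batch:

import tensorflow as tf

dataset = tf.data.Dataset.from_tensor_slices(tf.range(1, 20))
# Batch first so each batch holds consecutive elements (e.g. slices of one
# patient record), then shuffle the elements inside every batch independently.
data = dataset.batch(5).map(tf.random.shuffle)

for element in data.as_numpy_iterator():
    print(element)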
I am using Keras with Tensorflow backend.
I am facing a batch size limitation due to high memory usage.
My data is composed of 4 1D signals; each sample covers 801 points per channel, so the global sample size is 3204.
Input data:
4 channels of N 1D signals of length 7003
Input generated by applying a sliding window to the 1D signals
Giving input data of shape (N*6203, 801, 4)
N is the number of signals used to build one batch
My Model:
Input 801 x 4
Conv2D 5 x 1, 20 channels
MaxPooling 2 x 1
Conv2D 5 x 1, 20 channels
MaxPooling 2 x 1
Conv2D 5 x 1, 20 channels
MaxPooling 2 x 1
Conv2D 5 x 1, 20 channels
Flatten
Dense 2000
Dense 5
With my GPU (Quadro K6000, 12189 MiB) I can fit only N=2 without warnings.
With N=3 I get an out-of-memory warning.
With N=4 I get an out-of-memory error.
It sounds like batch_size is limited by the space used by all tensors.
Input 801 x 4 x 1
Conv 797 x 4 x 20
MaxPooling 398 x 4 x 20
Conv 394 x 4 x 20
MaxPooling 197 x 4 x 20
Conv 193 x 4 x 20
MaxPooling 96 x 4 x 20
Conv 92 x 4 x 20
Dense 2000
Dense 5
With a 1D signal of length 7001 and 4 channels -> 6201 samples
Total = N*4224 MiB (a small cross-check of this arithmetic is sketched below).
N=2 -> 8448 MiB fits in the GPU
N=3 -> 12672 MiB works, but with a warning: failed to allocate 1.10 GiB, then 3.00 GiB
N=4 -> 16896 MiB fails, with only one message: failed to allocate 5.89 GiB
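As a cross-check of that estimate, here is a small sketch that reproduces the per-sample arithmetic from the layer output sizes listed above, assuming only the forward activations are stored as 4-byte floats and 6201 windows per signal (it ignores weights, gradients, and any framework overhead):

import numpy as np

# Per-sample activation shapes, taken from the layer list above.
activation_shapes = [
    (801, 4, 1),   # Input
    (797, 4, 20),  # Conv
    (398, 4, 20),  # MaxPooling
    (394, 4, 20),  # Conv
    (197, 4, 20),  # MaxPooling
    (193, 4, 20),  # Conv
    (96, 4, 20),   # MaxPooling
    (92, 4, 20),   # Conv
    (2000,),       # Dense
    (5,),          # Dense
]

floats_per_sample = sum(int(np.prod(s)) for s in activation_shapes)
bytes_per_sample = floats_per_sample * 4      # float32
samples_per_signal = 6201                     # sliding window over one signal
mib_per_signal = bytes_per_sample * samples_per_signal / 2**20

print(floats_per_sample)        # 178569 floats of activations per sample
print(round(mib_per_signal))    # ~4224 MiB per signal, i.e. Total = N * 4224 MiB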
Does it work like that? Is there any way to reduce the memory usage?
To give a time estimate: 34 batches run in 40 s, and my total N is 10^6.
Thank you for your help :)
Example with python2.7: https://drive.google.com/open?id=1N7K_bxblC97FejozL4g7J_rl6-b9ScCn
I have coded some neural networks such as an image classifier, MNIST, and NLP models, and I got an accuracy of 98 percent on my GPU (NVIDIA GT 610). How can I feed new data (not training data) to my neural network and get predictions?
Let's suppose:
Inputs Output
0 0 1 0
1 1 1 1
1 0 1 1
0 1 1 0
I got an accuracy of 98.7. How do I give an input like [1 1 0] and predict the output? Is there any method in TensorFlow to do this?
If your output variable is y and your input placeholder is x:
sess.run(y, feed_dict={x: mnist.test.images})
See https://www.tensorflow.org/get_started/mnist/beginners
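For a self-contained illustration using the truth table from the question, here is a sketch with the Keras API instead of a raw session; the architecture, epochs, and other hyperparameters are placeholders, not the questioner's actual model:

import numpy as np
import tensorflow as tf

# Truth table from the question.
X = np.array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]], dtype=np.float32)
y = np.array([[0], [1], [1], [0]], dtype=np.float32)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(3,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(loss="binary_crossentropy", optimizer="adam")
model.fit(X, y, epochs=500, verbose=0)

# Feed a new, unseen input and read the prediction.
print(model.predict(np.array([[1, 1, 0]], dtype=np.float32)))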