TensorFlow Lite inference gives different results than regular inference - tensorflow

I have a model that extracts 512 features from an image (numbers between -1 and 1).
I converted this model to the TFLite float format using the instructions here:
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite
I ran inference on the same image with the original model and with the TFLite model.
I am getting different results for the vector. I was expecting very similar results, since I didn't use the quantized format, and from what I understand TF Lite should only improve inference time and not affect the feature calculation.
My question is: is this normal? Has anyone else encountered this?
I didn't find any topics about this anywhere.
Updated with code.
I have this network that I trained (many items removed, as I can't share the full network):
import tensorflow as tf
import tensorflow.contrib.slim as slim

feature_vector_size = 512  # the model extracts 512 features

placeholder = tf.placeholder(name='input', dtype=tf.float32, shape=[None, 128, 128, 1])
with slim.arg_scope([slim.conv2d, slim.separable_conv2d],
                    activation_fn=tf.nn.relu, normalizer_fn=slim.batch_norm):
    net = tf.identity(placeholder)
    net = slim.conv2d(net, 32, [3, 3], scope='conv11')
    net = slim.separable_conv2d(net, 64, [3, 3], scope='conv12')
    net = slim.max_pool2d(net, [2, 2], scope='pool1')  # 64x64
    net = slim.separable_conv2d(net, 128, [3, 3], scope='conv21')
    net = slim.max_pool2d(net, [2, 2], scope='pool2')  # 32x32
    net = slim.separable_conv2d(net, 256, [3, 3], scope='conv31')
    net = slim.max_pool2d(net, [2, 2], scope='pool3')  # 16x16
    net = slim.separable_conv2d(net, 512, [3, 3], scope='conv41')
    net = slim.max_pool2d(net, [2, 2], scope='pool4')  # 8x8
    net = slim.separable_conv2d(net, 1024, [3, 3], scope='conv51')
    net = slim.avg_pool2d(net, [8, 8], scope='pool5')  # 1x1
    net = slim.dropout(net)
    net = slim.conv2d(net, feature_vector_size, [1, 1], activation_fn=None, normalizer_fn=None, scope='features')
    embeddings = tf.nn.l2_normalize(net, 3, 1e-10, name='embeddings')
bazel-bin/tensorflow/contrib/lite/toco/toco --input_file=/tmp/network_512.pb \
  --input_format=TENSORFLOW_GRAPHDEF --output_format=TFLITE --output_file=/tmp/tffiles/network_512.tflite \
  --inference_type=FLOAT --input_type=FLOAT --input_arrays=input --output_arrays=embeddings --input_shapes=1,128,128,1
I run network_512.pb using TensorFlow in Python, and network_512.tflite using the code from https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/java/demo, which I modified to load my network and run it.
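For reference, one way to compare the two outputs directly in Python (a sketch, assuming a TF version where the TFLite Python interpreter is available as tf.lite.Interpreter; the random stand-in image is an assumption, while the file and tensor names follow the question):

import numpy as np
import tensorflow as tf

img = np.random.rand(1, 128, 128, 1).astype(np.float32)  # stand-in for the real image

# Run the original frozen graph (TF 1.x style).
graph_def = tf.GraphDef()
with open('/tmp/network_512.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())
with tf.Graph().as_default() as g:
    tf.import_graph_def(graph_def, name='')
    with tf.Session(graph=g) as sess:
        tf_out = sess.run('embeddings:0', feed_dict={'input:0': img})

# Run the converted TFLite model.
interpreter = tf.lite.Interpreter(model_path='/tmp/tffiles/network_512.tflite')
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp['index'], img)
interpreter.invoke()
lite_out = interpreter.get_tensor(out['index'])

# Largest absolute difference between the two feature vectors.
print(np.max(np.abs(tf_out.reshape(-1) - lite_out.reshape(-1))))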

Update on what I have found: the test I did used the demo app TensorFlow provides, changed to use my custom model and extract the features, and that is where I noticed the difference in the feature values.
Once I compiled the TF Lite C++ library manually for the latest Android and ran it with the same flow I use (which until now has been the TF C API), I got almost the same results for the features.
I haven't had time to investigate where the difference comes from, but I am happy now.

Related

How to do slice assignment in TensorFlow 2.0

I am currently trying to perform a TensorFlow slice assignment similar to this PyTorch code.
input_seq[1:, :] = torch.from_numpy(stroke[:-1, :])
A plain item assignment like the one above does not work in TensorFlow and gives the following error:
TypeError: 'tensorflow.python.framework.ops.EagerTensor' object does not support item assignment.
Previous solutions to the same problem are quite dated and use older versions of TensorFlow. I would greatly appreciate any help on how to tackle this.
Here's an example. You can't do NumPy-like slice assignments, but you can do the following (tested with tensorflow==2.9):
import tensorflow as tf
import numpy as np

a = tf.constant([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
b = np.array([[6, 5, 4], [9, 8, 7]])
# Replace rows 1 and 2 of `a` with the rows of `b`.
c = tf.tensor_scatter_nd_update(a, [[1], [2]], b)
# c == [[1, 2, 3], [6, 5, 4], [9, 8, 7]]
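Applied to the PyTorch-style example from the question, something along these lines should work (a sketch; it assumes input_seq is a 2-D tensor and casts stroke to the same dtype):

import tensorflow as tf

indices = tf.range(1, tf.shape(input_seq)[0])[:, None]       # rows 1..N-1
updates = tf.cast(stroke[:-1, :], input_seq.dtype)           # match dtypes
input_seq = tf.tensor_scatter_nd_update(input_seq, indices, updates)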

Problem converting TensorFlow 1.x code to TensorFlow 2.x

I have code in TF 1.x which I converted to TF 2.x using the 'tf_upgrade_v2' command, but there is one segment I am having trouble converting, i.e.
x = tf.placeholder(tf.float32, [None, None, None, 4])
y = tf.placeholder(tf.float32, [None, None, None, 3])
I want to change the above two lines so that eager execution is not disabled.
And there is one more line:
conv1 = slim.conv2d(input, 32, [3, 3], rate=1, activation_fn=lrelu, scope='g_conv1_1')
In the above line I don't want to use tf.keras.layers.Conv2D; I want to use tf.nn.conv2d() instead.
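Not an answer from the original thread, but one possible TF 2.x shape this could take (a sketch; the lrelu definition, weight shapes, and 'SAME' padding are assumptions, since the full network isn't shown). The placeholders become ordinary function arguments, optionally constrained with a tf.TensorSpec signature, and the slim call is replaced by an explicit filter variable plus tf.nn.conv2d:

import tensorflow as tf

def lrelu(x):
    return tf.nn.leaky_relu(x, alpha=0.2)  # assumed definition of lrelu

# Filter for a 3x3 conv, 4 input channels -> 32 output channels (assumed sizes).
w = tf.Variable(tf.random.normal([3, 3, 4, 32], stddev=0.1), name='g_conv1_1/weights')
b = tf.Variable(tf.zeros([32]), name='g_conv1_1/biases')

@tf.function(input_signature=[tf.TensorSpec([None, None, None, 4], tf.float32)])
def g_conv1_1(x):
    # dilations=[1, 1, 1, 1] corresponds to rate=1 in slim.conv2d
    y = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME', dilations=[1, 1, 1, 1])
    return lrelu(tf.nn.bias_add(y, b))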

tf.shape(image) returns None in TensorFlow 2.0

I was using Tensorflow 2.0 to build a super resolution model. During pre-processing, I wanted to crop both the low and high resolution images by a given patch size. In order to do so, I wanted to get the height and width of the low and high resolution images. But tf.shape(image) is returning None.
Is there a better approach?
Currently I am just resizing every image to a fixed size before using tf.shape, but since not all images have the same size, this is affecting the quality of the images. Looking forward to your suggestions.
Edited part:
Here are some parts of the code:
low_r = tf.io.decode_jpeg(lr_filename, channels=3)
low_r = tf.cast(low_r, dtype=tf.float32)
print(low_r.shape)
The print statement prints (None, None, 3)
What I wanted was to get the height and width, like (240, 360, 3).
I'm not sure if this is also your case, but in my TensorFlow (v2.4.0rc2), my_tensor.shape also returns TensorShape([None, None, None, None]). This is because the static shape is determined when the graph is built, not at execution time.
Using tf.shape() (mentioned in your question, but not actually used in your code snippet) solves it for me:
> my_tensor.shape
TensorShape([None, None, None, None])
> tf.shape(my_tensor)
[10 512 512 8]
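If the goal is the cropping step described in the question, the runtime shape can be used directly inside a tf.data pipeline, for example (a sketch; patch_size, scale, and the function name are assumptions, not from the original answer):

import tensorflow as tf

def random_crop_pair(low_r, high_r, patch_size=48, scale=4):
    shape = tf.shape(low_r)                 # dynamic [H, W, C]
    h, w = shape[0], shape[1]
    top = tf.random.uniform([], 0, h - patch_size + 1, dtype=tf.int32)
    left = tf.random.uniform([], 0, w - patch_size + 1, dtype=tf.int32)
    lr_patch = low_r[top:top + patch_size, left:left + patch_size, :]
    hr_patch = high_r[top * scale:(top + patch_size) * scale,
                      left * scale:(left + patch_size) * scale, :]
    return lr_patch, hr_patch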
I'm unable to reproduce your issue, but this should give you a way to test your TensorFlow 2.0 install and compare with the results you're currently getting.
Create a tensor and check its shape:
import tensorflow as tf
t = tf.constant([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]])
tf.shape(t) # [2, 2, 3]
Out[1]: <tf.Tensor: id=1, shape=(3,), dtype=int32, numpy=array([2, 2, 3])>
Next, check what the function returns when called:
tf_shape_var = tf.shape(t)
print(tf_shape_var)
Output:
tf.Tensor([2 2 3], shape=(3,), dtype=int32)
Finally, calling it on an int and on a string returns a valid (empty) shape, since scalars have rank 0:
tf.shape(1)
Out[10]: <tf.Tensor: id=12, shape=(0,), dtype=int32, numpy=array([], dtype=int32)>
tf.shape('asd')
Out[11]: <tf.Tensor: id=15, shape=(0,), dtype=int32, numpy=array([], dtype=int32)>
And the print statements:
print(tf.shape(1))
print(tf.shape('asd'))
Output:
tf.Tensor([], shape=(0,), dtype=int32)
tf.Tensor([], shape=(0,), dtype=int32)
Link for tf.shape() https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/shape

Kernel crashing, trying to run a single convolution layer over 1 image

I am using the MNIST data and loading a single image for a CNN; I wanted to see what the image looks like after a single layer. I have gone through the documentation to see if there were any errors with my inputs or with how I am consolidating the data. Is the code wrong in any way, or is it just my computer?
import tensorflow as tf

# x is the input placeholder and X_train holds the MNIST images, defined earlier.
filtersw = tf.Variable(tf.random_normal(shape=[5, 5, 1, 1], mean=0.5, stddev=0.01))
filtersb = tf.Variable(tf.zeros(1))
conv = tf.nn.conv2d(x, filtersw, strides=[1, 1, 1, 1], padding='VALID') + filtersb

sess = tf.Session()
sess.run(tf.global_variables_initializer())
afterimage = sess.run(conv, feed_dict={x: X_train[0:1]})
You need to call tf.layers.conv2d instead of tf.nn.conv2d. The two functions have similar names, but they perform different operations: tf.layers.conv2d creates a convolution layer in a CNN (it creates and trains its own filter variables), while tf.nn.conv2d performs convolution with a known filter or set of filters.
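For illustration, the two calls might look like this (a sketch; the placeholder shape and filter size are assumptions based on the question):

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 28, 28, 1])  # assumed MNIST-shaped input

# tf.layers.conv2d creates and manages its own filter variables.
layer_out = tf.layers.conv2d(x, filters=1, kernel_size=5, padding='valid')

# tf.nn.conv2d convolves the input with a filter you define yourself.
filtersw = tf.Variable(tf.random_normal([5, 5, 1, 1], mean=0.5, stddev=0.01))
op_out = tf.nn.conv2d(x, filtersw, strides=[1, 1, 1, 1], padding='VALID')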

TensorFlow – Slice at different position for each batch element

I need to slice a window of constant size for each batch element, but starting at different locations. For example, for windows of length two, I want to be able to do something like:
batch = tf.constant([[1, 2, 3],
                     [4, 5, 6]])
window_size = 2
window_starts = tf.constant([1, 0]) # The index from which to start slicing for each batch element.
slice_windows(batch, window_size, window_starts)
# this should return [[2, 3],
# [4, 5]]
I won’t know what the window_starts are beforehand (they come from data), so I can’t just enumerate all of the indices I need and use tf.gather_nd.
Furthermore, after doing computations on the windows, I then need to pad them back into place with 0s (so a different amount of padding for each batch element):
...computation on windows...
restore_window_positions(windows, window_starts, original_size=3)
# this should return [[0, 2, 3],
# [4, 5, 0]]
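Not from an answer in the original thread, but one way to sketch this (assuming a TF version where tf.gather supports batch_dims, i.e. 1.14+ or 2.x) is to build the index matrix from window_starts at runtime, then use tf.gather for slicing and tf.scatter_nd for restoring:

import tensorflow as tf

def slice_windows(batch, window_size, window_starts):
    # Per-element column indices: window_starts[:, None] + [0, ..., window_size - 1]
    cols = window_starts[:, None] + tf.range(window_size)[None, :]
    return tf.gather(batch, cols, batch_dims=1)

def restore_window_positions(windows, window_starts, original_size):
    batch_size = tf.shape(windows)[0]
    window_size = tf.shape(windows)[1]
    cols = window_starts[:, None] + tf.range(window_size)[None, :]
    rows = tf.broadcast_to(tf.range(batch_size)[:, None], tf.shape(cols))
    indices = tf.stack([rows, cols], axis=-1)    # [batch, window, 2]
    return tf.scatter_nd(indices, windows, tf.stack([batch_size, original_size]))

batch = tf.constant([[1, 2, 3],
                     [4, 5, 6]])
windows = slice_windows(batch, 2, tf.constant([1, 0]))                 # [[2, 3], [4, 5]]
restored = restore_window_positions(windows, tf.constant([1, 0]), 3)   # [[0, 2, 3], [4, 5, 0]]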