Why does the model.fit() method of Keras not accept a tensor as the feature or label argument, while it accepts NumPy arrays? - numpy

The last time I was training a DNN model, I noticed that when I try to train my model with a tensor (dtype = float64) it always gives an error, but when I train the model with a NumPy array with the same specs (shape, values, dtype) as the tensor, it shows no error. Why is that?
Code
To use the features and labels as tensors, replace the numpy arrays in the 2nd script with:
celsius_q = tf.Variable([-40, -10, 0, 8, 15, 22, 38], dtype=tf.float64)
fahrenheit_a = tf.Variable([-40, 14, 32, 46, 59, 72, 100], dtype=tf.float64)
When the features and labels are tensors, it shows this error:
Error: ValueError: Failed to find data adapter that can handle input:
<class 'tensorflow.python.ops.resource_variable_ops.ResourceVariable'>,
<class 'tensorflow.python.ops.resource_variable_ops.ResourceVariable'>

Use tf.constant to create an input tensor in TensorFlow.
A tf.Variable can be changed later, so that type of tensor is not suitable as model input; this is why Keras's data adapter fails on the ResourceVariable inputs above. Please refer to this answer: https://stackoverflow.com/a/44746203/20388268
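For reference, a minimal sketch of the fix (the single-unit model and the training settings are illustrative, assuming the usual Celsius-to-Fahrenheit setup of the question's script):
import tensorflow as tf
celsius_q = tf.constant([-40, -10, 0, 8, 15, 22, 38], dtype=tf.float64)
fahrenheit_a = tf.constant([-40, 14, 32, 46, 59, 72, 100], dtype=tf.float64)
# Constant tensors are handled by Keras's data adapter just like NumPy arrays
model = tf.keras.Sequential([tf.keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer=tf.keras.optimizers.Adam(0.1), loss='mean_squared_error')
model.fit(celsius_q, fahrenheit_a, epochs=10, verbose=False)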

Related

Different behavior of sequential API and functional API for tensorflow embedding

When I tried applying the same simple embedding function with the Sequential API and the Functional API in TensorFlow, I saw different results.
The results are as follows:
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
inputs = np.random.randint(0, 99, [32, 100, 1])
myLayer = layers.Embedding(input_dim=100, output_dim=8)
# Sequential API
sm = keras.Sequential()
sm.add(myLayer)
sm_out = sm(inputs)
sm_out.shape # Shape of sm_out is: TensorShape([32, 100, 8])
# Functional API
fm_out = myLayer(inputs)
fm_out.shape # Shape of fm_out is: TensorShape([32, 100, 1, 8])
Is it intended or a bug?
First of all, your second call is not a functional API call. You need to wrap your layer output (with a tf.keras.layers.Input) in a tf.keras.models.Model for this to be a functional API call.
Secondly, when you're calling the sequential model, it is smart enough to detect that the last dimension is 1 and ignore it when looking up embeddings (I'm not sure where exactly this is handled; maybe someone else can point it out). So when you pass in a tensor of [32, 100, 1], what the embedding layer really sees is a [32, 100] sized array. This, after the lookup, gets converted to a [32, 100, 8] sized tensor.
In your second call, when calling the layer directly, it doesn't do this. So it simply converts the [32, 100, 1] sized input to a [32, 100, 1, 8] sized output.
You can get the same result from both these methods if you set your inputs shape to [32, 100] or [32, 100, 2] (last dimension != 1).
I guess the lesson here is to always use the input_shape argument (in the first layer of the Sequential model) to prevent such unexpected behavior.
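For completeness, a minimal sketch of an actual functional-API call (same embedding layer; the input shape drops the trailing 1, and dtype='int32' makes the index input explicit):
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
inputs = np.random.randint(0, 99, [32, 100])
myLayer = layers.Embedding(input_dim=100, output_dim=8)
# Wrap an Input and the layer output in a Model -- this is a functional API call
x = keras.layers.Input(shape=(100,), dtype='int32')
fm = keras.models.Model(inputs=x, outputs=myLayer(x))
fm_out = fm(inputs)
print(fm_out.shape)  # (32, 100, 8), matching the Sequential result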

How to use customvision.ai to create object detection model for TensorFlow Lite?

I have an object detection model that I've created in https://customvision.ai. If I export it as a TensorFlow Lite model, I get a model that expects FLOAT32 [1, 416, 416, 3] as input and returns FLOAT32 [1, 13, 13, 35] as output (as per TensorFlow Lite's visualize.py).
I would like to use that model in an Android app. I've tried to load the .tflite model file into the TensorFlow Lite object detection sample app; however, it expects a different format. I get the following exception when running the app: java.lang.IllegalArgumentException: Cannot copy between a TensorFlowLite tensor with shape [1, 13, 13, 35] and a Java object with shape [1, 10, 4].
Is it feasible to adapt the sample app to use the model from customvision.ai?
How should I interpret the shape [1, 13, 13, 35]?
Thanks in advance!

The input dimension of the LSTM layer in Keras

I'm trying keras.layers.LSTM.
The following code works.
#!/usr/bin/python3
import tensorflow as tf
import numpy as np
from tensorflow import keras
data = np.array([1, 2, 3]).reshape((1, 3, 1))
x = keras.layers.Input(shape=(3, 1))
y = keras.layers.LSTM(10)(x)
model = keras.Model(inputs=x, outputs=y)
print(model.predict(data))
As shown above, the input data shape is (1, 3, 1), while the shape passed to the Input layer is (3, 1). I'm a little confused by this inconsistency in the dimensions.
If I use the following shape in the Input layer, it doesn't work:
x = keras.layers.Input(shape=(1, 3, 1))
The error message is as follows:
ValueError: Input 0 of layer lstm is incompatible with the layer: expected ndim=3, found ndim=4. Full shape received: [None, 1, 3, 1]
It seems that the rank of the input must be 3, but why should we use a rank-2 shape in the Input layer?
Keras works with "batches" of "samples". Since most models use variable batch sizes that you define only when fitting, for convenience you don't need to care about the batch dimension, only about the sample shape.
That said, when you use shape = (3,1), this is the same as defining batch_shape = (None, 3, 1) or batch_input_shape = (None, 3, 1).
The three options mean:
A variable batch size: None
With samples of shape (3, 1).
It's important to know this distinction especially when you are going to create custom layers, losses or metrics. The actual tensors all have the batch dimension and you should take that into account when making operations with tensors.
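To see the equivalence concretely, a small sketch (assuming a tf.keras version that accepts the batch_shape keyword):
import tensorflow as tf
from tensorflow import keras
# Both describe a variable-size batch of samples shaped (3, 1)
x1 = keras.layers.Input(shape=(3, 1))
x2 = keras.layers.Input(batch_shape=(None, 3, 1))
print(x1.shape)  # (None, 3, 1) -- the batch dimension is added for you
print(x2.shape)  # (None, 3, 1)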
Check out the documentation for tf.keras.Input. The syntax is as follows:
tf.keras.Input(
    shape=None,
    batch_size=None,
    name=None,
    dtype=None,
    sparse=False,
    tensor=None,
    **kwargs
)
shape: defines the shape of a single sample, with variable batch size.
Notice that shape should not include the batch size as its first value; if you need a fixed batch size, pass batch_size as a separate parameter explicitly.
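For example (the fixed batch size of 8 is purely illustrative):
import tensorflow as tf
# shape describes one sample; batch_size fixes the otherwise variable batch dimension
x = tf.keras.Input(shape=(3, 1), batch_size=8)
print(x.shape)  # (8, 3, 1)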

Tensorflow dataset with vectors of differing shapes

I am trying to create a dataset from vectors which can have differing lengths (the data column). I am currently using the following code:
import tensorflow as tf
data = [[1,2,3,4,5,6],[7,8,9,10]]
shapes = [[3,2],[2,2]]
classes = [0,1]
dataset = tf.data.Dataset.from_tensor_slices(
    {"data": tf.constant(data),
     "shape": tf.constant(shapes),
     "class": tf.constant(classes)})
iterator = dataset.make_one_shot_iterator().get_next()
with tf.Session() as sess:
    x = sess.run(iterator)
    print(x)
However, I get this error:
Traceback (most recent call last):
File "test2.py", line 7, in <module>
{"data": tf.constant(data),
File "/Users/[username]/Documents/University/Project/Application/env/lib/python3.6/site-packages/tensorflow/python/framework/constant_op.py", line 214, in constant
value, dtype=dtype, shape=shape, verify_shape=verify_shape))
File "/Users/[username]/Documents/University/Project/Application/env/lib/python3.6/site-packages/tensorflow/python/framework/tensor_util.py", line 442, in make_tensor_proto
_GetDenseDimensions(values)))
ValueError: Argument must be a dense tensor: [[1, 2, 3, 4, 5, 6], [7, 8, 9, 10]] - got shape [2], but wanted [2, 6].
What is the correct method to set up a dataset which can accept vectors of different lengths? This question addresses the issue when reading from a file; however, I am defining the data explicitly.
Either pad the tensors yourself or use sparse tensors.
I usually use sparse tensors. When you convert a sparse tensor to dense, you can specify what the size should be and have the padding done for you.
The usual cases for such tensors are input strings, bags of words, or sequences. The embedding operations handle strings and bags of words; sequences are usually handled with RNN-related operations (check out tf.nn.static_rnn, for example).
In general you want the tensors to end up with the same length within a batch, because matrix operations need matrix operands.
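As a concrete sketch of the padding route, in the same TF1 style as the question (from_generator sidesteps the dense tf.constant, and padded_batch pads "data" to the longest sequence in each batch):
import tensorflow as tf
data = [[1, 2, 3, 4, 5, 6], [7, 8, 9, 10]]
shapes = [[3, 2], [2, 2]]
classes = [0, 1]
def gen():
    for d, s, c in zip(data, shapes, classes):
        yield {"data": d, "shape": s, "class": c}
dataset = tf.data.Dataset.from_generator(
    gen,
    output_types={"data": tf.int32, "shape": tf.int32, "class": tf.int32},
    output_shapes={"data": tf.TensorShape([None]),
                   "shape": tf.TensorShape([2]),
                   "class": tf.TensorShape([])})
dataset = dataset.padded_batch(
    2, padded_shapes={"data": [None], "shape": [2], "class": []})
iterator = dataset.make_one_shot_iterator().get_next()
with tf.Session() as sess:
    print(sess.run(iterator))  # "data" rows are zero-padded to length 6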

Deconvolutions/Transpose_Convolutions with tensorflow

I am attempting to use tf.nn.conv3d_transpose; however, I am getting an error indicating that my filter and output shapes are not compatible.
I have a tensor of size [1,16,16,4,192]
I am attempting to use a filter of [1,1,1,192,192]
I believe that the output shape would be [1,16,16,4,192]
I am using "same" padding and a stride of 1.
Eventually, I want to have an output shape of [1,32,32,7,"does not matter"], but I am attempting to get a simple case to work first.
Since these tensors are compatible in a regular convolution, I believed that the opposite, a deconvolution, would also be possible.
Why is it not possible to perform a deconvolution on these tensors? Could I get an example of a valid filter size and output shape for a deconvolution on a tensor of shape [1,16,16,4,192]?
Thank you.
I have a tensor of size [1,16,16,4,192]
I am attempting to use a filter of [1,1,1,192,192]
I believe that the output shape would be [1,16,16,4,192]
I am using "same" padding and a stride of 1.
Yes, the output shape will be [1,16,16,4,192].
Here is a simple example showing that the dimensions are compatible:
import tensorflow as tf
i = tf.Variable(tf.constant(1., shape=[1, 16, 16, 4, 192]))  # input tensor
w = tf.Variable(tf.constant(1., shape=[1, 1, 1, 192, 192]))  # filter
# padding defaults to 'SAME', matching the question
o = tf.nn.conv3d_transpose(i, w, [1, 16, 16, 4, 192], strides=[1, 1, 1, 1, 1])
print(o.get_shape())  # (1, 16, 16, 4, 192)
There must be some problem in your implementation other than the dimensions.
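As a follow-up for the [1, 32, 32, 7, ...] target: with 'VALID' padding each spatial dimension of a transposed convolution grows as (in - 1) * stride + kernel, so a kernel of [2, 2, 1] with stride 2 maps [16, 16, 4] to [32, 32, 7]. A sketch (the 64 output channels are an arbitrary choice):
import tensorflow as tf
i = tf.Variable(tf.constant(1., shape=[1, 16, 16, 4, 192]))
# filter layout is [depth, height, width, output_channels, in_channels]
w = tf.Variable(tf.constant(1., shape=[2, 2, 1, 64, 192]))
# (16-1)*2+2 = 32, (16-1)*2+2 = 32, (4-1)*2+1 = 7
o = tf.nn.conv3d_transpose(i, w, [1, 32, 32, 7, 64],
                           strides=[1, 2, 2, 2, 1], padding='VALID')
print(o.get_shape())  # (1, 32, 32, 7, 64)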