How to deal with Tensorflow model.predict() value error?

I am getting the following error in my code:
WARNING:tensorflow:Model was constructed with shape (None, 3) for input KerasTensor(type_spec=TensorSpec(shape=(None, 3), dtype=tf.float32, name='dense_input'), name='dense_input', description="created by layer 'dense_input'"), but it was called on an input with incompatible shape (None,).
and here is my code:
import numpy as np
import tensorflow as tf
inum = np.array([[1,1,2],[2,25,6],[32,4,7],[8,9,0]], dtype="float")
onum = np.array([3,56,135,72],dtype="float")
l0 = tf.keras.layers.Dense(units=4, input_shape=(3,))
l1 = tf.keras.layers.Dense(units=4)
l2 = tf.keras.layers.Dense(units=4)
l3 = tf.keras.layers.Dense(units=1)
model = tf.keras.Sequential([l0,l1,l2,l3])
model.compile(loss="mean_squared_error", optimizer=tf.keras.optimizers.Adam(0.1))
history = model.fit(inum, onum, epochs=1200, verbose=False)
model.predict([2, 2, 4])  # this call triggers the warning above
I am very new to Machine Learning and have no idea what to do with this.
Any help is greatly appreciated.

Use
model.predict([[2,2,4]])
because Keras models treat inputs as a batch of data. So even if you only want to feed a single sample of shape [3], you should wrap it as a batch of shape [1, 3], as above.
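A quick sketch of the same idea with NumPy (reusing the trained model from the question; np.expand_dims adds the batch axis explicitly):
import numpy as np

sample = np.array([2, 2, 4], dtype="float")  # a single sample, shape (3,)
batch = np.expand_dims(sample, axis=0)       # shape (1, 3): a batch of one
print(model.predict(batch))                  # equivalent to model.predict([[2, 2, 4]])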

Related

Tensorflow input shape incompatible with layer

I'm trying to build a Sequential model with tensorflow.
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential(name="model")
model.add(keras.Input(shape=(786,)))
model.add(layers.Dense(2048, activation="relu", name="layer1"))
model.add(layers.Dense(786, activation="relu", name="layer2"))
model.add(layers.Dense(786, activation="relu", name="layer3"))
model.add(layers.Dense(786, activation="relu", name="output"))
model.summary()
model.compile(
    optimizer=tf.optimizers.Adam(),  # optimizer
    loss=keras.losses.CategoricalCrossentropy(),
    metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
history = model.fit(
    x_train,
    y_train,
    batch_size=1,
    epochs=5,
)
The input is a vector of length 768 (so the input shape is (768,), right?), representing a chess board:
def get_dataset():
    container = np.load('/content/drive/MyDrive/test_data_vector.npz')
    b, v = container['arr_0'], container['arr_1']
    v = np.asarray(v / abs(v).max() / 2 + 0.5, dtype=np.float32)  # normalization (0 - 1)
    return b, v
x_train, y_train = get_dataset()
print(x_train.shape)
print(y_train.shape)
>> (37, 786)  # there are 37 samples
>> (37, 786)
But I always get the error:
ValueError: Input 0 of layer model is incompatible with the layer: expected axis -1 of input shape to have value 786 but received input with shape (1, 1, 768)
I tried np.expand_dims(), which ended in the same error.
The error is just a typo: as the user mentioned, the issue is resolved by changing the shape from 786 to 768.
One suggestion based on the model structure: the number of units in a Dense layer is not related to your input shape, so you don't have to match that number. Unit counts like 2048 and 786 are quite large and may not help the model learn better. Try smaller numbers like 32 or 64; you can refer to some of the examples in the TensorFlow documentation.
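Putting both points together, a hedged sketch of what the corrected model could look like (768 comes from the actual data vectors; the smaller unit counts and the mean-squared-error loss are assumptions for illustration, since the original mixed a categorical loss with a sparse metric):
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential(name="model")
model.add(keras.Input(shape=(768,)))  # 768, matching the data, not 786
model.add(layers.Dense(64, activation="relu", name="layer1"))
model.add(layers.Dense(64, activation="relu", name="layer2"))
model.add(layers.Dense(786, activation="sigmoid", name="output"))  # width kept at 786 to match the printed y shape

model.compile(
    optimizer=tf.optimizers.Adam(),
    loss=keras.losses.MeanSquaredError(),  # assumption: regression on 0-1 normalized targets
)
model.summary()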

call keras model as a function

I tried to test calling a Keras model directly with a demo; the code is as simple as below.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.models.Sequential()
model.add(layers.Embedding(input_dim=100, output_dim=32, input_length=5))
model.add(layers.Flatten())
model.add(layers.Dense(units=5, activation='sigmoid'))
f = np.random.randint(0, 100, 5)
print(model(f))
It raises an error:
ValueError: Input 0 of layer dense is incompatible with the layer: expected axis -1 of input shape to have value 160 but received input with shape (5, 32)
Is the data I mocked in the wrong format, or do Keras models not support this kind of debugging/testing?
It turns out I made a foolish mistake: the input needs to be batched. Changing f to f = np.random.randint(0, 100, (1, 5)) makes it work.
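For completeness, a runnable version of the fixed demo (same layers as above, with the missing NumPy import and the batched input):
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.models.Sequential()
model.add(layers.Embedding(input_dim=100, output_dim=32, input_length=5))
model.add(layers.Flatten())
model.add(layers.Dense(units=5, activation='sigmoid'))

f = np.random.randint(0, 100, (1, 5))  # a batch of one sequence of length 5
print(model(f))                        # shape (1, 5); Flatten sees 5 * 32 = 160 features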

Unable to convert TensorFlow 1.0 code to TensorFlow 2.0

I have TensorFlow 1.0 code and am unable to convert it to TensorFlow 2.0 using the syntax below.
Could you please help me out?
A)
lstm_cell =tf.keras.layers.LSTM(units=hidden_unit)
#lstm_cell = tf.compat.v1.nn.rnn_cell.DropoutWrapper(lstm_cell, output_keep_prob=self.dropout_keep_prob)
Q1) How do I use dropout for the lstm_cell in TF 2.0?
B)
self._initial_state = lstm_cell.zero_state(self.batch_size, tf.float32)
Q2) When I use the above syntax, I get the error "LSTM cell does not have zero_state cell for TF2.0".
How do I initialize the LSTM cell?
C) How do I use the tf.keras.layers.RNN cell in TF 2.0?
Thanks @AlexisBRENON!
Here is my code; please let me know if I made any mistake.
lstm_cell = tf.keras.layers.LSTM(units=hidden_unit)
lstm_cell = tf.nn.RNNCellDropoutWrapper(lstm_cell, output_keep_prob=self.dropout_keep_prob)
self._initial_state = lstm_cell.get_initial_state(self.batch_size, tf.float32)
inputs = [tf.squeeze(input_, [1]) for input_ in tf.split(pooled_concat, num_or_size_splits=int(reduced), axis=1)]
outputs, state_size = tf.keras.layers.RNN(lstm_cell, inputs, initial_state=self._initial_state, return_sequences=self.real_len)
# Want to collect the appropriate last words into `output` (dimension: batch x embedding_size)
output = outputs[0]
ERROR:
self._initial_state = lstm_cell.get_initial_state(self.batch_size, tf.float32)
ValueError: slice index 0 of dimension 0 out of bounds. for 'strided_slice' (op: 'StridedSlice') with input shapes: [0], [1], [1], [1] and with computed input tensors: input[1] = <0>, input[2] = <1>, input[3] = <1>.
For the RNN dropout, DropoutWrapper has been moved to tf.nn.RNNCellDropoutWrapper.
I suppose that tf.keras.layers.LSTMCell.get_initial_state is the new name of zero_state.
You should be more precise about what you want to do with RNNs. tf.keras.layers.RNN is a base class for recurrent layers and should not be used as is. Instead, you should use one of its sub-classes like SimpleRNN, GRU or LSTM, or write your own sub-class. Take a look at the tutorial on recurrent neural networks.
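For Q1, a minimal TF2 sketch (the hidden_unit and dropout values are assumptions standing in for the question's config; note that Keras takes a drop rate, while the old output_keep_prob was a keep probability):
import tensorflow as tf

hidden_unit = 64     # assumed value, for illustration
dropout_rate = 0.5   # drop rate = 1 - output_keep_prob

# In TF2, dropout is a constructor argument of the Keras LSTM layer,
# so no wrapper is needed; the initial state defaults to zeros.
lstm = tf.keras.layers.LSTM(
    units=hidden_unit,
    dropout=dropout_rate,            # dropout on the layer inputs
    recurrent_dropout=dropout_rate,  # dropout on the recurrent state
    return_sequences=True,
    return_state=True,
)

x = tf.random.normal((8, 20, 32))    # (batch, time, features)
outputs, state_h, state_c = lstm(x)  # dropout is only active when called with training=True
print(outputs.shape)                 # (8, 20, 64)
The zero initial state is implicit here; if you need it explicitly, tf.keras.layers.LSTMCell.get_initial_state(batch_size=..., dtype=...) returns it, as noted above.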

Why does tensorflow TFLiteConverter.from_session require the same size for input and output?

I am trying to use TFLiteConverter to convert my network, so I tried the sample code first and it works. But after some modification, it raises an error. It seems the input_array and output_array must be the same size, and I just don't understand why. Can anybody help me?
I modified the sizes of img and var; var changed from [1, 64, 64, 3] to [1, 64, 3, 1].
The complete code is pasted below:
import tensorflow as tf

img = tf.placeholder(name="img", dtype=tf.float32, shape=(1, 64, 64, 1))
var = tf.get_variable("weights", dtype=tf.float32, shape=(1, 64, 3, 1))
val = tf.matmul(img, var)
out = tf.identity(val, name="out")
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(val.shape)
    converter = tf.lite.TFLiteConverter.from_session(sess, [img], [out])
    tflite_model = converter.convert()
    open("converted_model.tflite", "wb").write(tflite_model)
The ERROR message:
ValueError: Dimensions must be equal, but are 1 and 3 for 'MatMul' (op: 'BatchMatMulV2') with input shapes: [1,64,64,1], [1,64,3,1].
The problem is not with the TFLite conversion, but with building the graph in the first place.
tf.matmul operates on the innermost 2-D matrices in your tensors. So in your case, you are trying to matrix-multiply a matrix of shape 64x1 by a matrix of shape 3x1, which is not valid: matrix multiplication requires that the number of columns of the first operand equal the number of rows of the second, but here 1 != 3, so it doesn't work.
For example, replace the 3 by a 1 and it will work:
import tensorflow as tf
img = tf.placeholder(name="img", dtype=tf.float32, shape=(1, 64, 64, 1))
var = tf.get_variable("weights", dtype=tf.float32, shape=(1, 64, 1, 1))  # 3 replaced by 1
val = tf.matmul(img, var)
out = tf.identity(val, name="out")
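To see the shape rule in isolation, a small check (TF 2.x eager syntax for brevity; shapes taken from the question):
import tensorflow as tf

a = tf.ones((1, 64, 64, 1))   # innermost matrix: 64x1
b = tf.ones((1, 64, 1, 1))    # innermost matrix: 1x1, so columns of a (1) match rows of b (1)
print(tf.matmul(a, b).shape)  # (1, 64, 64, 1)

# With b of shape (1, 64, 3, 1) the innermost matrices would be 64x1 and 3x1,
# and tf.matmul raises the same "Dimensions must be equal" error.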

How to use tensorflow SimpleRNNCell to process a batched dataset?

I'm using TensorFlow to create a Seq2Seq model and trying to process the dataset in mini-batches. When I build the dataset using the batch() method, the dataset shape becomes (None, 10). However, when the data is fed to SimpleRNNCell, it raises this error:
ValueError: Shape must be rank 2 but is rank 1 for 'simple_rnn_cell/MatMul_1' (op: 'MatMul') with input shapes: [10], [10,10].
The code is like this:
def decoder(self, input_x, real_y, encoder_outputs, training=False):
    decoder_state, cell_states = encoder_outputs, []
    predict_shape = (5, 1)
    output = tf.convert_to_tensor(np.zeros(predict_shape), dtype=tf.float32)
    for x in range(self.max_output):
        # the line below raises the error; output has shape (5, 1) and decoder_state has shape (?, 10)
        output, decoder_state = self.decoder_rnn(output, decoder_state)