What is wrong with the simple code in Keras below? - tensorflow

I have been struggling for the last hour to understand what I am doing wrong. I am a novice with neural networks, but this is not my first piece of code.
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

def simple_model(lr=0.1):
    X = Input(shape=(6144,))
    out = Dense(1)(X)
    model = Model(inputs=X, outputs=out)
    opt = tf.keras.optimizers.SGD(learning_rate=lr)
    model.compile(optimizer=opt, loss='mean_squared_error')
    model.summary()
    return model

mod = simple_model()
a = np.zeros(6144)
v = mod.predict(a)
Running this I get the following error:
WARNING:tensorflow:Model was constructed with shape (None, 6144) for input Tensor("input_1:0", shape=(None, 6144), dtype=float32), but it was called on an input with incompatible shape (32, 1).
......
ValueError: Input 0 of layer dense is incompatible with the layer: expected axis -1 of input shape to have value 6144 but received input with shape [32, 1]
Where does this [32, 1] come from?!
I am sure there is some silly mistake in my code, but I can't see it :(
P.S. It does compile the model and prints the summary before throwing the error.

mod = simple_model()
a = np.zeros(6144)
# Add this line
a = np.expand_dims(a,axis=0)
v = mod.predict(a)
The reason for the error is that Keras + TensorFlow only make batched predictions: predict treats your 1-D array of 6144 values as 6144 samples with one feature each, and the default batch_size=32 slices them into batches of shape (32, 1), which is exactly the shape in the error. With expand_dims we instead create a batch of size 1 holding a single sample with 6144 features.
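To see the shapes concretely (a reshape(1, -1) would have the same effect as expand_dims):
import numpy as np
a = np.zeros(6144)
print(a.shape)                 # (6144,) -> predict sees 6144 samples of 1 feature
a = np.expand_dims(a, axis=0)
print(a.shape)                 # (1, 6144) -> one sample with 6144 features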

Related

Keras (TensorFlow) LSTM error in Spyder and Jupyter

When I use Google Colab there is no error in the code, but when I use Spyder or Jupyter the error occurs.
Model_10 = Sequential()
Model_10.add(LSTM(128, batch_input_shape = (1,10,5), stateful = True))
Model_10.add(Dense(5, activation = 'linear'))
Model_10.compile(loss = 'mse', optimizer = 'rmsprop')
Model_10.fit(x_train, y_train, epochs=1, batch_size=1, verbose=2, shuffle=False, callbacks=[history])
x_train_data.shape = (260,10,5)
y_train_data.shape = (260,1,5)
I'm using Python 3.7 and TensorFlow 2.0.
I don't know why the error occurs in Anaconda only.
Error message:
ValueError: A target array with shape (260, 1, 5) was passed for an output of shape (1, 5) while using as loss mean_squared_error. This loss expects targets to have the same shape as the output.
You should reshape your labels/targets:
y_train_data = y_train_data.reshape((260,5))
Since you're using batch_input_shape in the input layer with a batch size of 1, the model takes one example from your labels at each step, so each target has shape (1, 5) anyway.
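As a quick check with dummy labels (np.squeeze(y, axis=1) would work equally well here):
import numpy as np
y_train_data = np.zeros((260, 1, 5))           # dummy labels with the extra axis
y_train_data = y_train_data.reshape((260, 5))
print(y_train_data.shape)                      # (260, 5): matches the Dense(5) output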

tensorflow multiply two layers

I have two inputs to my network. One input is fed through a few linear layers, and the result should then be multiplied elementwise with the other input.
input_a = Input(shape=input_a_shape)
x = Dense(side_channel_speed_output_dimension, activation="relu")(input_a)
x = tf.reshape(x, [input_shape_image[0], input_shape_image[1]])
x = tf.expand_dims(x, input_shape_image[2])
x = tf.repeat(x, repeats=input_shape_image[2], axis=2)
input_b = Input(shape=input_shape_b)
At this stage I would like to multiply input_a and input_b. How do I do that?
I tried:
input = keras.layers.multiply([input_b, input_a])
There I got this error:
ValueError: Graph disconnected: cannot obtain value for tensor Tensor("input_2:0", shape=(None, 60, 40, 2), dtype=float32) at layer "input_2". The following previous layers were accessed without issue: ['input_1', 'dense', 'tf_op_layer_Reshape', 'tf_op_layer_ExpandDims', 'tf_op_layer_Repeat/Shape', 'tf_op_layer_Repeat/strided_slice', 'tf_op_layer_Repeat/strided_slice_1', 'tf_op_layer_Repeat/ExpandDims', 'tf_op_layer_Repeat/Tile', 'tf_op_layer_Repeat/concat']
I also tried just tf.multiply(a, b). It does not work either.
Does someone know, how to solve this?
Thanks
I got it now. The graph was disconnected because I multiplied the raw inputs with each other instead of using the processed tensor; multiplying the other input with x (the output of the Dense chain) works:
x = keras.layers.multiply([input_image, x])
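For reference, a minimal sketch of this pattern with made-up shapes (the sizes are illustrative, not the question's real dimensions): process one input, multiply the processed tensor with the other input, and pass both Inputs to Model so the graph stays connected.
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, Reshape, Multiply
from tensorflow.keras.models import Model

input_a = Input(shape=(10,))               # illustrative side-channel input
input_b = Input(shape=(4, 5, 1))           # illustrative image-like input
x = Dense(20, activation="relu")(input_a)
x = Reshape((4, 5, 1))(x)                  # bring x to input_b's shape
out = Multiply()([input_b, x])             # elementwise product
model = Model(inputs=[input_a, input_b], outputs=out)
model.summary()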

why tensorflow TFLiteConverter.from_session require the same size for input and output

I am trying to use TFLiteConverter to convert my network, so I tried the sample code first and it works. But after some modification it raises an error. It seems the input_array and output_array must be the same size, and I just don't understand why. Can anybody help me?
I modified the size of img, and the size of var from [1, 64, 64, 3] to [1, 64, 3, 1]. The complete code is pasted below:
import tensorflow as tf

img = tf.placeholder(name="img", dtype=tf.float32, shape=(1, 64, 64, 1))
var = tf.get_variable("weights", dtype=tf.float32, shape=(1, 64, 3, 1))
val = tf.matmul(img, var)
out = tf.identity(val, name="out")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(val.shape)
    converter = tf.lite.TFLiteConverter.from_session(sess, [img], [out])
    tflite_model = converter.convert()
    open("converted_model.tflite", "wb").write(tflite_model)
The error message:
ValueError: Dimensions must be equal, but are 1 and 3 for 'MatMul' (op: 'BatchMatMulV2') with input shapes: [1,64,64,1], [1,64,3,1].
The problem is not with the TFLite conversion, but with building the graph in the first place.
tf.matmul operates on the innermost 2-D matrices in your tensors. So in your case, you are trying to matrix-multiply a matrix of shape 64x1 by a matrix of shape 3x1, which is not valid: matrix multiplication requires the number of columns of the first operand to equal the number of rows of the second, and here 1 != 3.
For example, replace the 3 by a 1 and it will work:
import tensorflow as tf
img = tf.placeholder(name="img", dtype=tf.float32, shape=(1, 64, 64, 1))
var = tf.get_variable("weights", dtype=tf.float32, shape=(1, 64, 1, 1))
val = tf.matmul(img, var)
out = tf.identity(val, name="out")
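As a sanity check of the batched matmul shape rule, the same shapes in NumPy (illustrative):
import numpy as np
img = np.zeros((1, 64, 64, 1))
var = np.zeros((1, 64, 1, 1))
print(np.matmul(img, var).shape)   # (1, 64, 64, 1): inner dims 1 and 1 match
# with var of shape (1, 64, 3, 1) the inner dims are 1 vs 3 -> ValueError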

How to replace the input channel shape from (224, 224, 3) to (224, 224, 1) in VGG16?

I am using VGG16 for transfer learning. My images are grayscale, so I need to change the input channel shape of VGG16 from (224, 224, 3) to (224, 224, 1). I tried the following code and got this error:
TypeError: build() takes from 1 to 2 positional arguments but 4 were given
Can anyone tell me where I am going wrong?
vgg16_model = load_model('Fetched_VGG.h5')
vgg16_model.summary()

# transform the model to Sequential
model = Sequential()
for layer in vgg16_model.layers[1:-1]:
    model.add(layer)

# Freeze the layers (prevent the weights from being updated)
for layer in model.layers:
    layer.trainable = False

model.build(224, 224, 1)
model.add(Dense(2, activation='softmax', name='predictions'))
You can't: even if you get rid of the input layer, this model's graph has already been built, and the first conv layer expects an input with 3 channels. I don't think there is an easy workaround to make it accept 1 channel, if there is one at all.
What you need to do instead is repeat your data along the third dimension, so the same grayscale image fills all 3 bands in place of RGB; that works just fine.
If your image has shape (224, 224, 1):
import numpy as np
gray_image_3band = np.repeat(gray_img, repeats=3, axis=-1)
If your image has shape (224, 224):
gray_image_3band = np.repeat(gray_img[..., np.newaxis], repeats=3, axis=-1)
This way you don't need to call model.build() at all; keep the input layer. But if you ever want to call it, you need to pass the shape as a tuple, like this:
model.build((224, 224, 1))  # correct, notice the parentheses
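Putting it together, a minimal end-to-end sketch (this assumes the stock tf.keras.applications.VGG16 rather than the 'Fetched_VGG.h5' file from the question, and a dummy grayscale batch):
import numpy as np
from tensorflow.keras.applications import VGG16

model = VGG16(weights=None)                          # or weights='imagenet'
gray_img = np.random.rand(1, 224, 224, 1)            # dummy grayscale batch
rgb_like = np.repeat(gray_img, repeats=3, axis=-1)   # (1, 224, 224, 3)
print(model.predict(rgb_like).shape)                 # (1, 1000)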

Weights Variable modification after creation and still want to train it in Tensorflow

Is it possible to reassign weights with values different from the initialized ones and still train the model successfully?
For example:
Weights = tf.Variable(tf.zeros(shape), name="weights")
update_weights = Weights + steps * bytes
Weights = Weights.assign(update_weights)
But I get the following error when I train it using AdamOptimizer:
Trying to optimize unsupported type <tf.Tensor 'conv_1/Assign:0' shape=(5, 5, 1, 30) dtype=float32_ref>
To convert the tensor into a variable suitable for the Adam optimizer's minimize(), I used this:
q_weights = tf.Variable(q_weights.assign(weights))
But got the following error!
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input 0 of node conv_2/Variable/conv_2/Assign_conv_2/Variable_0 was passed float from conv_2/Variable/cond/Merge:0 incompatible with expected float_ref.
Full flow of the code:
Weights = tf.Variable(tf.zeros(shape), name="weights")
update_weights = Weights + steps * bytes
Weights = Weights.assign(update_weights)
conv = tf.nn.conv2d(input, Weights, ...)
act = tf.nn.relu(...)
tf.add_to_collection('train_params', Weights)
# ... dot product ...
tf.add_to_collection('train_params', Weights)
# ... all remaining layers ...
tf.add_to_collection('train_params', Weights)
# ... logits ...
tf.add_to_collection('train_params', Weights)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits))
back_prop = tf.train.AdamOptimizer(LR).minimize(loss, var_list=tf.get_collection('train_params'))
# repeat for every iteration
Thanks for the help.
Just add a control dependency to your graph, since you want to say "before starting the computation, update the weights like this", which translates into:
Weights = tf.Variable(tf.zeros(shape), name="weights")
update_weights = tf.assign(Weights, Weights + steps * bytes)

with tf.control_dependencies([update_weights]):
    conv = tf.nn.conv2d(input, Weights, ...)
    # etc.
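A self-contained TF1-style sketch of this pattern (the shape and step_delta are illustrative stand-ins for the question's steps * bytes):
import tensorflow as tf

shape = (5, 5, 1, 30)
step_delta = 0.01                                    # hypothetical update amount

Weights = tf.Variable(tf.zeros(shape), name="weights")
update_weights = tf.assign(Weights, Weights + step_delta)

with tf.control_dependencies([update_weights]):
    # tf.identity forces the read to happen after the assign has run
    current = tf.identity(Weights)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(current).max())                   # 0.01: the update ran first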