Tensorflow Keras - feeding input to multiple model layers in parallel

With tensorflow.keras (Tensorflow 2), I want to feed my input into different layers of my model. So we are looking at a graph where the input layer branches off into 3 paths leading to 3 different convolutional layers, giving the model 3 outputs.
Pseudocode is something like this:
inputs = Input()
conv1 = Conv2D()(inputs)
conv2 = Conv2D()(inputs)
conv3 = Conv2D()(inputs)
model = Model(inputs=inputs, outputs=[conv1, conv2, conv3])
But I'm getting the following error when I try to fit the model with a tf.data Dataset:
ValueError: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 3 array(s), for inputs ['conv2d_1', 'conv2d_2', 'conv2d_3'] but instead got the following list of 1 arrays: [<tf.Tensor 'ExpandDims:0' shape=(None, 1) dtype=int32>]
I have verified that my code works fine if I comment out the branches and set outputs=conv1.
Note: I am not trying to feed in multiple different inputs (there are many questions and answers on here that solve this). Just one input which should branch off.

Issue solved. I should have been providing a list of three label arrays, one per output.

Tensorflow Keras output layer shape weird error

I am fairly new to TF, Keras and ML in general.
I am trying to implement a very simple MLP with an input shape of (batch_size, 3, 2) and an output shape of (batch_size, 3), that is (if I got it right): for every 3x2 feature matrix there is a corresponding 3-value label array.
Here is how I create the model:
model = tf.keras.Sequential([
    tf.keras.layers.Dense(50, tf.keras.activations.relu, input_shape=(3, 2)),
    tf.keras.layers.Dense(3)
])
and these are the X and y shapes:
X_train.shape,y_train.shape
TensorShape([64,3,2]),TensorShape([64,3])
On model.fit I am facing a weird error I cannot understand:
ValueError: Dimensions must be equal, but are 3 and 32 for ... with input shapes: [32,3,3] and [32,3]
I have no clue what's going on. I understand the batch size is 32, but where does that [32,3,3] come from?
Moreover, if I lower the number of samples from the original 64, say to shapes (19,3,2) and (19,3), I get the following error instead:
InvalidArgumentError: required broadcastable shapes at loc(unknown)
What's even more weird for me is that if I specify a single unit for the output (last) layer instead of 3, like this:
model = tf.keras.Sequential([
    tf.keras.layers.Dense(50, tf.keras.activations.relu, input_shape=(3, 2)),
    tf.keras.layers.Dense(1)
])
model.fit works, but the predictions have shape (1,3,1) instead of my expected (3,)
I am very confused.
Whenever you have no idea what happens to your data as it travels through your model, use model.summary() to see the details, including what happens to the shape of the data in each layer.
In this case, each input sample is a 2D array and each target is a 1D array, but you only used Dense layers. A Dense layer transforms the last axis and leaves every other axis untouched: Dense(50) turns (batch, 3, 2) into (batch, 3, 50), and the final Dense(3) yields (batch, 3, 3). That is the [32,3,3] in the error, which cannot be matched against the [32,3] targets. The same applies to an image as input: you cannot feed it directly to a Dense layer and expect a flat output. Instead you should use other layers such as Conv2D, or Flatten your input (make it 1D) before feeding it to the Dense layer; otherwise the extra dimension carries through to the output.
Inference: if your input and output dimensionalities differ, the shape needs to change somewhere in your model. The most common ways to do so are a Flatten layer, GlobalAveragePooling, and so on.
When you pass a multi-dimensional input to a Dense layer and want a flat output, the input should be flattened first. There are 2 ways to deal with this:
Way 1: add a Flatten layer as the first layer of your model:
model = Sequential()
model.add(Flatten(input_shape=(3,2)))
model.add(Dense(50, 'relu'))
model.add(Dense(3))
Way 2: convert the 2D features to 1D before passing the inputs to your model, keeping the batch dimension (hence the -1):
X_train = tf.reshape(X_train, shape=(-1, 6))
Then change the input shape of the first layer to match:
model.add(Dense(50, 'relu', input_shape=(6,)))

How to correct shape of Keras input into a 3D array

I have a Keras model that fails with this error when I fit it:
> kerasInput = Input(shape=(None, 47))
> LSTM(..)(kerasInput)
...
> model.fit(realInput, ...)
ValueError: Error when checking input: expected input_1 to have 3 dimensions, but got array with shape (10842, 1)
When looking at my input, I found it has a shape of (10842, 1), but each row actually contains a list of lists. I can verify this with:
> pd.DataFrame(realInput[0]).shape
(260, 47)
How could I correct my input shape?
When trying with keras Reshape layer, the creation of the model fails with:
Model inputs must come from `keras.layers.Input` (thus holding past layer metadata), they cannot be the output of a previous non-Input layer. Here, a tensor specified as input to your model was not an Input tensor, it was generated by layer reshape_8.
Note that input tensors are instantiated via `tensor = keras.layers.Input(shape)`.
The tensor that caused the issue was: reshape_8/Reshape:0
You can use the numpy.expand_dims method to convert the shape to 3D:
import numpy as np
np.expand_dims(realInput,axis=0)
You can also use a Keras Reshape layer: https://keras.io/layers/core/#reshape
Or reshape with NumPy, setting the third dimension to 1:
# Something similar to this
X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 1))
Edit: added the np.reshape method.
Refer this repository: https://github.com/NilanshBansal/Stock_Price_Prediction/blob/master/Stock_Price_Prediction_20_days_later_4_LSTM.ipynb
As I said before in the comments, you will need to reshape your data to match what the LSTM expects to receive, and also make sure input_shape is set correctly.
I found this post quite helpful when I struggled with inputting to an LSTM layer, and I hope it helps you too: Reshape input for LSTM

Tensorflow concatenate layers not recognizing list of inputs

I am trying to create a tensorflow model using keras.Sequential() as follows:
nn_1 = tf.keras.Sequential()
nn_2 = tf.keras.Sequential()
model = tf.keras.Sequential()
model.add(tf.keras.layers.concatenate([nn_1, nn_2]))
When I try this, I get the following error:
ValueError: A `Concatenate` layer should be called on a list of at least 2 inputs
I don't understand why this is happening, since I am passing a list of 2 inputs to concatenate(). Am I missing something basic?
P.S. The same thing happened with dot() (which is what I was initially trying).

Error when checking target: expected time_distributed_5 to have 3 dimensions, but got array with shape (14724, 1)

Trying to build a single-output regression model, but there seems to be a problem in the last layer:
inputs = Input(shape=(48, 1))
lstm = CuDNNLSTM(256,return_sequences=True)(inputs)
lstm = Dropout(dropouts[0])(lstm)
#aux_input
auxiliary_inputs = Input(shape=(48, 7))
auxiliary_outputs = TimeDistributed(Dense(4))(auxiliary_inputs)
auxiliary_outputs = TimeDistributed(Dense(7))(auxiliary_outputs)
#concatenate
output = keras.layers.concatenate([lstm, auxiliary_outputs])
output = TimeDistributed(Dense(64, activation='linear'))(output)
output = TimeDistributed(Dense(64, activation='linear'))(output)
output = TimeDistributed(Dense(1, activation='linear'))(output)
model = Model(inputs=[inputs, auxiliary_inputs], outputs=[output])
I am new to Keras... I am getting the following error:
ValueError: Error when checking target: expected time_distributed_5 to have 3 dimensions, but got array with shape (14724, 1)
Okay guys, I think I found a fix.
According to https://keras.io/layers/wrappers/, TimeDistributed applies a Dense layer to each timestep (in my case, 48 timesteps). So the output of my final layer below:
output = TimeDistributed(Dense(1, activation='linear'))(output)
will be (?, 48, 1), hence the dimension mismatch. However, if I want a single regression output, I have to flatten the output of the final TimeDistributed layer,
so I added the following lines to fix it:
output = Flatten()(output)
output = (Dense(1, activation='linear'))(output)
so now the (?, 48, 1) TimeDistributed output is flattened into 48 inputs to the final Dense layer (49 parameters once the bias is included), which produces a single output.
Okay, the code works fine and I am getting proper results (the model learns). My only doubt is whether it is mathematically okay to flatten the TimeDistributed output into a plain Dense layer to get my result, as described above?
Can you provide more context on your problem? Test data, or at least more code. Why are you choosing this architecture in the first place? Would a simpler architecture (just the LSTM) do the trick? What are you regressing? Stacking multiple TimeDistributed Dense layers with linear activation functions probably isn't adding much to the model.

Keras: difference of InputLayer and Input

I made a model using Keras with Tensorflow. I use InputLayer with these lines of code:
img1 = tf.placeholder(tf.float32, shape=(None, img_width, img_heigh, img_ch))
first_input = InputLayer(input_tensor=img1, input_shape=(img_width, img_heigh, img_ch))
first_dense = Conv2D(16, 3, 3, activation='relu', border_mode='same', name='1st_conv1')(first_input)
But I get this error:
ValueError: Layer 1st_conv1 was called with an input that isn't a symbolic tensor. Received type: <class 'keras.engine.topology.InputLayer'>. Full input: [<keras.engine.topology.InputLayer object at 0x00000000112170F0>]. All inputs to the layer should be tensors.
When I use Input like this, it works fine:
first_input = Input(tensor=img1, shape=(224, 224, 3), name='1st_input')
first_dense = Conv2D(16, 3, 3, activation='relu', border_mode='same', name='1st_conv1')(first_input)
What is the difference between InputLayer and Input?
InputLayer is a layer.
Input is a tensor.
You can only call layers passing tensors to them.
The idea is:
outputTensor = SomeLayer(inputTensor)
So, only Input can be passed because it's a tensor.
Honestly, I have no idea about the reason for the existence of InputLayer. Maybe it's supposed to be used internally. I never used it, and it seems I'll never need it.
According to the TensorFlow website, "It is generally recommend to use the functional layer API via Input, (which creates an InputLayer) without directly using InputLayer."
Know more at this page here
Input: used for creating a functional model
inp = tf.keras.Input(shape=[?, ?, ?])
x = layers.Conv2D(.....)(inp)
InputLayer: used for creating a sequential model
model = tf.keras.Sequential()
model.add(tf.keras.layers.InputLayer(input_shape=[?, ?, ?]))
The other difference is that, when using InputLayer with a Keras Sequential model, it can be skipped by moving the input_shape parameter to the first layer after it. That is, in a Sequential model you can skip the InputLayer and specify the shape directly in the first layer, i.e., go from this:
model = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=(4,)),
tf.keras.layers.Dense(8)])
To this
model = tf.keras.Sequential([
tf.keras.layers.Dense(8, input_shape=(4,))])
To define it in simple words:
keras.layers.Input is used to instantiate a Keras tensor; in this case, your data is probably not a tf tensor yet, but maybe an np array.
keras.layers.InputLayer, on the other hand, is a layer, and it works with data already defined as one of the tf tensor types, i.e., it can be a ragged tensor, a constant, or another type.
I hope this helps!