predictions must be <= 1. Error while training LSTM model - tensorflow

So I just finished my model and wanted to start training, but I think something is wrong with my metrics. First, my model structure:
inputs = tf.keras.Input(shape=(None, nb_features), name='inputs')
x = tf.keras.layers.Masking(mask_value=data.MASK_VALUE)(inputs)
x = tf.keras.layers.LSTM(hidden_units,
                         return_sequences=True,
                         dropout=dropout_rate)(x)
dense = tf.keras.layers.Dense(nb_skills, activation='sigmoid')
outputs = tf.keras.layers.TimeDistributed(dense, name='outputs')(x)
Edit:
My model summary looks like this:
Model: "DKTModel"
inputs (InputLayer) [(None, None, 80)] 0
masking (Masking) (None, None, 80) 0
lstm (LSTM) (None, None, 100) 72400
outputs (TimeDistributed) (None, None, 40) 4040
=================================================================
Total params: 76,440
Trainable params: 76,440
Non-trainable params: 0
Edit:
Here are my compile and fit functions:
def compile(self, optimizer, metrics=None):
    def custom_loss(y_true, y_pred):
        y_true, y_pred = data.get_target(y_true, y_pred)
        return tf.keras.losses.binary_crossentropy(y_true, y_pred)

    super(DKTModel, self).compile(
        loss=custom_loss,
        optimizer=optimizer,
        metrics=metrics,
        experimental_run_tf_function=False)

def fit(self,
        dataset,
        epochs=1,
        verbose=1,
        callbacks=None,
        validation_data=None,
        shuffle=True,
        initial_epoch=0,
        steps_per_epoch=None,
        validation_steps=None,
        validation_freq=1):
    return super(DKTModel, self).fit(x=dataset,
                                     epochs=epochs,
                                     verbose=verbose,
                                     callbacks=callbacks,
                                     validation_data=validation_data,
                                     shuffle=shuffle,
                                     initial_epoch=initial_epoch,
                                     steps_per_epoch=steps_per_epoch,
                                     validation_steps=validation_steps,
                                     validation_freq=validation_freq)
I get the following error when running model.fit():
2 root error(s) found.
(0) INVALID_ARGUMENT: assertion failed: [predictions must be <= 1] [Condition x <= y did not hold element-wise:] [x (Sum_5:0) = ] [[[19.462822][19.5533848][19.5251656]]...] [y (Cast_11/x:0) = ] [1]
[[{{node assert_less_equal/Assert/AssertGuard/Assert}}]]
[[assert_less_equal_2/Assert/AssertGuard/pivot_f/_122/_201]]
(1) INVALID_ARGUMENT: assertion failed: [predictions must be <= 1] [Condition x <= y did not hold element-wise:] [x (Sum_5:0) = ] [[[19.462822][19.5533848][19.5251656]]...] [y (Cast_11/x:0) = ] [1]
[[{{node assert_less_equal/Assert/AssertGuard/Assert}}]]
0 successful operations.
0 derived errors ignored. [Op:__inference_train_function_7560]
Edit:
I changed the activation function from sigmoid to softmax but still get the same error, although the values are different:
2 root error(s) found.
(0) INVALID_ARGUMENT: assertion failed: [predictions must be <= 1] [Condition x <= y did not hold element-wise:] [x (Sum_5:0) = ] [[[0.99999994][1][1]]...] [y (Cast_11/x:0) = ] [1]
[[{{node assert_less_equal/Assert/AssertGuard/Assert}}]]
[[broadcast_weights_2/assert_broadcastable/AssertGuard/pivot_f/_58/_101]]
(1) INVALID_ARGUMENT: assertion failed: [predictions must be <= 1] [Condition x <= y did not hold element-wise:] [x (Sum_5:0) = ] [[[0.99999994][1][1]]...] [y (Cast_11/x:0) = ] [1]
[[{{node assert_less_equal/Assert/AssertGuard/Assert}}]]
0 successful operations.
0 derived errors ignored. [Op:__inference_train_function_7568]
Edit:
Here are the metrics I use to compile:
student_model.compile(
    optimizer=optimizer,
    metrics=[
        metrics.AUC(),
        metrics.Precision(),
        metrics.Recall()
    ])
OK, I just removed all the metrics and training at least started, but then I got another error which I am now trying to solve. This makes me think that some of the metrics are not applicable to my outputs, but I don't know how to change that. Maybe someone has encountered this problem before. If you need the full error rather than just the last part I showed above, let me know. I'm thankful for any help.
Thanks

I see that you have nb_skills outputs. If this number is > 2, I think the problem is that you are using sigmoid as the activation function.
Try substituting it with softmax.
Remember that for multi-class classification you should use a softmax activation function. The softmax function normalizes a set of N (nb_skills) real numbers into a probability distribution so that they sum to 1. You can think of the sigmoid as a softmax with N = 2 classes.
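To illustrate the point, here is a small sketch (the input values are just made-up examples, not from the post): sigmoid squashes each value independently, so the outputs can sum to well above 1, while softmax normalizes the whole vector into a distribution that sums to 1.
import tensorflow as tf

logits = tf.constant([2.0, 1.0, 0.1])
print(tf.math.sigmoid(logits))  # roughly [0.88, 0.73, 0.52] -> sums to > 1
print(tf.nn.softmax(logits))    # roughly [0.66, 0.24, 0.10] -> sums to 1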
Update:
From your updated code I now see that you are also using binary_crossentropy, and again this is not suited to a multi-class classification problem. So instead of:
tf.keras.losses.binary_crossentropy(y_true, y_pred)
try using:
tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred)
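For reference, a hedged sketch of how that suggestion could be plugged into the custom loss above. Note that sparse_categorical_crossentropy expects y_true to contain integer class indices and y_pred to be a probability distribution over the nb_skills classes; whether data.get_target returns targets in that form is an assumption here.
def custom_loss(y_true, y_pred):
    # assumption: data.get_target yields integer class indices in y_true
    y_true, y_pred = data.get_target(y_true, y_pred)
    return tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred)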

Related

batch size > 1 gives an error using TensorFlow 1.x

I am using this example of a VAE.
The only difference I made was changing the loss from binary cross-entropy to MSE, like this:
class OptimizerVAE(object):
    def __init__(self, model, learning_rate=1e-3):
        """
        OptimizerVAE initializer
        :param model: a model object
        :param learning_rate: float, learning rate of the optimizer
        """
        # reconstruction error (changed from binary cross entropy to MSE, hence the variable name)
        self.bce = tf.keras.losses.mse(model.x, model.logits)
        self.reconstruction_loss = tf.reduce_mean(tf.reduce_sum(self.bce, axis=-1))

        if model.distribution == 'normal':
            # KL divergence between normal approximate posterior and standard normal prior
            self.p_z = tf.distributions.Normal(tf.zeros_like(model.z), tf.ones_like(model.z))
            kl = model.q_z.kl_divergence(self.p_z)
            self.kl = tf.reduce_mean(tf.reduce_sum(kl, axis=-1)) * 0.1
        elif model.distribution == 'vmf':
            # KL divergence between vMF approximate posterior and uniform hyper-spherical prior
            self.p_z = HypersphericalUniform(model.z_dim - 1, dtype=model.x.dtype)
            kl = model.q_z.kl_divergence(self.p_z)
            self.kl = tf.reduce_mean(kl) * 0.1
        else:
            raise NotImplementedError

        self.ELBO = -self.reconstruction_loss - self.kl
        self.train_step = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(-self.ELBO)
        self.print = {'recon loss': self.reconstruction_loss, 'ELBO': self.ELBO, 'KL': self.kl}
When running the original architecture (2 MLP layers), the model runs perfectly no matter the batch size (specified as "None" in the GitHub code).
I am trying to change this to a convolutional model, but when I change just the encoder to this:
def _encoder(self, x):
    """
    Encoder network
    :param x: placeholder for input
    :return: tuple `(z_mean, z_var)` with mean and concentration around the mean
    """
    # 2 hidden layers encoder
    # h0 = tf.layers.dense(x, units=self.h_dim * 2, activation=self.activation)
    # h1 = tf.layers.dense(h0, units=self.h_dim, activation=self.activation)
    h1 = tf.layers.conv1d(x, filters=32, kernel_size=7, activation=tf.nn.relu)
    h1 = tf.layers.conv1d(h1, filters=64, kernel_size=7, activation=tf.nn.relu)
    h1 = tf.layers.conv1d(h1, filters=64, kernel_size=7, activation=tf.nn.relu)
    h1 = tf.layers.flatten(h1)
    h1 = tf.layers.dense(h1, 32, activation=tf.nn.relu)

    if self.distribution == 'normal':
        # compute mean and std of the normal distribution
        z_mean = tf.layers.dense(h1, units=self.z_dim, activation=None, name='z_output')
        z_var = tf.layers.dense(h1, units=self.z_dim, activation=tf.nn.softplus)
    elif self.distribution == 'vmf':
        # compute mean and concentration of the von Mises-Fisher
        z_mean = tf.layers.dense(h1, units=self.z_dim, activation=lambda x: tf.nn.l2_normalize(x, axis=-1))
        # the `+ 1` prevents collapsing behaviors
        z_var = tf.layers.dense(h1, units=1, activation=tf.nn.softplus) + 1
    else:
        raise NotImplementedError
    return z_mean, z_var
When running the model, I get the error:
InvalidArgumentError: Incompatible shapes: [32,1] vs. [32,512,1]
[[{{node gradients/SquaredDifference_grad/BroadcastGradientArgs}}]]
32 is the batch_size when running the model. The thing that is confusing me is that when I run this with batch_size = 1, the model runs!
Where is this going wrong? Is it the optimizer and the way it averages?
I solved the issue by reshaping the output from the decoder to the form (win_size, 1), since the MLP decoder fails to add that extra dimension!
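A rough sketch of that fix (the variable names are assumptions, since the decoder code is not shown): the decoder output just needs the trailing channel dimension added back before the MSE is computed, so it matches the (batch, win_size, 1) input.
# assuming `logits` is the decoder output with shape (batch, win_size)
logits = tf.expand_dims(logits, axis=-1)   # or: tf.reshape(logits, [-1, win_size, 1])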

What is wrong with the simple code in Keras below?

I have been struggling for the last hour to understand what I am doing wrong. I am a novice with NNs, but this is not my first piece of code.
def simple_model(lr=0.1):
    X = Input(shape=(6144,))
    out = Dense(1)(X)
    model = Model(inputs=X, outputs=out)
    opt = tf.keras.optimizers.SGD(learning_rate=lr)
    model.compile(optimizer=opt, loss='mean_squared_error')
    model.summary()
    return model

mod = simple_model()
a = np.zeros(6144)
v = mod.predict(a)
Running this, I get the following error:
WARNING:tensorflow:Model was constructed with shape (None, 6144) for input Tensor("input_1:0", shape=(None, 6144), dtype=float32), but it was called on an input with incompatible shape (32, 1).
......
ValueError: Input 0 of layer dense is incompatible with the layer: expected axis -1 of input shape to have value 6144 but received input with shape [32, 1]
Where does this [32, 1] come from?!
I am sure there is some silly mistake in my code, but I can't see it :(
P.S. It does compile the model and print the summary before throwing the error.
mod = simple_model()
a = np.zeros(6144)
# Add this line
a = np.expand_dims(a, axis=0)
v = mod.predict(a)
The reason why your error appears is that Keras + TensorFlow only allow batch predictions. When we use the expand_dims function, we actually create a batch of dimension 1.
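Equivalent ways to add that batch dimension (just a usage sketch, either line gives the (1, 6144) shape that predict() expects):
v = mod.predict(a[np.newaxis, :])
v = mod.predict(a.reshape(1, -1))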

Basic RNN training using fit_generator doesn't output the expected shape

I'm implementing a basic RNN composed of a 512-unit GRU and a dense layer using Keras:
model = Sequential()
model.add(GRU(units=512,
              return_sequences=True,
              input_shape=(None, num_x_signals,)))
model.add(Dense(num_y_signals, activation='sigmoid'))
I needed to generate input batches on the fly, so I used fit_generator:
model.fit_generator(generator=generator_train, epochs=NB_EPOCHS, steps_per_epoch=STEPS_PER_EPOCH,
                    validation_data=generator_test, validation_steps=900, callbacks=callbacks)
And here is how I define my batch generator:
SAMPLE_PERIOD_PER_INPUT = 1728
PERIOD_TO_PREDICT = 288
BATCH_SIZE = 64

def batch_generator(batch_size, sequence_length, train=True):
    while True:
        x_shape = (batch_size, sequence_length, num_x_signals)
        x_batch = np.zeros(shape=x_shape, dtype=np.float16)
        y_shape = (batch_size, PERIOD_TO_PREDICT, num_y_signals)
        y_batch = np.zeros(shape=y_shape, dtype=np.float16)
        for i in range(batch_size):
            if train:
                idx = np.random.randint(num_train - sequence_length)
                predict_idx = (idx + sequence_length) - PERIOD_TO_PREDICT
                x_batch[i] = x_train_scaled[idx:idx+sequence_length]
                y_batch[i] = y_train_scaled[predict_idx:idx+sequence_length]
            else:
                idx = np.random.randint(num_test - sequence_length)
                predict_idx = (idx + sequence_length) - PERIOD_TO_PREDICT
                x_batch[i] = x_test_scaled[idx:idx+sequence_length]
                y_batch[i] = y_test_scaled[predict_idx:idx+sequence_length]
        yield (x_batch, y_batch)

generator_train = batch_generator(batch_size=BATCH_SIZE, sequence_length=SAMPLE_PERIOD_PER_INPUT)
generator_test = batch_generator(batch_size=BATCH_SIZE, sequence_length=SAMPLE_PERIOD_PER_INPUT, train=False)
I also use a "custom" loss function because I need to ignore the first computed time steps, which are not supposed to be accurate:
warmup_steps = 50

def loss_mse_warmup(y_true, y_pred):
    y_true_slice = y_true[:, warmup_steps:, :]
    y_pred_slice = y_pred[:, warmup_steps:, :]
    loss = tf.losses.mean_squared_error(labels=y_true_slice,
                                        predictions=y_pred_slice)
    loss_mean = tf.reduce_mean(loss)
    return loss_mean

optimizer = RMSprop(lr=1e-3)
model.compile(loss=loss_mse_warmup, optimizer=optimizer)
Here is the summary of my model:
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
gru (GRU) (None, None, 512) 798720
_________________________________________________________________
dense (Dense) (None, None, 1) 513
=================================================================
Total params: 799,233
Trainable params: 799,233
Non-trainable params: 0
_________________________________________________________________
But when I run this, I get shape errors:
2 root error(s) found.
(0) Invalid argument: Incompatible shapes: [64,238,1] vs. [64,1678,1]
[[{{node loss_4/dense_loss/mean_squared_error/SquaredDifference}}]]
[[loss_4/mul/_167]]
(1) Invalid argument: Incompatible shapes: [64,238,1] vs. [64,1678,1]
[[{{node loss_4/dense_loss/mean_squared_error/SquaredDifference}}]]
0 successful operations.
0 derived errors ignored.
Any ideas why? Where did I go wrong?
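A hedged aside tracing where the mismatched numbers come from, using the shapes defined above (the model summary shows a single output signal, so num_y_signals = 1): the generator yields targets of length PERIOD_TO_PREDICT = 288, while the model, fed sequences of length SAMPLE_PERIOD_PER_INPUT = 1728, returns predictions of that same length, and the warmup slice then removes 50 steps from each.
y_true = np.zeros((64, 288, 1))    # shape of y_batch from the generator
y_pred = np.zeros((64, 1728, 1))   # shape of the model output for a 1728-step input
print(y_true[:, 50:, :].shape)     # (64, 238, 1)
print(y_pred[:, 50:, :].shape)     # (64, 1678, 1) -> the shapes the error reports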

Getting error while adding embedding layer to lstm autoencoder

I have a seq2seq model which works fine. I want to add an embedding layer to this network, but I ran into an error.
This is my architecture using pretrained word embeddings, which works fine (actually, the code is almost the same as the code available here, but I want to include the Embedding layer in the model rather than using the pretrained embedding vectors):
LATENT_SIZE = 20
inputs = Input(shape=(SEQUENCE_LEN, EMBED_SIZE), name="input")
encoded = Bidirectional(LSTM(LATENT_SIZE), merge_mode="sum", name="encoder_lstm")(inputs)
encoded = Lambda(rev_ent)(encoded)
decoded = RepeatVector(SEQUENCE_LEN, name="repeater")(encoded)
decoded = Bidirectional(LSTM(EMBED_SIZE, return_sequences=True), merge_mode="sum", name="decoder_lstm")(decoded)
autoencoder = Model(inputs, decoded)
autoencoder.compile(optimizer="sgd", loss='mse')
autoencoder.summary()
NUM_EPOCHS = 1
num_train_steps = len(Xtrain) // BATCH_SIZE
num_test_steps = len(Xtest) // BATCH_SIZE
checkpoint = ModelCheckpoint(filepath=os.path.join('Data/', "simple_ae_to_compare"), save_best_only=True)
history = autoencoder.fit_generator(train_gen, steps_per_epoch=num_train_steps, epochs=NUM_EPOCHS, validation_data=test_gen, validation_steps=num_test_steps, callbacks=[checkpoint])
This is the summary:
Layer (type) Output Shape Param #
=================================================================
input (InputLayer) (None, 45, 50) 0
_________________________________________________________________
encoder_lstm (Bidirectional) (None, 20) 11360
_________________________________________________________________
lambda_1 (Lambda) (512, 20) 0
_________________________________________________________________
repeater (RepeatVector) (512, 45, 20) 0
_________________________________________________________________
decoder_lstm (Bidirectional) (512, 45, 50) 28400
When I change the code to add the embedding layer like this:
inputs = Input(shape=(SEQUENCE_LEN,), name="input")
embedding = Embedding(output_dim=EMBED_SIZE, input_dim=VOCAB_SIZE, input_length=SEQUENCE_LEN, trainable=True)(inputs)
encoded = Bidirectional(LSTM(LATENT_SIZE), merge_mode="sum", name="encoder_lstm")(embedding)
I received this error:
expected decoder_lstm to have 3 dimensions, but got array with shape (512, 45)
So my question is: what is wrong with my model?
Update
So, this error is raised during the training phase. I also checked the dimensions of the data being fed to the model: it is (61598, 45), which clearly does not include the feature dimension (here, EMBED_SIZE).
But why is this error raised in the decoder part? In the encoder part I have included the Embedding layer, so that part is fine; but when the data reaches the decoder part, which has no embedding layer, it cannot be reshaped to three dimensions.
Now the question is: why does this not happen in similar code?
This is my view, correct me if I'm wrong: seq2seq code is usually used for translation or summarization, and in those codes the decoder part also has an input (in the translation case, the other language is fed to the decoder), so the idea of having an embedding in the decoder part makes sense.
Finally, here I do not have a separate input, which is why I do not need a separate embedding in the decoder part. However, I don't know how to fix the problem; I just know why it is happening :|
Update 2:
This is the data being fed to the model:
sent_wids = np.zeros((len(parsed_sentences), SEQUENCE_LEN), 'int32')
sample_seq_weights = np.zeros((len(parsed_sentences), SEQUENCE_LEN), 'float')
for index_sentence in range(len(parsed_sentences)):
    temp_sentence = parsed_sentences[index_sentence]
    temp_words = nltk.word_tokenize(temp_sentence)
    for index_word in range(SEQUENCE_LEN):
        if index_word < sent_lens[index_sentence]:
            sent_wids[index_sentence, index_word] = lookup_word2id(temp_words[index_word])
        else:
            sent_wids[index_sentence, index_word] = lookup_word2id('PAD')

def sentence_generator(X, embeddings, batch_size, sample_weights):
    while True:
        # loop once per epoch
        num_recs = X.shape[0]
        indices = np.random.permutation(np.arange(num_recs))
        # print(embeddings.shape)
        num_batches = num_recs // batch_size
        for bid in range(num_batches):
            sids = indices[bid * batch_size : (bid + 1) * batch_size]
            temp_sents = X[sids, :]
            Xbatch = embeddings[temp_sents]
            weights = sample_weights[sids, :]
            yield Xbatch, Xbatch

LATENT_SIZE = 60
train_size = 0.95
split_index = int(math.ceil(len(sent_wids) * train_size))
Xtrain = sent_wids[0:split_index, :]
Xtest = sent_wids[split_index:, :]
train_w = sample_seq_weights[0:split_index, :]
test_w = sample_seq_weights[split_index:, :]
train_gen = sentence_generator(Xtrain, embeddings, BATCH_SIZE, train_w)
test_gen = sentence_generator(Xtest, embeddings, BATCH_SIZE, test_w)
parsed_sentences is 61,598 sentences, which are padded.
Also, this is the function I use in the model as a Lambda layer; I just added it here in case it has any effect:
def rev_entropy(x):
    def row_entropy(row):
        _, _, count = tf.unique_with_counts(row)
        count = tf.cast(count, tf.float32)
        prob = count / tf.reduce_sum(count)
        prob = tf.cast(prob, tf.float32)
        rev = -tf.reduce_sum(prob * tf.log(prob))
        return rev

    nw = tf.reduce_sum(x, axis=1)
    rev = tf.map_fn(row_entropy, x)
    rev = tf.where(tf.is_nan(rev), tf.zeros_like(rev), rev)
    rev = tf.cast(rev, tf.float32)
    max_entropy = tf.log(tf.clip_by_value(nw, 2, LATENT_SIZE))
    concentration = (max_entropy / (1 + rev))
    new_x = x * (tf.reshape(concentration, [BATCH_SIZE, 1]))
    return new_x
Any help is appreciated:)
I tried the following example on Google Colab (TensorFlow version 1.13.1):
from tensorflow.python import keras
import numpy as np
SEQUENCE_LEN = 45
LATENT_SIZE = 20
EMBED_SIZE = 50
VOCAB_SIZE = 100
inputs = keras.layers.Input(shape=(SEQUENCE_LEN,), name="input")
embedding = keras.layers.Embedding(output_dim=EMBED_SIZE, input_dim=VOCAB_SIZE, input_length=SEQUENCE_LEN, trainable=True)(inputs)
encoded = keras.layers.Bidirectional(keras.layers.LSTM(LATENT_SIZE), merge_mode="sum", name="encoder_lstm")(embedding)
decoded = keras.layers.RepeatVector(SEQUENCE_LEN, name="repeater")(encoded)
decoded = keras.layers.Bidirectional(keras.layers.LSTM(EMBED_SIZE, return_sequences=True), merge_mode="sum", name="decoder_lstm")(decoded)
autoencoder = keras.models.Model(inputs, decoded)
autoencoder.compile(optimizer="sgd", loss='mse')
autoencoder.summary()
And then trained the model using some random data,
x = np.random.randint(0, 90, size=(10, 45))
y = np.random.normal(size=(10, 45, 50))
history = autoencoder.fit(x, y, epochs=NUM_EPOCHS)
This solution worked fine. I suspect the issue might be the way you are feeding in the labels/outputs for the MSE calculation.
Update
Context
In the original problem, you are attempting to reconstruct word embeddings using a seq2seq model, where the embeddings are fixed and pre-trained. However, if you want to use a trainable embedding layer as part of the model, it becomes very difficult to model this problem, because you no longer have fixed targets (i.e. the targets change at every single iteration of the optimization, since your embedding layer is changing). This also leads to a very unstable optimization problem, because the targets are changing all the time.
Fixing your code
If you do the following, you should be able to get the code working. Here, embeddings is the pre-trained GloVe vector numpy.ndarray.
def sentence_generator(X, embeddings, batch_size):
    while True:
        # loop once per epoch
        num_recs = X.shape[0]
        embed_size = embeddings.shape[1]
        indices = np.random.permutation(np.arange(num_recs))
        # print(embeddings.shape)
        num_batches = num_recs // batch_size
        for bid in range(num_batches):
            sids = indices[bid * batch_size : (bid + 1) * batch_size]
            # Xbatch is a [batch_size, seq_length] array
            Xbatch = X[sids, :]
            # Creating the Y targets
            Xembed = embeddings[Xbatch.reshape(-1), :]
            # Ybatch will be a [batch_size, seq_length, embed_size] array
            Ybatch = Xembed.reshape(batch_size, -1, embed_size)
            yield Xbatch, Ybatch
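A usage sketch for the fixed generator, reusing the training call from the question (the sample-weight arguments from the original generator are dropped to match the new signature):
train_gen = sentence_generator(Xtrain, embeddings, BATCH_SIZE)
test_gen = sentence_generator(Xtest, embeddings, BATCH_SIZE)
history = autoencoder.fit_generator(train_gen, steps_per_epoch=num_train_steps,
                                    epochs=NUM_EPOCHS, validation_data=test_gen,
                                    validation_steps=num_test_steps, callbacks=[checkpoint])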

Dimension mismatch while building LSTM RNN in Tensorflow

I am trying to build a multilayer, multiclass, multilabel LSTM in TensorFlow. I have been trying to bend this tutorial to my data.
However, I am getting an error that says I have a dimension mismatch when building the RNN.
ValueError: Dimensions must be equal, but are 1000 and 923 for 'rnn/while/rnn/multi_rnn_cell/cell_0/lstm_cell/MatMul_1' (op: 'MatMul') with input shapes: [?,1000], [923,2000].
I cannot pinpoint which variable is incorrect in the architecture:
def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.0, shape=shape)
    return tf.Variable(initial)

def lstm(x, weight, bias, n_steps, n_classes):
    cell = rnn_cell.LSTMCell(cfg.n_hidden_cells_in_layer, state_is_tuple=True)
    multi_layer_cell = tf.nn.rnn_cell.MultiRNNCell([cell] * 2)
    # FIXME : ERROR binding x to LSTM as it is
    output, state = tf.nn.dynamic_rnn(multi_layer_cell, x, dtype=tf.float32)
    # FIXME : ERROR
    output_flattened = tf.reshape(output, [-1, cfg.n_hidden_cells_in_layer])
    output_logits = tf.add(tf.matmul(output_flattened, weight), bias)
    output_all = tf.nn.sigmoid(output_logits)
    output_reshaped = tf.reshape(output_all, [-1, n_steps, n_classes])
    # ??? switch batch size with sequence size. ???
    # then gather last time step values
    output_last = tf.gather(tf.transpose(output_reshaped, [1, 0, 2]), n_steps - 1)
    return output_last, output_all
These are my placeholders, loss function and all that jazz:
x_test, y_test = load_multiple_vector_files(test_filepaths)
x_valid, y_valid = load_multiple_vector_files(valid_filepaths)
n_input, n_steps, n_classes = get_input_target_lengths(check_print=False)
# FIXME n_input should be the problem
x = tf.placeholder("float", [None, n_steps, n_input])
y = tf.placeholder("float", [None, n_classes])
y_steps = tf.placeholder("float", [None, n_classes])
weight = weight_variable([cfg.n_hidden_layers, n_classes])
bias = bias_variable([n_classes])
y_last, y_all = lstm(x, weight, bias, n_steps, n_classes)
#all_steps_cost=tf.reduce_mean(-tf.reduce_mean((y_steps * tf.log(y_all))+(1 - y_steps) * tf.log(1 - y_all),reduction_indices=1))
all_steps_cost = -tf.reduce_mean((y_steps * tf.log(y_all)) + (1 - y_steps) * tf.log(1 - y_all))
last_step_cost = -tf.reduce_mean((y * tf.log(y_last)) + ((1 - y) * tf.log(1 - y_last)))
loss_function = (cfg.alpha * all_steps_cost) + ((1 - cfg.alpha) * last_step_cost)
optimizer = tf.train.AdamOptimizer(learning_rate=cfg.learning_rate).minimize(loss_function)
I am pretty sure it is my x placeholder that is causing the problem, resulting in the layers' matrix dimensions not matching. The constant the linked example uses is hard to interpret in terms of what it actually stands for.
Can anyone help me out here? :)
UPDATE:
I have made an "educated guess" about the mismatching dimensions.
One is 2 * hidden_width, i.e. the hidden layer receiving its new input plus its old recurrent input. The mismatching dimension, however, is input_width + hidden_width, as if it were trying to set up recurrence from the hidden layer's width onto the input layer.
I figured out that I was incorrectly setting the weight variable, using the constant for n_hidden_layers (the number of hidden layers) instead of n_hidden_cells_in_layer (the number of cells per layer).
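A minimal sketch of the fix described above: the first dimension of the output weight has to match the width the LSTM output is flattened to (n_hidden_cells_in_layer), not the number of layers.
weight = weight_variable([cfg.n_hidden_cells_in_layer, n_classes])
bias = bias_variable([n_classes])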