I'm using TensorFlow to create a Seq2Seq model and I'm trying to process the dataset in mini batches. When I build the dataset with the batch() method, its shape becomes (None, 10). However, when I feed the data to a SimpleRNNCell it raises this error:
ValueError: Shape must be rank 2 but is rank 1 for 'simple_rnn_cell/MatMul_1' (op: 'MatMul') with input shapes: [10], [10,10].
The code is like this:
def decoder(self, input_x, real_y, encoder_outputs, training=False):
    decoder_state, cell_states = encoder_outputs, []
    predict_shape = (5, 1)
    output = tf.convert_to_tensor(np.zeros(predict_shape), dtype=tf.float32)
    for x in range(self.max_output):
        # the line below raises the error; here output has shape (5, 1) and decoder_state has shape (?, 10)
        output, decoder_state = self.decoder_rnn(output, decoder_state)
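A minimal sketch of a likely fix, assuming self.decoder_rnn is a tf.keras.layers.SimpleRNNCell: the cell's states argument expects a list of tensors, so passing the raw (batch, 10) tensor makes the cell treat decoder_state[0] (a rank-1 tensor of shape (10,)) as the previous state, which matches the [10], [10,10] shapes in the MatMul error. Wrapping the state in a list avoids that:

        # hypothetical adjustment inside the loop
        output, states = self.decoder_rnn(output, [decoder_state])  # pass the state as a list
        decoder_state = states[0]                                   # SimpleRNNCell returns its new state as a list

Also make sure the batch dimension of output (5 here) matches the batch dimension of decoder_state.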
I'm trying to build a Sequential model with tensorflow.
import tensorflow as tf
import keras
from tensorflow.keras import layers
from keras import optimizers
import numpy as np
model = keras.Sequential(name="model")
model.add(keras.Input(shape=(786,)))
model.add(layers.Dense(2048, activation="relu", name="layer1"))
model.add(layers.Dense(786, activation="relu", name="layer2"))
model.add(layers.Dense(786, activation="relu", name="layer3"))
output = model.add(layers.Dense(786, activation="relu", name="output"))
model.summary()
model.compile(
    optimizer=tf.optimizers.Adam(),  # Optimizer
    loss=keras.losses.CategoricalCrossentropy(),
    metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
history = model.fit(
    x_train,
    y_train,
    batch_size=1,
    epochs=5,
)
The input is a vector of length 768 (so the input shape is (768,), right?), representing a chess board:
def get_dataset():
    container = np.load('/content/drive/MyDrive/test_data_vector.npz')
    b, v = container['arr_0'], container['arr_1']
    v = np.asarray(v / abs(v).max() / 2 + 0.5, dtype=np.float32)  # normalization (0 - 1)
    return b, v
xtrain, ytrain = get_dataset()
print(xtrain.shape)
print(ytrain.shape)
>> (37, 786) #there are 37 samples
>> (37, 786)
But I always get the error:
ValueError: Input 0 of layer model is incompatible with the layer: expected axis -1 of input shape to have value 786 but received input with shape (1, 1, 768)
I tried with np.expand_dims(), which ended in the same Error.
The error is just a typo: as the user mentioned, the issue was resolved by changing the shape from 786 to 768.
One suggestion based on the model structure: the number of units in a Dense layer is not related to your input shape, so you don't have to match that number. Unit counts like 2048 and 786 are quite large and may not help the model learn better. Try smaller numbers like 32 or 64; you can refer to some of the examples in the TensorFlow documentation.
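As a minimal sketch of both points (assuming the data really has 768 features per sample, as the error message indicates):

from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical corrected architecture: the input shape matches the 768
# features in the data, and the hidden layers use smaller unit counts.
model = keras.Sequential(name="model")
model.add(keras.Input(shape=(768,)))
model.add(layers.Dense(64, activation="relu", name="layer1"))
model.add(layers.Dense(64, activation="relu", name="layer2"))
model.add(layers.Dense(768, activation="relu", name="output"))
model.summary()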
I have a Keras model that takes an input layer with shape (n, 288, 1), of which 288 is the number of features. I am using a TensorFlow dataset tf.data.experimental.make_batched_features_dataset and my input layer will be (n, 1, 1) which means it gives one feature to the model at a time. How can I make an input tensor with the shape of (n, 288, 1)? I mean how can I use all my features in one tensor?
You can specify the shape of your input in the Keras Input layer. Here is an example with dummy data demonstrating this.
import tensorflow as tf
## Creating dummy data for demo
def make_sample():
    return tf.random.normal([288, 1])
n_samples = 100
samples = [make_sample() for _ in range(n_samples)]
labels = [tf.random.uniform([1]) for _ in range(n_samples)]
# Use tf.data to create dataset
batch_size = 4
dataset = tf.data.Dataset.from_tensor_slices((samples, labels))
dataset = dataset.batch(batch_size)
# Build keras function model
inputs = tf.keras.Input(shape=[288, 1], name='input')
x = tf.keras.layers.Dense(1)(inputs)
model = tf.keras.Model(inputs=[inputs], outputs=[x])
# Compile loss and optimizer
model.compile(loss='mse', optimizer='sgd', metrics=['mae'])
model.fit(dataset, epochs=1)
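If the data comes from tf.data.experimental.make_batched_features_dataset rather than in-memory tensors, one way to get the (288, 1) shape is to parse all 288 values as a single feature and add the channel axis in a map() call. A rough sketch with hypothetical feature/label keys and file pattern:

import tensorflow as tf

feature_spec = {
    'features': tf.io.FixedLenFeature([288], tf.float32),  # all 288 values in one feature
    'label': tf.io.FixedLenFeature([1], tf.float32),
}

dataset = tf.data.experimental.make_batched_features_dataset(
    file_pattern='data/*.tfrecord',  # hypothetical path
    batch_size=4,
    features=feature_spec,
    label_key='label',
)

# Add the trailing channel axis so each example is (288, 1) as the model expects.
dataset = dataset.map(lambda x, y: (tf.reshape(x['features'], [-1, 288, 1]), y))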
I am trying to run a bit of sample code from GitHub in order to learn working with TensorFlow 2 and the YOLO framework. My laptop has an M1000M graphics card and I installed the CUDA platform from NVIDIA from here.
So the Code in question is this bit:
tf.compat.v1.disable_eager_execution()

_MODEL_SIZE = (416, 416)
_CLASS_NAMES_FILE = './data/labels/coco.names'
_MAX_OUTPUT_SIZE = 20

def main(type, iou_threshold, confidence_threshold, input_names):
    class_names = load_class_names(_CLASS_NAMES_FILE)
    n_classes = len(class_names)
    model = Yolo_v3(n_classes=n_classes, model_size=_MODEL_SIZE,
                    max_output_size=_MAX_OUTPUT_SIZE,
                    iou_threshold=iou_threshold,
                    confidence_threshold=confidence_threshold)
    if type == 'images':
        batch_size = len(input_names)
        batch = load_images(input_names, model_size=_MODEL_SIZE)
        inputs = tf.compat.v1.placeholder(tf.float32, [batch_size, *_MODEL_SIZE, 3])
        detections = model(inputs, training=False)
        saver = tf.compat.v1.train.Saver(tf.compat.v1.global_variables(scope='yolo_v3_model'))
        with tf.compat.v1.Session() as sess:
            saver.restore(sess, './weights/model.ckpt')
            detection_result = sess.run(detections, feed_dict={inputs: batch})
        draw_boxes(input_names, detection_result, class_names, _MODEL_SIZE)
        print('Detections have been saved successfully.')
While executing this (also wondering why starting detection.py doesn't use the GPU in the first place), I get the error message:
File "C:\SDKs etc\Python 3.8\lib\site-packages\tensorflow\python\client\session.py", line 1451, in _call_tf_sessionrun
return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
tensorflow.python.framework.errors_impl.UnimplementedError: The Conv2D op currently only supports the NHWC tensor format on the CPU. The op was given the format: NCHW
[[{{node yolo_v3_model/conv2d/Conv2D}}]]
Full Log see here.
If I am understanding this correctly, the format of inputs = tf.compat.v1.placeholder(tf.float32, [batch_size, *_MODEL_SIZE, 3]) is already NHWC (_MODEL_SIZE is a tuple of 2 numbers), and I don't know how I need to change the code to get this running on the CPU.
If I am understanding this correctly, the format of inputs = tf.compat.v1.placeholder(tf.float32, [batch_size, *_MODEL_SIZE, 3]) is already NHWC (_MODEL_SIZE is a tuple of 2 numbers), and I don't know how I need to change the code to get this running on the CPU.
Yes you are. But look here:
def __init__(self, n_classes, model_size, max_output_size, iou_threshold,
             confidence_threshold, data_format=None):
    """Creates the model.

    Args:
        n_classes: Number of class labels.
        model_size: The input size of the model.
        max_output_size: Max number of boxes to be selected for each class.
        iou_threshold: Threshold for the IOU.
        confidence_threshold: Threshold for the confidence score.
        data_format: The input format.

    Returns:
        None.
    """
    if not data_format:
        if tf.test.is_built_with_cuda():
            data_format = 'channels_first'
        else:
            data_format = 'channels_last'
And later:
def __call__(self, inputs, training):
    """Add operations to detect boxes for a batch of input images.

    Args:
        inputs: A Tensor representing a batch of input images.
        training: A boolean, whether to use in training or inference mode.

    Returns:
        A list containing class-to-boxes dictionaries
        for each sample in the batch.
    """
    with tf.compat.v1.variable_scope('yolo_v3_model'):
        if self.data_format == 'channels_first':
            inputs = tf.transpose(inputs, [0, 3, 1, 2])
Solution:
Check that tf.test.is_built_with_cuda() works as expected.
If not, set the data format manually when creating the model:
model = Yolo_v3(n_classes=n_classes, model_size=_MODEL_SIZE,
                max_output_size=_MAX_OUTPUT_SIZE,
                iou_threshold=iou_threshold,
                confidence_threshold=confidence_threshold,
                data_format='channels_last')
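A quick sanity check (not from the original code, just a way to confirm what TensorFlow reports on your machine):

import tensorflow as tf

# If this build flag is True but no GPU is actually usable, the model above
# defaults to 'channels_first' (NCHW), which the CPU Conv2D kernel rejects.
print(tf.test.is_built_with_cuda())
print(tf.config.list_physical_devices('GPU'))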
I have been struggling for the last hour to understand what I am doing wrong. I am a novice with neural networks, but this is not my first code.
def simple_model(lr=0.1):
    X = Input(shape=(6144,))
    out = Dense(1)(X)
    model = Model(inputs=X, outputs=out)
    opt = tf.keras.optimizers.SGD(learning_rate=lr)
    model.compile(optimizer=opt, loss='mean_squared_error')
    model.summary()
    return model

mod = simple_model()
a = np.zeros(6144)
v = mod.predict(a)
Running this I get the following error:
WARNING:tensorflow:Model was constructed with shape (None, 6144) for input Tensor("input_1:0", shape=(None, 6144), dtype=float32), but it was called on an input with incompatible shape (32, 1).
......
ValueError: Input 0 of layer dense is incompatible with the layer: expected axis -1 of input shape to have value 6144 but received input with shape [32, 1]
Where does this [32, 1] come from ?!
I am sure there is some silly mistake in my code, but can't see it :(
P.S. It does compile the model and prints the summary before throwing the error.
mod = simple_model()
a = np.zeros(6144)
#Add this line
a = np.expand_dims(a,axis=0)
v = mod.predict(a)
The reason the error appears is that Keras + TensorFlow only allow batch predictions: your 1-D array of length 6144 is interpreted as 6144 samples with a single feature each, which predict() splits into batches of 32 (its default batch_size), and that is where the [32, 1] comes from. When we use the expand_dims function, we instead create a batch of size 1 containing a single sample with 6144 features.
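Equivalently (just an alternative to expand_dims, not part of the original answer), you can build the batch dimension directly:

a = np.zeros((1, 6144))  # one sample with 6144 features
v = mod.predict(a)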
When I use Google Colab there's no error in the code, but when I use Spyder or Jupyter the error occurs.
Model_10 = Sequential()
Model_10.add(LSTM(128, batch_input_shape = (1,10,5), stateful = True))
Model_10.add(Dense(5, activation = 'linear'))
Model_10.compile(loss = 'mse', optimizer = 'rmsprop')
Model_10.fit(x_train, y_train, epochs=1, batch_size=1, verbose=2, shuffle=False, callbacks=[history])
x_train_data.shape = (260,10,5)
y_train_data.shape = (260,1,5)
I'm using Python 3.7 and TensorFlow 2.0. I don't know why the error occurs in Anaconda only.
Error message:
ValueError: A target array with shape (260, 1, 5) was passed for an output of shape (1, 5) while using as loss mean_squared_error. This loss expects targets to have the same shape as the output.
You should reshape your labels/targets:
y_train_data = y_train_data.reshape((260,5))
Since you're using batch_input_shape in the input layer and specifying a batch size of 1, the model will take one example from your labels at each step, and that example will have shape (1, 5) anyway.
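Equivalently, a small sketch using the shapes from the question, dropping the singleton axis with np.squeeze before calling fit:

import numpy as np

y_train = np.squeeze(y_train_data, axis=1)  # (260, 1, 5) -> (260, 5)
Model_10.fit(x_train_data, y_train, epochs=1, batch_size=1,
             verbose=2, shuffle=False)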