So I have this neural network, and I am feeding it examples "X" and labels "Y" whose shapes are:
X.shape = (10,10,2)
Y.shape = (10,10,2)
The code for the model looks like:
import tensorflow as tf
from convert import process
import numpy as np
X, Y, rate = process('songs/song1.wav')
X = np.array(X[:10])
Y = np.array(Y[:10])
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(128))
model.add(tf.keras.layers.Dense(128))
model.add(tf.keras.layers.Dense(20))
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.fit(X, Y, epochs=2)
Now, for some reason, once I run this I get the error:
ValueError: Shapes (None, 10, 2) and (None, 20) are incompatible
I am confused because I fed it data where each example of both "X" and "Y" has shape (10, 2). So why is it saying that I passed it (None, 10, 2) and (None, 20)?
Your last layer uses a linear activation, whereas you chose the categorical_crossentropy loss. Use either
model.add(tf.keras.layers.Dense(20, activation='softmax'))
....loss='categorical_crossentropy')
or,
model.add(tf.keras.layers.Dense(20))
....loss='mse')
Also check your data shapes, especially the labels (Y): the model outputs (None, 20), while Y is fed in as (None, 10, 2), which is exactly the mismatch the error reports.
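For reference, here is a minimal runnable sketch of the first option. It assumes the labels are meant to be flattened so each example becomes a length-20 vector (10 × 2 = 20), matching the Dense(20) output; the random arrays merely stand in for the data from process():

import numpy as np
import tensorflow as tf

# Stand-in data with the shapes from the question (10 examples of shape (10, 2)).
X = np.random.rand(10, 10, 2).astype('float32')
Y = np.random.rand(10, 10, 2).astype('float32')

# Flatten the labels so each example matches the Dense(20) output.
Y = Y.reshape(len(Y), -1)  # (10, 20)

model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Flatten(input_shape=(10, 2)))
model.add(tf.keras.layers.Dense(128))
model.add(tf.keras.layers.Dense(128))
model.add(tf.keras.layers.Dense(20, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.fit(X, Y, epochs=2)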
I'm building an RNN and am having trouble passing in the data. The CSV file I'm pulling data from has a sentence column and a label column filled with a binary classification value (1 or 0). This is how I'm preprocessing right now:
import pandas as pd
from nltk.corpus import stopwords
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split

data = pd.read_csv(r'/cybersecurity-sqlinjection/sqli.csv', encoding='utf-8')
vectorizer = TfidfVectorizer(norm=False, smooth_idf=False, analyzer='word',
                             stop_words=stopwords.words('english'))
sentence_vectors = vectorizer.fit_transform(data['Sentence'].values.astype('U'))
df = pd.DataFrame(sentence_vectors.toarray())
X = df[df.columns]
y = data['Label']
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.8, test_size=0.2, random_state=42)
X.head()
Next I was passing X_train in to an LSTM model. At this point I was receiving errors about the shape of the data being passed to the model, so I used the first response on this issue. I added this to the end of my code, before inputting the data into the model:
X_train_shape = X_train.shape  # outputs (19327, 15016)
X_train = X_train.values.reshape(-1, 1, 15016)
model = keras.models.Sequential()
model.add(keras.layers.LSTM(15, input_shape=(1, 15016), return_sequences=True))
Now this error is being returned: ValueError: Shapes (None, 1) and (None, 1, 10) are incompatible
I'm not sure what the issue is and would appreciate any help!
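The full model isn't shown, but a common cause of this kind of mismatch is return_sequences=True on the last LSTM layer: it makes the output 3-D, (batch, 1, units), while the binary labels are 2-D. A minimal sketch of that idea, assuming a single-unit sigmoid head and using random stand-in data:

import numpy as np
from tensorflow import keras

n_features = 15016  # from the reshape in the question

X_train = np.random.rand(32, 1, n_features).astype('float32')  # stand-in for the TF-IDF vectors
y_train = np.random.randint(0, 2, size=(32,))                  # binary labels (1 or 0)

model = keras.models.Sequential()
# Without return_sequences=True the LSTM emits (batch, 15) instead of (batch, 1, 15),
# so the sigmoid head below produces a 2-D output that matches the 2-D labels.
model.add(keras.layers.LSTM(15, input_shape=(1, n_features)))
model.add(keras.layers.Dense(1, activation='sigmoid'))  # assumed classification head
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=1)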
I'm trying to build a Sequential model with tensorflow.
import tensorflow as tf
import keras
from tensorflow.keras import layers
from keras import optimizers
import numpy as np
model = keras.Sequential(name="model")
model.add(keras.Input(shape=(786,)))
model.add(layers.Dense(2048, activation="relu", name="layer1"))
model.add(layers.Dense(786, activation="relu", name="layer2"))
model.add(layers.Dense(786, activation="relu", name="layer3"))
model.add(layers.Dense(786, activation="relu", name="output"))
model.summary()
model.compile(
    optimizer=tf.optimizers.Adam(),  # optimizer
    loss=keras.losses.CategoricalCrossentropy(),
    metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
history = model.fit(
    x_train,
    y_train,
    batch_size=1,
    epochs=5,
)
The input is a vector of length 768 (so the input shape is (768,), right?), representing a chess board:
def get_dataset():
    container = np.load('/content/drive/MyDrive/test_data_vector.npz')
    b, v = container['arr_0'], container['arr_1']
    v = np.asarray(v / abs(v).max() / 2 + 0.5, dtype=np.float32)  # normalization (0 - 1)
    return b, v
xtrain, ytrain = get_dataset()
print(xtrain.shape)
print(ytrain.shape)
>> (37, 786) #there are 37 samples
>> (37, 786)
But I always get the error:
ValueError: Input 0 of layer model is incompatible with the layer: expected axis -1 of input shape to have value 786 but received input with shape (1, 1, 768)
I tried with np.expand_dims(), which ended in the same error.
The error is just a typo: as the user mentioned, the issue was resolved by changing the shape from 786 to 768.
One suggestion based on the model structure: the number of units in a Dense layer is not related to your input shape, so you don't have to match that number. Unit counts as large as 2048 and 786 may not help the model learn better. Try smaller numbers like 32 or 64; you can refer to some of the examples in the TensorFlow documentation.
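Putting both points together, a minimal sketch of the corrected model: 786 is replaced with 768 everywhere so the shapes match the data, and the hidden layers are shrunk as suggested. The mse loss is a stand-in here (an assumption, since the 0-1 normalized targets look like regression values), and the random arrays stand in for the real dataset:

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

x_train = np.random.rand(37, 768).astype('float32')  # stand-in for the board vectors
y_train = np.random.rand(37, 768).astype('float32')  # stand-in for the normalized targets

model = keras.Sequential(name="model")
model.add(keras.Input(shape=(768,)))                           # 768 to match the data, not 786
model.add(layers.Dense(64, activation="relu", name="layer1"))  # smaller hidden layers
model.add(layers.Dense(64, activation="relu", name="layer2"))
model.add(layers.Dense(768, name="output"))                    # output size matches the targets
model.summary()

model.compile(optimizer=keras.optimizers.Adam(), loss="mse")   # assumption: regression loss
history = model.fit(x_train, y_train, batch_size=1, epochs=1)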
I am building a model with TensorFlow Probability layers. When I do model.output.shape, I get an error:
AttributeError: 'UserRegisteredSpec' object has no attribute '_shape'
If I do output_shape = tf.shape(model.output), it gives a KerasTensor:
<KerasTensor: shape=(5,) dtype=int32 inferred_value=[None, 3, 128, 128, 128] (created by layer 'tf.compat.v1.shape_15')>
How can I get the actual values [None, 3, 128, 128, 128]?
I tried output_shape.get_shape(), but that gives the Tensor shape [5].
Code to reproduce error:
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions
model = tf.keras.Sequential()
model.add(tf.keras.layers.Input(10))
model.add(tf.keras.layers.Dense(2, activation="linear"))
model.add(
    tfp.layers.DistributionLambda(
        lambda t: tfd.Normal(
            loc=t[..., :1], scale=1e-3 + tf.math.softplus(0.1 * t[..., 1:])
        )
    )
)
model.output.shape
tf.shape will return a KerasTensor, and it is not easy to get the output shape from it directly.
However you can do this:
tf.shape(model.output)
>> `<KerasTensor: shape=(2,) dtype=int32 inferred_value=[None, 1] (created by layer 'tf.compat.v1.shape_168')>`
You want to get inferred_value, so:
tf.shape(model.output)._inferred_value
>> [None, 1]
Basically you can access any layer's output shape with:
tf.shape(model.layers[idx].output)._inferred_value
where idx is the index of the desired layer.
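For the model in the question, for instance, the same trick on the last (DistributionLambda) layer recovers the inferred shape:

print(tf.shape(model.layers[-1].output)._inferred_value)  # [None, 1]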
To get the output shape of all the layers you could do for instance:
out_shape_list = []
for layer in model.layers:
    out_shape = layer.output_shape
    out_shape_list.append(out_shape)
You will get a list of output shapes, one for each layer.
I'm trying to apply transfer learning to MNIST using MobileNet weights in Keras. Keras documentation for MobileNet: https://keras.io/applications/#mobilenet
MobileNet accepts 224x224x3 as input, but MNIST images are 28x28x1. I'm creating a Lambda layer which converts a 28x28x1 image into 224x224x3 and sends it as input to MobileNet. The following code causes:
TypeError: Input layers to a Model must be InputLayer objects. Received inputs: Tensor("lambda_2/ResizeNearestNeighbor:0", shape=(?, 224, 224, 3), dtype=float32). Input 0 (0-based) originates from layer type Lambda.
from keras import backend as K
from keras.layers import Input, Lambda
from keras.applications.mobilenet import MobileNet

height = 28
width = 28
input_image = Input(shape=(height, width, 1))

def resize_image_to_inception(x):
    x = K.repeat_elements(x, 3, axis=3)  # replicate the single channel to 3 channels
    x = K.resize_images(x, 8, 8, data_format="channels_last")  # 28x28 -> 224x224
    return x

input_image_ = Lambda(resize_image_to_inception, output_shape=(224, 224, 3))(input_image)
print(type(input_image_))
base_model = MobileNet(input_tensor=input_image_, weights='imagenet', include_top=False)
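One common workaround, sketched below on the assumption that the resizing itself is fine: build MobileNet as a standalone model with input_shape, then call it on the resized tensor, so the Lambda output is never used as a model input:

from keras import backend as K
from keras.layers import Input, Lambda
from keras.models import Model
from keras.applications.mobilenet import MobileNet

input_image = Input(shape=(28, 28, 1))

def resize_image_to_inception(x):
    x = K.repeat_elements(x, 3, axis=3)
    return K.resize_images(x, 8, 8, data_format="channels_last")

resized = Lambda(resize_image_to_inception, output_shape=(224, 224, 3))(input_image)

# MobileNet built on its own Input layer, then applied to the resized tensor.
base_model = MobileNet(input_shape=(224, 224, 3), weights='imagenet', include_top=False)
model = Model(inputs=input_image, outputs=base_model(resized))
model.summary()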
Why does the following code run the gram layer twice?
import numpy as np
from keras.applications import vgg19
from keras import backend as K
from keras.layers import Input, Lambda
import tensorflow as tf
from keras.models import Model
def gram_layer(y):
    print('Using Gram Layer')
    # assert K.ndim(y) == 4
    print(y.shape, 'y.shape')
    # a = y.get_shape()[1].value
    # b = y.get_shape()[2].value
    # c = y.get_shape()[3].value
    # print(a, b, c)
    # x = K.squeeze(y, axis=0)
    # features = K.batch_flatten(K.permute_dimensions(x, (2, 0, 1)))
    # features_nomean = features - K.mean(features, axis=0, keepdims=True)
    # gram = K.dot(features_nomean, K.transpose(features_nomean)) / (a * b * c)
    print('exiting Gram')
    # return x
    return y
In = K.placeholder((1, 256, 256, 3))
model = vgg19.VGG19(input_tensor=In, weights='imagenet', include_top=False)

for layer in model.layers:
    if layer.name == 'block1_conv1':
        print(layer.name)
        print(layer.output.shape)
        outputs = (Lambda(gram_layer))(layer.output)
Debug info:
block1_conv1
(1, 256, 256, 64)
Using Gram Layer
(1, 256, 256, 64) y.shape
exiting Gram
Using Gram Layer
(?, ?, ?, 64) y.shape
exiting Gram
The debug output contains two "Using Gram Layer" lines, which means the layer runs twice and fails the second time, even though it is only called once in the code.
Any idea what's wrong?
PS: I realize that the problem lies in the for-loop part. If the last line
outputs = (Lambda(gram_layer))(layer.output)
is replaced with
outputs = (Lambda(gram_layer))(In)
the debug info reads:
block1_conv1
(1, 256, 256, 64)
Using Gram Layer
(1, 256, 256, 3) y.shape
exiting Gram
Using Gram Layer
(?, ?, ?, 3) y.shape
exiting Gram
If the last 5 lines (the for loop) are replaced with
outputs = (Lambda(gram_layer))(In)
then the debug info goes as
Using Gram Layer
(1, 256, 256, 3) y.shape
exiting Gram
Using Gram Layer
(1, 256, 256, 3) y.shape
exiting Gram
It still runs twice, but the shape inference is correct. Is this a bug, or should I report it on GitHub?
Not sure why your function is called twice, but it's not uncommon to see that; it's called once during compilation first.
The problem there seems to be reshaping with "None" values. That's not supported.
You can reshape with "-1" instead of None, but you can have only one "-1" in a reshape.
Suggestion 1:
All your reshape code can be replaced with x = K.squeeze(y, axis=0).
Warning:
But this is highly unusual in Keras. The axis=0 dimension is the batch size, so this code will only run fine with batch_size = 1 (both your code and my suggestion).
Suggestion 2:
If you're going to use batch_flatten, why the reshape then?
Any reshape you do before batch_flatten() will be pointless, unless you really mean to flatten only the last two dimensions and have a (256,768) tensor.
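A quick way to see this (a standalone check, not part of the original code):

import numpy as np
from keras import backend as K

t = K.constant(np.zeros((1, 256, 256, 64)))
print(K.batch_flatten(t).shape)  # (1, 4194304): all axes after the batch axis are merged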
Suggestion 3:
If you want the actual values of a, b, c for calculations, you need to get their tensor values instead of their config values:
shp = K.shape(y)
a = shp[1]  # maybe you need shp[1:2], depending on whether you get an error in the division line
b = shp[2]
c = shp[3]
Suggestion 4:
It's quite strange to use a placeholder. That's not the Keras way of doing it.
You should simply create the model and tell it the shape you want:
model = vgg19.VGG19(input_shape=(256, 256, 3), weights='imagenet', include_top=False)
If you do want to enforce a batch size of 1, then you can create an input tensor:
inputTensor = Input(batch_shape=(1, 256, 256, 3))
output = model(inputTensor)
model = Model(inputTensor, output)
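For reference, here is the commented-out computation from the question rewritten along these lines: K.squeeze from Suggestion 1 and tensor-valued dimensions from Suggestion 3. It is only a sketch that, per the warning above, still assumes batch_size = 1, and it keeps the axis=0 mean-centering from the question's code:

from keras import backend as K

def gram_layer(y):
    shp = K.shape(y)                  # tensor values, usable even when static dims are None
    a, b, c = shp[1], shp[2], shp[3]
    x = K.squeeze(y, axis=0)          # (h, w, ch); assumes batch_size = 1
    features = K.batch_flatten(K.permute_dimensions(x, (2, 0, 1)))  # (ch, h*w)
    features_nomean = features - K.mean(features, axis=0, keepdims=True)
    gram = K.dot(features_nomean, K.transpose(features_nomean))
    return gram / K.cast(a * b * c, K.floatx())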