AttributeError: 'Tensor' object has no attribute 'Activation' - tensorflow

I am building a chatbot. While specifying the activation function from Keras, I get the following attribute error: "AttributeError: 'Tensor' object has no attribute 'Activation'".
input_sequence = Input((max_story_len,))
question = Input((max_question_len,))
# INPUT ENCODER M
input_encoder_m = Sequential()
input_encoder_m.add(Embedding(input_dim=vocab_size, output_dim=64))
input_encoder_m.add(Dropout(0.3))
# INPUT ENCODER C
input_encoder_c = Sequential()
input_encoder_c.add(Embedding(input_dim=vocab_size, output_dim=max_question_len))
input_encoder_c.add(Dropout(0.3))
question_encoder = Sequential()
question_encoder.add(Embedding(input_dim=vocab_size, output_dim=64,
                               input_length=max_question_len))
question_encoder.add(Dropout(0.3))
input_encoded_m = input_encoder_m(input_sequence)
input_encoded_c = input_encoder_c(input_sequence)
question_encoded = question_encoder(question)
match = dot([input_encoded_m, question_encoded], axes=(2, 2))
match.Activation('softmax')(match)
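The last line is what raises the error: Activation is a Keras layer, not an attribute of a Tensor, so it has to be instantiated and then called on the tensor. A minimal sketch of the usual pattern (assuming Activation is imported from keras.layers alongside dot):
from keras.layers import Activation

match = Activation('softmax')(match)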

Related

Keras Autoencoder: Target data is missing. Model expects target data to be provided in fit()

I am encountering an error when fitting an autoencoder model. I'm trying to translate text between 2 languages.
Here is my code:
xLines = np.genfromtxt('data-1.ar', dtype='str', delimiter=',')
yLines = np.genfromtxt('data-1.en', dtype='str', delimiter=',')

vectorize_layer = TextVectorization(
    standardize=custom_standardization,
    max_tokens=maxTokens,
    split="whitespace",
    ngrams=None,
    output_mode="int",
    output_sequence_length=outputLength
)
vectorize_layer.adapt(vocab_data)

def vectorize_text(text):
    text = tf.expand_dims(text, -1)
    return vectorize_layer(text)

arDataset = vectorize_text(xLines)
enDataset = vectorize_text(yLines)
print(arDataset.shape)
print(enDataset.shape)
# Encoder
# (encoder_inputs is not shown in the snippet; presumably a keras.Input like decoder_inputs below)
x = keras_nlp.layers.TokenAndPositionEmbedding(
    vocabulary_size=maxTokens,
    sequence_length=outputLength,
    embedding_dim=EMBED_DIM,
    mask_zero=True,
)(encoder_inputs)
encoder_outputs = keras_nlp.layers.TransformerEncoder(
    intermediate_dim=INTERMEDIATE_DIM, num_heads=NUM_HEADS
)(inputs=x)
encoder = keras.Model(encoder_inputs, encoder_outputs)

# Decoder
decoder_inputs = keras.Input(shape=(None,), dtype="int64", name="decoder_inputs")
encoded_seq_inputs = keras.Input(shape=(None, EMBED_DIM), name="decoder_state_inputs")
x = keras_nlp.layers.TokenAndPositionEmbedding(
    vocabulary_size=maxTokens,
    sequence_length=outputLength,
    embedding_dim=EMBED_DIM,
    mask_zero=True,
)(decoder_inputs)
x = keras_nlp.layers.TransformerDecoder(
    intermediate_dim=INTERMEDIATE_DIM, num_heads=NUM_HEADS
)(decoder_sequence=x, encoder_sequence=encoded_seq_inputs)
x = keras.layers.Dropout(0.5)(x)
decoder_outputs = keras.layers.Dense(enChars, activation="softmax")(x)
decoder = keras.Model(
    [
        decoder_inputs,
        encoded_seq_inputs,
    ],
    decoder_outputs,
)
decoder_outputs = decoder([decoder_inputs, encoder_outputs])

transformer = keras.Model(
    [encoder_inputs, decoder_inputs],
    decoder_outputs,
    name="transformer",
)
transformer.summary()
transformer.compile(
    "rmsprop", loss="categorical_crossentropy", metrics=["accuracy"]
)
transformer.fit([arDataset, enDataset], epochs=100)
The error I'm receiving is:
ValueError: Target data is missing. Your model was compiled with
loss=categorical_crossentropy, and therefore expects target data to be
provided in fit().
The shape of both my datasets is (99, 22)
Update 1
I've tried changing the fit arguments but am still encountering a similar error.
transformer.fit(arDataset,enDataset, epochs=100)
transformer.fit(x=arDataset,y=enDataset, epochs=100)
ValueError: Layer "transformer" expects 2 input(s), but it received 1
input tensors. Inputs received: [<tf.Tensor 'IteratorGetNext:0'
shape=(None, 22) dtype=int64>]
I'm not sure what the error means or how to resolve it.
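One possible reading of the errors: the transformer model has two inputs (the encoder tokens and the decoder tokens) plus a separate target, so fit() needs x as a list of both inputs and y as the target array. A hedged sketch with standard teacher forcing (the shifted decoder_in/decoder_out arrays are illustrative, not from the original code), using sparse_categorical_crossentropy since the targets are integer token ids rather than one-hot vectors:
decoder_in = enDataset[:, :-1]   # decoder sees tokens 0..n-1
decoder_out = enDataset[:, 1:]   # model learns to predict tokens 1..n

transformer.compile(
    "rmsprop", loss="sparse_categorical_crossentropy", metrics=["accuracy"]
)
transformer.fit([arDataset, decoder_in], decoder_out, epochs=100)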

" ValueError: Expecting KerasTensor which is from tf.keras.Input()". Error in prediction with dropout function

I am trying to predict uncertainty in a regression problem using Dropout during testing, as per Yarin Gal's article. I created a class using Keras's backend function, as provided by this Stack Overflow answer. The class takes a NN model as input and randomly drops neurons during testing to give a stochastic estimate, rather than a deterministic output, for time-series forecasting.
I create a simple encoder-decoder model for the forecasting, as shown below, with 0.1 dropout during training:
input_sequence = Input(shape=(lookback, train_x.shape[2]))
encoder = LSTM(128, return_sequences=False)(input_sequence)
r_vec = RepeatVector(forward_pred)(encoder)
decoder = LSTM(128, return_sequences=True, dropout=0.1)(r_vec) #maybe use dropout=0.1
output = TimeDistributed(Dense(train_y.shape[2], activation='linear'))(decoder)
# optimiser = optimizers.Adam(clipnorm=1)
enc_dec_model = Model(input_sequence, output)
enc_dec_model.compile(loss="mean_squared_error",
                      optimizer="adam",
                      metrics=['mean_squared_error'])
enc_dec_model.summary()
After that, I define and call the DropoutPrediction class.
# Define the class:
class KerasDropoutPrediction(object):
    def __init__(self, model):
        self.f = K.function(
            [model.layers[0].input,
             K.learning_phase()],
            [model.layers[-1].output])

    def predict(self, x, n_iter=10):
        result = []
        for _ in range(n_iter):
            result.append(self.f([x, 1]))
        result = np.array(result).reshape(n_iter, x.shape[0], x.shape[1]).T
        return result

# Call the object:
kdp = KerasDropoutPrediction(enc_dec_model)
y_pred_do = kdp.predict(x_test, n_iter=100)
y_pred_do_mean = y_pred_do.mean(axis=1)
However, in the line
kdp = KerasDropoutPrediction(enc_dec_model), when I call the LSTM model,
I get the following error message, which says the input has to be a Keras tensor. Can anyone help me with this error?
Error Message:
ValueError: Found unexpected instance while processing input tensors for keras functional model. Expecting KerasTensor which is from tf.keras.Input() or output from keras layer call(). Got: 0
To activate Dropout at inference time, you simply have to pass training=True (TF > 2.0) to the layer of interest (the last LSTM layer in your case).
With training=False:
inp = Input(shape=(10, 1))
x = LSTM(1, dropout=0.3)(inp, training=False)
m = Model(inp, x)
# m.compile(...)
# m.fit(...)

X = np.random.uniform(0, 1, (1, 10, 1))
output = []
for i in range(0, 100):
    output.append(m.predict(X))  # always the same
With training=True:
inp = Input(shape=(10, 1))
x = LSTM(1, dropout=0.3)(inp, training=True)
m = Model(inp, x)
# m.compile(...)
# m.fit(...)

X = np.random.uniform(0, 1, (1, 10, 1))
output = []
for i in range(0, 100):
    output.append(m.predict(X))  # always different
In your example, this becomes:
input_sequence = Input(shape=(lookback, train_x.shape[2]))
encoder = LSTM(128, return_sequences=False)(input_sequence)
r_vec = RepeatVector(forward_pred)(encoder)
decoder = LSTM(128, return_sequences=True, dropout=0.1)(r_vec, training=True)
output = TimeDistributed(Dense(train_y.shape[2], activation='linear'))(decoder)
enc_dec_model = Model(input_sequence, output)
enc_dec_model.compile(
    loss="mean_squared_error",
    optimizer="adam",
    metrics=['mean_squared_error']
)
enc_dec_model.fit(train_x, train_y, epochs=10, batch_size=32)
and the KerasDropoutPrediction class becomes:
class KerasDropoutPrediction(object):
    def __init__(self, model):
        self.model = model

    def predict(self, X, n_iter=10):
        result = []
        for _ in range(n_iter):
            result.append(self.model.predict(X))
        result = np.array(result)
        return result

kdp = KerasDropoutPrediction(enc_dec_model)
y_pred_do = kdp.predict(test_x, n_iter=100)
y_pred_do_mean = y_pred_do.mean(axis=0)
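As a usage note, the spread of the stochastic predictions across the n_iter passes is what carries the uncertainty estimate, e.g.:
y_pred_do_std = y_pred_do.std(axis=0)  # per-sample spread across the stochastic forward passes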

Keras AttributeError: 'Functional' object has no attribute 'shape'

I am trying to add a DenseNet121 functional block to the model.
I need the Keras model to be written in the functional format, not using the
model = Sequential()
model.add(...)
method.
What's wrong with the function build_img_encod?
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-62-69dd207148e0> in <module>()
----> 1 x = build_img_encod()
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/input_spec.py in assert_input_compatibility(input_spec, inputs, layer_name)
164 spec.min_ndim is not None or
165 spec.max_ndim is not None):
--> 166 if x.shape.ndims is None:
167 raise ValueError('Input ' + str(input_index) + ' of layer ' +
168 layer_name + ' is incompatible with the layer: '
AttributeError: 'Functional' object has no attribute 'shape'
def build_img_encod():
    base_model = DenseNet121(input_shape=(150, 150, 3),
                             include_top=False,
                             weights='imagenet')
    for layer in base_model.layers:
        layer.trainable = False
    flatten = Flatten(name="flatten")(base_model)
    img_dense_encoder = Dense(1024, activation='relu', name="img_dense_encoder",
                              kernel_regularizer=regularizers.l2(0.0001))(flatten)
    model = keras.models.Model(inputs=base_model, outputs=img_dense_encoder)
    return model
The reason why you get that error is that you need to provide the input of the base_model, instead of the base_model per se.
Replace this line: model = keras.models.Model(inputs=base_model, outputs=img_dense_encoder)
with: model = keras.models.Model(inputs=base_model.input, outputs=img_dense_encoder)
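For the function to run end to end, the Flatten layer also needs a tensor rather than the model object, so a corrected sketch of this first approach would look like the following (passing base_model.output to Flatten is an addition to the answer above, not part of it):
def build_img_encod():
    base_model = DenseNet121(input_shape=(150, 150, 3),
                             include_top=False,
                             weights='imagenet')
    for layer in base_model.layers:
        layer.trainable = False
    # Flatten is applied to the model's output tensor, not to the Model object
    flatten = Flatten(name="flatten")(base_model.output)
    img_dense_encoder = Dense(1024, activation='relu', name="img_dense_encoder",
                              kernel_regularizer=regularizers.l2(0.0001))(flatten)
    model = keras.models.Model(inputs=base_model.input, outputs=img_dense_encoder)
    return model
An alternative, shown below, is to call the DenseNet on a fresh Input tensor: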
def build_img_encod():
    dense = DenseNet121(input_shape=(150, 150, 3),
                        include_top=False,
                        weights='imagenet')
    for layer in dense.layers:
        layer.trainable = False
    img_input = Input(shape=(150, 150, 3))
    base_model = dense(img_input)
    flatten = Flatten(name="flatten")(base_model)
    img_dense_encoder = Dense(1024, activation='relu', name="img_dense_encoder",
                              kernel_regularizer=regularizers.l2(0.0001))(flatten)
    model = keras.models.Model(inputs=img_input, outputs=img_dense_encoder)
    return model
This worked.

TFF: ValueError when checking model target

I would like to implement image classification with tensorflow-federated. When I create the model and pass it to the federated averaging process, I get an error that I can't understand. Here is a part of my code implemented with TFF:
input_shape = (224, 224, 3)

def create_compiled_keras_model(input_shape, base_model='resnet18'):
    inputs = tf.keras.layers.Input(shape=(input_shape))
    base_encoder = tf.keras.applications.ResNet50(
        include_top=False, weights=None, input_tensor=None,
        input_shape=None, pooling='avg')
    base_encoder.training = True
    h = base_encoder(inputs)
    x = tf.keras.layers.Dense(2)(h)
    x = tf.keras.layers.Activation('relu')(x)
    x = tf.keras.layers.Dense(2)(x)
    model = tf.keras.Model(inputs=inputs, outputs=[h, x])
    model.compile(
        optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
        loss=tf.keras.losses.CategoricalCrossentropy(),
        metrics=[tf.keras.metrics.CategoricalAccuracy()])
    return model

def model_fn():
    keras_model = create_compiled_keras_model(input_shape, base_model='resnet18')
    return tff.learning.from_compiled_keras_model(keras_model, sample_batch)

iterative_process = tff.learning.build_federated_averaging_process(
    model_fn,
    server_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=1.0),
    client_weight_fn=None)
The error was:
ValueError: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), for inputs ['resnet50', 'dense_1'] but instead got the following list of 1 arrays: [<tf.Tensor 'Const_1:0' shape=(2, 2) dtype=float32>]...
Thanks for help!!
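The error lists two expected target arrays ('resnet50', 'dense_1') because the model is built with two outputs, [h, x], while the sample batch supplies only a single label array. A hedged sketch of one possible fix, assuming only the final classification head should be trained and h is not needed as an output:
def create_compiled_keras_model(input_shape, base_model='resnet18'):
    inputs = tf.keras.layers.Input(shape=input_shape)
    base_encoder = tf.keras.applications.ResNet50(
        include_top=False, weights=None, pooling='avg')
    h = base_encoder(inputs)
    x = tf.keras.layers.Dense(2, activation='softmax')(h)  # single output head
    model = tf.keras.Model(inputs=inputs, outputs=x)        # one output, one target
    model.compile(
        optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
        loss=tf.keras.losses.CategoricalCrossentropy(),
        metrics=[tf.keras.metrics.CategoricalAccuracy()])
    return model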

Error while converting Keras model with 2 inputs to tflite

I am trying to convert a tf2.keras model to tflite, but get the following error:
ValueError: Invalid input size: expected 2 items got 1 items.
My network is Siamese - it has 2 inputs that are both fed into the same backbone:
input_shape = (image_size, image_size, 3)
left_input = tf.keras.layers.Input(shape=input_shape, name='left_input')
right_input = tf.keras.layers.Input(shape=input_shape, name='right_input')
# define base model:
general_input = tf.keras.layers.Input(shape=input_shape)
x = build_mobilenet(inputs=general_input) # builds a standard MobileNet model
backbone_model = tf.keras.Model(general_input, x)
# run both examples:
left_features = backbone_model(left_input)
right_features = backbone_model(right_input)
output = tf.keras.layers.Subtract(name='diff')([left_features, right_features])
# continue with some more operations on the output tensor...
During training my dataset object returns a dictionary of the inputs and the label: {'left_input': im_left, 'right_input': im_right}, label
When trying to quantize the model I have a representative dataset object that returns only the inputs (without the label): return {'left_input': left, 'right_input': right}.
The tflite code used for quantization:
data_generator = DataProvider(num_images=10)
model = tf.keras.models.load_model(float32_model_path, compile=False)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
converter.representative_dataset = data_generator
tflite_model = converter.convert()
The error occurs when calling converter.convert(). Does anyone understand what could be the issue?
Thanks!
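One possible cause of "Invalid input size: expected 2 items got 1 items" is the representative dataset yielding a single item (or a dict) per calibration step instead of one array per model input. A minimal sketch of a generator that yields both inputs in the model's input order (this replaces the DataProvider object and uses random data purely for illustration):
def representative_dataset():
    for _ in range(10):
        left = np.random.rand(1, image_size, image_size, 3).astype(np.float32)
        right = np.random.rand(1, image_size, image_size, 3).astype(np.float32)
        yield [left, right]  # one array per model input

converter.representative_dataset = representative_dataset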