Problem in Keras with 'merge' - TypeError: 'module' object is not callable - tensorflow

I tried to merge layer3, layer4 and layer5 with the following line of code:
layer = merge([layer3,layer4,layer5],mode='sum')
But it throws this error:
TypeError: 'module' object is not callable
Why is my code not working?

I assume you're trying to run source code written for an older Keras version. In Keras 2 the functional merge helper was removed, so keras.layers.merge now refers to a module rather than a callable function, which is exactly what the error says. mode='sum' just adds your layers element-wise. You could also use TensorFlow to do the same:
layer = tf.add(layer3, layer4)
layer = tf.add(layer, layer5)
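In current tf.keras the element-wise sum is done with the Add layer, the replacement for merge(..., mode='sum'). A minimal sketch, assuming three branches of the same shape (the Dense layers here are stand-ins for the question's layer3/layer4/layer5):

```python
import tensorflow as tf

# Three branches with identical output shapes.
inp = tf.keras.Input(shape=(4,))
layer3 = tf.keras.layers.Dense(8)(inp)
layer4 = tf.keras.layers.Dense(8)(inp)
layer5 = tf.keras.layers.Dense(8)(inp)

# Add() sums its inputs element-wise, like merge(..., mode='sum') did.
merged = tf.keras.layers.Add()([layer3, layer4, layer5])
model = tf.keras.Model(inputs=inp, outputs=merged)
print(model.output_shape)  # (None, 8)
```

The same family of layers covers the other old merge modes (Concatenate, Multiply, Average, and so on).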

Related

I keep getting a TypeError when using gbt_regression_prediction().compute with XGBoost and daal4py

I have a pre-trained XGBoost model that I want to optimize with daal4py but I'm getting the following error
TypeError: Argument 'model' has incorrect type (expected daal4py._daal4py.gbt_regression_model, got XGBRegressor)
Here is the line that is throwing the error:
y_pred = d4p.gbt_regression_prediction().compute(x_test, xgb_model).prediction.reshape(-1)
If you pass the raw XGBoost object to d4p.gbt_regression_prediction().compute(), you will continue to get this error. You must first convert the model to daal4py format before passing it to the prediction method. Please see the example below.
daal_model = d4p.get_gbt_model_from_xgboost(xgb_model.get_booster())
y_pred = d4p.gbt_regression_prediction().compute(x_test, daal_model).prediction.reshape(-1)

AttributeError: module 'tensorflow.keras.layers' has no attribute 'Rescaling'

When I try:
normalization_layer = layers.Rescaling(1./255)
error message:
AttributeError: module 'tensorflow.keras.layers' has no attribute 'Rescaling'
How to fix it?
I had the same error in v2.5.0. There the layer is still available under its experimental path:
tf.keras.layers.experimental.preprocessing.Rescaling()
I guess this is the "old" way to use this layer.
Yes, I was using the wrong version of TF. Rescaling is tf.keras.layers.Rescaling in tf v2.7.0; I was on v2.6.0.
A preprocessing layer which rescales input values to a new range.
Inherits From: Layer, Module
tf.keras.layers.Rescaling(
scale, offset=0.0, **kwargs
)
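On a TF version where the layer lives directly under tf.keras.layers, usage is a one-liner; a quick sketch:

```python
import tensorflow as tf

# Map pixel values from [0, 255] into [0.0, 1.0].
normalization_layer = tf.keras.layers.Rescaling(1.0 / 255)

pixels = tf.constant([[0.0, 127.5, 255.0]])
scaled = normalization_layer(pixels)
print(scaled.numpy())  # approximately [[0.  0.5 1. ]]
```

On older versions, substituting tf.keras.layers.experimental.preprocessing.Rescaling in the first line is the only change needed.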

TypeError: 'AutoTrackable' object is not callable

I am trying to run inference on my trained model following this tutorial. I am using TF 2.1.0 and I have tried with tf-nightly 2.5.0.dev20201202.
But I get TypeError: 'AutoTrackable' object is not callable when I hit the following line detections = detect_fn(input_tensor)
I am aware that the question "'AutoTrackable' object is not callable in Python" exists, but I am not using TensorFlow Hub and I don't understand how its answer could help me.
Thanks
Try using detect_fn.signatures['default'](input_tensor)
Changing detections = detect_fn(input_tensor) to
detections = detect_fn.signatures['serving_default'](input_tensor)
fixed the issue for me.
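The object returned by tf.saved_model.load is an AutoTrackable wrapper; its callable entry points live in the .signatures map. A minimal round-trip sketch with a toy module standing in for the detection model (Doubler and the "doubled" key are made up for illustration):

```python
import tempfile
import tensorflow as tf

class Doubler(tf.Module):
    # input_signature lets us export a concrete function as a signature.
    @tf.function(input_signature=[tf.TensorSpec([None], tf.float32)])
    def __call__(self, x):
        # Serving signatures must return a flat dict of tensors.
        return {"doubled": x * 2.0}

module = Doubler()
export_dir = tempfile.mkdtemp()
tf.saved_model.save(
    module, export_dir,
    signatures={"serving_default": module.__call__.get_concrete_function()})

loaded = tf.saved_model.load(export_dir)
# Calling `loaded(...)` directly raises the AutoTrackable error;
# go through the signature map instead.
detect_fn = loaded.signatures["serving_default"]
out = detect_fn(tf.constant([1.0, 2.0]))
print(out["doubled"].numpy())  # [2. 4.]
```

Printing loaded.signatures shows which keys are actually available, which is useful when a model was exported under a name other than "serving_default".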

How to implement tensorflow cosine_decay

When I call the cosine_decay function in tensorflow, it shows this error:
'<' not supported between instances of 'CosineDecay' and 'int'
Here is my code:
decay_steps = 1000
lr_decayed_fn = tf.keras.experimental.CosineDecay(initial_learning_rate=0.01, decay_steps=1000)
model.compile(optimizer=Adam(lr=lr_decayed_fn), loss=dice_coef_loss, metrics=[dice_coef])
I just followed the TensorFlow tutorial, and I don't know why there is this error.
Change Adam(lr=lr_decayed_fn) to Adam(learning_rate=lr_decayed_fn)
The Adam optimizer call in TensorFlow v2 needs the argument spelled out as learning_rate; it does not accept a schedule passed as lr. See this issue: https://github.com/tensorflow/tensorflow/issues/44172
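Putting the fix together, a minimal sketch (the question's dice_coef_loss and model are replaced by a stand-in Dense model and a built-in loss so the snippet is self-contained; the schedule is taken from tf.keras.optimizers.schedules, the current home of CosineDecay):

```python
import tensorflow as tf

# Cosine decay from 0.01 down to 0 over 1000 steps.
lr_decayed_fn = tf.keras.optimizers.schedules.CosineDecay(
    initial_learning_rate=0.01, decay_steps=1000)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])
# learning_rate (not lr) accepts a schedule object directly.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr_decayed_fn),
              loss="mse")

# The schedule is callable: its value at step 0 is the initial rate.
print(float(lr_decayed_fn(0)))  # ~0.01
```

Because the schedule is attached to the optimizer, the learning rate is recomputed automatically at each training step.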

TensorFlow attention_decoder with RNNCell (state_is_tuple=True)

I want to build a seq2seq model with an attention_decoder, using MultiRNNCell with LSTMCell as the encoder. Because the TensorFlow code warns that "This default behaviour (state_is_tuple=False) will soon be deprecated.", I set state_is_tuple=True for the encoder.
The problem is that, when I pass the state of encoder to attention_decoder, it reports an error:
*** AttributeError: 'LSTMStateTuple' object has no attribute 'get_shape'
This problem seems to be related to the attention() function in seq2seq.py and the _linear() function in rnn_cell.py, in which the code calls the 'get_shape()' function of the 'LSTMStateTuple' object from the initial_state generated by the encoder.
Although the error disappears when I set state_is_tuple=False for the encoder, the program gives the following warning:
WARNING:tensorflow:<tensorflow.python.ops.rnn_cell.LSTMCell object at 0x11763dc50>: Using a concatenated state is slower and will soon be deprecated. Use state_is_tuple=True.
I would really appreciate if someone can give any instruction about building seq2seq with RNNCell (state_is_tuple=True).
I ran into this issue too; the LSTM states need to be concatenated, or else _linear will complain. The shape of LSTMStateTuple depends on the kind of cell you're using. With an LSTM cell, you can concatenate the states like this:
query = tf.concat(1,[state[0], state[1]])
If you're using a MultiRNNCell, concatenate the states for each layer first:
concat_layers = [tf.concat(1,[c,h]) for c,h in state]
query = tf.concat(1, concat_layers)
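Note that tf.concat(axis, values) above uses the pre-1.0 TensorFlow argument order; in TF >= 1.0 the values come first. A sketch of the same concatenation in the modern API, with constant tensors standing in for the c and h state tensors:

```python
import tensorflow as tf

# Stand-ins for one layer's LSTM state: cell state c and hidden state h,
# each of shape (batch_size, num_units).
c = tf.ones((2, 3))
h = tf.zeros((2, 3))

# Modern argument order: values first, then axis.
query = tf.concat([c, h], axis=1)
print(query.shape)  # (2, 6)

# MultiRNNCell-style state: one (c, h) pair per layer.
state = [(c, h), (c, h)]
concat_layers = [tf.concat([ci, hi], axis=1) for ci, hi in state]
query_multi = tf.concat(concat_layers, axis=1)
print(query_multi.shape)  # (2, 12)
```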