AttributeError: 'numpy.ndarray' object has no attribute 'name' - numpy

I am writing a Grad-CAM function to get the attention map of an image from a trained VGG16 classifier. Here is the function:
# Define a function to get the attention maps using Grad-CAM
def grad_cam(img, feature_extractor, layer_name):
    x = preprocess_input(img)
    features = feature_extractor.predict(x)
    output = model.output
    grads = K.gradients(output, feature_extractor.get_layer(layer_name).output)[0]
    pooled_grads = K.mean(grads, axis=(0, 1, 2))
    iterate = K.function([model.input], [pooled_grads, features[0]])
    pooled_grads_value, features_value = iterate([x])
    for i in range(512):
        features_value[:, :, i] *= pooled_grads_value[i]
    heatmap = np.mean(features_value, axis=-1)
    heatmap = np.maximum(heatmap, 0)
    heatmap /= np.max(heatmap)
    return heatmap
Given below is the code for calling that function:
# Get the attention maps using Grad-CAM
attention_maps = grad_cam(img, feature_extractor, 'block5_conv3')
# Plot the attention maps
plt.imshow(attention_maps, cmap='jet')
plt.axis('off')
plt.show()
After calling this function I get the error: AttributeError: 'numpy.ndarray' object has no attribute 'name'
I am expecting a heatmap of the attention maps of a single image after going through a VGG16 model.
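Update: a likely cause, as far as I can tell, is that features is a numpy array returned by predict(), while K.function expects symbolic tensors as outputs; Keras reads .name from each output, which a numpy array does not have. A minimal, untested sketch of a fully symbolic version (assuming feature_extractor shares its input graph with model):
# Keep the conv feature map symbolic instead of calling predict().
conv_output = feature_extractor.get_layer(layer_name).output
grads = K.gradients(model.output, conv_output)[0]
pooled_grads = K.mean(grads, axis=(0, 1, 2))
# Both outputs are now tensors, so K.function can build the graph function.
iterate = K.function([model.input], [pooled_grads, conv_output[0]])
pooled_grads_value, features_value = iterate([preprocess_input(img)])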

Related

Convert TensorFlow data to be used by ONNX inference

I'm trying to convert an LSTM model from TensorFlow to ONNX. The code that generates the data for training the TensorFlow model is below:
def make_dataset(self, data):
    data = np.array(data, dtype=np.float32)
    ds = tf.keras.utils.timeseries_dataset_from_array(
        data=data,
        targets=None,
        sequence_length=self.total_window_size,
        sequence_stride=1,
        shuffle=True,
        batch_size=32,
    )
    ds = ds.map(self.split_window)
    return ds
The model training code is actually from the official tutorial. Then after conversion to ONNX, I try to perform prediction as follows:
import onnx
import onnxruntime as rt
from tf_lstm import WindowGenerator
import tensorflow as tf
wide_window = WindowGenerator(
    input_width=24, label_width=24, shift=1,
    label_columns=['T (degC)'])
model = onnx.load_model('models/onnx/tf-lstm-weather.onnx')
print(model)
sess = rt.InferenceSession('models/onnx/tf-lstm-weather.onnx')
input_name = sess.get_inputs()[0].name
label_name = sess.get_outputs()[0].name
pred = sess.run([label_name], {input_name: wide_window.test})[0]
But it throws this error:
RuntimeError: Input must be a list of dictionaries or a single numpy array for input 'lstm_input'.
I tried to convert wide_window.test into a numpy array and use it instead, as follows:
test_data = []
test_label = []
for x, y in wide_window.test:
    test_data.append(x.numpy())
    test_label.append(y.numpy())
test_data2 = np.array(test_data, dtype=np.float)
pred = sess.run([label_name], {input_name: test_data2})[0]
Then it gives this error:
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (219,) + inhomogeneous part.
Any idea?
That's a numpy error. Every element you append to the input list has to have the same shape; otherwise numpy cannot stack them into a single homogeneous array.
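For illustration (an addition beyond the original answer): every batch from the tf.data pipeline has batch_size rows except the last, shorter one, so np.array over the list of batches becomes ragged. Assuming the batches only differ in the first dimension, concatenating along it avoids the problem:
test_data = [x.numpy() for x, y in wide_window.test]
# Stack along the batch axis instead of building a ragged object array.
test_data2 = np.concatenate(test_data, axis=0).astype(np.float32)
pred = sess.run([label_name], {input_name: test_data2})[0]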

Is there any other way to set tensor in object detection using ssdmobilenet

I am using an object detection model and I am referring to this link for fitting my image into the model, so I have included the method "set_input_tensor" from there.
Now when I execute this code:
def set_input_tensor(interpreter, image):
    tensor_index = interpreter.get_input_details()[0]['index']
    input_tensor = interpreter.tensor(tensor_index)()[0]
    input_tensor[:, :] = image
I am getting an error on the last line:
TypeError: __array__() takes 1 positional argument but 2 were given
Shape of Image : (1, 320, 320, 3)
Shape of input_tensor : (320, 320, 3)
So I tried changing the code to:
input_tensor = image[0, :, :, :]
Since I am not implementing classes, I have not used the self argument.
Please help.
Yes, we can. Below is the code:
interpreter = tf.lite.Interpreter(model_path="model_path")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed the image into the input tensor and run inference.
input_data = image
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()

# Read back the detection results.
boxes = interpreter.get_tensor(output_details[0]['index'])
labels = interpreter.get_tensor(output_details[1]['index'])
scores = interpreter.get_tensor(output_details[2]['index'])
num = interpreter.get_tensor(output_details[3]['index'])
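As a side note beyond the code above (an assumption on my part, not something the original post states): set_tensor requires input_data to match the shape and dtype reported by get_input_details, so a quick check before invoke can save debugging time. A hedged sketch, assuming numpy is imported as np:
expected_shape = input_details[0]['shape']   # e.g. [1, 320, 320, 3]
expected_dtype = input_details[0]['dtype']   # e.g. np.float32 or np.uint8
input_data = np.asarray(image, dtype=expected_dtype)
assert list(input_data.shape) == list(expected_shape)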

TypeError: <lambda>() takes 1 positional argument but 2 were given

Here is my code:
img_gen = tf.keras.preprocessing.image.ImageDataGenerator()
gen = img_gen.flow_from_directory('/train/', (224, 224), 'rgb', batch_size=2)
training_set = tf.data.Dataset.from_generator(lambda: gen, output_types=(tf.float32, tf.float32), output_shapes=([2, 224, 224, 3], [2, 2]))

def read_images(features):
    return features['image']

training_set = training_set.map(lambda x: read_images(x), num_parallel_calls=tf.data.experimental.AUTOTUNE)
The error was:
TypeError: <lambda>() takes 1 positional argument but 2 were given
So how can I solve the problem in the function read_images?
From the documentation of flow_from_directory:
Returns: A DirectoryIterator yielding tuples of (x, y) where x is a numpy array containing a batch of images with shape (batch_size, *target_size, channels) and y is a numpy array of corresponding labels.
You can see that it returns a tuple with 2 elements, so your map function needs to handle that.
def read_images(features):
    # some processing
    output = features
    return output

training_set = training_set.map(lambda image, label: read_images(image), num_parallel_calls=tf.data.experimental.AUTOTUNE)
The ImageDataGenerator itself has a lot of processing options available.
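For instance, a few of those options (a hedged illustration, not from the original answer):
img_gen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1. / 255,       # normalize pixel values to [0, 1]
    horizontal_flip=True,   # random horizontal flips
    rotation_range=20)      # random rotations of up to 20 degrees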
You can also check out other tutorials on the TensorFlow pages - load images.
Looking at the dataset content would also help debug issues:
for line in training_set.take(1):
    print(len(line))
    print(line)

Keras: Predict model within custom loss function

I am trying to use some_model.predict(x) within a custom loss function.
I found this custom loss function:
_EPSILON = K.epsilon()

def _loss_tensor(y_true, y_pred):
    y_pred = K.clip(y_pred, _EPSILON, 1.0 - _EPSILON)
    out = -(y_true * K.log(y_pred) + (1.0 - y_true) * K.log(1.0 - y_pred))
    return K.mean(out, axis=-1)
But the problem is that model.predict() is expecting a numpy array.
So I looked for how to convert a tensor (y_pred) to a numpy array.
I found tmp = K.tf.round(y_true) but this returns a tensor.
I have also found: x = K.eval(y_true) which takes a Keras variable and returns a numpy array.
This produces the error: You must feed a value for placeholder tensor 'dense_78_target' with dtype float.....
Some people suggested setting the learning phase to true. I did that, but it did not help.
What I just want to do:
def _loss_tensor(y_true, y_pred):
    y_tmp_true = first_decoder.predict(y_true)
    y_tmp_pred = first_decoder.predict(y_pred)
    return keras.losses.binary_crossentropy(y_tmp_true, y_tmp_pred)
Any help would be appreciated.
This works:
sess = K.get_session()
with sess.as_default():
    tmp = K.tf.constant([1, 2, 3]).eval()
    print(tmp)
I also tried this now:
tmp = first_decoder(y_true)
This fails the assertion:
assert input_shape[-1]
Maybe someone knows how to resolve this?
Update:
I can now feed it through the model with:
y_t = first_decoder(K.reshape(y_true, (1,512)))
y_p = first_decoder(K.reshape(y_pred, (1,512)))
But when I try to return the binary cross entropy, the shape is not right:
Input to reshape is a tensor with 131072 values, but the requested shape has 512
I figured out that 131072 was the product of my batch size and input size (256*512). I then adapted my code to reshape to (256, 512). The first batch runs fine, but then I get another error saying that the passed size was (96, 512).
[SOLVED] Update:
It works now:
def _loss_tensor(y_true, y_pred):
    num_ex = K.shape(y_true)[0]
    y_t = first_decoder(K.reshape(y_true, (num_ex, 512)))
    y_p = first_decoder(K.reshape(y_pred, (num_ex, 512)))
    return keras.losses.binary_crossentropy(y_t, y_p)
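For completeness, a hedged usage sketch; model, x_train and y_train are placeholders for whatever model and data this loss is attached to, with the batch size of 256 from the question:
model.compile(optimizer='adam', loss=_loss_tensor)
model.fit(x_train, y_train, batch_size=256, epochs=10)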

Do the operations defined in array ops in Tensorflow have gradient defined?

I want to know whether the TensorFlow operations in this link have a gradient defined. I am asking because I am implementing a custom loss function, and when I run it I always get this error:
ValueError: An operation has `None` for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.
This is my custom loss function:
def calculate_additional_loss(y_true, y_pred):
    # additional loss
    x_decoded_normalized = original_dim * y_pred
    #y_true = K.print_tensor(y_true, message='y_true = ')
    #y_pred = K.print_tensor(y_pred, message='y_pred = ')
    error = tf.constant(0, dtype=tf.float32)
    additional_loss = tf.constant(0, dtype=tf.float32)
    final_loss = tf.constant(0, dtype=tf.float32)
    for k in range(batch_size):
        # add padding
        reshaped_elem_1 = K.reshape(x_decoded_normalized[k], [DIM, DIM])
        a = K.reshape(reshaped_elem_1[:, DIM - 1], [DIM, 1])
        b = K.reshape(reshaped_elem_1[:, 1], [DIM, 1])
        reshaped_elem_1 = tf.concat([b, reshaped_elem_1], axis=1)
        reshaped_elem_1 = tf.concat([reshaped_elem_1, a], axis=1)
        c = K.reshape(reshaped_elem_1[DIM - 1, :], [1, DIM + 2])
        d = K.reshape(reshaped_elem_1[1, :], [1, DIM + 2])
        reshaped_elem_1 = tf.concat([d, reshaped_elem_1], axis=0)
        reshaped_elem_1 = tf.concat([reshaped_elem_1, c], axis=0)
        for (i, j) in range(reshaped_elem_1.shape[0], reshaped_elem_1.shape[1]):
            error = tf.add(error,
                           tf.pow((reshaped_elem_1[i, j] - reshaped_elem_1[i, j + 1]), -2),
                           tf.pow((reshaped_elem_1[i, j] - reshaped_elem_1[i, j - 1]), -2),
                           tf.pow((reshaped_elem_1[i, j] - reshaped_elem_1[i - 1, j]), -2),
                           tf.pow((reshaped_elem_1[i, j] - reshaped_elem_1[i + 1, j]), -2))
        additional_loss = tf.add(additional_loss, tf.divide(error, original_dim))
        final_loss += tf.divide(additional_loss, batch_size)
    print('final_loss', final_loss)
    return final_loss
And this is where I am calling it:
models = (encoder, decoder)
additional_loss = calculate_additional_loss(inputs,outputs)
vae.add_loss(additional_loss)
vae.compile(optimizer='adam')
vae.summary()
plot_model(vae,to_file='vae_mlp.png',show_shapes=True)
vae.fit(x_train, epochs=epochs, batch_size=batch_size, validation_data=(x_test, None), verbose=1, callbacks=[CustomMetrics()])
Thank you in advance.
Most ops have a defined gradient. There are some ops for which a gradient is not defined, and the error message you get gives you some examples.
Having said that, there are a couple of mistakes I see in your code:
final_loss is defined as a tf.constant, but you are trying to increment it.
You are trying to unpack a tuple from range, but range yields single integers, so for (i, j) in range(...) will fail.
error is defined as a tf.constant, but you are trying to increment it.
Don't use a Python for loop over batch_size in this way; it just proliferates graph nodes. Instead, use TensorFlow functions to handle the batch dimension directly (see the sketch below).
The way you have written your code makes me think that you're thinking of TensorFlow as pure Python. It is not. You define the graph and then you execute it inside a session. So, inside the function, use TF functions to just define the computations.
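To illustrate the last point about avoiding the Python loop, here is a hedged sketch of a vectorized version. It uses tf.pad with REFLECT mode as a stand-in for the manual concat-based padding and keeps the original -2 exponent, so treat it as a shape-level illustration rather than an exact reimplementation:
def calculate_additional_loss_vectorized(y_true, y_pred):
    # Reshape the whole batch at once: (batch_size, DIM, DIM).
    x = tf.reshape(original_dim * y_pred, [-1, DIM, DIM])
    # One-pixel border around each image, analogous to the manual padding.
    padded = tf.pad(x, [[0, 0], [1, 1], [1, 1]], mode='REFLECT')
    center = padded[:, 1:-1, 1:-1]
    # Terms for the four neighbours, for every pixel and every example at once.
    neighbour_terms = (tf.pow(center - padded[:, 1:-1, 2:], -2) +
                       tf.pow(center - padded[:, 1:-1, :-2], -2) +
                       tf.pow(center - padded[:, 2:, 1:-1], -2) +
                       tf.pow(center - padded[:, :-2, 1:-1], -2))
    per_example = tf.reduce_sum(neighbour_terms, axis=[1, 2]) / original_dim
    # The mean over the batch replaces the explicit division by batch_size.
    return tf.reduce_mean(per_example)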