Using a patch from a larger image as input to a Keras CNN gives error 'Tensor' object has no attribute '_keras_history' - tensorflow

I am trying to create a CNN with keras to process 20x20 patches from a larger image of 600x600.
When I attempt to run the code below I receive the error AttributeError: 'Tensor' object has no attribute '_keras_history'.
The code below is only intended to look at the first 20x20 patch out of a total of 900; I am trying to get this working before attempting to loop through the entire input image.
I don't understand why it returns the error, since each layer is generated with a Keras layer and I haven't applied any other operations to the tensor.
I am using tensorflow 1.3 and keras 2.0.6.
from keras.layers import Input, Reshape, Convolution2D, Dense
from keras.models import Model

nb_filters = 16
input_image = Input(shape=(600, 600, 3))
Input_1R = Reshape((900, 20, 20, 3))(input_image)
conv1 = Convolution2D(nb_filters, (5, 5), activation='relu', padding='valid')(Input_1R[:, 0])
conv4 = Convolution2D(1, (6, 6), activation='hard_sigmoid', padding='same')(conv1)
dense6 = Dense(1)(conv4)
output_dense = dense6
model = Model(inputs=input_image, outputs=output_dense)

The error occurs because the slicing operation Input_1R[:,0] is not performed in a Keras layer.
You can wrap it into a Lambda layer:
sliced = Lambda(lambda x: x[:, 0])(Input_1R)
conv1 = Convolution2D(nb_filters, (5,5), activation='relu', padding='valid')(sliced)
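For reference, a minimal end-to-end sketch of the corrected model with the same shapes as in the question (the slicing now happens inside a Lambda layer, so the resulting tensor keeps its Keras history):

from keras.layers import Input, Reshape, Lambda, Convolution2D, Dense
from keras.models import Model

nb_filters = 16
input_image = Input(shape=(600, 600, 3))
patches = Reshape((900, 20, 20, 3))(input_image)
# Slice out the first 20x20 patch inside a Keras layer instead of indexing the tensor directly.
sliced = Lambda(lambda x: x[:, 0])(patches)
conv1 = Convolution2D(nb_filters, (5, 5), activation='relu', padding='valid')(sliced)
conv4 = Convolution2D(1, (6, 6), activation='hard_sigmoid', padding='same')(conv1)
dense6 = Dense(1)(conv4)
model = Model(inputs=input_image, outputs=dense6)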

Related

In TensorFlow 1 / Keras, how to see the value of a Tensor during training?

On my Keras model, I need to see the output of a hidden layer during training.
Here is what I have done:
net = Model(x, [y, hidden_layer])
Then I constructed a custom callback:
class CustomCallback(keras.callbacks.Callback):
    def on_batch_end(self):
        print(self.model.output[1])
But, when I run the training with:
net.train_on_batch(train_data)
I get:
ValueError: Error when checking model target: the list of Numpy arrays
that you are passing to your model is not the size the model expected.
Expected to see 2 array(s), but instead got the following list of 1
arrays:
Any idea?
Thanks
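One common workaround, shown here only as a sketch under assumptions (it is not an answer from this thread, and the names are illustrative): keep the trained model as Model(x, y), and compile a separate backend function that maps a concrete input batch to the hidden layer's output, then call it from the callback.

import keras
import keras.backend as K

class HiddenLayerLogger(keras.callbacks.Callback):
    def __init__(self, hidden_layer, sample_batch):
        super(HiddenLayerLogger, self).__init__()
        self.hidden_layer = hidden_layer   # symbolic output of the hidden layer
        self.sample_batch = sample_batch   # a concrete numpy batch of inputs

    def set_model(self, model):
        super(HiddenLayerLogger, self).set_model(model)
        # Backend function: real inputs in, hidden activations out.
        self.fetch_hidden = K.function([model.input], [self.hidden_layer])

    def on_batch_end(self, batch, logs=None):
        print(self.fetch_hidden([self.sample_batch])[0])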

K-Means of Tensorflow - Graph disconnected error

I am trying to write a function that runs KMeans on a dataset and outputs the cluster centroids. My aim is to use this in a custom keras layer, so I am using TensorFlow's implementation of KMeans that takes a tensor as the input dataset.
My problem, however, is that I can't make it work even as a standalone function. The problem comes from the fact that KMeans accepts a generator function that provides mini-batches instead of a plain tensor, but when I use a closure to do that, I get a graph disconnected error:
import tensorflow as tf  # version: 2.4.1
from tensorflow.compat.v1.estimator.experimental import KMeans

@tf.function
def KMeansCentroids(inputs, num_clusters, steps, use_mini_batch=False):
    # `inputs` is a 2D tensor
    def input_fn():
        # Each one of the lines below results in the same "Graph Disconnected" error.
        # The tuples aren't really needed, but are used to be consistent with the documentation.
        return (inputs, None)
        return (tf.data.Dataset.from_tensor_slices(inputs), None)
        return (tf.convert_to_tensor(inputs), None)
    kmeans = KMeans(
        num_clusters=num_clusters,
        use_mini_batch=use_mini_batch)
    kmeans.train(input_fn, steps=steps)  # This is where the error happens
    return kmeans.cluster_centers()

>>> x = tf.random.uniform((100, 2))
>>> c = KMeansCentroids(x, 5, 10)
The exact error is:
ValueError:
Tensor("strided_slice:0", shape=(), dtype=int32)
must be from the same graph as
Tensor("Equal:0", shape=(), dtype=bool)
(graphs are FuncGraph(name=KMeansCentroids, id=..) and <tensorflow.python.framework.ops.Graph object at ...>).
If I were to use a numpy dataset and convert to tensor inside the function, the code would work just fine.
Also, making input_fn() directly return tf.random.uniform((100, 2)) (ignoring the inputs argument) would again work. That's why I am guessing that TensorFlow doesn't support closures here, since it needs to build the computation graph at the beginning.
But I don't see how to work around that.
Could it be a version error due to KMeans being a compat.v1.experimental module?
Note that the documentation of KMeans states for the input_fn():
The function should construct and return one of the following:
A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below.
A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
The problem you're facing is about invoking a tensor outside the graph it was created in. Basically, when you call the .train function, a new graph is created, made up of the graph defined in input_fn and the graph defined in model_fn.
kmeans.train(input_fn, steps=steps)
All tensors created outside these functions are treated as outsiders and are not part of this new graph. That's why you get a graph disconnected error when trying to use an outsider tensor. To resolve this, you need to create the necessary tensors within those graphs.
import tensorflow as tf
from tensorflow.compat.v1.estimator.experimental import KMeans

@tf.function
def KMeansCentroids(num_clusters, steps, use_mini_batch=False):
    def input_fn(batch_size):
        # The input tensor is now created inside input_fn, i.e. inside the training graph.
        pinputs = tf.random.uniform((100, 2))
        dataset = tf.data.Dataset.from_tensor_slices(pinputs)
        dataset = dataset.shuffle(1000).repeat()
        return dataset.batch(batch_size)
    kmeans = KMeans(
        num_clusters=num_clusters,
        use_mini_batch=use_mini_batch)
    kmeans.train(input_fn=lambda: input_fn(5),
                 steps=steps)
    return kmeans.cluster_centers()

c = KMeansCentroids(5, 10)
Here is some more info for reading. FYI, I tested your code with a few versions of TF > 2, and I don't think the issue is related to a version error.
Re-mentioning here for future readers, alternatives for using KMeans within Keras layers: tf_kmeans.py, ClusteringLayer.

tensorflow2: keras: model.fit() callbacks and eager mode

I am running TensorFlow 2.1 with the Keras API, and I follow this coding style:
model = tf.keras.Sequential()
...
model.fit(..., callbacks=callbacks)
Now, I would like to save some intermediate layer tensor value as an image summary (as a sample of what is happening at the n-th training step). In order to do this, I've implemented my own callback class. I've also learned how keras.callbacks.TensorBoard is implemented, since it can save layer weights as image summaries.
I do the following in my on_epoch_end:
tensor = self.model.get_layer(layer_name).output
with context.eager_mode():
    with ops.init_scope():
        tensor = tf.keras.backend.get_value(tensor)
        tf.summary.image(layer_name, tensor, step=step, max_outputs=1)
Unfortunately, I am still getting issue related to eager/graph modes:
tensor = tf.keras.backend.get_value(tensor)
File "/home/matwey/lab/venv/lib/python3.6/site-packages/tensorflow_core/python/keras/backend.py", line 3241, in get_value
return x.numpy()
AttributeError: 'Tensor' object has no attribute 'numpy'
Unfortunately, there is little to no documentation on how to correctly combine Keras callbacks and tf.summary.image. How could I overcome this issue?
Update: tf_nightly-2.2.0.dev20200427 has the same behaviour.
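One approach that works in TF 2.x, sketched here with assumed names (log_dir, layer_name and sample_batch are placeholders, not from the question): evaluate the intermediate layer on a real batch via a sub-model instead of reading the symbolic graph tensor, and write the result with a summary file writer.

import tensorflow as tf

class ActivationImageSummary(tf.keras.callbacks.Callback):
    def __init__(self, log_dir, layer_name, sample_batch):
        super().__init__()
        self.writer = tf.summary.create_file_writer(log_dir)
        self.layer_name = layer_name
        self.sample_batch = sample_batch  # a concrete batch of inputs, e.g. from validation data

    def on_epoch_end(self, epoch, logs=None):
        # Run the intermediate layer on real data; the symbolic layer output
        # has no .numpy() because it is a graph tensor, hence the error above.
        sub_model = tf.keras.Model(self.model.inputs,
                                   self.model.get_layer(self.layer_name).output)
        activations = sub_model(self.sample_batch, training=False)
        with self.writer.as_default():
            # tf.summary.image expects a 4D batch with 1, 3, or 4 channels.
            tf.summary.image(self.layer_name, activations, step=epoch, max_outputs=1)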

Tensorflow - h5 model to tflite conversion error

I've done transfer learning using a pre-trained InceptionV3 model, and I saved the h5 model file. After that, I am able to make predictions.
Now, I want to convert the h5 model to tflite file, using TFLiteConverter.convert() method, like this:
converter = lite.TFLiteConverter.from_keras_model_file('keras.model.h5')
tflite_model = converter.convert()
but I get this error:
File "from_saved_model.py", line 28, in <module>
tflite_model = converter.convert()
File "C:\Anaconda3\lib\site-packages\tensorflow\contrib\lite\python\lite.py", line 409, in convert
"invalid shape '{1}'.".format(_tensor_name(tensor), shape))
ValueError: None is only supported in the 1st dimension. Tensor 'input_1' has invalid shape '[None, None, None, 3]'
I am running Anaconda Python 3.6.8 on Windows 10 64 bits. Thank you in advance for your help!
Only the batch size (index 0) is allowed to be None when converting the model from TensorFlow to TensorFlow Lite. You should be able to use the input_shapes argument when calling from_keras_model_file to get the input array shape to be valid. For an InceptionV3 model, the input_shapes argument is often {'Mul' : [1,299,299,3]}.
The documentation for TFLiteConverter.from_keras_model_file is available here. The accepted parameters are as follows (copied from the documentation):
from_keras_model_file(
    cls,
    model_file,
    input_arrays=None,
    input_shapes=None,
    output_arrays=None
)
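Applied to the snippet from the question, a sketch (the input array name 'input_1' is taken from the error message, and 299x299 is the usual InceptionV3 input size; adjust both to your model):

from tensorflow.contrib import lite  # TF 1.x, matching the paths in the traceback

converter = lite.TFLiteConverter.from_keras_model_file(
    'keras.model.h5',
    input_shapes={'input_1': [1, 299, 299, 3]})
tflite_model = converter.convert()
with open('output_model.tflite', 'wb') as f:
    f.write(tflite_model)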
1. Load the keras.model.h5.
2. Set the input_shape, avoiding [None, None, None, 3].
3. Save it as a new model.
4. Convert it using the code you posted in the question (steps 1-3 are sketched below).
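A rough sketch of those steps (the 299x299x3 shape and the file names are assumptions; use whatever your model expects):

from tensorflow import keras  # or `import keras`, depending on how the model was built

# 1. Load the model that was saved with a [None, None, None, 3] input.
model = keras.models.load_model('keras.model.h5')

# 2. Rebuild it on an input with a concrete height, width and channel count;
#    only the batch dimension is allowed to stay None.
fixed_input = keras.layers.Input(shape=(299, 299, 3))
fixed_model = keras.models.Model(fixed_input, model(fixed_input))

# 3. Save the fixed-shape model, then convert it exactly as in the question.
fixed_model.save('keras.model.fixed.h5')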
The batch_size is the only dimension that can be given as None. The first dimension in the input_shape is the batch_size, the second and third dimensions indicate the input size of the image, while the last one indicates the number of channels (RGB).
To avoid the error you get, specify the dimensions beforehand.
This can be achieved using toco (a tool which directly converts the acquired keras model into .tflite without converting it first to a .pb model and then to a .tflite model).
Using the input_shape argument in toco, you can specify the dimensions of the input_shape of your keras model.
Install toco for Python and then run the following command:
toco --output_file=output_model.tflite --keras_model_file=keras.model.h5 --input_arrays=input_1 --input_shape=1,299,299,3
Here the batch_size dimension might differ according to your model. As for the input size dimensions, 299x299 is the default input size for InceptionV3 models.

No shape error in tensorflow graph construction but getting shape mismatch error during graph computation

No error occurs during TensorFlow graph construction, but I get a shape mismatch error during graph computation in tf.gradients (I guess the error is in backpropagation).
This is the error I get:
InvalidArgumentError (see above for traceback):
Input to reshape is a tensor with 16777216 values, but the requested shape has 4096
[[Node: gradients/truediv_grad/Reshape = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _device="/job:localhost/replica:0/task:0/device:GPU:0"](gradients/truediv_grad/Sum, gradients/truediv_grad/Shape)]]
I solved the issue using two techniques:
1. Apparently, if you are creating custom ops and gradients, you need to be very explicit in providing the shape information to TensorFlow, using set_shape or tf.reshape.
2. When registering your gradient using tf.RegisterGradient, which takes op and grad as inputs, you need to be careful while chaining the gradients, i.e. dy/dx = dy/dz * dz/dx.
Say dy/dz is the custom gradient we have created and dz/dx is the gradient of the previous ops, as per the chain rule of differentiation.
@tf.RegisterGradient("Mygrad")
def Mygrad(op, grad):
    # ... do stuff with op.inputs and calculate the custom gradient, say cust_grad (dy/dz) ...
    return cust_grad * grad
I changed this to the following:
@tf.RegisterGradient("Mygrad")
def Mygrad(op, grad):
    # ... do stuff with op.inputs and calculate the custom gradient, say cust_grad (dy/dz) ...
    return tf.matmul(tf.reshape(cust_grad, [calculated_shape]), tf.reshape(grad, expected_shape))
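For completeness, a minimal TF 1.x sketch (the op and names here are illustrative, not taken from the original answer) of how a gradient registered with tf.RegisterGradient gets attached to an op through gradient_override_map and then flows through tf.gradients:

import tensorflow as tf  # TF 1.x (or tf.compat.v1 with eager execution disabled)

@tf.RegisterGradient("CustomIdentityGrad")
def _custom_identity_grad(op, grad):
    # cust_grad (dy/dz) must be shaped so that chaining with grad (dz/dx) is valid.
    cust_grad = tf.ones_like(op.inputs[0])
    return cust_grad * grad

g = tf.Graph()
with g.as_default():
    x = tf.placeholder(tf.float32, shape=[4, 4])
    # Route the Identity op through the custom gradient registered above.
    with g.gradient_override_map({"Identity": "CustomIdentityGrad"}):
        y = tf.identity(x)
    dy_dx = tf.gradients(y, x)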