Keras layer channel-wise multiplication of scalar and graph plotting - tensorflow

I am trying to multiply each channel of a tensor by its own scalar value:
import tensorflow as tf
t = tf.ones([2,3,3,4])
w = tf.constant([1,2,3,4], dtype=tf.float32)
tf.multiply(t,w)
yields
<tf.Tensor: shape=(2, 3, 3, 4), dtype=float32, numpy=
array([[[[1., 2., 3., 4.],
         [1., 2., 3., 4.],
         [1., 2., 3., 4.]],
        ...
which is correct.
Now I am trying to wrap that operation inside a keras.layers.Layer, where w is a learnable parameter. I also want to plot the model using tf.keras.utils.plot_model(m). I run into several problems.
Method 1
from tensorflow.keras import Model, Input

class MyModel(Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.w = self.add_weight(shape=(256,), trainable=True)

    def call(self, x):
        return x * self.w
I plot this model using
mm = MyModel()
x = Input(shape=(64, 64, 256), batch_size=10, name='Input')
m = Model(inputs=[x], outputs=mm.call(x))
tf.keras.utils.plot_model(m)
Problem: I encountered the following warning:
WARNING:tensorflow:
The following Variables were used a Lambda layer's call (tf.math.multiply_2), but
are not present in its tracked objects:
<tf.Variable 'Variable:0' shape=(256,) dtype=float32>
It is possible that this is intended behavior, but it is more likely
an omission. This is a strong indication that this layer should be
formulated as a subclassed Layer rather than a Lambda layer.
Question: Can I safely ignore the warning, and will the weights still be learned? If yes, how can I suppress this warning?
Method 2
As suggested in the warning, I wrap the multiplication in its own subclassed layer:
from tensorflow.keras.layers import Layer

class MyMultiply(Layer):
    def __init__(self):
        super(MyMultiply, self).__init__()

    def call(self, x):
        return tf.multiply(x[0], x[1])

class MyModel(Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.w = self.add_weight(shape=(256,), trainable=True)
        self.mul = MyMultiply()

    def call(self, x):
        return self.mul([x, self.w])
Problem: This works until the model is plotted. Then I encounter the following error: AttributeError: 'ResourceVariable' object has no attribute '_keras_history'
Traceback:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-18-e4cc5cc97726> in <module>()
21 x = Input(shape=(64, 64, 256), batch_size=10, name='Input')
22 m = Model(inputs=[x], outputs=mm.call(x))
---> 23 tf.keras.utils.plot_model(m)
---------------------------------------------------------------------------
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/node.py in <lambda>(t)
259 if self.is_input:
260 return []
--> 261 inbound_layers = nest.map_structure(lambda t: t._keras_history.layer,
262 self.call_args[0])
263 return inbound_layers
AttributeError: 'ResourceVariable' object has no attribute '_keras_history'
Question: How do I resolve that error? Is this a bug (I submitted an issue to the tf github repo, however it was deleted immediately)?
Method 3
I try to use keras.layers.Multiply instead:
from tensorflow.keras.layers import Multiply

class MyModel(Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.w = self.add_weight(shape=(256,), trainable=True)
        self.mul = Multiply()

    def call(self, x):
        return self.mul([x, self.w])
Problem: ValueError: Can not merge tensors with different batch sizes. Got tensors with shapes : [(10, 64, 64, 256), (256,)]
To my understanding, the ValueError occurs because the internal _Merge layer checks for equal batch sizes. The Multiply layer itself, however, implements the multiplication with broadcasting (which should work!):
# from tensorflow/python/keras/layers/merge.py, lines 316-320
def _merge_function(self, inputs):
    output = inputs[0]
    for i in range(1, len(inputs)):
        output = output * inputs[i]
    return output
I could use tf.broadcast_to and the like; however, to my understanding that would materialize the tensor and occupy more memory, which I am trying to avoid.
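As an aside (my own illustration, using the t and w from the opening example, not part of the original post): reshaping w to rank 4 keeps the broadcast lazy, with no full-size copy of w being created.

w4 = tf.reshape(w, (1, 1, 1, -1))  # still only 4 floats in memory
tf.debugging.assert_near(t * w4, tf.multiply(t, w))  # identical result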
Question: Is there another way to make keras.layers.Multiply work, so ultimately the model plotting works?

You can avoid the warning in Method 1 by creating a Keras Layer instead of a Model.
import tensorflow as tf

class MyLayer(tf.keras.layers.Layer):
    def __init__(self):
        super(MyLayer, self).__init__()
        self.w = self.add_weight(name='multiply_weight', shape=(256,), trainable=True)

    def call(self, x):
        return tf.multiply(x, self.w)

mul_layer = MyLayer()
x = tf.keras.Input(shape=(64, 64, 256), batch_size=10, name='Input')
output = mul_layer(x)
m = tf.keras.Model(inputs=[x], outputs=output)
tf.keras.utils.plot_model(m)
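As a quick sanity check (my own addition, not part of the original answer), you can confirm that the weight is tracked and trainable:

print(m.trainable_weights)  # should list 'multiply_weight' with shape (256,)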

Related

Custom layer in tensorflow to output the running maximum of its inputs

I am trying to create a custom layer in tensorflow to output the running maximum of its inputs. The layer has a memory variable and a comparison function. I wrote the following:
class ComputeMax(tf.keras.layers.Layer):
    def __init__(self):
        super(ComputeMax, self).__init__()

    def build(self, input_shape):
        self.maxval = tf.Variable(initial_value=tf.zeros((input_shape)),
                                  trainable=False)

    def call(self, inputs):
        self.maxval.assign(tf.maximum(inputs, self.maxval))
        return self.maxval

my_sum = ComputeMax()
x = tf.ones((1, 2))
y = my_sum(x)
print(y.numpy())  # [1, 1]
y = my_sum(x)
print(y.numpy())  # [1, 1]
It works as above. When I try it in a test model:
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()
model.add(tf.keras.Input(shape=(2,)))
model.add(Dense(1, activation='relu'))
model.add(ComputeMax())
model.compile(optimizer='adam', loss='mse')
I get the error on compile:
ValueError: Cannot convert a partially known TensorShape to a Tensor: (None, 1)
What am I missing?
Actually, the layer needs to know the number of input units coming from the previous layer, which is the last value in input_shape. You are using input_shape as a whole, which includes the batch dimension, so the variable ends up with the shape of the whole batch.
This implementation might help.
class ComputeMax(tf.keras.layers.Layer):
    def __init__(self):
        super(ComputeMax, self).__init__()

    def build(self, input_shape):
        self.maxval = tf.Variable(initial_value=tf.zeros((input_shape[-1],)),
                                  trainable=False)

    def call(self, inputs):
        self.maxval.assign(tf.maximum(inputs, self.maxval))
        return self.maxval
But it probably still won't give the answers you want for every input; see the sketch below.
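In particular, for batches larger than one the assign will still fail, because the broadcasted maximum keeps the batch axis while the variable is 1-D. A hedged sketch of my own, assuming a per-feature running maximum is what is wanted:

class ComputeMax(tf.keras.layers.Layer):
    def build(self, input_shape):
        # one running maximum per feature (last axis)
        self.maxval = tf.Variable(tf.zeros(input_shape[-1]), trainable=False)

    def call(self, inputs):
        # collapse the batch axis first so the assign matches the (features,) variable
        batch_max = tf.reduce_max(inputs, axis=0)
        self.maxval.assign(tf.maximum(batch_max, self.maxval))
        return self.maxval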

I want to use keras layers within my custom layer, but I am unable to return the output of the layer as a tensor instead of an object

The error shown is:
Failed to convert object of type <class 'tensorflow.python.keras.layers.pooling.MaxPooling2D'> to Tensor.
I have tried many things, but I am unable to resolve this error.
class Mixed_pooling():
    def __init__(self, **kwargs):
        super(Mixed_pooling, self).__init__(**kwargs)

    def build(self, input_shape):
        self.alpha = self.add_weight(
            name='alpha', shape=(1,),
            initializer='random_normal',
            trainable=True
        )
        super(Mixed_pooling, self).build(input_shape)

    def call(self, x):
        x1 = tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='VALID')
        x2 = tf.keras.layers.AveragePooling2D(pool_size=(2, 2), strides=(2, 2), padding='VALID')
        outputs = tf.add(tf.multiply(x1, self.alpha), tf.multiply(x2, (1 - self.alpha)))
        return outputs
Providing the solution here (Answer Section) even though it is present in the Comment Section (Thanks to Slowpoke), for the benefit of the community.
As tf.keras.layers.MaxPooling2D() and tf.keras.layers.AveragePooling2D() are layer classes, you need to instantiate the objects in the build function and then apply them to the input in the call function.
Modified Code -
import tensorflow as tf

class Mixed_pooling(tf.keras.layers.Layer):  # subclassing Layer is needed so build()/call() are wired in
    def __init__(self, **kwargs):
        super(Mixed_pooling, self).__init__(**kwargs)

    def build(self, input_shape):
        self.alpha = self.add_weight(
            name='alpha', shape=(1,),
            initializer='random_normal',
            trainable=True
        )
        self.maxpool = tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='VALID')
        self.avgpool = tf.keras.layers.AveragePooling2D(pool_size=(2, 2), strides=(2, 2), padding='VALID')
        super(Mixed_pooling, self).build(input_shape)

    def call(self, x):
        x1 = self.maxpool(x)
        x2 = self.avgpool(x)
        outputs = tf.add(tf.multiply(x1, self.alpha), tf.multiply(x2, (1 - self.alpha)))
        return outputs

layer1 = Mixed_pooling()
print(layer1)
Output -
<__main__.Mixed_pooling object at 0x7fce31e46550>
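A quick smoke test of my own (assuming the class subclasses tf.keras.layers.Layer, as above):

x = tf.random.normal((2, 8, 8, 3))
y = layer1(x)
print(y.shape)  # (2, 4, 4, 3) after 2x2 pooling with stride 2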
Hope this answers your question. Happy Learning.

Tensorflow asking to run the build even though it is done

As always, TensorFlow, the weird, dumb framework, is going unintuitively haywire on me. Can someone please be kind enough to help me out with this? I am able to run the checkpointing tutorial as given on the tutorial page (how much of a mess can saving a model be? Leave it to TensorFlow to make a mountain out of a molehill), but dare I make a little modification here, a little modification there, and the sticks-and-stones contraption called TensorFlow comes crumbling down.
As you can clearly see, I am running the build method, yet I am getting the error that I must run build with an input shape. In the tutorial there is no build method at all, and the one layer self.l1 is built in __init__ itself, which they themselves advise against in several other places.
class Net(tf.keras.Model):
    """A simple linear model."""

    def __init__(self):
        super(Net, self).__init__()
        # self.l1 = tf.keras.layers.Dense(5)

    def build(self, input_shape):
        self.l1 = tf.keras.layers.Dense(5)
        self.dummy = tf.Variable(trainable=True,
                                 initial_value=tf.keras.initializers.glorot_normal()(shape=input_shape, dtype=tf.float32))
        print('built layers')

    def call(self, x):
        return self.l1(x)

net = Net()
net.build([1,])
net.save_weights('easy_checkpoint')
The output and trace I am getting are:
built layers
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-31-3b54dc506ffd> in <module>
1 net = Net()
2 net.build([1,])
----> 3 net.save_weights('easy_checkpoint')
~/anaconda3/envs/tensorflow/lib/python3.7/site-packages/tensorflow/python/keras/engine/network.py in save_weights(self, filepath, overwrite, save_format)
1111 ValueError: For invalid/unknown format arguments.
1112 """
-> 1113 self._assert_weights_created()
1114 filepath_is_h5 = _is_hdf5_filepath(filepath)
1115 if save_format is None:
~/anaconda3/envs/tensorflow/lib/python3.7/site-packages/tensorflow/python/keras/engine/network.py in _assert_weights_created(self)
1560 'Weights are created when the Model is first called on '
1561 'inputs or `build()` is called with an `input_shape`.' %
-> 1562 self.name)
1563
1564 def _graph_network_add_loss(self, symbolic_loss):
ValueError: Weights for model net_10 have not yet been created. Weights are created when the Model is first called on inputs or `build()` is called with an `input_shape`.
Edit: Here is my hunch: the problem with my code is that build does not execute the build of self.l1, it just creates it. Things do work out fine if I add the creation of self.l1 in __init__ and call super().build(input_shape) as the first line in Net's build. That makes sense so far, but the code fails again if I replace super().build(input_shape) with self.l1.build(input_shape). Also, the code below shows that all the variables are actually there. So, I am lost again. Any help is much appreciated.
tf.random.set_seed(42)

class Net1(tf.keras.Model):
    """A simple linear model."""

    def __init__(self):
        super(Net1, self).__init__()
        self.l1 = tf.keras.layers.Dense(5)

    def build(self, input_shape):
        super().build(input_shape)
        self.dummy = tf.Variable(trainable=True,
                                 initial_value=tf.keras.initializers.glorot_normal()(shape=(1,), dtype=tf.float32))
        print(self.variables)

    def call(self, x):
        return self.l1(x)

net = Net1()
net.build((10, 1))
print('*' * 50)
print(net.variables)
output:
[<tf.Variable 'dense_56/kernel:0' shape=(1, 5) dtype=float32, numpy=
array([[ 0.3291242 , -0.11798644, -0.294235 , -0.07103491, -0.9326792 ]],
dtype=float32)>, <tf.Variable 'dense_56/bias:0' shape=(5,) dtype=float32, numpy=array([0., 0., 0., 0., 0.], dtype=float32)>, <tf.Variable 'Variable:0' shape=(1,) dtype=float32, numpy=array([0.09575049], dtype=float32)>]
**************************************************
[<tf.Variable 'dense_56/kernel:0' shape=(1, 5) dtype=float32, numpy=
array([[ 0.3291242 , -0.11798644, -0.294235 , -0.07103491, -0.9326792 ]],
dtype=float32)>, <tf.Variable 'dense_56/bias:0' shape=(5,) dtype=float32, numpy=array([0., 0., 0., 0., 0.], dtype=float32)>, <tf.Variable 'Variable:0' shape=(1,) dtype=float32, numpy=array([0.09575049], dtype=float32)>]
whereas,
tf.random.set_seed(42)

class Net1(tf.keras.Model):
    """A simple linear model."""

    def __init__(self):
        super(Net1, self).__init__()
        self.l1 = tf.keras.layers.Dense(5)

    def build(self, input_shape):
        self.l1.build(input_shape)
        self.dummy = tf.Variable(trainable=True,
                                 initial_value=tf.keras.initializers.glorot_normal()(shape=(1,), dtype=tf.float32))
        print('variables', self.l1.variables, self.dummy)

    def call(self, x):
        return self.l1(x)

net = Net1()
net.build((10, 1))
print(net.variables)
output:
variables [<tf.Variable 'kernel:0' shape=(1, 5) dtype=float32, numpy=
array([[ 0.3291242 , -0.11798644, -0.294235 , -0.07103491, -0.9326792 ]],
dtype=float32)>, <tf.Variable 'bias:0' shape=(5,) dtype=float32, numpy=array([0., 0., 0., 0., 0.], dtype=float32)>] <tf.Variable 'Variable:0' shape=(1,) dtype=float32, numpy=array([0.09575049], dtype=float32)>
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-77-35561efcdc2f> in <module>
15 net = Net1()
16 net.build((10,1))
---> 17 print(net.variables)
~/anaconda3/envs/tensorflow/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in variables(self)
1965 A list of variables.
1966 """
-> 1967 return self.weights
1968
1969 @property
~/anaconda3/envs/tensorflow/lib/python3.7/site-packages/tensorflow/python/keras/engine/network.py in weights(self)
498 A list of variables.
499 """
--> 500 return self._dedup_weights(self._undeduplicated_weights)
501
502 @property
~/anaconda3/envs/tensorflow/lib/python3.7/site-packages/tensorflow/python/keras/engine/network.py in _undeduplicated_weights(self)
503 def _undeduplicated_weights(self):
504 """Returns the undeduplicated list of all layer variables/weights."""
--> 505 self._assert_weights_created()
506 weights = []
507 for layer in self._layers:
~/anaconda3/envs/tensorflow/lib/python3.7/site-packages/tensorflow/python/keras/engine/network.py in _assert_weights_created(self)
1560 'Weights are created when the Model is first called on '
1561 'inputs or `build()` is called with an `input_shape`.' %
-> 1562 self.name)
1563
1564 def _graph_network_add_loss(self, symbolic_loss):
ValueError: Weights for model net1_40 have not yet been created. Weights are created when the Model is first called on inputs or `build()` is called with an `input_shape`.
TL;DR: This is not a problem with the save_weights method. In order to build a subclassed model, you need to run the model on real input. I only added two lines to the end of your code, as shown below.
#net.build(input_shape=[1,]) # don't need it. When you call the model with real input, `build` method will be executed
x_train = tf.random.normal(shape=(100,1),dtype=tf.float32)
output=net.predict(x_train)
Please check below for more details.
import tensorflow as tf

class Net(tf.keras.Model):
    """A simple linear model."""

    def __init__(self):
        super(Net, self).__init__()
        # self.l1 = tf.keras.layers.Dense(5)

    def build(self, input_shape):
        self.l1 = tf.keras.layers.Dense(5)
        self.dummy = tf.Variable(trainable=True,
                                 initial_value=tf.keras.initializers.glorot_normal()(shape=(1,), dtype=tf.float32))
        print('built layers')

    def call(self, x):
        return self.l1(x)

net = Net()
# net.build(input_shape=[1,])  # not needed: build() runs when the model is called on real input
x_train = tf.random.normal(shape=(100, 1), dtype=tf.float32)
output = net.predict(x_train)
net.save_weights('easy_checkpoint')
A subclassed model is a piece of Python code (a call method). There is no graph of layers here. We cannot know how layers are connected to each other (because that is defined in the body of call, not as an explicit data structure), so we cannot infer input/output shapes. You can try printing model.summary() after instantiating the subclassed model; it will throw the same error you reported.
In contrast to subclassed models, you can do all these things (printing the summary, input/output shapes) in a Functional or Sequential model, because these models are static graphs of layers.
With that simple modification, your code is working as expected. I can print the weights, shapes etc., and can save weights also.
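For comparison, here is a minimal Functional sketch of my own (not from the original answer), where the static graph of layers makes shapes known upfront:

inputs = tf.keras.Input(shape=(1,))
outputs = tf.keras.layers.Dense(5)(inputs)
fn_model = tf.keras.Model(inputs, outputs)
fn_model.summary()  # works immediately: the layer graph and shapes are static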

How to loop over batch_size in keras custom layer

I want to create a custom layer that takes, in __init__, an internal tensor and a custom dot function, so that for a given batch it computes the dot function over all possible pairs formed from the batch and the internal tensor.
If I were to use the natural inner product, I could directly write tf.matmul(inputs, self.internal_tensor, transpose_b=True), but I want to be able to supply other kernel functions.
MWE:
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Layer

class CustomLayer(Layer):
    def __init__(self, internal_tensor, kernel, **kwargs):
        super().__init__(**kwargs)
        self.internal_tensor = tf.Variable(0., shape=tf.TensorShape((None, 10)), validate_shape=False, name='internal_tensor')
        self.internal_tensor.assign(internal_tensor)
        self.kernel = kernel

    @tf.function
    def call(self, inputs, **kwargs):
        return self.kernel([
            tf.reshape(tf.tile(inputs, [1, self.internal_tensor.shape[0]]), [-1, inputs.shape[1]]),  # because no tf.repeat
            tf.tile(self.internal_tensor, [inputs.shape[0], 1]),
        ])

custom_layer = CustomLayer(
    internal_tensor=tf.convert_to_tensor(np.random.rand(30, 10), tf.float32),
    kernel=lambda inputs: inputs[0] + inputs[1],
)
x = np.random.rand(15, 10).astype(np.float32)
custom_layer(x)
# TypeError: Failed to convert object of type <class 'list'> to Tensor. Contents: [1, None]. Consider casting elements to a supported type.
For the sake of clarity, here is the target working layer in Numpy:
class NumpyLayer:
    def __init__(self, internal_tensor, kernel):
        self.internal_tensor = internal_tensor
        self.kernel = kernel

    def __call__(self, inputs):
        return self.kernel([
            np.repeat(inputs, len(self.internal_tensor), axis=0),
            np.tile(self.internal_tensor, (len(inputs), 1)),
        ])

internal_tensor = np.random.rand(30, 10)
numpy_layer = NumpyLayer(
    internal_tensor=internal_tensor,
    kernel=lambda inputs: inputs[0] + inputs[1],
)
numpy_layer(x)
So all the trouble came from using tf.Tensor.shape (the static shape) instead of tf.shape(tf.Tensor) (the dynamic shape).
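The difference is easy to see in isolation (my own illustration, not from the original answer): the static .shape attribute can contain None at trace time, while tf.shape() is evaluated with concrete values at run time.

@tf.function(input_signature=[tf.TensorSpec(shape=(None, 10), dtype=tf.float32)])
def f(x):
    print('static:', x.shape)          # (None, 10) -- batch unknown when tracing
    tf.print('dynamic:', tf.shape(x))  # [3 10] -- actual shape at run time

f(tf.ones((3, 10)))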
Here is a working solution:
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Layer

class CustomLayer(Layer):
    def __init__(self, internal_tensor, kernel, **kwargs):
        super().__init__(**kwargs)
        self.internal_tensor = tf.Variable(0., shape=tf.TensorShape((None, None)), validate_shape=False, name='internal_tensor')
        self.internal_tensor.assign(internal_tensor)
        self.kernel = kernel

    @tf.function
    def call(self, inputs, **kwargs):
        batch_size = tf.shape(inputs)[0]
        return self.kernel([
            tf.reshape(tf.tile(inputs, [1, tf.shape(self.internal_tensor)[0]]), [-1, inputs.shape[1]]),  # because no tf.repeat
            tf.tile(self.internal_tensor, [batch_size, 1]),
        ])

internal_tensor = np.random.rand(30, 10)
custom_layer = CustomLayer(
    internal_tensor=tf.convert_to_tensor(internal_tensor, tf.float32),
    kernel=lambda inputs: inputs[0] + inputs[1],
)
x = np.random.rand(10, 10).astype(np.float32)
custom_layer(x)
though there is still a warning:
WARNING:tensorflow:Entity <bound method CustomLayer.call of <tensorflow.python.eager.function.TfMethodTarget object at 0x7f8e7e2d8400>> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method CustomLayer.call of <tensorflow.python.eager.function.TfMethodTarget object at 0x7f8e7e2d8400>>: ValueError: Unable to locate the source code of <bound method CustomLayer.call of <tensorflow.python.eager.function.TfMethodTarget object at 0x7f8e7e2d8400>>. Note that functions defined in certain environments, like the interactive Python shell do not expose their source code. If that is the case, you should to define them in a .py source file. If you are certain the code is graph-compatible, wrap the call using @tf.autograph.do_not_convert. Original error: could not get source code

Tensorflow compute_output_shape() Not Working For Custom Layer

I have created a custom layer (called GraphGather) in Keras, yet the output tensor prints as:
Tensor("graph_gather/Tanh:0", shape=(?, ?), dtype=float32)
For some reason the shape is being returned as (?,?), which is causing the next dense layer to raise the following error:
ValueError: The last dimension of the inputs to Dense should be defined. Found None.
The GraphGather layer code is as follows:
class GraphGather(tf.keras.layers.Layer):
    def __init__(self, batch_size, num_mols_in_batch, activation_fn=None, **kwargs):
        self.batch_size = batch_size
        self.num_mols_in_batch = num_mols_in_batch
        self.activation_fn = activation_fn
        super(GraphGather, self).__init__(**kwargs)

    def build(self, input_shape):
        super(GraphGather, self).build(input_shape)

    def call(self, x, **kwargs):
        # some operations (most of call omitted)
        out_tensor = result_of_operations()  # this line is pseudocode
        if self.activation_fn is not None:
            out_tensor = self.activation_fn(out_tensor)
        return out_tensor

    def compute_output_shape(self, input_shape):
        return (self.num_mols_in_batch, 2 * input_shape[0][-1])
I have also tried hardcoding compute_output_shape to be:

def compute_output_shape(self, input_shape):
    return (64, 150)
Yet the output tensor when printed is still
Tensor("graph_gather/Tanh:0", shape=(?, ?), dtype=float32)
which causes the ValueError written above.
System information
- Have written custom code
- OS Platform and Distribution: Linux Ubuntu 16.04
- TensorFlow version: 1.5.0
- Python version: 3.5.5
I had the same problem. My workaround was to add the following lines to the call method:
input_shape = tf.shape(x)
and then:
return tf.reshape(out_tensor, self.compute_output_shape(input_shape))
I haven't run into any problems with it yet.
If Johnny's answer doesn't work, another way I found to get around this is to follow the advice at https://github.com/tensorflow/tensorflow/issues/38296#issuecomment-623698709, which is to call the set_shape method on the output of your layer.
E.g.
l = GraphGather(...)
y = l(x)
y.set_shape(l.compute_output_shape(x.shape))
This only works if you are using the functional API.
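A self-contained toy of my own (not the asker's GraphGather) showing the mechanics; note that set_shape behavior on symbolic Keras tensors can vary across TF versions:

class Doubler(tf.keras.layers.Layer):
    def call(self, x):
        # reshaping with a dynamic shape loses the static shape -> (None, None)
        return tf.reshape(x * 2.0, tf.shape(x))

    def compute_output_shape(self, input_shape):
        return input_shape

inp = tf.keras.Input(shape=(150,))
layer = Doubler()
out = layer(inp)
print(out.shape)  # (None, None)
out.set_shape(layer.compute_output_shape(inp.shape))
print(out.shape)  # (None, 150) -- downstream Dense layers now see a defined last dim
model = tf.keras.Model(inp, out)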