How to save tf.Variables for use as configuration parameters?

So I’m working on a perceptual experiment for which I have designed a moderately complex stimulus that requires a good amount of computation to create. I had it implemented in Python; it works fine, but it’s slow to generate.
Quick brainstorm: hey, try implementing the stimulus computation in TensorFlow! A little bit of work paid off, cutting stimulus generation time in half (actually fast enough for real-time display), and the TensorFlow model proved to be a more compact design. Great!
But then I started dreaming. I’ve wanted to be able to experiment via an app, but didn’t want to redo all my code in Swift or Java or whatever the language du jour for mobile/web is. But if all the heavy-lifting code is embedded in a TensorFlow Lite model, then a small wrapper for iOS/Android/(JavaScript maybe) would be doable in a reasonable time frame.
Here’s where my question comes in. In the offline stimulus generation model, I configure my parameters and let it generate a video file, which can then be viewed by my subjects. If my theoretical app just takes a TensorFlow model instead of a video file, then I’m really just shortening download time. What I’d really like is to be able to adjust the stimulus parameters within the app, instead of guessing, generating, and uploading again.
So (and from here on I’m just winging it as far as my TF skills go) I turned my configuration parameters into tf.Variables, stuck them into the model, and voilà, I can now adjust my stimulus on the fly from within the Python CLI. Great! Now I just save the model…
Oops.
How does saving tf.Variables work? Here's a simple subset of my code to demonstrate the problem. Start with a layer that computes the sine of a temporal input, with a configurable phase, frequency, and amplitude:
import numpy as np
import tensorflow as tf

class Sine(tf.keras.layers.Layer):
    def __init__(self, *args, **kwargs):
        super(Sine, self).__init__(*args, **kwargs)
        self._twopi = tf.constant(np.pi * 2.0)

    def call(self, parameters):
        time = parameters[0]
        scale = parameters[1]
        frequency = parameters[2]
        base = parameters[3]
        phase = parameters[4]
        # def call(self, time, scale, frequency, base, phase):
        time = tf.cast(time, tf.float32)
        return scale * tf.sin(self._twopi * frequency * time + phase) + base
Note here that I tried compressing the five arguments into one list to see what that would do.
Here's a stupid model that uses it:
class StupidModel:
    def __init__(self, frequency, amplitude, base, phase):
        self._frequency = tf.Variable(frequency, name="frequency", dtype=tf.float32)
        self._amplitude = tf.Variable(amplitude, name="amplitude", dtype=tf.float32)
        self._base = tf.Variable(base, name="base", dtype=tf.float32)
        self._phase = tf.Variable(phase, name="phase", dtype=tf.float32)
        self._model = self._build_model()

    def _build_model(self):
        input = tf.keras.layers.Input(1)
        out = Sine()([input, self._frequency, self._amplitude, self._base, self._phase])
        model = tf.keras.Model(inputs=[input], outputs=out)
        model._myfrequency = self._frequency
        model._myamplitude = self._amplitude
        model._mybase = self._base
        model._myphase = self._phase
        return model

    def __call__(self, time):
        return self._model.predict(time)
And here's what happens when I try to save it:
>>> sm._model.save("foo.tf")
WARNING:tensorflow:Compiled the loaded model, but the compiled metrics have yet to be built. `model.compile_metrics` will be empty until you train or evaluate the model.
INFO:tensorflow:Assets written to: foo.tf/assets
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/j/.miniforge3/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/Users/j/.miniforge3/lib/python3.9/json/encoder.py", line 199, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "/Users/j/.miniforge3/lib/python3.9/json/encoder.py", line 257, in iterencode
    return _iterencode(o, 0)
TypeError: Unable to serialize <tf.Variable 'frequency:0' shape=() dtype=float32, numpy=0.5> to JSON. Unrecognized type <class 'tensorflow.python.ops.resource_variable_ops.ResourceVariable'>.
I believe that were I using TF 1.x, I'd be looking at placeholders and feed dicts. But what can I do now? I know what I'm doing is probably outside normal TensorFlow usage (no training, in-flight tuning, and so on), but it works so well right up until saving...

Further exploration has yielded fruit. It seems complicated, and I shudder to think of what I'll need to do to convert to TensorFlow Lite, but as of now, I can:
Add arbitrary configuration parameters
Save the model
Load the model
Retrieve configuration parameter names
Set configuration parameters
The key I found was to make my StupidModel inherit from tf.Module. I added an intervening class to encapsulate the parameter logic:
class Parameterized(tf.Module):
    def __init__(self, *args, **kwargs):
        super(Parameterized, self).__init__()

    def _save(self, name, variable):
        self.__setattr__(name, variable)

    def _initialize(self):
        for key in self.variables().keys():
            self.set(key, self.get(key))

    def save(self, name):
        tf.saved_model.save(self, name)

    @tf.function
    def variables(self):
        return {k: self._trackable_children()[k]
                for k in self._trackable_children()
                if issubclass(type(self._trackable_children()[k]), tf.Variable)}

    @tf.function
    def set(self, variable, value):
        self.__getattribute__(variable).assign(value)

    @tf.function
    def get(self, variable):
        return self.__getattribute__(variable)
and then I updated the StupidModel to inherit from Parameterized, making sure to add the appropriate @tf.function decorator:
class StupidModel(Parameterized):
    def __init__(self, frequency, amplitude, base, phase):
        super().__init__(name="stupid")
        self._save("frequency", tf.Variable(frequency, name="frequency", dtype=tf.float32))
        self._save("amplitude", tf.Variable(amplitude, name="amplitude", dtype=tf.float32))
        self._save("base", tf.Variable(base, name="base", dtype=tf.float32))
        self._save("phase", tf.Variable(phase, name="phase", dtype=tf.float32))
        self._sine = Sine()
        self._initialize()

    def _make_key(self, var):
        return var.name.split(":")[0]

    @tf.function(input_signature=(tf.TensorSpec(shape=[None], dtype=tf.float32),))
    def __call__(self, time):
        return self._sine(time, self.frequency, self.amplitude, self.base, self.phase)
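For reference, the construction and save step looks like the sketch below. The parameter values here are illustrative (not necessarily the ones I used), and note that this version assumes Sine takes the five separate arguments, i.e. the commented-out call signature from earlier, rather than a single parameter list.
sm = StupidModel(frequency=0.5, amplitude=0.5, base=0.5, phase=0.0)  # illustrative values
sm([1.0, 2.0, 3.0])  # optional sanity check before saving
sm.save("foo.tf")    # Parameterized.save wraps tf.saved_model.save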
Now, when I reload:
>>> import tensorflow as tf
>>> sm = tf.saved_model.load("foo.tf")
[...]
>>> sm([1, 2, 3])
<tf.Tensor: shape=(3,), dtype=float32, numpy=array([0.2602872 , 0.73971283, 0.26028723], dtype=float32)>
And I can check out my parameters (by name), set them, and see the change:
>>> sm.variables().keys()
dict_keys(['phase', 'amplitude', 'base', 'frequency'])
>>> sm.set('base', 5.0)
>>> sm([1, 2, 3])
<tf.Tensor: shape=(3,), dtype=float32, numpy=array([4.7602873, 5.2397127, 4.7602873], dtype=float32)>
And one final important note: the input_signature provided to @tf.function allows me to use arbitrary inputs. (Without it, I could only call __call__ with an argument signature exactly matching calls made before the save; e.g., if I called sm([1, 2, 3]) and then saved, I'd get an error back calling sm([10, 20, 30]) after reloading. Specifying the signature in @tf.function prevents that.):
>>> sm([10, 20, 30])
<tf.Tensor: shape=(3,), dtype=float32, numpy=array([5.239713 , 5.239714 , 5.2397127], dtype=float32)>
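As for the TensorFlow Lite conversion I'm dreading: I haven't attempted it yet, but the standard path from a SavedModel would presumably be something like this untested sketch (whether the tf.Variables stay adjustable after conversion is a separate question):
import tensorflow as tf

# Untested sketch: convert the SavedModel written above to TensorFlow Lite.
converter = tf.lite.TFLiteConverter.from_saved_model("foo.tf")
tflite_model = converter.convert()
with open("foo.tflite", "wb") as f:
    f.write(tflite_model)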
So that's what I've learned. It seems complicated, and I'm definitely open to easier approaches.

Related

What does self._compute_output_and_mask_jointly = True do in tf.keras.layers.Masking layer?

The tf.keras.layers.Masking layer sets _compute_output_and_mask_jointly to True in its __init__(...). What does this attribute do, other than stating what the layer does in its call(...)?
def __init__(self, mask_value=0., **kwargs):
    ...
    self._compute_output_and_mask_jointly = True
In addition, the mask has been created and applied in call(...). What is the purpose of compute_mask(...)? Seems redundant.
def compute_mask(self, inputs, mask=None):
    return tf.reduce_any(tf.not_equal(inputs, self.mask_value), axis=-1)

def call(self, inputs):
    boolean_mask = tf.reduce_any(
        tf.not_equal(inputs, self.mask_value), axis=-1, keepdims=True)
    outputs = inputs * tf.cast(boolean_mask, inputs.dtype)
    # Compute the mask and outputs simultaneously.
    outputs._keras_mask = tf.squeeze(boolean_mask, axis=-1)  # pylint: disable=protected-access
    return outputs
First of all, a hefty, fair warning:
This is an implementation detail; never use it!
It may in fact be on the way out.
Having said that, this is a minor optimization, used by the layers.Masking class alone out of all the layer classes there are. It is part of TensorFlow Keras (as opposed to TensorFlow proper). When this attribute is present and set to True on a layer, the Keras framework assumes that the output mask has already been computed in the __call__ invocation and placed into the output tensor's _keras_mask attribute, and it optimizes out the call to the compute_mask method, both in eager and in graph tracing modes. That is all there is to it. No magic up to eleven.
Actually, creating the _keras_mask attribute on the output tensor has the same effect. And you'll indeed avoid a nasty surprise one day by setting neither of these attributes.
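If you want to see the mechanism in isolation, here is a minimal sketch (purely illustrative; per the warning above, don't rely on this in real code) of a layer that mimics Masking's joint computation:
import tensorflow as tf

class JointMasking(tf.keras.layers.Layer):
    """Illustrative only: mimics layers.Masking's joint mask computation."""

    def __init__(self, mask_value=0., **kwargs):
        super().__init__(**kwargs)
        self.mask_value = mask_value
        # Tell Keras the mask is produced inside call(), so compute_mask is skipped.
        self._compute_output_and_mask_jointly = True

    def call(self, inputs):
        boolean_mask = tf.reduce_any(
            tf.not_equal(inputs, self.mask_value), axis=-1, keepdims=True)
        outputs = inputs * tf.cast(boolean_mask, inputs.dtype)
        # Attach the mask directly to the output tensor.
        outputs._keras_mask = tf.squeeze(boolean_mask, axis=-1)
        return outputs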

My tensorflow 2.0 custom model is not receiving the shape or values I expect

I'm in the process of converting my PyTorch models to TensorFlow 2.0, so I'm still getting used to it. I have mostly been going off the API docs. I made a custom model and defined its call method with the argument inputs:
class CustomModel(tf.keras.Model):
    # <... init ...>
    def call(self, inputs):
        print("inputs: ", inputs)
        self.sequential_convolution(inputs)
The sequential_convolution is a keras.Sequential of multiple convolution-related layers. I can create the model object and compile it. Both the input and the output are variable-length:
model = CustomModel(inputs=tf.keras.Input(shape=(None, vdim)))
model.compile(optimizer=optimizer, loss=loss_func, metrics=[calc_accuracy])

for x, y in dataset:
    print("x.shape: ", x.shape)
    print("y.shape: ", y.shape)
    model.fit(x, y, batch_size=1)
Here the shapes are x.shape: (244, 161) and y.shape: (40,). Both are TensorFlow tensors created from NumPy arrays with tf.convert_to_tensor().
But when the model's call method prints the inputs, I get the following:
Tensor("input_1_1:0", shape=(None, 161), dtype=float32)
I should point out that this is not the Input defined on the model; this input is calculated from the actual input provided to model.fit(). (I manually changed the numbers to see what the causes were...)
Which then ultimately leads to the stack trace:
x = self.sequential_conv(inputs)
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/autograph/impl/api.py:396 converted_call
return py_builtins.overload_of(f)(*args)
TypeError: 'NoneType' object is not callable
This error occurs in a function deemed internal use only, and I'm not able to ascertain the cause of my problem.
As I can't find much information on the matter, I feel it's most likely something simple I haven't done, but I'm not sure. Any help would be great...

How to predict in multiple models consisting of tensorflow (.pb) model and keras model (.h5) at the same time in flask?

I'll try to describe the situation completely, but due to my limited language ability there may be some unclear statements. Please let me know and I'll try to explain what I mean.
Recently, I wanted to apply facenet (I mean davisking's project on github) to my project, so I wrote a class:
class FacenetEmbedding:
    def __init__(self, model_path):
        self.sess = tf.InteractiveSession()
        self.sess.run(tf.global_variables_initializer())
        # Load the model
        facenet.load_model(model_path)
        # Get input and output tensors
        self.images_placeholder = tf.get_default_graph().get_tensor_by_name("input:0")
        self.tf_embeddings = tf.get_default_graph().get_tensor_by_name("embeddings:0")
        self.phase_train_placeholder = tf.get_default_graph().get_tensor_by_name("phase_train:0")

    def get_embedding(self, images):
        feed_dict = {self.images_placeholder: images, self.phase_train_placeholder: False}
        embedding = self.sess.run(self.tf_embeddings, feed_dict=feed_dict)
        return embedding

    def free(self):
        self.sess.close()
I can use this class independently in Flask.
model_path = "models/20191025-223514/"
fe = FacenetEmbedding(model_path)
But I had a further requirement later: I trained two models using Keras, and I want to use them (the .h5 models) together with the facenet model above for prediction. I load them first:
modelPic = load_model('models/pp.h5')
lePic = pickle.loads(open('models/pp.pickle', "rb").read())
print(modelPic.predict(np.zeros((1, 128, 128, 3))))
modelM = load_model('models/pv.h5')
leM = pickle.loads(open('models/pv.pickle', "rb").read())
print(modelM.predict(np.zeros((1, 128, 128, 3))))
I run a fake image through the models to test them, and they seem to work normally. But when I run the Flask server and try to post an image to this API, the following message pops up and the prediction doesn't work:
Tensor input_1_3:0, specified in either feed_devices or fetch_devices was not found in the Graph
Exception ignored in: <bound method BaseSession._Callable.__del__ of <tensorflow.python.client.session.BaseSession._Callable object at 0x7ff27d0f0dd8>>
Traceback (most recent call last):
File "/home/idgate/.virtualenvs/Line_POC/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1455, in __del__
self._session._session, self._handle, status)
File "/home/idgate/.virtualenvs/Line_POC/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 528, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: No such callable handle: 140675571821088
I tried using these two Keras models without loading the facenet model in the Flask server, and they work normally. I think the facenet model must collide with something (maybe the session?) that keeps the three models from working simultaneously, but I don't know how to solve this problem. Please help me! Thanks in advance.

How to create a keras layer that takes effect only during the evaluation phase (and is transparent during training)?

I want to add to my model a layer that, during evaluation, takes the input, applies some transformations (a quantization in this case, but can be whatever) and return it as the output. This layer must, however, be completely transparent during training, meaning that it must return the same input tensor.
I have written the following function
from keras.layers import Lambda
import keras.backend as K

def myquantize(x):
    return K.in_test_phase(K.clip(K.round(x * (2 ** 5)) / (2 ** 5), -3.9, 3.9), x)
which I then use via a Lambda layer
y = keras.layers.Conv1D(**args1)(x)  # x stands in for the preceding tensor
y = keras.layers.AveragePooling1D(pool_size=2)(y)
y = keras.layers.Lambda(myquantize)(y)
y = keras.layers.Conv1D(**args2)(y)
# ...
Now, in principle, K.in_test_phase should return x during training and the quantized expression during testing.
However, training the network with this layer in place prevents the network from learning (i.e., the training loss stops decreasing after 3 epochs), while if I remove it the network keeps training normally. I assume this layer is not actually transparent during training as expected.
in_test_phase has a training parameter which you can set explicitly to indicate whether you are training or not. If you don't set it explicitly, the value of learning_phase is used, and that value changes when you reset the graph or when you call the different fit/predict/evaluate functions of the model.
Since your full code isn't present, you can make use of the training parameter: set it to True during training, then save the weights of the model using its save_weights function. When you wish to test your model, set the training parameter to False, load the weights using load_weights, and proceed accordingly.
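A minimal sketch of that suggestion, assuming a module-level flag that you flip between the training and evaluation runs:
import keras.backend as K

TRAINING = True  # set to False for the evaluation run

def myquantize(x):
    # Pass training explicitly instead of relying on the global learning phase.
    return K.in_test_phase(
        K.clip(K.round(x * (2 ** 5)) / (2 ** 5), -3.9, 3.9),
        x,
        training=TRAINING)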
For those who are in a similar situation, I created a custom layer like the following, which I only use during training:
class MyLayer(keras.layers.Layer):
    def __init__(self, **kwargs):
        super(MyLayer, self).__init__(**kwargs)

    def compute_output_shape(self, input_shape):
        return input_shape

    def call(self, inputs, **kwargs):
        x = inputs
        return K.identity(x)
Note that this layer always returns the input tensor; it serves as a 'placeholder' for the next step. On the evaluation side of the code, I wrote the following:
class MyLayer(keras.layers.Layer):
    def __init__(self, **kwargs):
        super(MyLayer, self).__init__(**kwargs)

    def compute_output_shape(self, input_shape):
        return input_shape

    def call(self, inputs, **kwargs):
        x = inputs
        return  # Your actual processing here
Here, the only difference is that you actually perform the desired processing steps on your tensor. When I load my stored model, I pass this class as a custom object:
model = keras.models.load_model(model_file,custom_objects={'MyLayer':MyLayer})
Be careful to pass as MyLayer the version in which the actual processing is performed.
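To make the workflow explicit, a sketch of the two halves (model_file and the surrounding training code are assumed):
# Training script: MyLayer is the identity version above.
model.save(model_file)

# Evaluation script: MyLayer is the version that does the real processing.
model = keras.models.load_model(model_file, custom_objects={'MyLayer': MyLayer})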
This is my solution; other suggestions are welcome.

Using tf.keras in TF 2.0, how can I define a custom layer that depends on the learning phase?

I want to build a custom layer using tf.keras. For simplicity, suppose it should return inputs*2 during training and inputs*3 during testing. What is the correct way to do this?
I tried this approach:
class CustomLayer(Layer):
    @tf.function
    def call(self, inputs, training=None):
        if training:
            return inputs * 2
        else:
            return inputs * 3
I can then use this class like this:
>>> layer = CustomLayer()
>>> layer(10)
tf.Tensor(30, shape=(), dtype=int32)
>>> layer(10, training=True)
tf.Tensor(20, shape=(), dtype=int32)
It works fine! However, when I use this class in a model and call its fit() method, it seems that training is not set to True. I tried adding the following code at the beginning of the call() method, but training is always equal to 0.
if training is None:
    training = K.learning_phase()
What am I missing?
Edit
I found a solution (see my answer), but I'm still looking for a nicer solution using @tf.function (I prefer autograph to this smart_cond() business). Unfortunately, it looks like K.learning_phase() does not play nice with @tf.function (my guess is that when the call() function gets traced, the learning phase gets hard-coded into the graph: since this happens before the call to the fit() method, the learning phase is always 0). This may be a bug, or perhaps there's another way to get the learning phase when using @tf.function.
François Chollet confirmed that the correct solution when using @tf.function is:
class CustomLayer(Layer):
    @tf.function
    def call(self, inputs, training=None):
        if training is None:
            training = K.learning_phase()
        if training:
            return inputs * 2
        else:
            return inputs * 3
There's currently a bug (as of Feb 15th 2019) that makes training always equal to 0, but this will be fixed shortly.
The following code does not use @tf.function, so it does not look as nice (since it does not use autograph), but it works fine:
from tensorflow.python.keras.utils.tf_utils import smart_cond

class CustomLayer(Layer):
    def call(self, inputs, training=None):
        if training is None:
            training = K.learning_phase()
        return smart_cond(training, lambda: inputs * 2, lambda: inputs * 3)
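As a quick sanity check of the smart_cond version outside of a model (expected outputs shown as comments, following the layer's logic above):
layer = CustomLayer()
print(layer(tf.constant(10.0), training=True))   # tf.Tensor(20.0, shape=(), dtype=float32)
print(layer(tf.constant(10.0), training=False))  # tf.Tensor(30.0, shape=(), dtype=float32)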