I want to store the float values in the outputs of a particular hidden layer during training. However, since the outputs are KerasTensor objects, I am unable to access them.
How do I access the float values in the tensors so I can store them for later use?
I am currently trying to do this using a custom Callback:
class HidInps(Callback):
    def on_train_batch_end(self, batch, logs=None):
        layer_out = self.model.get_layer("hidlyr").output
        print(layer_out)  # KerasTensor(type_spec=TensorSpec(shape=(None, 3), dtype=tf.float32...
        print(keras.backend.get_value(layer_out))
However, since the KerasTensor object provides no .numpy() method, neither eval() nor get_value() works, and I get the expected error:
AttributeError: 'KerasTensor' object has no attribute 'numpy'
You need to use a custom training loop in TensorFlow to achieve this.
Let's say your model instance is referred to by the variable my_model. You can create another custom_model from it as follows:
import tensorflow as tf
from tensorflow.keras import Model

hidden_layer = my_model.get_layer("hidlyr")
custom_model = Model(inputs=my_model.inputs, outputs=[hidden_layer.output, my_model.output])

with tf.GradientTape() as t:
    # custom_model returns both the hidden-layer output and the prediction
    layer_op, predictions = custom_model(images)
    print(layer_op)
For further details, refer to https://www.tensorflow.org/tutorials/customization/custom_training_walkthrough#train_the_model
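To then actually store the float values during training, a minimal custom-loop sketch could look like the following. It assumes my_model plus a loss_fn, an optimizer and a train_dataset yielding (images, labels) batches; those names are assumptions, not part of the original setup:
import tensorflow as tf
from tensorflow.keras import Model

hidden_layer = my_model.get_layer("hidlyr")
custom_model = Model(inputs=my_model.inputs,
                     outputs=[hidden_layer.output, my_model.output])

stored_activations = []  # one array of hidden-layer outputs per batch

for images, labels in train_dataset:
    with tf.GradientTape() as tape:
        layer_op, predictions = custom_model(images, training=True)
        loss = loss_fn(labels, predictions)
    grads = tape.gradient(loss, custom_model.trainable_variables)
    optimizer.apply_gradients(zip(grads, custom_model.trainable_variables))
    stored_activations.append(layer_op.numpy())  # eager tensor, so .numpy() works here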
All the answers I can find about adopting a saved model as part of another model are for Keras, but here I have saved a tf.Module and want to restore it and use it as part of a new model. Saving and loading the adoptee works, but when saving the adopter, I get an error. Here is the example code:
from keras.layers import Dense
import tensorflow as tf
import numpy as np

def load_model_safely(path_to_saved_model, signature='serving_default') \
        -> tf.types.experimental.ConcreteFunction:
    saved_model = tf.saved_model.load(path_to_saved_model)
    model = saved_model.signatures[signature]
    # Keep a reference to the loaded SavedModel so its variables outlive this scope.
    model._backref_to_saved_model = saved_model
    return model
class AdoptedModule(tf.Module):
    def __init__(self):
        super().__init__()
        self.dummy = Dense(3, activation='relu')

    @tf.function
    def __call__(self, x):
        return self.dummy(x)

class AdopterModule(tf.Module):
    def __init__(self, adopted):
        super().__init__()
        self.dummy = adopted

    @tf.function
    def __call__(self, x):
        return self.dummy(x)
adopted = AdoptedModule()
adopted(tf.constant([np.arange(0, 10, dtype=np.float32)]))
tf.saved_model.save(adopted, '/tmp/adopted.tf2',
                    signatures=adopted.__call__.get_concrete_function(tf.TensorSpec((None, 10), dtype=tf.float32)))
loaded_adopted = load_model_safely('/tmp/adopted.tf2')
adopter = AdopterModule(adopted=loaded_adopted)
adopter(tf.constant([np.arange(0, 10, dtype=np.float32)]))
tf.saved_model.save(adopter, '/tmp/adopter.tf2',
                    signatures=adopter.__call__.get_concrete_function(tf.TensorSpec((None, 10), dtype=tf.float32)))
The error is
AssertionError: Tried to export a function which references 'untracked' resource Tensor("124629:0", shape=(), dtype=resource). TensorFlow objects (e.g. tf.Variable) captured by functions must be 'tracked' by assigning them to an attribute of a tracked object or assigned to an attribute of the main object directly. See the information below:
Function name = b'__inference_signature_wrapper_124635'
Captured Tensor = <ResourceHandle(name="Resource-289-at-0x600028cdc500", device="/job:localhost/replica:0/task:0/device:CPU:0", container="Anonymous", type="tensorflow::Var", dtype and shapes : "[ DType enum: 1, Shape: [10,3] ]")>
Trackable referencing this tensor = <tf.Variable 'dense_3/kernel:0' shape=(10, 3) dtype=float32>
I can probably hack around it by first recreating the adopted module instance (by calling the original code with the exact hyperparameters) and then force-assigning all trainable variables from the loaded concrete function. But one may not even have the original model code and may just want to adopt the trained autograph. Further, the adopted layer should remain trainable while fine-tuning the whole model.
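For what it's worth, a minimal sketch of that force-assignment hack, assuming the original AdoptedModule code is still available (and assuming the variable ordering of the fresh module matches that of the loaded concrete function, which would need verifying):
fresh_adopted = AdoptedModule()
fresh_adopted(tf.constant([np.arange(0, 10, dtype=np.float32)]))  # build the variables

loaded = load_model_safely('/tmp/adopted.tf2')
# A ConcreteFunction exposes the variables it captured; copy them over by position.
for var, loaded_var in zip(fresh_adopted.trainable_variables, loaded.trainable_variables):
    var.assign(loaded_var)

adopter = AdopterModule(adopted=fresh_adopted)
# The adopter now references ordinary tracked tf.Variables, so saving should work.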
Hi all: this is one of my first posts on Stack Overflow, so apologies in advance if I'm not conforming to certain standards!
I'm having trouble saving my Keras model as an mlflow.pyfunc model: it gives me a "cannot pickle a 'weakref' object" error when I try to log it.
So why am I saving my Keras model as a pyfunc model object in the first place? Because I want to override the default predict method and output something custom. I also want to do some pre-processing on X_test or new data by encoding it with a tf.keras.StringLookup layer and then inverting it back to recover the original categorical variable class. For these kinds of use cases, I was advised by Databricks that the mlflow.pyfunc flavor is the way to go.
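(For context, the encode/invert round trip described above works roughly like this; the vocabulary here is a made-up assumption:)
import tensorflow as tf

vocab = ["open", "close", "approve"]  # hypothetical action vocabulary
encoder = tf.keras.layers.StringLookup(vocabulary=vocab)
decoder = tf.keras.layers.StringLookup(vocabulary=vocab, invert=True)

ids = encoder(tf.constant(["close", "open"]))  # -> [2, 1]; index 0 is reserved for OOV
original = decoder(ids)                        # -> ["close", "open"]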
The Keras model works just fine, and I'm able to log it using mlflow.keras.log_model. But it fails when I try to wrap it inside a custom "KerasWrapper" class.
Here are some snippets of my code. For the purpose of debugging, the predict method in the custom class is currently just the default; I simplified it to help debug, but I still haven't been able to resolve the error.
I would be extremely grateful for any help. Thanks in advance!
ALL CODE ON AZURE DATABRICKS
Custom mlflow.pyfunc class
class KerasWrapper(mlflow.pyfunc.PythonModel):
    def __init__(self, keras_model, labelEncoder, labelDecoder, n):
        self.keras_model = keras_model
        self.labelEncoder = labelEncoder
        self.labelDecoder = labelDecoder
        self.topn = n

    def load_context(self, context):
        # Note: context.artifacts is keyed by string, so this lookup expects
        # self.keras_model to be an artifact key rather than the live model object.
        self.keras_model = mlflow.keras.load_model(model_uri=context.artifacts[self.keras_model], compile=False)

    def predict(self, context, input_data):
        scores = self.keras_model.predict(input_data)
        return scores
My Keras Deep Learning Model (this works fine by the way)
def build_model(vocab_size, steps, drop_embed, n_dim, encoder, modelType):
    model = None
    i = Input(shape=(None,), dtype="int64")
    # embedding layer
    e = Embedding(vocab_size, 16)(i)
    s = SpatialDropout1D(drop_embed)(e)
    x = Conv1D(256, steps, activation='relu')(s)
    x = GlobalMaxPooling1D()(x)
    x = Dense(128, activation='relu')(x)
    x = Dropout(0.2)(x)
    # output layer
    x = Dense(vocab_size, activation='softmax')(x)
    model = Model(i, x)
    model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    model.summary()
    return model
MLFLOW Section
with mlflow.start_run(run_name=runName):
    mlflow.tensorflow.autolog()

    # Build the model, compile and train on the training set
    # signature: build_model(vocab_size, steps, drop_embed, n_dim, encoder, modelType)
    keras_model = build_model((vocab_size + 1), timeSteps, drop_embed, embedding_dimensions, encoder, modelType)
    keras_model.fit(X_train_encoded, y_train_encoded, epochs=epochs, verbose=1, batch_size=32, use_multiprocessing=True,
                    validation_data=(X_test_encoded, y_test_encoded))

    # Log the model parameters used for this run.
    mlflow.log_param("numofActionsinWorkflow", numofActionsinWf)
    mlflow.log_param("timeSteps", timeSteps)

    # Wrap it up in a pyfunc model
    wrappedModel = KerasWrapper(keras_model, encoder, decoder, bestActionCount)

    # Create a model signature using the tensor input to store in the MLflow model registry
    signature = infer_signature(X_test_encoded, wrappedModel.predict(None, X_test_encoded))
    # Let's check out how it looks
    print(signature)

    # Create an input example to store in the MLflow model registry
    input_example = np.expand_dims(X_train[17], axis=0)

    # The necessary dependencies are added to a conda.yaml file which is logged along with the model.
    model_env = mlflow.pyfunc.get_default_conda_env()
    # Record specific additional dependencies required by the serving model
    model_env['dependencies'][-1]['pip'] += [
        f'tensorflow=={tf.__version__}',
        f'mlflow=={mlflow.__version__}',
        f'sklearn=={sklearn.__version__}',
        f'cloudpickle=={cloudpickle.__version__}',
    ]

    # Log the model to the experiment
    # mlflow.keras.log_model(keras_model, artifact_path=runName, signature=signature, input_example=input_example, conda_env=model_env)
    wrapped_model_path = runName
    if os.path.exists(wrapped_model_path):
        shutil.rmtree(wrapped_model_path)

    # Log model as pyfunc model
    mlflow.pyfunc.log_model(runName, python_model=wrappedModel, signature=signature, input_example=input_example, conda_env=model_env)

    # Return the run ID for model registration
    run_id = mlflow.active_run().info.run_id

mlflow.end_run()
Here is the error that I receive (the "cannot pickle a 'weakref' object" error mentioned above).
I am getting an error when I run the code below.
def __init__(self, model, sess, loss_fn=None):
    """
    To generate the White-box Attack Agent.
    :param model: the target model which should have the input tensor, the target tensor and the loss tensor.
    :param sess: the tensorflow session.
    :param loss_fn: None if using original loss of the model.
    """
    self.model = model
    self.input_tensor = model.inputs[0]
    self.output_tensor = model.outputs[0]
    self.target_tensor = model.targets[0]
    self._sample_weights = model.sample_weights[0]
    if loss_fn is None:
        self.loss_tensor = model.total_loss
        self.gradient_tensor = K.gradients(self.loss_tensor, self.input_tensor)[0]
    else:
        self.set_loss_function(loss_fn)
    self.sess = sess
error:
self.target_tensor = model.targets[0], AttributeError: 'Model' object has no attribute 'targets'
I am working with TensorFlow 1.14.0, Keras 2.2.4-tf and Python 3.6.13. How can I resolve this problem?
Thank you
The model argument that your function is using apparently refers to a custom class that isn't part of default Keras or TensorFlow. You need to make sure that the model object you're passing to the function is an instance of the appropriate class. Or, alternatively, you can jerry-rig it by doing something like this:
# whatever code instantiates the `model` would be here
...
...
model.targets = [ "targets of an appropriate structure and type go here" ]
Then when you pass that model object to the function, it will have a targets attribute to be accessed. But it's better to just use the right object in the first place.
Or, the third option: just comment out the offending line and hope it isn't actually used anywhere (though if it's being set, it's probably used somewhere or other):
def __init__(self, model, sess, loss_fn=None):
    """
    To generate the White-box Attack Agent.
    :param model: the target model which should have the input tensor, the target tensor and the loss tensor.
    :param sess: the tensorflow session.
    :param loss_fn: None if using original loss of the model.
    """
    self.model = model
    self.input_tensor = model.inputs[0]
    self.output_tensor = model.outputs[0]
    # self.target_tensor = model.targets[0]
    self._sample_weights = model.sample_weights[0]
    if loss_fn is None:
        self.loss_tensor = model.total_loss
        self.gradient_tensor = K.gradients(self.loss_tensor, self.input_tensor)[0]
    else:
        self.set_loss_function(loss_fn)
    self.sess = sess
I have implemented a custom version of Batch Normalization with an added self.skip variable that is meant to change the layer's behavior at run time. Here is the minimal code:
from tensorflow.keras.layers import BatchNormalization
import tensorflow as tf

# class CustomBN(tf.keras.layers.Layer):
class CustomBN(BatchNormalization):
    def __init__(self, **kwargs):
        super(CustomBN, self).__init__(**kwargs)
        self.skip = False

    def call(self, inputs, training=None):
        if self.skip:
            tf.print("I'm skipping")
        else:
            tf.print("I'm not skipping")
        return super(CustomBN, self).call(inputs, training)

    def build(self, input_shape):
        super(CustomBN, self).build(input_shape)
To be crystal clear, all I have done so far is:
subclassing BatchNormalization (should I subclass tf.keras.layers.Layer instead?)
defining self.skip to change the behavior of the CustomBN layer at run time
checking the state of self.skip in the call method to act accordingly
Now, to change the behavior of the CustomBN layer, I use
self.model.layers[ind].skip = state
where state is either True or False, and ind is the index of the CustomBN layer in the model.
The evident problem is that the value of self.skip never takes effect.
If you notice any mistakes, please notify me.
By default, the call function in your layer is invoked only when the graph is built, not on a per-batch basis. The Keras model compile method has a run_eagerly option that would cause your model to run (slower) in eager mode, which would invoke your call function without building a graph. This is most likely not what you want to do, however.
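(For completeness, a minimal sketch of that option, assuming a compiled Keras model named model; the optimizer and loss here are placeholders:)
# run_eagerly forces eager execution, so the Python-level `if self.skip:`
# branch in call() is re-evaluated on every batch, at a noticeable speed cost.
model.compile(optimizer='adam', loss='mse', run_eagerly=True)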
Ideally, you want the flag that changes the behavior to be an input to the call method. For instance, you can add an extra input to your graph which is simply this state flag, and pass it to your layer.
The following is an example of how you can build a conditional graph on an extra parameter.
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

class MyLayerWithFlag(keras.layers.Layer):
    def call(self, inputs, flag=None):
        c_one = tf.constant([1], dtype=tf.float32)
        if flag is not None:
            x = tf.cond(flag,
                        lambda: tf.math.add(inputs, c_one),
                        lambda: inputs)
            return x
        return inputs

inputs = layers.Input(shape=(2,))
state = layers.Input(shape=(1,), dtype=tf.bool)
x = MyLayerWithFlag()(inputs, flag=state)
out = layers.Lambda(tf.reduce_sum)(x)
model = keras.Model([inputs, state], out)

data = np.array([[1., 2.]])
state = np.array([[True]])
model.predict((data, state))
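The key design point is that tf.cond puts both branches into the compiled graph and selects between them at run time, so the behavior can change per batch without retracing; the Python-level if in the original CustomBN.call, by contrast, is evaluated only once, when the graph is traced.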
I have tried all the options described in the documentation, but none of them allowed me to save my model in TensorFlow 2.0.0 beta1. I also tried to upgrade to the (also unstable) TF2 RC, but that broke even the code I had working in beta, so I quickly rolled back to beta for now.
See the minimal reproduction code below.
What I have tried:
1.
model.save("mymodel.h5")
NotImplementedError: Saving the model to HDF5 format requires the model to be a Functional model or a Sequential model. It does not work for subclassed models, because such models are defined via the body of a Python method, which isn't safely serializable. Consider saving to the Tensorflow SavedModel format (by setting save_format="tf") or using save_weights.
2.
model.save("mymodel", save_format='tf')
ValueError: Model <main.CVAE object at 0x7f1cac2e7c50> cannot be saved because the input shapes have not been set. Usually, input shapes are automatically determined from calling .fit() or .predict(). To manually set the shapes, call model._set_inputs(inputs).
3.
model._set_inputs(input_sample)
model.save("mymodel", save_format='tf')
AssertionError: tf.saved_model.save is not supported inside a traced @tf.function. Move the call to the outer eagerly-executed context.
And this is where I am stuck, because the message gives me no reasonable hint whatsoever: I am NOT calling save() from a @tf.function; I'm already calling it from the outermost scope possible. In fact, I have no @tf.function at all in the minimal reproduction script below and still get the same error.
So I really have no idea how to save my model; I've tried every option, and they all throw errors and provide no hints.
The minimal reproduction example below works fine if you set save_model=False, and it reproduces the error when save_model=True.
It may seem unnecessary to use a subclassed model in this simplified auto-encoder example, but my original VAE code has lots of custom functions added to the class that I need it for.
Code:
import tensorflow as tf

save_model = True
learning_rate = 1e-4
BATCH_SIZE = 100
TEST_BATCH_SIZE = 10
color_channels = 1
imsize = 28

(train_images, _), (test_images, _) = tf.keras.datasets.mnist.load_data()
train_images = train_images[:5000, ::]
test_images = train_images[:1000, ::]
train_images = train_images.reshape(-1, imsize, imsize, 1).astype('float32')
test_images = test_images.reshape(-1, imsize, imsize, 1).astype('float32')
train_images /= 255.
test_images /= 255.
train_dataset = tf.data.Dataset.from_tensor_slices(train_images).batch(BATCH_SIZE)
test_dataset = tf.data.Dataset.from_tensor_slices(test_images).batch(TEST_BATCH_SIZE)

class AE(tf.keras.Model):
    def __init__(self):
        super(AE, self).__init__()
        self.network = tf.keras.Sequential([
            tf.keras.layers.InputLayer(input_shape=(imsize, imsize, color_channels)),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(50),
            tf.keras.layers.Dense(imsize**2 * color_channels),
            tf.keras.layers.Reshape(target_shape=(imsize, imsize, color_channels)),
        ])

    def decode(self, input):
        logits = self.network(input)
        return logits

optimizer = tf.keras.optimizers.Adam(learning_rate)
model = AE()

def compute_loss(data):
    logits = model.decode(data)
    loss = tf.reduce_mean(tf.losses.mean_squared_error(logits, data))
    return loss

def train_step(data):
    with tf.GradientTape() as tape:
        loss = compute_loss(data)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    return loss, 0

def test_step(data):
    loss = compute_loss(data)
    return loss

input_shape_set = False
epoch = 0
epochs = 20
for epoch in range(epochs):
    for train_x in train_dataset:
        train_step(train_x)
    if epoch % 1 == 0:
        loss = 0.0
        num_batches = 0
        for test_x in test_dataset:
            loss += test_step(test_x)
            num_batches += 1
        loss /= num_batches
        print("Epoch: {}, Loss: {}".format(epoch, loss))

if save_model:
    print("Saving model...")
    if not input_shape_set:
        # Note: Why set input shape manually and why here:
        # 1. If I do not set input shape manually: ValueError: Model <main.CVAE object at 0x7f1cac2e7c50> cannot be saved because the input shapes have not been set. Usually, input shapes are automatically determined from calling .fit() or .predict(). To manually set the shapes, call model._set_inputs(inputs).
        # 2. If I set input shape manually BEFORE the first actual train step, I get: RuntimeError: Attempting to capture an EagerTensor without building a function.
        model._set_inputs(train_dataset.__iter__().next())
        input_shape_set = True
    # Note: Why choose tf format: model.save('MNIST/Models/model.h5') will return NotImplementedError: Saving the model to HDF5 format requires the model to be a Functional model or a Sequential model. It does not work for subclassed models, because such models are defined via the body of a Python method, which isn't safely serializable. Consider saving to the Tensorflow SavedModel format (by setting save_format="tf") or using save_weights.
    model.save('MNIST/Models/model', save_format='tf')
I have tried the same minimal reproduction example in tensorflow-gpu 2.0.0-rc0, and the error was more revealing than what the beta version gave me. The error in RC says:
NotImplementedError: When subclassing the Model class, you should implement a call method.
This got me to read through https://www.tensorflow.org/beta/guide/keras/custom_layers_and_models, where I found examples of how to do subclassing in TF2 in a way that allows saving. I was able to resolve the error and have the model saved by replacing my decode method with call in the above example (although this will be more complicated with my actual code, where I had various methods defined for the class). This solved the error both in beta and in RC. Strangely, the training (or the saving) also got much faster in RC.
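For reference, a sketch of the fixed class under that change, reusing imsize and color_channels from the script above (only decode is renamed to call):
class AE(tf.keras.Model):
    def __init__(self):
        super(AE, self).__init__()
        self.network = tf.keras.Sequential([
            tf.keras.layers.InputLayer(input_shape=(imsize, imsize, color_channels)),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(50),
            tf.keras.layers.Dense(imsize**2 * color_channels),
            tf.keras.layers.Reshape(target_shape=(imsize, imsize, color_channels)),
        ])

    def call(self, inputs):
        # Renamed from `decode`: Keras looks for `call` when tracing the model for saving.
        return self.network(inputs)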
You should change two things:
Change the decode method to call, as you pointed out.
As the network inside your model is a Sequential model, not built directly in the class, you can call the save method on the self.network attribute of the model, i.e.
model.network.save('mymodel.h5')
Alternatively, to keep things more standard, you can implement this method inside the AE class, as follows:
def save(self, save_dir):
    self.network.save(save_dir)
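With that in place, saving from the question's script would then just be (the path is only an example):
model.save('MNIST/Models/model.h5')  # delegates to self.network.save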
Cheers mate