I am getting an error when I run the code below.
def __init__(self, model, sess, loss_fn=None):
    """
    To generate the White-box Attack Agent.
    :param model: the target model which should have the input tensor, the target tensor and the loss tensor.
    :param sess: the tensorflow session.
    :param loss_fn: None if using original loss of the model.
    """
    self.model = model
    self.input_tensor = model.inputs[0]
    self.output_tensor = model.outputs[0]
    self.target_tensor = model.targets[0]
    self._sample_weights = model.sample_weights[0]
    if loss_fn is None:
        self.loss_tensor = model.total_loss
        self.gradient_tensor = K.gradients(self.loss_tensor, self.input_tensor)[0]
    else:
        self.set_loss_function(loss_fn)
    self.sess = sess
The error:
self.target_tensor = model.targets[0]
AttributeError: 'Model' object has no attribute 'targets'
I am working with TensorFlow 1.14.0, Keras 2.2.4-tf, and Python 3.6.13. How can I resolve this problem?
Thank you
The model argument that your function is using apparently refers to a custom class that isn't part of default Keras or TensorFlow. You need to make sure that the model object you're passing to the function is an instance of the appropriate class. Or, alternatively, you can jerry-rig it by doing something like this:
# whatever code instantiates the `model` would be here
...
...
model.targets = [ "targets of an appropriate structure and type go here" ]
Then when you pass that model object to the function, it will have a targets attribute to be accessed. But it's better to just use the right object in the first place.
Or, the third option: just comment out the offending line and hope it isn't actually used anywhere (though if it's being set, it's probably used somewhere):
def __init__(self, model, sess, loss_fn=None):
    """
    To generate the White-box Attack Agent.
    :param model: the target model which should have the input tensor, the target tensor and the loss tensor.
    :param sess: the tensorflow session.
    :param loss_fn: None if using original loss of the model.
    """
    self.model = model
    self.input_tensor = model.inputs[0]
    self.output_tensor = model.outputs[0]
    # self.target_tensor = model.targets[0]
    self._sample_weights = model.sample_weights[0]
    if loss_fn is None:
        self.loss_tensor = model.total_loss
        self.gradient_tensor = K.gradients(self.loss_tensor, self.input_tensor)[0]
    else:
        self.set_loss_function(loss_fn)
    self.sess = sess
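If the attack code really needs a target tensor and a loss, here is a minimal TF 1.x sketch that builds them explicitly instead of reaching into model.targets (the categorical cross-entropy is an assumption; use whatever loss your model was compiled with):

import tensorflow as tf
from tensorflow.keras import backend as K

# hypothetical stand-in for model.targets[0]
target_ph = tf.placeholder(dtype=model.outputs[0].dtype,
                           shape=model.outputs[0].shape)
loss_tensor = tf.reduce_mean(
    tf.keras.losses.categorical_crossentropy(target_ph, model.outputs[0]))
gradient_tensor = K.gradients(loss_tensor, model.inputs[0])[0]
# feed both the model input and target_ph when evaluating gradient_tensor via sess.run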
I want to store the float values in the outputs of a particular hidden layer during training. However, since the outputs are KerasTensor objects, I am unable to access them.
How do I access the float values in the tensors so I can store them for later use?
I am currently trying to do this using a custom Callback:
class HidInps(Callback):
    def on_train_batch_end(self, batch, logs=None):
        layer_out = self.model.get_layer("hidlyr").output
        print(layer_out)  # KerasTensor(type_spec=TensorSpec(shape=(None, 3), dtype=tf.float32...
        print(keras.backend.get_value(layer_out))
However, since the KerasTensor object provides no .numpy() method, eval() and get_value() cannot work, and I get the expected error:
AttributeError: 'KerasTensor' object has no attribute 'numpy'
You need to use a custom training loop in TensorFlow to achieve this.
Let's say your model instance is referred to by the variable my_model. You can create another custom_model from it as follows:
import tensorflow as tf
from tensorflow.keras import Model

hidden_layer = my_model.get_layer("hidlyr")
custom_model = Model(inputs=my_model.inputs, outputs=[hidden_layer.output, my_model.output])

with tf.GradientTape() as t:
    layer_op, predictions = custom_model(images)
    print(layer_op)
For further details, refer to https://www.tensorflow.org/tutorials/customization/custom_training_walkthrough#train_the_model
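If the goal is to capture concrete activations while fit() is running, here is a hedged sketch of doing it from a Callback with such a sub-model (the layer name "hidlyr", the fixed probe batch x_probe, and the assumption that the outer model is a functional Model are mine, not from the question):

import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.callbacks import Callback

class HidOutputs(Callback):
    def __init__(self, x_probe):
        super().__init__()
        self.x_probe = x_probe        # small fixed batch of real inputs
        self.activations = []

    def on_train_begin(self, logs=None):
        hidden = self.model.get_layer("hidlyr")
        self.sub_model = Model(self.model.inputs, hidden.output)

    def on_train_batch_end(self, batch, logs=None):
        # calling the sub-model on real data returns an eager tensor with .numpy()
        self.activations.append(self.sub_model(self.x_probe, training=False).numpy())

It would be used as model.fit(x, y, callbacks=[HidOutputs(x[:32])]).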
Hi all: this is one of my first posts on Stack Overflow, so apologies in advance if I'm not conforming to certain standards!
I'm having trouble saving my Keras model as an mlflow.pyfunc model: it gives me a "cannot pickle a 'weakref' object" error when I try to log it.
So why am I saving my Keras model as a pyfunc model object in the first place? I want to override the default predict method and output something custom. I also want to do some pre-processing on X_test or new data by encoding it with a tf.keras.StringLookup and then inverting it back to recover the original categorical variable class. For these kinds of use cases, Databricks advised me that the mlflow.pyfunc flavor is the best way to go.
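For context, a minimal sketch of the StringLookup encode/invert round trip described above (the vocabulary is made up, and this assumes a TF version where StringLookup lives under tf.keras.layers):

import tensorflow as tf

vocab = ["approve", "reject", "escalate"]                 # hypothetical classes
encode = tf.keras.layers.StringLookup(vocabulary=vocab)
decode = tf.keras.layers.StringLookup(vocabulary=vocab, invert=True)

ids = encode(tf.constant(["reject", "approve"]))          # strings -> integer ids
original = decode(ids)                                    # ids -> original strings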
The Keras model works just fine, and I'm able to log it using mlflow.keras.log_model. But it fails when I try to wrap it inside a custom "KerasWrapper" class.
Here are some snippets of my code. For the purpose of debugging, the current predict method in the custom class is just the default. I simplified it to help debug, but obviously I haven't been able to resolve it.
I would be extremely grateful for any help. Thanks in advance!
ALL CODE ON AZURE DATABRICKS
Custom mlflow.pyfunc class
class KerasWrapper(mlflow.pyfunc.PythonModel):
    def __init__(self, keras_model, labelEncoder, labelDecoder, n):
        self.keras_model = keras_model
        self.labelEncoder = labelEncoder
        self.labelDecoder = labelDecoder
        self.topn = n

    def load_context(self, context):
        self.keras_model = mlflow.keras.load_model(model_uri=context.artifacts[self.keras_model], compile=False)

    def predict(self, context, input_data):
        scores = self.keras_model.predict(input_data)
        return scores
My Keras Deep Learning Model (this works fine by the way)
def build_model(vocab_size, steps, drop_embed, n_dim, encoder, modelType):
    model = None
    i = Input(shape=(None,), dtype="int64")
    # embedding layer
    e = Embedding(vocab_size, 16)(i)
    s = SpatialDropout1D(drop_embed)(e)
    x = Conv1D(256, steps, activation='relu')(s)
    x = GlobalMaxPooling1D()(x)
    x = Dense(128, activation='relu')(x)
    x = Dropout(0.2)(x)
    # output layer
    x = Dense(vocab_size, activation='softmax')(x)
    model = Model(i, x)
    model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    model.summary()
    return model
MLFLOW Section
with mlflow.start_run(run_name=runName):
    mlflow.tensorflow.autolog()

    # Build the model, compile and train on the training set
    # signature: build_model(vocab_size, steps, drop_embed, n_dim, encoder, modelType)
    keras_model = build_model((vocab_size + 1), timeSteps, drop_embed, embedding_dimensions, encoder, modelType)
    keras_model.fit(X_train_encoded, y_train_encoded, epochs=epochs, verbose=1, batch_size=32, use_multiprocessing=True,
                    validation_data=(X_test_encoded, y_test_encoded))

    # Log the model parameters used for this run.
    mlflow.log_param("numofActionsinWorkflow", numofActionsinWf)
    mlflow.log_param("timeSteps", timeSteps)

    # Wrap it up in a pyfunc model
    wrappedModel = KerasWrapper(keras_model, encoder, decoder, bestActionCount)

    # Create a model signature using the tensor input to store in the MLflow model registry
    signature = infer_signature(X_test_encoded, wrappedModel.predict(None, X_test_encoded))
    # Let's check out how it looks
    print(signature)

    # Create an input example to store in the MLflow model registry
    input_example = np.expand_dims(X_train[17], axis=0)

    # The necessary dependencies are added to a conda.yaml file which is logged along with the model.
    model_env = mlflow.pyfunc.get_default_conda_env()
    # Record specific additional dependencies required by the serving model
    model_env['dependencies'][-1]['pip'] += [
        f'tensorflow=={tf.__version__}',
        f'mlflow=={mlflow.__version__}',
        f'sklearn=={sklearn.__version__}',
        f'cloudpickle=={cloudpickle.__version__}',
    ]

    # Log the model to the experiment
    # mlflow.keras.log_model(keras_model, artifact_path=runName, signature=signature, input_example=input_example, conda_env=model_env)
    wrapped_model_path = runName
    if os.path.exists(wrapped_model_path):
        shutil.rmtree(wrapped_model_path)

    # Log model as pyfunc model
    mlflow.pyfunc.log_model(runName, python_model=wrappedModel, signature=signature, input_example=input_example, conda_env=model_env)

    # Return the run ID for model registration
    run_id = mlflow.active_run().info.run_id

mlflow.end_run()
Here is the error that I receive:
The model is built as
class MyModel(tf.keras.Model):
    def __init__(self, num_states, hidden_units, num_actions):
        super(MyModel, self).__init__()
        ## batch_size * num_states
        self.input_layer = tf.keras.layers.InputLayer(input_shape=(num_states,))
        self.hidden_layers = []
        for i in hidden_units:
            self.hidden_layers.append(tf.keras.layers.Dense(
                i, activation='tanh', kernel_initializer='RandomNormal'))
        self.output_layer = tf.keras.layers.Dense(
            num_actions, activation='linear', kernel_initializer='RandomNormal')

    @tf.function
    def call(self, inputs):
        z = self.input_layer(inputs)
        for layer in self.hidden_layers:
            z = layer(z)
        output = self.output_layer(z)
        return output
After training, the model was saved as:
tf.saved_model.save(self.model, 'saved_model/model_checkpoint')
Now, I'm trying to restore the model and then do inference as:
train_agent = tf.saved_model.load('saved_model/model_checkpoint')
## input_state is an array with shape of (num_states, )
action = train_agent(np.atleast_2d(input_state))
However, I keep getting this error:
TypeError: '_UserObject' object is not callable
How can I exactly use the restored model?
I haven't figured out how to use tf.saved_model.save() and tf.saved_model.load() properly; if anyone could help, it would be great! My current workaround is to use save_weights() instead. For example:
model = Model()
model.train()
...
model.save_weights(checkpoint_dir)
input_s = np.random.normal(size=(1, 128))
output = model(input_s)  ## outputs a tensor
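For completeness, a hedged sketch of the restore side of that workaround (the constructor arguments are placeholders for whatever the model was actually trained with):

import numpy as np

restored = MyModel(num_states=4, hidden_units=[200, 200], num_actions=2)
restored(np.zeros((1, 4), dtype=np.float32))   # dummy forward pass creates the variables
restored.load_weights(checkpoint_dir)
action = restored(np.atleast_2d(input_state).astype(np.float32))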
You correctly decorated the call method with @tf.function, so your SavedModel has that very method serialized inside. You can therefore invoke the restored object by calling its call method:
action = train_agent.call(np.atleast_2d(input_state))
The required code is given below:
tf.keras.models.save_model(model, path)
import tensorflow as tf
#save the model for later use
tf.keras.models.save_model(classifier_model,'F:/Face_Recognition/face_classifier_model.h5')
#Reload the model
classifier_model=tf.keras.models.load_model('F:/Face_Recognition/face_classifier_model.h5')
I'm trying to load a pre-trained TensorFlow object detection model from the TensorFlow Object Detection repo as a tf.estimator.Estimator and use it to make predictions.
I'm able to load the model and run inference using Estimator.predict(); however, the output is garbage. Other methods of loading the model, e.g. as a Predictor, and running inference work fine.
Any help with properly loading a model as an Estimator and calling predict() would be much appreciated. My current code:
Load and prepare image
def load_image_into_numpy_array(image):
    (im_width, im_height) = image.size
    return np.array(list(image.getdata())).reshape((im_height, im_width, 3)).astype(np.uint8)

image_url = 'https://i.imgur.com/rRHusZq.jpg'

# Load image
response = requests.get(image_url)
image = Image.open(BytesIO(response.content))

# Format original image size
im_size_orig = np.array(list(image.size) + [1])
im_size_orig = np.expand_dims(im_size_orig, axis=0)
im_size_orig = np.int32(im_size_orig)

# Resize image
image = image.resize((np.array(image.size) / 4).astype(int))

# Format image
image_np = load_image_into_numpy_array(image)
image_np_expanded = np.expand_dims(image_np, axis=0)
image_np_expanded = np.float32(image_np_expanded)

# Stick into feature dict
x = {'image': image_np_expanded, 'true_image_shape': im_size_orig}

# Stick into input function
predict_input_fn = tf.estimator.inputs.numpy_input_fn(
    x=x,
    y=None,
    shuffle=False,
    batch_size=128,
    queue_capacity=1000,
    num_epochs=1,
    num_threads=1,
)
Side note:
train_and_eval_dict also seems to contain an input_fn for prediction
train_and_eval_dict['predict_input_fn']
However, this actually returns a tf.estimator.export.ServingInputReceiver, which I'm not sure what to do with. This could potentially be the source of my problems, as there's a fair bit of pre-processing involved before the model actually sees the image.
Load model as Estimator
Model downloaded from TF Model Zoo here, code to load model adapted from here.
model_dir = './pretrained_models/tensorflow/ssd_mobilenet_v1_coco_2018_01_28/'
pipeline_config_path = os.path.join(model_dir, 'pipeline.config')

config = tf.estimator.RunConfig(model_dir=model_dir)

train_and_eval_dict = model_lib.create_estimator_and_inputs(
    run_config=config,
    hparams=model_hparams.create_hparams(None),
    pipeline_config_path=pipeline_config_path,
    train_steps=None,
    sample_1_of_n_eval_examples=1,
    sample_1_of_n_eval_on_train_examples=(5))

estimator = train_and_eval_dict['estimator']
Run inference
output_dict1 = estimator.predict(predict_input_fn)
This prints out some log messages, one of which is:
INFO:tensorflow:Restoring parameters from ./pretrained_models/tensorflow/ssd_mobilenet_v1_coco_2018_01_28/model.ckpt
So it seems like the pre-trained weights are getting loaded. However, the resulting detections are garbage (image omitted).
Load same model as a Predictor
from tensorflow.contrib import predictor
model_dir = './pretrained_models/tensorflow/ssd_mobilenet_v1_coco_2018_01_28'
saved_model_dir = os.path.join(model_dir, 'saved_model')
predict_fn = predictor.from_saved_model(saved_model_dir)
Run inference
output_dict2 = predict_fn({'inputs': image_np_expanded})
The results look good (image omitted).
When you load the model as an Estimator from a checkpoint file, here is the restore function associated with SSD models, from ssd_meta_arch.py:
def restore_map(self,
                fine_tune_checkpoint_type='detection',
                load_all_detection_checkpoint_vars=False):
    """Returns a map of variables to load from a foreign checkpoint.

    See parent class for details.

    Args:
      fine_tune_checkpoint_type: whether to restore from a full detection
        checkpoint (with compatible variable names) or to restore from a
        classification checkpoint for initialization prior to training.
        Valid values: `detection`, `classification`. Default 'detection'.
      load_all_detection_checkpoint_vars: whether to load all variables (when
        `fine_tune_checkpoint_type='detection'`). If False, only variables
        within the appropriate scopes are included. Default False.

    Returns:
      A dict mapping variable names (to load from a checkpoint) to variables in
      the model graph.

    Raises:
      ValueError: if fine_tune_checkpoint_type is neither `classification`
        nor `detection`.
    """
    if fine_tune_checkpoint_type not in ['detection', 'classification']:
      raise ValueError('Not supported fine_tune_checkpoint_type: {}'.format(
          fine_tune_checkpoint_type))
    if fine_tune_checkpoint_type == 'classification':
      return self._feature_extractor.restore_from_classification_checkpoint_fn(
          self._extract_features_scope)
    if fine_tune_checkpoint_type == 'detection':
      variables_to_restore = {}
      for variable in tf.global_variables():
        var_name = variable.op.name
        if load_all_detection_checkpoint_vars:
          variables_to_restore[var_name] = variable
        else:
          if var_name.startswith(self._extract_features_scope):
            variables_to_restore[var_name] = variable
      return variables_to_restore
As you can see, even if the config file sets from_detection_checkpoint: True, only the variables in the feature extractor scope will be restored. To restore all the variables, you will have to set load_all_detection_checkpoint_vars: True in the config file.
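For reference, a sketch of where that flag sits in pipeline.config (the checkpoint path is a placeholder; keep the rest of your train_config as it already is):

train_config {
  fine_tune_checkpoint: "PATH_TO/model.ckpt"
  fine_tune_checkpoint_type: "detection"
  load_all_detection_checkpoint_vars: true
}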
So the above situation is quite clear: when you load the model as an Estimator, only the variables from the feature extractor scope are restored, the predictor-scope weights are not, and the Estimator therefore gives essentially random predictions.
When you load the model as a Predictor, all weights are loaded, and thus the predictions are reasonable.
What is the correct way to load a TensorFlow model from a checkpoint but reset the learning_rate so it starts from the beginning instead of continuing from the checkpoint?
I am using the following code for initializing the optimizer.
learning_rate = tf.train.polynomial_decay(start_learning_rate, self.global_step,
                                          decay_steps, end_learning_rate,
                                          power=power, name="new_one2")
opt = tf.train.AdamOptimizer(learning_rate)
I tried to restore var_list from the checkpoint and return only those variables whose names do not contain Adam, so that the AdamOptimizer is reinitialized from scratch. But this doesn't help. Here's the function I use:
def optimistic_restore(self, save_file, ignore_vars=None, verbose=False, ignore_incompatible_shapes=False):
    """This function tries to restore all variables in the save file.

    This function ignores variables that do not exist or have incompatible shape.
    Raises TypeError if there is a type mismatch for compatible shapes.

    session: tf.Session
        The tf session
    save_file: str
        Path to the checkpoint without the .index, .meta or .data extensions.
    ignore_vars: list, tuple or set of str
        These variables will be ignored.
    verbose: bool
        If True prints which variables will be restored
    ignore_incompatible_shapes: bool
        If True ignores variables with incompatible shapes.
        If False raises a runtime error if shapes are incompatible.
    """
    def vprint(*args, **kwargs):
        if verbose: print(*args, flush=True, **kwargs)
    # def dbg(*args, **kwargs): print(*args, flush=True, **kwargs)
    def dbg(*args, **kwargs): pass
    if ignore_vars is None:
        ignore_vars = []
    reader = tf.train.NewCheckpointReader(save_file)
    var_to_shape_map = reader.get_variable_to_shape_map()
    var_list = []
    for key in sorted(var_to_shape_map):
        if not 'Adam' in key:
            var_list.append(key)
    return var_list
This will give me all the variables except the Adam ones. And then I pass the resulting list of variables to the saver, as shown below:
varsss = self.optimistic_restore(tf.train.latest_checkpoint(my_ckpt_dir))
b = tf.global_variables()
tf.train.Saver(var_list=b[:len(varsss)])
But this still doesn't help: my learning rate doesn't go back to the start.
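For what it's worth, a hedged sketch of mapping the filtered checkpoint names back to graph variables by name, rather than slicing tf.global_variables() by position (sess here is assumed to be the active tf.Session):

name_to_var = {v.op.name: v for v in tf.global_variables()}
restore_vars = [name_to_var[name] for name in varsss if name in name_to_var]
restorer = tf.train.Saver(var_list=restore_vars)
restorer.restore(sess, tf.train.latest_checkpoint(my_ckpt_dir))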
In case someone else is also running into this problem: the solution was rather simple. I just replaced the global_step in my polynomial_decay learning rate with a placeholder, i.e.
custom_lr = tf.placeholder(tf.int32)
learning_rate = tf.train.polynomial_decay(start_learning_rate, custom_lr,
                                          decay_steps, end_learning_rate,
                                          power=power, name="new_one2")
Then simply feed the desired step value for custom_lr through the feed dict:
for step in range(0, 1000):
    sess.run(train_op, feed_dict={custom_lr: step})  # train_op: whatever training op you normally run