How to predict with Gru4Rec TensorFlow model

I am looking at the following Gru4Rec implementation: https://www.tensorflow.org/recommenders/examples/sequential_retrieval
and I was wondering how I can use it to predict new samples.
I tried using model.predict() with different inputs, but got:
NotImplementedError: Unimplemented `tf.keras.Model.call()`: if you intend to create a `Model` with the Functional API, please provide `inputs` and `outputs` arguments. Otherwise, subclass `Model` with an overridden `call()` method.

Related

Add other metrics to compute performance

I am using TFF version 0.12.0.
In order to compute the performance of the model, I would like to add sensitivity and specificity metrics (alongside accuracy):
def specificity
...
def create_compiled_keras_model():
    ...
    model.compile(optimizer=tf.keras.optimizers.SGD(lr=0.001, momentum=0.9),
                  loss=tf.keras.losses.BinaryCrossentropy(),
                  metrics=([tf.keras.metrics.BinaryAccuracy()], sensitivity, specificity))
    return model
I found this error:
TypeError: Type of `metrics` argument not understood. Expected a list or dictionary, found: ([<tensorflow.python.keras.metrics.BinaryAccuracy object at 0x7fb5b0711748>], <function sensitivity at 0x7fb6adf45e18>, <function specificity at 0x7fb5fdaf5f28>)
So how can I add such metrics in TensorFlow Federated?
Thanks
TFF requires metrics to be implemented using the tf.keras.metrics.Metric interface, and can't wrap arbitrary Python functions.
An example of a custom metric built by subclassing tf.keras.metrics.Sum can be found in https://github.com/tensorflow/federated/blob/3ed93c8036501fe327ede249a4b0f20d02c6f476/tensorflow_federated/python/learning/keras_utils_test.py#L33. The key part is the implementation of the update_state method.
For sensitivity and specificity metrics, the implementations of tf.keras.metrics.SensitivityAtSpecificity and its base class tf.keras.metrics.SensitivitySpecificityBase may be useful examples.
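For illustration, a specificity metric written against the tf.keras.metrics.Metric interface could look roughly like the sketch below. This is not the code from the linked test file; the class name, the 0.5 threshold and the use of tf.math.divide_no_nan are my own assumptions, and sample_weight is ignored.

import tensorflow as tf

class Specificity(tf.keras.metrics.Metric):
    """Specificity = true negatives / (true negatives + false positives)."""

    def __init__(self, name='specificity', **kwargs):
        super().__init__(name=name, **kwargs)
        self.true_negatives = self.add_weight(name='tn', initializer='zeros')
        self.false_positives = self.add_weight(name='fp', initializer='zeros')

    def update_state(self, y_true, y_pred, sample_weight=None):
        # Binarize predictions at an assumed 0.5 threshold; sample_weight
        # is not handled in this sketch.
        y_true = tf.cast(tf.reshape(y_true, [-1]), tf.bool)
        y_pred = tf.reshape(y_pred, [-1]) > 0.5
        tn = tf.logical_and(tf.logical_not(y_true), tf.logical_not(y_pred))
        fp = tf.logical_and(tf.logical_not(y_true), y_pred)
        self.true_negatives.assign_add(tf.reduce_sum(tf.cast(tn, tf.float32)))
        self.false_positives.assign_add(tf.reduce_sum(tf.cast(fp, tf.float32)))

    def result(self):
        return tf.math.divide_no_nan(
            self.true_negatives, self.true_negatives + self.false_positives)

An instance of such a class (rather than a bare function) can then go into the metrics list, e.g. metrics=[tf.keras.metrics.BinaryAccuracy(), Specificity()].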

Loss tensor being pruned out of graph in PopART

I’ve written a very simple PopART program using the C++ interface, but every time I try to compile it to run on an IPU device I get the following error:
terminate called after throwing an instance of 'popart::error'
what(): Could not find loss tensor 'L1:0' in main graph tensors
I’m defining the loss in my program like so:
auto loss = builder->aiGraphcoreOpset1().l1loss({outputs[0]}, 0.1f, popart::ReductionType::Sum, "l1LossVal");
Is there something wrong with my loss definition that’s resulting in it being pruned out of the graph? I’ve followed the same structure as one of the Graphcore examples here.
This error usually happens when the model protobuf you pass to the TrainingSession or InferenceSession objects doesn’t contain the loss tensor. A common reason for this is when you call builder->getModelProto() before you add the loss tensor to the graph. To ensure your loss tensor is part of the protobuf your calls should be in the following order:
...
// Add the loss to the graph first...
auto loss = builder->aiGraphcoreOpset1().l1loss(...);
// ...then serialise the model, so the loss tensor is part of the proto...
auto proto = builder->getModelProto();
// ...and only then create the session from that proto.
auto session = popart::TrainingSession::createFromOnnxModel(...);
...
The key point is that the getModelProto() call should be the last call from the builder interface before setting up the PopART session.

tensor conversion function numpy() doesn't work within tf.estimator model function

I have tried this with both TensorFlow v2.0 and v1.12.0 (with tf.enable_eager_execution()). Apparently, if I call numpy() with the code snippet shown below in my main() function, it works perfectly. However, if I use it in my estimator model function, i.e. model_fn(features, labels, mode, params), it complains that 'Tensor' object has no attribute 'numpy'.
ndarray = np.ones([3, 3])
tensor = tf.multiply(ndarray, 42)
print(tensor)
print(tensor.numpy())
Has anyone else experienced a similar problem? Seems like a big issue for tf.estimator, no?
It won't work. The Estimator API is tied to graph construction and does not fully support eager execution. As per the official documentation:
Calling methods of Estimator will work while eager execution is
enabled. However, the model_fn and input_fn is not executed eagerly
https://www.tensorflow.org/api_docs/python/tf/estimator/Estimator
TF 2.0 also moves away from custom estimators, steering users towards premade estimators and Keras.
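The answer above concerns calling .numpy() directly on graph tensors. If the goal is only to look at concrete values from inside model_fn, one common escape hatch (not mentioned in the answer, and assuming TF 1.13+/2.0 where tf.py_function exists) is to route the value through tf.py_function, whose body does run eagerly. The model_fn below is a made-up minimal example for predict mode:

import numpy as np
import tensorflow as tf

def inspect(t):
    # Inside tf.py_function, `t` is an eager tensor, so .numpy() is available.
    print(t.numpy())
    return t

def model_fn(features, labels, mode, params):
    ndarray = np.ones([3, 3])
    tensor = tf.multiply(ndarray, 42)
    # Here `tensor` is a symbolic graph tensor: tensor.numpy() raises
    # AttributeError because model_fn is traced as a graph, not run eagerly.
    tensor = tf.py_function(inspect, [tensor], tf.float64)
    return tf.estimator.EstimatorSpec(mode, predictions={'output': tensor})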

How to initialize weight for convolution layers in Tensorflow Object Detection API?

I followed this tutorial for implementing Tensorflow Object Detection API.
The preferred way is to use pretrained models, but in some cases we need to train from scratch.
For that, we just need to comment out two lines in the configuration file:
#fine_tune_checkpoint: "object_detection/data/mobilenet_v1_1.0_224/mobilenet_v1_1.0_224.ckpt"
#from_detection_checkpoint: true
If I want to initialize weight with Xavier weight initialization, how can I do that?
As you can see in the configuration protobuf definition, there are 3 initializers you can use:
TruncatedNormalInitializer truncated_normal_initializer
VarianceScalingInitializer variance_scaling_initializer
RandomNormalInitializer random_normal_initializer
The VarianceScalingInitializer is what you are looking for. It is a general initializer that you can turn into the Xavier initializer by setting factor=1.0 and mode='FAN_AVG', as stated in the documentation.
So, by setting the initializers as
initializer {
  variance_scaling_initializer {
    factor: 1.0
    uniform: true
    mode: FAN_AVG
  }
}
in your configuration, you obtain the Xavier initializer.
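As a sanity check, the same parameters reproduce the Glorot/Xavier uniform initializer in plain TensorFlow 1.x (this snippet is only an illustration, not something the Object Detection API needs):

import tensorflow as tf

# Variance scaling with scale 1.0, fan_avg mode and a uniform distribution...
vs = tf.variance_scaling_initializer(scale=1.0, mode='fan_avg',
                                     distribution='uniform')
# ...matches the Glorot/Xavier uniform initializer: both sample from
# U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out)).
xavier = tf.glorot_uniform_initializer()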
But even if you need to train on new data, consider using a pretrained network as the initialization instead of random initialization. For more details, see this article.
The mobilenet_v1 feature extractor imports the backbone network from research/slim/nets:
from nets import mobilenet_v1
The code of mobilenet instantiates the layers according to the specification like this:
net = slim.conv2d(net, depth(conv_def.depth), conv_def.kernel, stride=conv_def.stride, scope=end_point)
See
https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.py#L264
As you can see, there are no kwargs passed to the conv2d call, so with the current code you cannot specify which weights_initializer will be used.
However, by default the initializer is Xavier anyway, so you are lucky.
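If you do end up modifying or wrapping the slim code yourself, an arg_scope is the usual way to set the initializer for every conv layer at once. A hedged sketch for TF 1.x with tf.contrib.slim (the placeholder shape, filter count and scope name are arbitrary):

import tensorflow as tf

slim = tf.contrib.slim
images = tf.placeholder(tf.float32, [None, 224, 224, 3])

# Every slim.conv2d created inside this scope uses the Xavier initializer
# (which is also slim's default when no weights_initializer is given).
with slim.arg_scope([slim.conv2d],
                    weights_initializer=slim.xavier_initializer()):
    net = slim.conv2d(images, 32, [3, 3], scope='conv1')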
I must say that training an object detection model without pre-training the feature extractor on some auxiliary task may simply fail.

Using cleverhans with just model weights and no model class

I am using a pretrained model that someone else created; they have only released the model weights. Currently I am importing the model weights into my graph and referring to them by tensor name. However, this seems incompatible with CleverHans' code, which seems to require a model object that has a predict method.
Is there any work around for this which does not require me to rewrite most of the cleverhans attacks because I do not have the model class and predict method?
What you are describing should be possible, but it may be somewhat resource intensive because it may recreate the graph several times. Basically, you can implement a CleverHans Model class that takes a graph checkpoint in its __init__ method. The get_logits or fprop method should take an input tensor and load the graph to obtain the corresponding output tensor, performing some graph surgery to replace the checkpoint graph's input tensor with your own tensor: see the input_map argument of tf.import_graph_def: https://www.tensorflow.org/api_docs/python/tf/graph_util/import_graph_def
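A rough sketch of that idea is shown below. It assumes CleverHans v3's cleverhans.model.Model interface and a frozen GraphDef; the tensor names 'input:0' and 'logits:0', the nb_classes default and the class name are placeholders you would replace with the details of the released weights.

import tensorflow as tf
from cleverhans.model import Model

class FrozenGraphModel(Model):
    def __init__(self, graph_def, input_name='input:0',
                 logits_name='logits:0', nb_classes=10):
        super(FrozenGraphModel, self).__init__(nb_classes=nb_classes)
        self.graph_def = graph_def
        self.input_name = input_name
        self.logits_name = logits_name

    def fprop(self, x, **kwargs):
        # Re-import the frozen graph and map its input tensor onto `x`.
        # This runs every time an attack calls fprop, which is why the
        # approach can be heavy on resources.
        logits, = tf.import_graph_def(self.graph_def,
                                      input_map={self.input_name: x},
                                      return_elements=[self.logits_name])
        return {self.O_LOGITS: logits,
                self.O_PROBS: tf.nn.softmax(logits)}

Attacks that only need get_logits or get_probs should then work with an instance of this class.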