Can I use Tensorflow and Keras interchangeably? - tensorflow

I am implementing L2 regularization. In Keras I can write
K.sum(K.square(K.abs(weights)))
while TensorFlow has tf.nn.l2_loss.
Can I use the two interchangeably in Keras (TensorFlow backend)?

Yes, you can, but keep in mind that tf.nn.l2_loss computes output = sum(t ** 2) / 2 (from the documentation), so your Keras expression is missing the factor of 0.5. Also, you don't need to compute K.abs(weights), because K.square(K.abs(weights)) == K.square(weights).
The differences are:
tf.nn.l2_loss is implemented directly as a TensorFlow kernel.
Operations in the Keras backend translate directly to the corresponding TensorFlow ops, defined here.
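The relationship can be checked numerically, with NumPy standing in for the backend ops (the asserts mirror the two points above):

```python
import numpy as np

w = np.array([[0.5, -1.0], [2.0, -0.25]])  # example weight matrix

keras_style = np.sum(np.square(np.abs(w)))  # K.sum(K.square(K.abs(w)))
tf_l2 = np.sum(w ** 2) / 2                  # what tf.nn.l2_loss computes

assert np.isclose(np.sum(np.square(w)), keras_style)  # K.abs is redundant
assert np.isclose(tf_l2, 0.5 * keras_style)           # factor-of-0.5 difference
```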

Related

How to have scipy operations in tensorflow

I have a network written in TensorFlow Keras, and I'd like to apply some scipy operations (cKDTree) to some layers and use the output of those operations as a multiplier in my network. I have some basic questions:
I believe no back-propagation is required for that function (decorated with @tf.function). Is that true? If so, how can I tell TensorFlow not to back-propagate through this function?
Do I need to a) use variables, b) convert them to NumPy arrays, or c) use tf.no_gradient?
Any help is appreciated.
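One common pattern for this situation is to wrap the scipy call in tf.py_function and cut the gradient with tf.stop_gradient. A sketch under assumptions (nn_distances and scipy_multiplier are hypothetical names, not from the thread):

```python
import numpy as np
import tensorflow as tf
from scipy.spatial import cKDTree

def nn_distances(points):
    # Runs eagerly inside tf.py_function, so .numpy() is available
    pts = points.numpy()
    dists, _ = cKDTree(pts).query(pts, k=2)  # k=2: self + nearest neighbour
    return dists[:, 1].astype(np.float32)    # drop the zero self-distance

def scipy_multiplier(points):
    # tf.py_function lets arbitrary Python (here, scipy) run inside the graph;
    # tf.stop_gradient tells TF to treat the result as a constant, so no
    # back-propagation is attempted through the scipy code
    d = tf.py_function(nn_distances, inp=[points], Tout=tf.float32)
    return tf.stop_gradient(d)
```

A layer's output can then be multiplied by scipy_multiplier(x) without TF trying to differentiate through cKDTree.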

How to initialize mean and variance of Pytorch BatchNorm2d?

I'm porting a TensorFlow model to PyTorch, and I'd like to initialize the mean and variance of BatchNorm2d from the TensorFlow model.
I’m doing it in this way:
bn.running_mean = torch.nn.Parameter(torch.Tensor(TF_param))
And I get this error:
RuntimeError: the derivative for 'running_mean' is not implemented
But it works for bn.weight and bn.bias. Is there any way to initialize the mean and variance from my pre-trained TensorFlow model? Is there anything like moving_mean_initializer and moving_variance_initializer in PyTorch?
Thanks!
The running mean and variance of a batch-norm layer are not nn.Parameters, but buffers of the layer.
I think you can simply assign a torch.Tensor; there is no need to wrap it in an nn.Parameter.
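A minimal sketch of that assignment (the tf_mean / tf_var values here are placeholders standing in for values exported from the TensorFlow model):

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(4)
tf_mean = [0.1, 0.2, 0.3, 0.4]  # placeholder: exported from the TF model
tf_var = [1.0, 0.9, 1.1, 1.2]   # placeholder: exported from the TF model

# running_mean / running_var are buffers, not Parameters, so copy plain
# tensors into them instead of wrapping them in nn.Parameter
with torch.no_grad():
    bn.running_mean.copy_(torch.tensor(tf_mean))
    bn.running_var.copy_(torch.tensor(tf_var))
```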

Replacing multiplication operator in existing keras (tensorflow) model

I am currently using an existing Keras implementation of a certain model and I would like to study the effects of different multiplication implementations on its computational speed and accuracy.
Is there a simple way to replace the Keras (TensorFlow) multiplication that is used in its Dense and Conv (and other pre-existing) layers with a custom one?
The idea is also to see the difference between training with normal multiplication + testing with custom multiplication and doing both with the custom multiplication.
So I'm looking for a solution that's something like:
import tensorflow as tf
tf.__mul__ = custom_mult
and will replace all multiplication operations in Keras's default layers with my own implementation.
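Patching tf.__mul__ at module level would not affect the ops that Keras layers actually call. One workable alternative, sketched here in NumPy with hypothetical names (dense_forward and custom_mult are illustrative, not Keras API), is to write layers whose forward pass takes the multiplication as an injectable function:

```python
import numpy as np

def custom_mult(a, b):
    return a * b  # stand-in for an alternative multiplication implementation

def dense_forward(x, w, b, mult=np.multiply):
    # Express the matmul as injected-multiply + sum, so `mult` controls every
    # product in the layer: out[i, j] = sum_k mult(x[i, k], w[k, j]) + b[j]
    return np.sum(mult(x[:, :, None], w[None, :, :]), axis=1) + b
```

Training with np.multiply and evaluating with custom_mult then only requires passing a different mult argument, rather than patching TF internals.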

math_ops.floor equivalent in Keras

I'm trying to implement a custom layer in Keras where I need to convert a tensor of floats in [a, 1+a) to a binary tensor for masking. I can see that TensorFlow has a floor function that can do that, but Keras doesn't seem to have one in keras.backend. Any idea how I can do this?
As requested by OP, I will mention the answer I gave in my comment and elaborate more:
Short answer: you won't encounter any major problems if you use tf.floor().
Long answer: Using Keras backend functions (i.e. keras.backend.*) is necessary only when 1) you need to pre-process or augment the argument(s) passed to the actual TensorFlow or Theano backend function, or to post-process the returned results (for example, the backend's mean method also accepts boolean tensors as input, whereas TF's reduce_mean expects numerical types), or 2) you want to write a model that works across all the Keras-supported backends.
Otherwise, it is fine to use most real backend functions directly; however, if a function is already defined in the keras.backend module, it is recommended to use that instead.
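To illustrate why floor works for the OP's masking: for a in [0, 1), floor maps values in [a, 1) to 0 and values in [1, 1 + a) to 1. A NumPy sketch of the idea (tf.floor behaves the same element-wise):

```python
import numpy as np

a = 0.3
x = np.array([0.3, 0.7, 1.05, 1.29])  # example values in [a, 1 + a)
mask = np.floor(x)  # 0.0 for values below 1, 1.0 for values in [1, 1 + a)
```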

How does Tensorflow support using optimizers with custom ops?

I've made a new op and I'd like to use it with AdamOptimizer. I've created a gradient for it following the instructions here and added it to my optimizer's var_list, but TensorFlow says that my variable doesn't have a processor.
Is there support for TensorFlow custom ops in optimizers?
Does the optimizer class let me create a new processor, or would I have to rewrite part of compute_gradients?
Also, what does "automatic differentiation" mean, as used in the TF docs:
To make automatic differentiation work for new ops, you must register a gradient function which computes gradients with respect to the ops' inputs given gradients with respect to the ops' outputs.
Thanks!
So I found out that what I was doing is not supported by TensorFlow's optimizers.
I was trying to create an op that would act like a TensorFlow variable (i.e., get updated by the functions within Optimizer::minimize()). However, I believe TF does something weird with processors and Eigen::Tensors, which I don't fully understand, in order to update gradients in minimize(), and that mechanism naturally doesn't work with Op classes.
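For reference, the "register a gradient function" part of the quoted docs can be sketched in the Python API with tf.custom_gradient (illustrative only; a C++ custom op like the OP's would use the REGISTER_OP gradient-registration path instead):

```python
import tensorflow as tf

@tf.custom_gradient
def clip_straight_through(x):
    # Forward: an ordinary clip. Backward: a manually registered gradient
    # function that passes incoming gradients through unchanged.
    y = tf.clip_by_value(x, 0.0, 1.0)

    def grad(dy):
        return dy  # gradient w.r.t. the input, given dy w.r.t. the output

    return y, grad
```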