How to have scipy operations in tensorflow

I have a network written in TensorFlow Keras. I'd like to apply some scipy operations (cKDTree) to the output of some layers and use the result of those operations in my network as a multiplier. I have some basic questions:
I believe no backpropagation is required through that function (decorated with @tf.function). Is that true? If so, how can I tell TensorFlow not to backpropagate through this function?
Do I need to (a) use variables, (b) convert the tensors to numpy arrays, or (c) use tf.no_gradient?
Any help is appreciated.
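A minimal sketch, assuming the cKDTree is used to compute nearest-neighbour distances over a layer's output, of wrapping the scipy call in tf.py_function and cutting the gradient with tf.stop_gradient (the function names and shapes here are illustrative, not taken from the question):

    import numpy as np
    import tensorflow as tf
    from scipy.spatial import cKDTree

    def nearest_neighbor_distance(points):
        # Runs on the host in numpy; `points` arrives here as an eager tensor.
        pts = points.numpy()
        tree = cKDTree(pts)
        dists, _ = tree.query(pts, k=2)      # k=2: the first hit is the point itself
        return dists[:, 1].astype(np.float32)

    def kdtree_multiplier(layer_output):
        # Wrap the scipy call; no gradient flows through py_function for numpy code,
        # and stop_gradient makes that intent explicit to TensorFlow.
        dists = tf.py_function(nearest_neighbor_distance, [layer_output], tf.float32)
        dists.set_shape([None])              # py_function drops static shape info
        return tf.stop_gradient(dists)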

Related

Implementing backprop in numpy

I am trying to implement backprop in numpy by defining a function that performs some kind of operation given an input, a weight matrix, and a bias, and returns the output together with a backward function that can be used to update the weights.
This is my current code; however, I think there are some bugs in the derivation, as the gradients for the W1 matrix are too large. I also have a PyTorch implementation of the same thing as a reference.
Any help is appreciated.
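Since the question's code isn't shown, here is a minimal self-contained sketch of a forward/backward pair for a single linear layer in numpy; a common cause of oversized W1 gradients is getting the transpose wrong in dW (it should be x.T @ dy):

    import numpy as np

    def linear_forward(x, W, b):
        # Fully connected layer: y = x @ W + b, with x of shape (batch, in_dim).
        y = x @ W + b
        cache = (x, W)
        return y, cache

    def linear_backward(dy, cache):
        # Given the upstream gradient dy = dL/dy, return dL/dx, dL/dW and dL/db.
        x, W = cache
        dx = dy @ W.T        # (batch, in_dim)
        dW = x.T @ dy        # (in_dim, out_dim)
        db = dy.sum(axis=0)  # (out_dim,)
        return dx, dW, db

    # quick shape check
    x = np.random.randn(4, 3)
    W = np.random.randn(3, 2)
    b = np.zeros(2)
    y, cache = linear_forward(x, W, b)
    dx, dW, db = linear_backward(np.ones_like(y), cache)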

How to define a loss function that needs a numpy array (not a tensor) as input when building a tensorflow graph?

I want to add a constraint term to my loss function. The definition of this constraint term takes a numpy array as input, so I cannot define it as a tensor-typed graph node in TensorFlow. How can I define this part of the graph so that it participates in the network optimization?
Operations done on numpy arrays cannot be automatically differentiated in TensorFlow. Since you are using this computation as part of the loss computation, I assume you want to differentiate it. In this case, your best option is probably to reimplement the constraint in TensorFlow. The only other approach I can think of is to use autograd in conjunction with TF. This seems possible - something along the lines of: evaluate part of the graph with TF, get numpy arrays out, call your function under autograd, get gradients, feed them back into TF - but it will likely be harder and slower.
If you are reimplementing it in TF, most numpy operations have easy one-to-one corresponding operations in TF. If the implementation uses a lot of control flow (which can be painful in classic TF), you can use eager execution or py_func.
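As a rough sketch of both options in classic (graph-mode) TF 1.x, where weights, base_loss, and lam are placeholder names standing in for your own tensors:

    import numpy as np
    import tensorflow as tf

    def numpy_constraint(w):
        # Some penalty that is easiest to express in numpy (illustrative only).
        return np.float32(np.sum(np.abs(w)))

    weights = tf.get_variable("weights", shape=[10, 10])  # stand-in for your tensor
    base_loss = tf.constant(0.0)                          # stand-in for your real loss
    lam = 0.01

    # Option 1: wrap the numpy code with py_func; no gradient flows through this term.
    constraint_np = tf.py_func(numpy_constraint, [weights], tf.float32)

    # Option 2: reimplement the same penalty with TF ops so it stays differentiable.
    constraint_tf = tf.reduce_sum(tf.abs(weights))

    loss = base_loss + lam * constraint_tf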

math_ops.floor equivalent in Keras

I'm trying to implement a custom layer in Keras where I need to convert a tensor of floats in [a, 1+a) to a binary tensor for masking. I can see that Tensorflow has a floor function that can do that, but Keras doesn't seem to have it in keras.backend. Any idea how I can do this?
As requested by OP, I will mention the answer I gave in my comment and elaborate more:
Short answer: you won't encounter any major problems if you use tf.floor().
Long answer: Using Keras backend functions (i.e. keras.backend.*) is necessary when 1) the argument(s) passed to the actual TensorFlow or Theano backend function need to be pre-processed or augmented, or the returned results need to be post-processed (for example, the mean method in the backend also works with boolean tensors as input, whereas the reduce_mean method in TF expects numerical types), or 2) you want to write a model that works across all the backends Keras supports.
Otherwise, it is fine to use most of the real backend functions directly; however, if the function is defined in the keras.backend module, it is recommended to use that version instead.
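For concreteness, a small sketch of using tf.floor directly inside a Lambda layer as a binary mask, assuming the inputs lie in [a, 1 + a) with 0 <= a < 1 so the floor is exactly 0 or 1:

    import tensorflow as tf
    from keras.layers import Lambda, Multiply

    # For inputs in [a, 1 + a) with 0 <= a < 1, tf.floor yields exactly 0.0 or 1.0.
    # The gradient of floor is zero everywhere, which is usually fine for a mask.
    binary_mask = Lambda(lambda x: tf.floor(x))

    # e.g. masked = Multiply()([features, binary_mask(scores)])  (hypothetical tensors)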

Can I use Tensorflow and Keras interchangeably?

I am using L2 regularization. Tensorflow has tf.nn.l2_loss. Can I use
K.sum(K.square(K.abs(Weights)))
and tf.nn.l2_loss interchangeably in Keras (Tensorflow backend)?
Yes, you can, but keep in mind that tf.nn.l2_loss computes output = sum(t ** 2) / 2 (from the documentation), so your Keras expression is missing the factor of 0.5. Also, you don't need to compute K.abs(weights), because K.square(K.abs(weights)) == K.square(weights).
The differences are:
tf.nn.l2_loss is implemented directly as a kernel.
Operations in the Keras backend translate directly to the corresponding TensorFlow ops.
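A quick sanity check of the equivalence (up to the 0.5 factor) is to evaluate both expressions on the same weights:

    import numpy as np
    import tensorflow as tf
    from keras import backend as K

    w = K.variable(np.random.randn(3, 4).astype(np.float32))

    tf_version = tf.nn.l2_loss(w)              # sum(w ** 2) / 2
    keras_version = 0.5 * K.sum(K.square(w))   # K.abs is redundant before K.square

    print(K.eval(tf_version), K.eval(keras_version))  # both print the same value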

Parallel way of applying function element-wise to a Pytorch CUDA Tensor

Suppose I have a torch CUDA tensor and I want to apply some function like sin(), but I have explicitly defined the function F myself. How can I use parallel computation to apply F in PyTorch?
I think it is currently not possible to explicitly parallelize an arbitrary Python function over a CUDA tensor. A possible solution is to define F as a Function, like the built-in non-linear activation functions, so you can feed it forward through the net together with your other layers.
The drawback is that this probably won't work out of the box, because you would have to define a CUDA function and recompile PyTorch.
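A simpler route, if F can be expressed with PyTorch's built-in element-wise ops, is to compose it from those ops: each op already launches a parallel CUDA kernel when the tensor lives on the GPU, so no custom kernel or recompilation is needed. The particular F below is just a made-up example:

    import torch

    def F(x):
        # Hypothetical element-wise function built entirely from torch ops; each op
        # dispatches its own parallel CUDA kernel when x is on the GPU.
        return torch.sin(x) * torch.exp(-x.abs())

    device = "cuda" if torch.cuda.is_available() else "cpu"
    x = torch.randn(1_000_000, device=device)
    y = F(x)  # applied element-wise, in parallel, with full autograd support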