tensorflow matrix_band_part function equivalent in pytorch - indexing

I need to create an upper triangular masking tensor in PyTorch. In TensorFlow it's easy using matrix_band_part. Is there any PyTorch equivalent for this function?
It's like NumPy's triu_indices, but I need it for a tensor, not just a matrix.
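For what it's worth, a minimal sketch of one way to do this in PyTorch (shapes are illustrative): torch.triu and torch.tril operate on the last two dimensions, so they work on batched tensors, not just 2-D matrices.

import torch

x = torch.randn(4, 5, 5)                          # batch of 4 matrices

# Boolean upper-triangular mask, broadcast over the batch dimension
mask = torch.triu(torch.ones(5, 5, dtype=torch.bool))
masked = x * mask

# tf.matrix_band_part(x, 0, -1) keeps the upper triangle;
# torch.triu does the same directly on the batched tensor
upper = torch.triu(x)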

Related

Theano to tensorflow conversion

What is the tensorflow/keras equivalent of Theano's gt?
Based on Theano's documentation, theano.tensor.gt returns a symbolic 'int8' tensor representing the result of logical greater-than (a>b).
In Theano, I have:
import theano.tensor as T
posInd1 = T.gt(D1, eps).nonzero()[0]
How do I convert it to TensorFlow?
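For illustration, one possible translation, assuming D1 is a 1-D tensor and eps is a scalar (the sample values are made up): tf.where on a boolean condition returns the indices of the True entries.

import tensorflow as tf

D1 = tf.constant([0.1, 0.5, -0.2, 0.9])
eps = 0.3

# Equivalent of T.gt(D1, eps).nonzero()[0]: indices where D1 > eps.
# tf.where(condition) returns shape [num_true, 1], so take column 0.
posInd1 = tf.where(tf.greater(D1, eps))[:, 0]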

Pytorch: Numpy Arrays

Can I use numpy arrays when using pytorch?
I am converting code from TensorFlow to PyTorch, and the code uses NumPy arrays during the computation. Can I keep my inputs as NumPy arrays during the computation, or do I have to convert them to torch tensors?
If that array is being passed to a PyTorch model with PyTorch nn layers, then it MUST be a torch.Tensor and NOT a NumPy array.
Depending on the PyTorch layer, the tensor also has to be in a specific shape: for nn.Conv2d layers you must have a 4-D torch tensor, and for nn.Linear a 2-D torch tensor.
This is one of many reasons it cannot be a NumPy array.
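A minimal sketch of the conversion described above (shapes are illustrative):

import numpy as np
import torch

# Convert the NumPy array to a tensor before passing it to an nn layer;
# torch.from_numpy shares memory with the array (no copy is made)
arr = np.ones((8, 3, 32, 32), dtype=np.float32)   # 4-D input for nn.Conv2d
x = torch.from_numpy(arr)

conv = torch.nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3)
out = conv(x)                                     # works: x is a torch.Tensor

# Convert back to NumPy after the computation if needed
result = out.detach().numpy()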

Implement zigzag flatten NxN tensor with batches in TensorFlow

The problem can be described as zigzag scanning. However, I wonder whether there is a TensorFlow implementation using something like tf.tensor_scatter_nd_update, which TensorFlow suggests.
The input is a BxNxN tensor, where B is the batch dimension.
I found a workaround using a 1x1 conv: use NumPy to generate a constant permutation conv kernel (TF does not support eager tensor assignment...), then reshape the BxNxN tensor to Bx1x1x(NxN) before applying tf.nn.conv2d to it. Finally, do some reshape acrobatics to flatten it.
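As a simpler alternative to the 1x1-conv trick, a sketch using tf.gather with precomputed zig-zag indices; zigzag_indices is a hypothetical helper written in NumPy, not part of either library.

import numpy as np
import tensorflow as tf

def zigzag_indices(n):
    # Flat indices of an n x n matrix in JPEG-style zig-zag scan order
    idx = np.arange(n * n).reshape(n, n)
    order = []
    for s in range(2 * n - 1):
        diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        if s % 2 == 0:
            diag = diag[::-1]        # even anti-diagonals run upward
        order.extend(idx[i, j] for i, j in diag)
    return np.array(order)

x = tf.random.normal((4, 8, 8))                   # B x N x N
flat = tf.reshape(x, (4, 64))                     # row-major flatten per batch
zz = tf.gather(flat, zigzag_indices(8), axis=1)   # zig-zag order, B x N*N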

How to read SciPy sparse matrix into Tensorflow's placeholder

It's possible to feed dense data this way:
# tf - tensorflow, np - numpy, sess - session
m = np.ones((2, 3))
placeholder = tf.placeholder(tf.int32, shape=m.shape)
sess.run(placeholder, feed_dict={placeholder: m})
How do I read a SciPy sparse matrix (for example scipy.sparse.csr_matrix) into tf.placeholder, or maybe tf.sparse_placeholder?
I think that currently TF does not have a good way to read sparse data. If you do not want to convert your sparse matrix into a dense one, you can try to construct a sparse tensor.
Here is what the official tutorial tells you:
SparseTensors don't play well with queues. If you use SparseTensors
you have to decode the string records using tf.parse_example after
batching (instead of using tf.parse_single_example before batching).
To feed a SciPy sparse matrix to a TF placeholder:
Option 1: use tf.sparse_placeholder. Use coo_matrix in TensorFlow shows how to feed data to a sparse_placeholder.
Option 2: convert the sparse matrix to a dense NumPy matrix and feed it to tf.placeholder (of course, this is impossible when the converted dense matrix does not fit in memory).
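A minimal sketch of Option 1 in the TF 1.x API used above (the random matrix is illustrative): a sparse placeholder is fed the (indices, values, dense_shape) triple extracted from the coo_matrix.

import numpy as np
import scipy.sparse as sp
import tensorflow as tf

m = sp.random(4, 5, density=0.2, format='coo')
indices = np.stack([m.row, m.col], axis=1)        # shape [nnz, 2]

sp_ph = tf.sparse_placeholder(tf.float64)
# Reorder in case the coo indices are not lexicographically sorted,
# then densify just to have something to run
dense = tf.sparse_tensor_to_dense(tf.sparse_reorder(sp_ph))

with tf.Session() as sess:
    out = sess.run(dense, feed_dict={
        sp_ph: tf.SparseTensorValue(indices, m.data, m.shape)
    })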

SpatialFullConvolution (torch) in TensorFlow

I am trying to translate Torch code to TensorFlow, but I cannot find the TensorFlow function corresponding to SpatialFullConvolution, which can apply a transposed convolution to a vector (not only an image).
How can I deal with it?
Here is an example: https://github.com/soumith/dcgan.torch/blob/master/main.lua
SpatialFullConvolution in Torch is the same as a transposed convolution (also called deconvolution or fractionally strided convolution). In TensorFlow this is tf.nn.conv2d_transpose, and in Keras it is the Conv2DTranspose layer.
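A TF 1.x sketch of the idea: reshape the vector to a 4-D tensor first, then apply tf.nn.conv2d_transpose. The shapes are assumptions loosely modeled on the first layer of the linked DCGAN code (a 100-d noise vector projected to 4x4 feature maps).

import tensorflow as tf

z = tf.random_normal([16, 100])                  # batch of 16 noise vectors
z4d = tf.reshape(z, [16, 1, 1, 100])             # treat as a 1x1 'image' with 100 channels

# Filter layout is [height, width, out_channels, in_channels]
w = tf.get_variable('w', [4, 4, 512, 100])
out = tf.nn.conv2d_transpose(z4d, w,
                             output_shape=[16, 4, 4, 512],
                             strides=[1, 1, 1, 1],
                             padding='VALID')    # -> 16 x 4 x 4 x 512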