I am trying to translate Torch code to TensorFlow, but I cannot find the TensorFlow counterpart of SpatialFullConvolution, which can apply a transposed convolution to a vector (not only to an image).
How can I deal with this?
Here is an example: https://github.com/soumith/dcgan.torch/blob/master/main.lua
SpatialFullConvolution in Torch is the same as a transposed convolution (also called deconvolution or fractionally strided convolution). You can find the Keras documentation here.
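For the vector case (as in the DCGAN generator linked above), one option is to reshape the vector into a 1x1 feature map and then apply Conv2DTranspose. A minimal sketch; the layer sizes below are illustrative assumptions, not taken from the question:

from keras.models import Sequential
from keras.layers import Reshape, Conv2DTranspose

nz = 100  # length of the input vector (e.g. the DCGAN noise vector)

model = Sequential()
model.add(Reshape((1, 1, nz), input_shape=(nz,)))   # treat the vector as a 1x1 "image"
model.add(Conv2DTranspose(512, kernel_size=4, strides=1,
                          padding='valid'))          # 1x1 -> 4x4
model.add(Conv2DTranspose(256, kernel_size=4, strides=2,
                          padding='same'))           # 4x4 -> 8x8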
The problem can be described as zigzag scanning of a BxNxN tensor, where B is the batch dimension. However, I wonder whether there is a TensorFlow implementation using something like tf.tensor_scatter_nd_update, which TensorFlow suggests.
I found a workaround using a 1x1 conv. Use NumPy to generate a constant permutation conv kernel (TF does not support eager tensor assignment...), then
reshape the tensor (BxNxN) to Bx1x1x(NxN) before applying tf.nn.conv2d to it. Finally, do some reshape acrobatics to flatten it.
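Something along these lines; a rough sketch that assumes N is fixed and uses one common zigzag ordering (adjust the sort key if you need the exact JPEG convention):

import numpy as np
import tensorflow as tf

N = 4  # matrix size, assumed known in advance

# Zigzag order: for each output position, the flat (row-major) source index.
order = sorted(((i, j) for i in range(N) for j in range(N)),
               key=lambda ij: (ij[0] + ij[1],
                               ij[1] if (ij[0] + ij[1]) % 2 else ij[0]))
zigzag = [i * N + j for i, j in order]

# Constant 1x1 permutation kernel over the N*N "channels".
kernel = np.zeros((1, 1, N * N, N * N), dtype=np.float32)
for out_pos, in_pos in enumerate(zigzag):
    kernel[0, 0, in_pos, out_pos] = 1.0

def zigzag_scan(x):
    # x: (B, N, N) float32 -> (B, N*N) in zigzag order
    x = tf.reshape(x, (-1, 1, 1, N * N))
    y = tf.nn.conv2d(x, kernel, strides=[1, 1, 1, 1], padding='VALID')
    return tf.reshape(y, (-1, N * N))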
I need to create an upper triangular masking tensor in PyTorch. In TensorFlow it is easy using matrix_band_part. Is there any PyTorch equivalent of this function?
It's like NumPy's triu_indices, but I need it for a tensor, not just a matrix.
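For reference, a minimal sketch of one way this can be done with torch.triu (and torch.triu_indices, available in recent PyTorch versions):

import torch

N = 5
# Boolean upper-triangular mask (diagonal included), roughly what
# tf.matrix_band_part(tf.ones([N, N]), 0, -1) gives in TensorFlow.
mask = torch.triu(torch.ones(N, N, dtype=torch.bool))  # use torch.uint8 on older versions

x = torch.randn(2, N, N)   # torch.triu also accepts batches of matrices
upper = torch.triu(x)      # zeros out the strictly lower triangle

# Index pairs of the upper triangle, analogous to numpy.triu_indices:
rows, cols = torch.triu_indices(N, N)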
I don't know how to convert the PyTorch method adaptive_avg_pool2d to Keras or TensorFlow. Can anyone help?
The PyTorch method is
adaptive_avg_pool2d(x, [14, 14])
I tried to use average pooling and then reshape the tensor in Keras, but got the error:
ValueError: total size of new array must be unchanged
I'm not sure if I understood your question, but in PyTorch, you pass the spatial dimensions to AdaptiveAvgPool2d. For instance, if you want to have an output sized 5x7, you can use nn.AdaptiveAvgPool2d((5,7)).
If you want a global average pooling layer, you can use nn.AdaptiveAvgPool2d(1). In Keras you can just use GlobalAveragePooling2D.
For other output sizes in Keras, you need to use AveragePooling2D, but you can't specify the output shape directly. You need to calculate/define the pool_size, stride, and padding parameters depending on the output shape you want. If you need help with the calculations, check this page of the CS231n course.
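As an illustration only (the sizes below are assumptions, not taken from the question): emulating nn.AdaptiveAvgPool2d((7, 7)) on a 14x14 feature map, plus the global-pooling case.

from keras.layers import AveragePooling2D, GlobalAveragePooling2D

# 14x14 -> 7x7: matches nn.AdaptiveAvgPool2d((7, 7)) when the input size is
# fixed and divisible by the target size.
pool = AveragePooling2D(pool_size=(2, 2), strides=(2, 2))

# Global average pooling: matches nn.AdaptiveAvgPool2d(1).
gap = GlobalAveragePooling2D()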
I've compared the LSTM results of a Keras/TensorFlow calculation and a NumPy calculation. However, the results are slightly different:
Numpy: [[ 0.16315128 -0.04277606 0.26504123 0.08014129 0.38561829]]
Keras: [[ 0.16836338 -0.04930305 0.25080156 0.08938988 0.3537751 ]]
Keras' LSTM implementation does not use tf.contrib.rnn; instead, Keras manages the parameters directly and uses tf.matmul for the calculation. I found the corresponding Keras implementation and tried the same calculation with NumPy, but the values are slightly different, as shown above.
I have checked the formula several times and it seems to be the same. The only difference is between tf.matmul and np.dot. Maybe there are some differences in the floating-point arithmetic, but even so, I think the results differ too much: the biggest difference is about 10%. I'd like to match the NumPy calculation with the TensorFlow calculation. If someone could give me a hint or point me to the right implementation, I'd really appreciate it.
Here are the Keras implementation and the NumPy code I implemented myself:
Keras: https://github.com/keras-team/keras/blob/master/keras/layers/recurrent.py#L1921-L1948
Numpy: https://github.com/likejazz/jupyter-notebooks/blob/master/deep-learning/lstm-keras-inspect.py
The default value of recurrent_activation is 'hard_sigmoid' for the Keras LSTM layer. However, the original sigmoid function is used in your NumPy implementation.
So you can either change the recurrent_activation argument to 'sigmoid',
model.add(LSTM(5, input_shape=(8, 3), recurrent_activation='sigmoid'))
or use the "hard" sigmoid function in your NumPy code.
def hard_sigmoid(x):
    return np.clip(0.2 * x + 0.5, 0, 1)
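For what it's worth, a quick numerical check (my own, not from the question) of how far the two activations diverge, which is consistent with gate-level differences of several percent:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hard_sigmoid(x):
    return np.clip(0.2 * x + 0.5, 0, 1)

x = np.linspace(-3, 3, 13)
print(np.max(np.abs(sigmoid(x) - hard_sigmoid(x))))  # roughly 0.05-0.08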
It's possible to read dense data this way:
# tf - tensorflow, np - numpy, sess - session
m = np.ones((2, 3), dtype=np.int32)
placeholder = tf.placeholder(tf.int32, shape=m.shape)
sess.run(placeholder, feed_dict={placeholder: m})
How can I read a SciPy sparse matrix (for example, scipy.sparse.csr_matrix) into a tf.placeholder, or maybe a tf.sparse_placeholder?
I think that currently TF does not have a good way to read sparse data. If you do not want to convert your sparse matrix into a dense one, you can try to construct a sparse tensor.
Here is what the official tutorial tells you:
SparseTensors don't play well with queues. If you use SparseTensors
you have to decode the string records using tf.parse_example after
batching (instead of using tf.parse_single_example before batching).
To feed a SciPy sparse matrix to a TF placeholder:
Option 1: use tf.sparse_placeholder. Use coo_matrix in TensorFlow shows the way to feed data to a sparse_placeholder (a sketch is also included below).
Option 2: convert the sparse matrix to a dense NumPy matrix and feed it to tf.placeholder (of course, this is impossible when the converted dense matrix does not fit in memory).
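A minimal sketch of option 1, assuming TF 1.x graph mode and a small random csr_matrix just for demonstration:

import numpy as np
import scipy.sparse
import tensorflow as tf

m = scipy.sparse.random(4, 6, density=0.2, format='csr', dtype=np.float32)

sp = tf.sparse_placeholder(tf.float32)
dense = tf.sparse_tensor_to_dense(sp, validate_indices=False)  # something to run

# Convert CSR -> COO to get explicit (row, col) index pairs.
coo = m.tocoo()
indices = np.stack([coo.row, coo.col], axis=1).astype(np.int64)
feed = tf.SparseTensorValue(indices, coo.data, np.array(coo.shape, dtype=np.int64))

with tf.Session() as sess:
    print(sess.run(dense, feed_dict={sp: feed}))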