I wonder if there is a way to compute the Gaussian kernel of a numpy masked array?
I import:
from sklearn.metrics.pairwise import rbf_kernel
If one uses a masked array as the input to the rbf_kernel function of the scikit-learn package, the result is not a masked array. It seems that all pairwise distances are calculated regardless of whether some entries are masked!
Scikit-learn doesn't support masked arrays.
Computing the RBF kernel is really simple if you can compute euclidean distances, though.
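Building on that answer, here is a minimal sketch of the idea: compute pairwise squared euclidean distances with np.ma operations (which ignore masked components), then apply exp(-gamma * d^2). The function name masked_rbf_kernel is my own, not part of scikit-learn, and the loop is for clarity rather than speed.

```python
import numpy as np

def masked_rbf_kernel(X, gamma=1.0):
    """RBF kernel sketch that skips masked entries when computing
    pairwise squared euclidean distances. Not scikit-learn's
    implementation -- just an illustration of the idea."""
    X = np.ma.asarray(X)
    n = X.shape[0]
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # operations on np.ma arrays ignore masked components
            d2 = ((X[i] - X[j]) ** 2).sum()
            K[i, j] = np.exp(-gamma * d2)
    return K

# second row has a masked (NaN) entry; it is simply skipped
X = np.ma.masked_invalid([[0.0, 1.0], [1.0, np.nan]])
K = masked_rbf_kernel(X, gamma=0.5)
```

Note that a pair whose overlapping unmasked dimensions differ from another pair's is not rescaled here; whether to normalize by the number of unmasked dimensions depends on your application.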
Related
The problem can be described as zigzag scanning. However, I wonder whether there is a TensorFlow implementation using something like tf.tensor_scatter_nd_update, which TensorFlow suggests.
The input is a BxNxN tensor, where B is the batch size.
I found a workaround using a 1x1 convolution. Use NumPy to generate a constant permutation conv kernel (TF does not support in-place assignment on tensors), then reshape the tensor (BxNxN) to Bx1x1x(N*N) before applying tf.nn.conv2d to it. Finally, do some reshape acrobatics to flatten it.
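Since the conv kernel in that workaround is just a constant permutation, the same effect can be obtained by precomputing the zigzag index order once in NumPy and gathering along the flattened axis. Here is a sketch in NumPy; in TensorFlow the final gather would be tf.gather(flat, idx, axis=1) with the same precomputed idx (zigzag_indices is my own helper name):

```python
import numpy as np

def zigzag_indices(n):
    """Flat indices of an n x n matrix in zigzag (JPEG-style) scan
    order: walk anti-diagonals, alternating direction."""
    order = sorted(
        ((i, j) for i in range(n) for j in range(n)),
        # primary key: anti-diagonal i+j; within a diagonal,
        # odd diagonals run top-right to bottom-left (sort by i),
        # even diagonals run bottom-left to top-right (sort by j)
        key=lambda ij: (ij[0] + ij[1],
                        ij[0] if (ij[0] + ij[1]) % 2 else ij[1]),
    )
    return np.array([i * n + j for i, j in order])

# apply to a BxNxN batch by flattening each matrix and gathering
n = 3
idx = zigzag_indices(n)
batch = np.arange(2 * n * n).reshape(2, n, n)
scanned = batch.reshape(2, n * n)[:, idx]
```

This avoids the conv2d machinery entirely: one gather per batch instead of a 1x1 convolution with a (N*N)x(N*N) permutation kernel.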
My model requires an adjacency matrix, which is currently created in NumPy and passed to TensorFlow as a placeholder.
With growing problem size, I suspect the I/O between host memory and VRAM becomes a bottleneck, since the matrix size grows quadratically. For example, I use dimension 400, which results in 160,000 matrix values.
As the adjacency matrix is sparse, I thought about passing an adjacency list and then building the adjacency matrix in TensorFlow on the GPU.
Any suggestions?
Thanks
TensorFlow supports sparse placeholders. This page, https://www.tensorflow.org/api_docs/python/tf/sparse_placeholder,
includes an example showing how to use tf.sparse_placeholder.
Given a one dimensional data, how to re-shape it to 2D matrix so that I can leverage the existing 2D convolution in tensorflow?
I have to assume you are talking about an array. If that is correct, you should be able to convert it using reshape.
From the TensorFlow site:
https://www.tensorflow.org/api_docs/python/tf/reshape
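As a concrete sketch of what that reshape looks like: a 1-D signal of length 12 becomes a 3x4 single-channel "image" in NHWC layout. The example below uses NumPy, but tf.reshape(x, [1, 3, 4, 1]) behaves the same way (the 3x4 split is an arbitrary choice for illustration):

```python
import numpy as np

x = np.arange(12.0)            # 1-D data of length 12
# [batch, height, width, channels] -- the layout tf.nn.conv2d expects
img = x.reshape(1, 3, 4, 1)
```

How you choose the height/width split matters: 2-D convolution assumes neighboring rows are related, which is only true if your 1-D data really has that periodic structure.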
It's possible to read dense data by this way:
# tf - tensorflow, np - numpy, sess - session
m = np.ones((2, 3), dtype=np.int32)  # dtype must match the placeholder
placeholder = tf.placeholder(tf.int32, shape=m.shape)
sess.run(placeholder, feed_dict={placeholder: m})
How do I read a SciPy sparse matrix (for example scipy.sparse.csr_matrix) into a tf.placeholder, or maybe a tf.sparse_placeholder?
I think that currently TF does not have a good way to read sparse data. If you do not want to convert your sparse matrix into a dense one, you can try to construct a sparse tensor.
Here is what the official tutorial tells you:
SparseTensors don't play well with queues. If you use SparseTensors
you have to decode the string records using tf.parse_example after
batching (instead of using tf.parse_single_example before batching).
To feed a SciPy sparse matrix to a TF placeholder:
Option 1: use tf.sparse_placeholder. The answer to "Use coo_matrix in TensorFlow" shows how to feed data to a sparse_placeholder.
Option 2: convert the sparse matrix to a dense NumPy matrix and feed it to tf.placeholder (of course, this is impossible when the converted dense matrix does not fit in memory).
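Option 1 boils down to converting the SciPy matrix into the (indices, values, dense_shape) triple that tf.sparse_placeholder expects. A sketch of that conversion (to_sparse_feed is my own helper name; the triple would then be fed as feed_dict={sp: tf.SparseTensorValue(indices, values, shape)}):

```python
import numpy as np
from scipy.sparse import csr_matrix

def to_sparse_feed(m):
    """Convert a SciPy sparse matrix to the (indices, values,
    dense_shape) triple expected by tf.sparse_placeholder."""
    coo = m.tocoo()  # COO format exposes row/col/data directly
    indices = np.stack([coo.row, coo.col], axis=1)  # shape (nnz, 2)
    return indices, coo.data, np.array(coo.shape)

m = csr_matrix(np.array([[0, 2, 0],
                         [1, 0, 0]]))
indices, values, shape = to_sparse_feed(m)
```

Only the nonzero entries cross the host-to-device boundary this way, which is the point of the sparse placeholder when the dense matrix would be mostly zeros.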
Disclaimer: I know nothing about CNNs and deep learning, and I don't know Torch.
I'm using SIFT for my object recognition application. I found the paper Discriminative Learning of Deep Convolutional Feature Point Descriptors particularly interesting because it's CNN based, and CNNs are more precise than classic image description methods (e.g. SIFT, SURF, etc.), but (quoting the abstract):
using the L2 distance during both training and testing we develop
128-D descriptors whose euclidean distances reflect patch similarity,
and which can be used as a drop-in replacement for any task involving
SIFT
Wow, that's fantastic: that means that we can continue to use any SIFT based approach but with more precise descriptors!
However, quoting the github code repository README:
Note the output will be a Nx128 2D float tensor where each row is a
descriptor.
Well, what is a "2D float tensor"? The SIFT descriptor matrix is Nx128 floats; is there something I am missing?
2D float tensor = 2D float matrix.
FYI: The meaning of tensors in the neural network community
This is a 2-d float tensor.
[[1.0,2.0],
[3.0,4.0]]
This is still a 2-d float tensor, even though each row has 3 items and there are 3 rows!
[[1.0,2.0,3.0],
[4.0,5.0,6.0],
[7.0,5.0,6.0]]
The number of nested brackets is what matters, not the number of items.
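The same point in code: NumPy calls the bracket-nesting depth ndim, and it is independent of the shape. Both examples above are 2-d; only their shapes differ:

```python
import numpy as np

a = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 5.0, 6.0]])
# both are 2-d float tensors; only the shape differs
```

So an Nx128 SIFT descriptor matrix is exactly an Nx128 2D float tensor: same object, different vocabulary.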