Changing matrix dimensions in NumPy

I have a 28x28 pixel image as a NumPy array, and its shape is (28, 28) according to the array's .shape attribute. I want the shape to be (784, 1). In other words, given an NxN matrix, how do you convert it to N^2 x 1? Using the flatten function I get almost what I'm looking for, but the shape from flatten is (784,).

Another possible way is to use np.atleast_2d and transpose the result, since np.atleast_2d on its own gives shape (1, 784):
np.atleast_2d(arr.flatten()).T
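For comparison, a minimal sketch of the more direct route with reshape (assuming arr is the (28, 28) image array):
import numpy as np
arr = np.zeros((28, 28))   # stand-in for the 28x28 image
col = arr.reshape(-1, 1)   # or arr.reshape(784, 1); -1 lets NumPy infer N^2
print(col.shape)           # (784, 1)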

Related

How to resize elements in a ragged tensor in TensorFlow

I would like to resize every element in a ragged tensor. For example, if I have a ragged tensor of variously sized images, how can I resize each one so that the dimensions are the same?
For example,
digits = tf.ragged.constant([np.zeros((1,60,60,1)), np.zeros((1,46,75,1))])
resize_lambda = lambda x: tf.image.resize(x, (60,60))
res = tf.ragged.map_flat_values(resize_lambda, digits)
I would like res to be a tensor of shape (2, 60, 60, 1). How can I achieve this?
To clarify, this would be useful if, within a custom layer, we wanted to slice or crop sections from a single image to batch for inference in the next layer. In my case, I am attempting to combine two models: a model that segments an image into multiple cropped images of varying size, and a classifier that predicts each sub-image. I am also using TensorFlow 2.0.
You should be able to do the following.
import tensorflow as tf
import numpy as np
digits = tf.ragged.constant([np.zeros((1,60,60,1)), np.zeros((1,46,75,1))])
res = tf.concat(
    [tf.image.resize(digits[i].to_tensor(), (60, 60)) for i in tf.range(digits.nrows())],
    axis=0)
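If the shapes work out as intended, res should come out as (2, 60, 60, 1), which you can check with:
print(res.shape)  # expected: (2, 60, 60, 1)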

How to convert a numpy array of tensors to a tensor?

I have a NumPy array of tensors, something like the following:
a = np.array([tf.convert_to_tensor(1), tf.convert_to_tensor(2)])
I want to convert this array into a single tensor.
My real data is not constants like in this example but more complex tensors, so does anyone know how to do this?
I assume all of the tensors have the same shape. Then you can just call tf.stack:
>>> print(tf.stack([tf.convert_to_tensor(1), tf.convert_to_tensor(2)]))
Tensor("stack:0", shape=(2,), dtype=int32)
Note that it is given the plain list, not the NumPy array.
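The same pattern works when the elements are larger tensors, as long as their shapes match. A minimal sketch (TF 2.x eager mode, with hypothetical 2x3 tensors):
import tensorflow as tf
a = tf.ones((2, 3))
b = tf.zeros((2, 3))
stacked = tf.stack([a, b])  # adds a new leading axis
print(stacked.shape)        # (2, 2, 3)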

About feeding a sparse matrix into the graph

Because the data dimension is too large, I have to turn the data into a sparse matrix instead of a dense array.
However, the graph includes a CNN, and when I feed the sparse matrix directly I am told that the CNN cannot receive a sparse tensor, so I have to do the 'sparse to dense' operation first.
The problem is that the data I feed (multiple sparse matrices) ends up as a single two-dimensional sparse matrix. For example, I have sparse matrix 1 with shape [14, 25500] and sparse matrix 2 with shape [14, 25500]; the shape I want to feed is [2, 14, 25500], but what I actually get is [28, 25500].
So I have to split the tensor after it enters the graph.
I want to ask: are there any other ways to solve this problem?
tf.stack is your friend
tf.stack([matrix1, matrix2]) # => [2,14,25500]
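A minimal sketch of how that might look, assuming each sparse matrix has first been converted to dense (e.g. with tf.sparse.to_dense, since the CNN cannot take sparse tensors anyway); the two SparseTensors below are just hypothetical stand-ins:
import tensorflow as tf
# two stand-in sparse matrices of shape [14, 25500]
sparse1 = tf.SparseTensor(indices=[[0, 0], [3, 100]], values=[1.0, 2.0], dense_shape=[14, 25500])
sparse2 = tf.SparseTensor(indices=[[1, 5]], values=[3.0], dense_shape=[14, 25500])
# convert to dense, then stack along a new leading axis
dense1 = tf.sparse.to_dense(sparse1)
dense2 = tf.sparse.to_dense(sparse2)
batch = tf.stack([dense1, dense2])
print(batch.shape)  # (2, 14, 25500)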

Use coo_matrix in TensorFlow

I'm doing matrix factorization in TensorFlow, and I want to use coo_matrix from scipy.sparse because it uses less memory and makes it easy to put all of my training data into the matrix.
Is it possible to use a coo_matrix to initialize a variable in TensorFlow?
Or do I have to create a session and feed the data into TensorFlow using sess.run() with a feed_dict?
I hope you understand my question and my problem; otherwise, comment and I will try to clarify it.
The closest thing TensorFlow has to scipy.sparse.coo_matrix is tf.SparseTensor, which is the sparse equivalent of tf.Tensor. It will probably be easiest to feed a coo_matrix into your program.
A tf.SparseTensor is a slight generalization of COO matrices, where the tensor is represented as three dense tf.Tensor objects:
indices: An N x D matrix of tf.int64 values in which each row represents the coordinates of a non-zero value. N is the number of non-zeroes, and D is the rank of the equivalent dense tensor (2 in the case of a matrix).
values: A length-N vector of values, where element i is the value of the element whose coordinates are given on row i of indices.
dense_shape: A length-D vector of tf.int64, representing the shape of the equivalent dense tensor.
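As a small illustration of those three pieces (a hypothetical 3x4 matrix with two non-zero entries):
import tensorflow as tf
st = tf.SparseTensor(indices=[[0, 1], [2, 3]],  # coordinates of the non-zero values
                     values=[10.0, 20.0],       # the non-zero values themselves
                     dense_shape=[3, 4])        # shape of the equivalent dense matrix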
For example, you could use the following code, which uses tf.sparse_placeholder() to define a tf.SparseTensor that you can feed, and a tf.SparseTensorValue that represents the actual value being fed:
sparse_input = tf.sparse_placeholder(dtype=tf.float32, shape=[100, 100])
# ...
train_op = ...
coo_matrix = scipy.sparse.coo_matrix(...)
# Wrap `coo_matrix` in the `tf.SparseTensorValue` form that TensorFlow expects.
# SciPy stores the row and column coordinates as separate vectors, so we must
# stack and transpose them to make an indices matrix of the appropriate shape.
tf_coo_matrix = tf.SparseTensorValue(
    indices=np.array([coo_matrix.row, coo_matrix.col]).T,
    values=coo_matrix.data,
    dense_shape=coo_matrix.shape)
Once you have converted your coo_matrix to a tf.SparseTensorValue, you can feed sparse_input with the tf.SparseTensorValue directly:
sess.run(train_op, feed_dict={sparse_input: tf_coo_matrix})

How to sample an image tensor in TensorFlow

I have an image data tensor with shape B*H*W*C and a position tensor with shape B*H*W*2. The values in the position tensor are pixel coordinates, and I want to sample pixels from the image data tensor according to these coordinates. I have tried one way to do this, reshaping the tensor to a one-dimensional tensor, but I find it really inconvenient. I wonder whether I could implement it with a more convenient approach, such as a matrix mapping (e.g. remap in OpenCV).
I would first ask whether you are sure the position matrix isn't redundant. If the position matrix entries simply correspond to the pixel locations in the image array, then for a given application, however you access the position matrix could instead be applied directly to the image data.
Perhaps as a starting point, running
sess = tf.Session()
np_img, np_pos = sess.run([tf_img, tf_pos], feed_dict={...})
will convert tensors to numpy arrays, which may make your operations easier.
Otherwise, a 1D tensor isn't that bad, and there are TF functions that make reshaping easy.
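If you want to stay inside the graph, one option worth trying (not from the answer above, just a suggestion) is tf.gather_nd with batch_dims, which picks pixels out of the image tensor using per-batch (row, col) coordinates:
import tensorflow as tf
B, H, W, C = 2, 4, 4, 3
img = tf.random.uniform((B, H, W, C))
# hypothetical position tensor: for each output location, the (row, col) to sample from
pos = tf.random.uniform((B, H, W, 2), maxval=H, dtype=tf.int32)
sampled = tf.gather_nd(img, pos, batch_dims=1)  # shape (B, H, W, C)
print(sampled.shape)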