Trying to fill a (6,6) numpy zeros array with a (2,2) array - resize

I have an array S:

S = array([[980, 100],
           [  3,   5]])

I need to resize it, or fill a zeros array, to size (6,6). My desired output is:

out = array([[980, 100, 0, 0, 0, 0],
             [  3,   5, 0, 0, 0, 0],
             [  0,   0, 0, 0, 0, 0],
             [  0,   0, 0, 0, 0, 0],
             [  0,   0, 0, 0, 0, 0],
             [  0,   0, 0, 0, 0, 0]], dtype=int32)

Can anyone help?

I figured it out.

Create a zeros matrix of the desired target size:

zeros = np.zeros((6, 6))

Your array:

array = np.array([[1, 2, 5, 6], [3, 4, 4, 3], [5, 6, 2, 8]])

# get its shape
lenx, leny = array.shape

Fill the zeros matrix with your array:

zeros[:lenx, :leny] = array
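
Applied to the 2x2 array S from the question, a minimal sketch looks like this; the dtype is set explicitly because np.zeros defaults to float64, whereas the desired output above is int32:

import numpy as np

S = np.array([[980, 100],
              [  3,   5]])

out = np.zeros((6, 6), dtype=np.int32)   # int32 to match the desired output
out[:S.shape[0], :S.shape[1]] = S        # place S in the top-left corner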

Related

How to create a sparse matrix with a given base matrix?

I have the following 2 x 2 matrix

1 0
1 1

I want to expand this matrix with dimensions in powers of 2. For example, the matrix with dimension 4 would look like:

1 0 0 0
1 1 0 0
1 0 1 0
1 1 1 1

Essentially, I want to retain the original matrix wherever 1 occurs in the base matrix and fill in zeros wherever 0 occurs. Is there a fast way to do this in numpy or scipy? I want to be able to expand this to any power of 2, say 512 or 1024.
For relatively small values of the powers of 2 (say up to 10), you can recursively replace every 1 with the initial matrix a using numpy block:

import numpy as np

a = np.array([[1, 0], [1, 1]])

def generate(a, k):
    z = np.zeros_like(a)
    result = a.copy()
    for _ in range(1, k):
        # build a block matrix in which every 1 of result becomes a and every 0 becomes z
        result = eval(f"np.block({str(result.tolist()).replace('1', 'a').replace('0', 'z')})")
    return result
Example for k=3 (8x8 result matrix), generate(a, 3):

array([[1, 0, 0, 0, 0, 0, 0, 0],
       [1, 1, 0, 0, 0, 0, 0, 0],
       [1, 0, 1, 0, 0, 0, 0, 0],
       [1, 1, 1, 1, 0, 0, 0, 0],
       [1, 0, 0, 0, 1, 0, 0, 0],
       [1, 1, 0, 0, 1, 1, 0, 0],
       [1, 0, 1, 0, 1, 0, 1, 0],
       [1, 1, 1, 1, 1, 1, 1, 1]])
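
As a side note, this recursive substitution amounts to repeatedly taking a Kronecker product of the running result with a, so the loop can also be written without eval. A sketch, assuming the same a and k as above:

import numpy as np

a = np.array([[1, 0], [1, 1]])

def generate_kron(a, k):
    result = a.copy()
    for _ in range(1, k):
        result = np.kron(result, a)  # each 1 becomes a copy of a, each 0 a zero block
    return result

# generate_kron(a, 3) reproduces the 8x8 array shown above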
You can combine tile and repeat. Here arr is the 2 x 2 base matrix:

>>> arr = np.array([[1, 0], [1, 1]])
>>> np.tile(arr, (2, 2))
array([[1, 0, 1, 0],
       [1, 1, 1, 1],
       [1, 0, 1, 0],
       [1, 1, 1, 1]])
>>> np.repeat(np.repeat(arr, 2, axis=1), 2, axis=0)
array([[1, 1, 0, 0],
       [1, 1, 0, 0],
       [1, 1, 1, 1],
       [1, 1, 1, 1]])
Then just multiply:

def tile_mask(a):
    tiled = np.tile(a, (2, 2))
    mask = np.repeat(
        np.repeat(a, 2, axis=1),
        2, axis=0
    )
    return tiled * mask

>>> tile_mask(arr)
array([[1, 0, 0, 0],
       [1, 1, 0, 0],
       [1, 0, 1, 0],
       [1, 1, 1, 1]])
I don't know of a good way to do this for higher powers besides recursion, though:

def tile_mask(a, n=2):
    if n > 2:
        a = tile_mask(a, n - 1)
    tiled = np.tile(a, (2, 2))
    mask = np.repeat(
        np.repeat(a, 2, axis=1),
        2, axis=0
    )
    return tiled * mask

>>> tile_mask(arr, 3)
array([[1, 0, 0, 0, 0, 0, 0, 0],
       [1, 1, 0, 0, 0, 0, 0, 0],
       [1, 0, 1, 0, 0, 0, 0, 0],
       [1, 1, 1, 1, 0, 0, 0, 0],
       [1, 0, 0, 0, 1, 0, 0, 0],
       [1, 1, 0, 0, 1, 1, 0, 0],
       [1, 0, 1, 0, 1, 0, 1, 0],
       [1, 1, 1, 1, 1, 1, 1, 1]])

Find numpy rows that are the same

I have a numpy array.
How can I find which rows are the same and how many times they appear in the matrix?
Thanks.
Dummy example:

A = np.array([[0, 1, 0, 1], [0, 0, 0, 0], [0, 1, 1, 1], [0, 0, 0, 0]])
You can use numpy.unique with axis=0 and return_counts=True:

np.unique(A, axis=0, return_counts=True)

Output:

(array([[0, 0, 0, 0],
        [0, 1, 0, 1],
        [0, 1, 1, 1]]),
 array([2, 1, 1]))
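
If you only need the rows that occur more than once, you can filter the unique rows by their counts; a small sketch built on the same call:

import numpy as np

A = np.array([[0, 1, 0, 1], [0, 0, 0, 0], [0, 1, 1, 1], [0, 0, 0, 0]])

rows, counts = np.unique(A, axis=0, return_counts=True)
print(rows[counts > 1])    # [[0 0 0 0]]  -- the rows appearing at least twice
print(counts[counts > 1])  # [2]          -- how many times each of them appears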

Block diagonal matrix with offsets Python

Numpy provides a way to create a diagonal matrix from single elements using an offset. Now, instead of single elements, I have a list of 2*2 blocks to insert along a diagonal with a specified offset.
Below are 11 blocks of 2*2 arrays that should fit along the +1 offset of a 24*24 matrix. I am aware that scipy.linalg.block_diag can create a block diagonal for an (implicit) offset of zero.
In general, I have a list of 2*2 block arrays and I want to insert these blocks along specified offsets from the main 2*2 block diagonal.

[array([[ 1,  8], [ 5, 40]]), array([[ 2,  7], [10, 35]]), array([[0, 0], [0, 0]]),
 array([[ 3,  6], [15, 30]]), array([[ 4,  5], [20, 25]]), array([[0, 0], [0, 0]]),
 array([[ 5,  4], [25, 20]]), array([[ 6,  3], [30, 15]]), array([[0, 0], [0, 0]]),
 array([[ 7,  2], [35, 10]]), array([[ 8,  1], [40,  5]])]
You can make block_diag create an offset by prepending and appending an array of width/height zero:
import numpy as np
from scipy import linalg

blocks = np.multiply.outer(np.arange(1, 4), np.ones((2, 2), int))  # three 2x2 blocks of 1s, 2s, 3s
offset = 3
aux = np.empty((0, offset), int)  # zero-height array, offset columns wide

linalg.block_diag(aux.T, *blocks, aux)
# array([[0, 0, 0, 0, 0, 0, 0, 0, 0],
#        [0, 0, 0, 0, 0, 0, 0, 0, 0],
#        [0, 0, 0, 0, 0, 0, 0, 0, 0],
#        [1, 1, 0, 0, 0, 0, 0, 0, 0],
#        [1, 1, 0, 0, 0, 0, 0, 0, 0],
#        [0, 0, 2, 2, 0, 0, 0, 0, 0],
#        [0, 0, 2, 2, 0, 0, 0, 0, 0],
#        [0, 0, 0, 0, 3, 3, 0, 0, 0],
#        [0, 0, 0, 0, 3, 3, 0, 0, 0]])

linalg.block_diag(aux, *blocks, aux.T)
# array([[0, 0, 0, 1, 1, 0, 0, 0, 0],
#        [0, 0, 0, 1, 1, 0, 0, 0, 0],
#        [0, 0, 0, 0, 0, 2, 2, 0, 0],
#        [0, 0, 0, 0, 0, 2, 2, 0, 0],
#        [0, 0, 0, 0, 0, 0, 0, 3, 3],
#        [0, 0, 0, 0, 0, 0, 0, 3, 3],
#        [0, 0, 0, 0, 0, 0, 0, 0, 0],
#        [0, 0, 0, 0, 0, 0, 0, 0, 0],
#        [0, 0, 0, 0, 0, 0, 0, 0, 0]])
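
Applied to the question's setup (the 11 blocks of 2*2 listed above, placed one block above the main block diagonal of a 24*24 matrix), a sketch could look like this; the offset is 2 entries because one block step equals the block width:

import numpy as np
from scipy import linalg

blocks = [
    np.array([[1, 8], [5, 40]]),  np.array([[2, 7], [10, 35]]), np.array([[0, 0], [0, 0]]),
    np.array([[3, 6], [15, 30]]), np.array([[4, 5], [20, 25]]), np.array([[0, 0], [0, 0]]),
    np.array([[5, 4], [25, 20]]), np.array([[6, 3], [30, 15]]), np.array([[0, 0], [0, 0]]),
    np.array([[7, 2], [35, 10]]), np.array([[8, 1], [40, 5]]),
]

offset = 2                                  # +1 block offset = 2 entries for 2*2 blocks
aux = np.empty((0, offset), int)

M = linalg.block_diag(aux, *blocks, aux.T)  # blocks end up above the main diagonal
print(M.shape)                              # (24, 24): 11 blocks * 2 + offset 2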

What is the order output of conv2d in tensorflow?

I wanted to get the values of the output tensor in tensorflow.
The kernel shape of the first layer is K[row, col, in_channel, out_channel].
The input image shape is P[batch, row, col, channel].
I read out the first four kernel values, which were K[0, 0, 0, 0], K[0, 1, 0, 0], K[1, 0, 0, 0], K[1, 1, 0, 0],
and the corresponding input values P[0, 0, 0, 0], P[0, 0, 1, 0], P[0, 1, 0, 0], P[0, 1, 1, 0].
The Python code is: F = tf.nn.conv2d(P, K, strides=[1, 1, 1, 1], padding='SAME')
The console showed that the output value F[0, 0, 0, 0] is not K[0, 0, 0, 0] * P[0, 0, 0, 0] + K[0, 1, 0, 0] * P[0, 0, 1, 0] + K[1, 0, 0, 0] * P[0, 1, 0, 0] + K[1, 1, 0, 0] * P[0, 1, 1, 0].
What is the order of these output feature maps? I have 40 conv kernels, and the first output was not computed by the first conv kernel.
There's something wrong in your input values.
Remember that conv2d wants an input tensor of shape [batch, in_height, in_width, in_channels] and a filter / kernel tensor of shape [filter_height, filter_width, in_channels, out_channels].
In fact, after reshaping your data, the results are the ones expected (please note that conv2d computes the correlation and not the convolution, i.e. the kernel is not flipped).
import tensorflow as tf

K = tf.get_variable("K", shape=(4, 4), initializer=tf.constant_initializer([
    [0, 0, 0, 0],
    [0, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 0, 0]
]))
K = tf.reshape(K, (4, 4, 1, 1))

P = tf.get_variable("P", shape=(4, 4), initializer=tf.constant_initializer([
    [0, 0, 0, 0],
    [0, 0, 1, 0],
    [0, 1, 0, 0],
    [0, 1, 1, 0]
]))
P = tf.reshape(P, (1, 4, 4, 1))

F = tf.nn.conv2d(P, K, strides=[1, 1, 1, 1], padding='VALID')

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    print(sess.run(F))
In this example, I'm computing the correlation between the input P (a batch with 1 element of depth 1) and the filter K (a 4x4 filter, with input depth 1 and output depth 1).
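
Because VALID padding with a 4x4 input and a 4x4 filter produces a single output value, that value is just the elementwise sum of products of the two 4x4 grids. A quick plain-NumPy sanity check of the same numbers:

import numpy as np

K = np.array([[0, 0, 0, 0],
              [0, 1, 0, 0],
              [1, 0, 0, 0],
              [1, 1, 0, 0]])
P = np.array([[0, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 1, 1, 0]])

print(np.sum(K * P))  # 1 -- correlation, no kernel flip, matching the conv2d output above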

Numpy check if a matrix can be transformed to another matrix by swapping columns

Say we have matrices A and B as follows:

>>> A
matrix([[0, 0, 0, 1],
        [1, 0, 0, 0],
        [1, 0, 0, 0]])
>>> B
matrix([[0, 1, 0, 0],
        [0, 0, 1, 0],
        [0, 0, 1, 0]])

Clearly we can "transform" matrix A into B by column swapping. Is there an efficient algorithm to check whether two (potentially large) matrices can be transformed into each other in this way?
Here is a simple function. For very large matrices, it is possible that (A == B).all() is slower than np.array_equal(A, B).

import numpy as np

A = np.array([[0, 0, 0, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0]])
B = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 1, 0]])

def isSwaping(a, b):
    count = 0
    for i, c in enumerate(a.T):  # iterate over the columns of a
        for d in b.T:            # look for an identical column in b
            if (c == d).all():
                count += 1
                break
        if count == i:           # column i found no match, so it is unnecessary to continue
            return False
    return True

print(isSwaping(A, B))
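
Note that this pairwise search is quadratic in the number of columns and ignores multiplicities (a column occurring twice in A but only once in B would still pass). One stricter alternative, sketched here under the assumption that the inputs are plain integer arrays, is to sort the columns of both matrices lexicographically and compare the results:

import numpy as np

def same_up_to_column_swaps(a, b):
    if a.shape != b.shape:
        return False
    # np.lexsort on a 2D array returns the column order that sorts the columns
    # lexicographically (last row is the primary key); two matrices are column
    # permutations of each other exactly when their sorted columns are equal
    return np.array_equal(a[:, np.lexsort(a)], b[:, np.lexsort(b)])

A = np.array([[0, 0, 0, 1], [1, 0, 0, 0], [1, 0, 0, 0]])
B = np.array([[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 1, 0]])
print(same_up_to_column_swaps(A, B))  # True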