Getting the values of only one channel of a numpy array - numpy

I would be very grateful if you didn't close this question without at least giving a hint of how to solve this problem, please.
I have the following
import numpy as np
layer1 = np.zeros((5,3,4),dtype=np.uint8)
layer1[0,0,0]=20
layer1[1,1,0]=20
layer1[2,2,0]=20
layer1[3,1,0]=20
layer1[4,0,0]=20
layer1[0,0,1] =50
layer1[1,0,1]=50
layer1[2,0,1]=50
print(layer1)
print("---------------")
which gives me
[[[20 50  0  0]
  [ 0  0  0  0]
  [ 0  0  0  0]]
 [[ 0 50  0  0]
  [20  0  0  0]
  [ 0  0  0  0]]
 [[ 0 50  0  0]
  [ 0  0  0  0]
  [20  0  0  0]]
 [[ 0  0  0  0]
  [20  0  0  0]
  [ 0  0  0  0]]
 [[20  0  0  0]
  [ 0  0  0  0]
  [ 0  0  0  0]]]
How can I get the values of only one channel?
For example for channel=0
I want to get
[[20  0  0]
 [ 0 20  0]
 [ 0  0 20]
 [ 0 20  0]
 [20  0  0]]
where channel can be 0,1,2 or 3
EDIT: Just in case, the layer1[0,0,0]=20 lines are just a convenient way to fill up the array. My question is how to transform layer1, once filled, into a matrix of shape (5,3).
EDIT: if the "channel" is 1 then I would get
[[50  0  0]
 [50  0  0]
 [50  0  0]
 [ 0  0  0]
 [ 0  0  0]]

numpy array indexing is well documented. Don't skip it!
In [1]: layer1 = np.zeros((5,3,4),dtype=np.uint8)
...: layer1[0,0,0]=20
...: layer1[1,1,0]=20
...: layer1[2,2,0]=20
...: layer1[3,1,0]=20
...: layer1[4,0,0]=20
...:
...: layer1[0,0,1] =50
...: layer1[1,0,1]=50
...: layer1[2,0,1]=50
In [2]: layer1.shape
Out[2]: (5, 3, 4)
In [3]: layer1[:,:,0]
Out[3]:
array([[20,  0,  0],
       [ 0, 20,  0],
       [ 0,  0, 20],
       [ 0, 20,  0],
       [20,  0,  0]], dtype=uint8)
In [4]: layer1[:,:,2]
Out[4]:
array([[0, 0, 0],
       [0, 0, 0],
       [0, 0, 0],
       [0, 0, 0],
       [0, 0, 0]], dtype=uint8)

Related

replace element in numpy array pending a different element's value

I have the following numpy array (as an example):
my_array = [[3, 7, 0]
[20, 4, 0]
[7, 54, 0]]
I want to replace the 0's in the 3rd column of each row with a value of 5, but only if the first value in that row is odd.
So the expected outcome would be:
my_array = [[3, 7, 5]
[20, 4, 0]
[7, 54, 5]]
I tried numpy.where and numpy.place, but couldn't get the expected results.
Is there an elegant way to do this with numpy functions?
You can do this with boolean indexing:
my_array[my_array[:, 0] % 2 != 0, 2] = 5
# my_array[:, 0] % 2 != 0  --- boolean mask selecting the rows to modify --> [ True False True]
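If you prefer np.where, which the question mentions, an equivalent non-mutating sketch could look like this (odd_rows is just an illustrative name):
import numpy as np

my_array = np.array([[ 3,  7, 0],
                     [20,  4, 0],
                     [ 7, 54, 0]])

odd_rows = my_array[:, 0] % 2 != 0              # [ True False  True]
# keep the old value where the mask is False, write 5 where it is True
my_array[:, 2] = np.where(odd_rows, 5, my_array[:, 2])
print(my_array)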

Indices in Numpy and MATLAB

I have a piece of code in Matlab that I want to convert into Python/numpy.
I have a matrix ind which has the dimensions (32768, 24). I have another matrix X which has the dimensions (98304, 6). When I perform the operation
result = X(ind)
the shape of the matrix is (32768, 24).
but in numpy, when I perform the same operation
result = X[ind]
I get the shape of the result matrix as (32768, 24, 6).
I would greatly appreciate it if someone could help me understand why I get these two different results and how I can fix it. I want to get the shape (32768, 24) for the result matrix in numpy as well.
In Octave, if I define:
>> X=diag([1,2,3,4])
X =
Diagonal Matrix
1 0 0 0
0 2 0 0
0 0 3 0
0 0 0 4
>> idx = [6 7;10 11]
idx =
6 7
10 11
then the indexing selects a block:
>> X(idx)
ans =
2 0
0 3
The numpy equivalent is
In [312]: X=np.diag([1,2,3,4])
In [313]: X
Out[313]:
array([[1, 0, 0, 0],
       [0, 2, 0, 0],
       [0, 0, 3, 0],
       [0, 0, 0, 4]])
In [314]: idx = np.array([[5,6],[9,10]]) # shifted for 0 base indexing
In [315]: np.unravel_index(idx,(4,4)) # raveled to unraveled conversion
Out[315]:
(array([[1, 1],
        [2, 2]]),
 array([[1, 2],
        [1, 2]]))
In [316]: X[_] # this indexes with a tuple of arrays
Out[316]:
array([[2, 0],
[0, 3]])
another way:
In [318]: X.flat[idx]
Out[318]:
array([[2, 0],
[0, 3]])
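One caveat to keep in mind: MATLAB linear indices are column-major, while X.flat and np.unravel_index default to row-major (C) order. The diagonal example above is symmetric, so both orders happen to agree; for a general matrix you would likely want Fortran order, as in this sketch:
import numpy as np

X = np.arange(16).reshape(4, 4)        # a non-symmetric example matrix
idx = np.array([[5, 6], [9, 10]])      # MATLAB's [6 7; 10 11] shifted to 0-based

# interpret the linear indices the way MATLAB does (column-major)
result = X.ravel(order='F')[idx]
# equivalently: X[np.unravel_index(idx, X.shape, order='F')]
print(result)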

A question about numpy ndarray transformation

any simple way to change this array
[[ 3 4 0 1 2]
[ 8 9 5 6 7]
[13 14 10 11 12]]
into:
[[ 0 0 0 1 2]
[ 0 0 5 6 7]
[ 0 0 10 11 12]]
?
Edit: maximum supported dimension for an ndarray is 32, found 306 for transpose
Use Slicing:
>>> a[:,:2] = 0
>>> a
array([[ 0,  0,  0,  1,  2],
       [ 0,  0,  5,  6,  7],
       [ 0,  0, 10, 11, 12]])
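A minimal self-contained version of the same idea (the array a is reconstructed here from the question for illustration):
import numpy as np

a = np.array([[ 3,  4,  0,  1,  2],
              [ 8,  9,  5,  6,  7],
              [13, 14, 10, 11, 12]])
a[:, :2] = 0          # zero out the first two columns of every row
print(a)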

Partitioned matrix multiplication in tensorflow or pytorch

Assume I have a matrix P of size [4, 4] which is partitioned (blocked) into 4 smaller matrices of size [2, 2]. How can I efficiently multiply this block matrix by another matrix (not a partitioned matrix, but smaller)?
Let's assume our original matrix is:
P = [ 1 1 2 2
1 1 2 2
3 3 4 4
3 3 4 4]
Which split into submatrices:
P_1 = [1 1    P_2 = [2 2    P_3 = [3 3    P_4 = [4 4
       1 1]          2 2]          3 3]          4 4]
Now our P is:
P = [P_1 P_2
P_3 p_4]
In the next step, I want to do element-wise multiplication between P and a smaller matrix whose size equals the number of sub-matrices:
P * [ 1 0    =  [P_1 0    =  [1 1 0 0
      0 0 ]       0  0]       1 1 0 0
                              0 0 0 0
                              0 0 0 0]
You can think of representing your large block matrix in a more efficient way.
For instance, a block matrix
P = [ 1 1 2 2
1 1 2 2
3 3 4 4
3 3 4 4]
Can be represented using
a = [ 1 0        b = [ 1 1 0 0        p = [ 1 2
      1 0              0 0 1 1 ]            3 4 ]
      0 1
      0 1 ]
As
P = a @ p @ b
with @ representing matrix multiplication. Matrices a and b represent/encode the block structure of P, and the small p represents the values of each block.
Now, if you want to multiply (element-wise) p with a small (2x2) matrix q, you simply compute
a @ (p * q) @ b
A simple pytorch example
In [1]: a = torch.tensor([[1., 0], [1., 0], [0., 1], [0, 1]])
In [2]: b = torch.tensor([[1., 1., 0, 0], [0, 0, 1., 1]])
In [3]: p=torch.tensor([[1., 2.], [3., 4.]])
In [4]: q = torch.tensor([[1., 0], [0., 0]])
In [5]: a @ p @ b
Out[5]:
tensor([[1., 1., 2., 2.],
[1., 1., 2., 2.],
[3., 3., 4., 4.],
[3., 3., 4., 4.]])
In [6]: a @ (p*q) @ b
Out[6]:
tensor([[1., 1., 0., 0.],
[1., 1., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.]])
I leave it to you as an exercise how to efficiently produce the "structure" matrices a and b given the sizes of the blocks.
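One possible way to build those structure matrices is a Kronecker product of an identity with a vector of ones; this is only a sketch, assuming torch.kron is available in your PyTorch version, and the function and argument names are illustrative:
import torch

def structure_matrices(n_block_rows, n_block_cols, block_h, block_w):
    # a has one column per block-row, b has one row per block-column
    a = torch.kron(torch.eye(n_block_rows), torch.ones(block_h, 1))
    b = torch.kron(torch.eye(n_block_cols), torch.ones(1, block_w))
    return a, b

a, b = structure_matrices(2, 2, 2, 2)
p = torch.tensor([[1., 2.], [3., 4.]])
q = torch.tensor([[1., 0.], [0., 0.]])
print(a @ (p * q) @ b)   # reproduces the masked block matrix above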
Following is a general Tensorflow-based solution that works for input matrices p (large) and m (small) of arbitrary shapes as long as the sizes of p are divisible by the sizes of m on both axes.
def block_mul(p, m):
    p_x, p_y = p.shape
    m_x, m_y = m.shape
    m_4d = tf.reshape(m, (m_x, 1, m_y, 1))
    m_broadcasted = tf.broadcast_to(m_4d, (m_x, p_x // m_x, m_y, p_y // m_y))
    mp = tf.reshape(m_broadcasted, (p_x, p_y))
    return p * mp
Test:
import tensorflow as tf
tf.enable_eager_execution()
p = tf.reshape(tf.constant(range(36)), (6, 6))
m = tf.reshape(tf.constant(range(9)), (3, 3))
print(f"p:\n{p}\n")
print(f"m:\n{m}\n")
print(f"block_mul(p, m):\n{block_mul(p, m)}")
Output (Python 3.7.3, Tensorflow 1.13.1):
p:
[[ 0 1 2 3 4 5]
[ 6 7 8 9 10 11]
[12 13 14 15 16 17]
[18 19 20 21 22 23]
[24 25 26 27 28 29]
[30 31 32 33 34 35]]
m:
[[0 1 2]
[3 4 5]
[6 7 8]]
block_mul(p, m):
[[ 0 0 2 3 8 10]
[ 0 0 8 9 20 22]
[ 36 39 56 60 80 85]
[ 54 57 80 84 110 115]
[144 150 182 189 224 232]
[180 186 224 231 272 280]]
Another solution that uses implicit broadcasting is the following:
def block_mul2(p, m):
    p_x, p_y = p.shape
    m_x, m_y = m.shape
    p_4d = tf.reshape(p, (m_x, p_x // m_x, m_y, p_y // m_y))
    m_4d = tf.reshape(m, (m_x, 1, m_y, 1))
    return tf.reshape(p_4d * m_4d, (p_x, p_y))
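block_mul2 is a drop-in replacement for block_mul in the test above; as a quick sanity check (a sketch reusing the p and m defined in the test), the two implementations should agree element-wise:
print(block_mul2(p, m))
# should print the same matrix as block_mul(p, m) above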
Don't know about the efficient method, but you can try these:
Method 1:
Using torch.cat()
import torch
def multiply(a, b):
    x1 = a[0:2, 0:2] * b[0, 0]
    x2 = a[0:2, 2:] * b[0, 1]
    x3 = a[2:, 0:2] * b[1, 0]
    x4 = a[2:, 2:] * b[1, 1]
    return torch.cat((torch.cat((x1, x2), 1), torch.cat((x3, x4), 1)), 0)
a = torch.tensor([[1, 1, 2, 2],[1, 1, 2, 2],[3, 3, 4, 4,],[3, 3, 4, 4]])
b = torch.tensor([[1, 0],[0, 0]])
print(multiply(a, b))
output:
tensor([[1, 1, 0, 0],
[1, 1, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0]])
Method 2:
Using torch.nn.functional.pad()
import torch.nn.functional as F
import torch
def multiply(a, b):
    b = F.pad(input=b, pad=(1, 1, 1, 1), mode='constant', value=0)
    b[0, 0] = 1
    b[0, 1] = 1
    b[1, 0] = 1
    return a * b
a = torch.tensor([[1, 1, 2, 2],[1, 1, 2, 2],[3, 3, 4, 4,],[3, 3, 4, 4]])
b = torch.tensor([[1, 0],[0, 0]])
print(multiply(a, b))
output:
tensor([[1, 1, 0, 0],
[1, 1, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0]])
If the matrices are small, you are probably fine with cat or pad. The solution with factorization is very elegant, as is the one with a block_mul implementation.
Another solution is turning the 2D block matrix into a 3D volume where each 2D slice is a block (P_1, P_2, P_3, P_4). Then use the power of broadcasting to multiply each 2D slice by a scalar. Finally, reshape the output. The reshaping is not immediate but it's doable; this is a port from numpy to pytorch of https://stackoverflow.com/a/16873755/4892874
In Pytorch:
import torch
h = w = 4
x = torch.ones(h, w)
x[:2, 2:] = 2
x[2:, :2] = 3
x[2:, 2:] = 4
# number of blocks along x and y
nrows=2
ncols=2
vol3d = x.reshape(h//nrows, nrows, -1, ncols)
vol3d = vol3d.permute(0, 2, 1, 3).reshape(-1, nrows, ncols)
out = vol3d * torch.Tensor([1, 0, 0, 0])[:, None, None].float()
# reshape to original
n, nrows, ncols = out.shape
out = out.reshape(h//nrows, -1, nrows, ncols)
out = out.permute(0, 2, 1, 3)
out = out.reshape(h, w)
print(out)
tensor([[1., 1., 0., 0.],
[1., 1., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.]])
I haven't benchmarked this against the others, but it doesn't consume additional memory like padding would, and it doesn't do slow operations like concatenation. It also has the advantage of being easy to understand and visualize.
You can generalize it to any situation by playing with h, w, nrows, ncols.
Although the other answer may be the solution, it is not an efficient way. I came up with another one to tackle the problem (though it is still not perfect). The following implementation needs too much memory when the inputs have 3 or 4 dimensions: for example, for an input size of 20*75*1024*1024, the calculation needs around 12 GB of RAM.
Here is my implementation:
import tensorflow as tf
tf.enable_eager_execution()
inps = tf.constant([
[1, 1, 1, 1, 2, 2, 2, 2],
[1, 1, 1, 1, 2, 2, 2, 2],
[1, 1, 1, 1, 2, 2, 2, 2],
[1, 1, 1, 1, 2, 2, 2, 2],
[3, 3, 3, 3, 4, 4, 4, 4],
[3, 3, 3, 3, 4, 4, 4, 4],
[3, 3, 3, 3, 4, 4, 4, 4],
[3, 3, 3, 3, 4, 4, 4, 4]])
on_cells = tf.constant([[1, 0, 0, 1]])
on_cells = tf.expand_dims(on_cells, axis=-1)
# replicate the value to block-size (4*4)
on_cells = tf.tile(on_cells, [1, 1, 4 * 4])
# reshape to a format for permutation
on_cells = tf.reshape(on_cells, (1, 2, 2, 4, 4))
# permutation
on_cells = tf.transpose(on_cells, [0, 1, 3, 2, 4])
# reshape
on_cells = tf.reshape(on_cells, [1, 8, 8])
# element-wise operation
print(inps * on_cells)
Output:
tf.Tensor(
[[[1 1 1 1 0 0 0 0]
[1 1 1 1 0 0 0 0]
[1 1 1 1 0 0 0 0]
[1 1 1 1 0 0 0 0]
[0 0 0 0 4 4 4 4]
[0 0 0 0 4 4 4 4]
[0 0 0 0 4 4 4 4]
[0 0 0 0 4 4 4 4]]], shape=(1, 8, 8), dtype=int32)

Weird behavior of multiply in tensorflow

I am trying to use multiply in my program, but I find the behavior of this op abnormal. It seems to be calculating the wrong results. Minimum example:
import tensorflow as tf
batchSize = 2
maxSteps = 3
max_cluster_size = 4
x = tf.Variable(tf.random_uniform(dtype=tf.int32, maxval=20, shape=[batchSize, maxSteps, max_cluster_size]))
y = tf.sequence_mask(tf.random_uniform(minval=1, maxval=max_cluster_size-1, dtype=tf.int32, shape=[batchSize, maxSteps]), maxlen=max_cluster_size)
y = tf.cast(y, tf.int32)
z = tf.multiply(x, y)
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    x_v = sess.run(x)
    y_v = sess.run(y)
    z_v = sess.run(z)
    print(x_v.shape)
    print(x_v)
    print('----------------------------')
    print(y_v.shape)
    print(y_v)
    print('----------------------------')
    print(z_v.shape)
    print(z_v)
    print('----------------------------')
Result:
(2, 3, 4)
[[[ 7 12 19 3]
[10 18 15 7]
[18 9 2 7]]
[[ 4 5 16 1]
[ 2 14 15 14]
[ 5 18 8 18]]]
----------------------------
(2, 3, 4)
[[[1 1 0 0]
[1 0 0 0]
[1 1 0 0]]
[[1 1 0 0]
[1 1 0 0]
[1 1 0 0]]]
----------------------------
(2, 3, 4)
[[[ 7 12 0 0]
[10 0 0 0]
[18 0 0 0]]
[[ 4 5 0 0]
[ 2 0 0 0]
[ 5 0 0 0]]]
----------------------------
Where z_v is expected to be:
[[[ 7 12 0 0]
[10 0 0 0]
[18 9 0 0]]
[[ 4 5 0 0]
[ 2 14 0 0]
[ 5 18 0 0]]]
When I test multiply in other programs, it works just fine.
I suspect that this may be related to x and y being random. Can anyone give a hint on this?
Instead of these lines:
x_v = sess.run(x)
y_v = sess.run(y)
z_v = sess.run(z)
you need to use this:
x_v, y_v, z_v = sess.run( [ x, y, z ] )
With the first, separate version, what ends up happening is that you compute x_v, and then y_v, but when you call sess.run(z) it re-evaluates z's dependencies as well: x is a Variable and keeps the value it got at initialization, but y is built on tf.random_uniform and is re-sampled on every run, so the z you print was computed from a different y than the y_v you print.
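For concreteness, a minimal sketch of the corrected evaluation (same graph as in the question, fetching all three tensors in a single call so they come from one consistent evaluation):
with tf.Session() as sess:
    sess.run(init)
    # one run: x, y and z are all computed from the same sampled mask y
    x_v, y_v, z_v = sess.run([x, y, z])
    print(x_v)
    print(y_v)
    print(z_v)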