In a Monte Carlo simulation I have the following 7 poker cards for 2 players and 3 different Monte Carlo runs.
self.cards:
array([[[ 6., 12.],
[ 1., 6.],
[ 3., 3.],
[ 8., 8.],
[ 1., 1.],
[ 4., 4.],
[ 2., 2.]],
[[ 6., 7.],
[ 1., 1.],
[ 3., 3.],
[ 2., 2.],
[ 12., 12.],
[ 5., 5.],
[ 10., 10.]],
[[ 6., 3.],
[ 1., 11.],
[ 2., 2.],
[ 6., 6.],
[ 12., 12.],
[ 6., 6.],
[ 7., 7.]]])
The corresponding suits are:
self.suits:
array([[[ 2., 1.],
[ 1., 2.],
[ 2., 2.],
[ 2., 2.],
[ 1., 1.],
[ 2., 2.],
[ 2., 2.]],
[[ 2., 0.],
[ 1., 3.],
[ 2., 2.],
[ 0., 0.],
[ 1., 1.],
[ 1., 1.],
[ 1., 1.]],
[[ 2., 2.],
[ 1., 0.],
[ 3., 3.],
[ 2., 2.],
[ 1., 1.],
[ 1., 1.],
[ 1., 1.]]])
Now I would like to 'merge' the arrays so that the cards array is expanded by a 4th dimension of size 4: index 0 containing all cards where suits==1, index 1 all where suits==2, index 2 all where suits==3, and index 3 all where suits==4.
I can easily create 4 different arrays:
club_cards=(self.suits == 1) * self.cards
diamond_cards=(self.suits == 2) * self.cards
heart_cards=(self.suits == 3) * self.cards
spade_cards=(self.suits == 4) * self.cards
and then stack them together:
stacked_array=np.stack((club_cards,diamond_cards, heart_cards, spade_cards),axis=0)
The result as expected has a shape of (4, 3, 8, 2)
array([[[[ 1., 12.],
[ 1., 1.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[-11., 0.]],
[[ 12., 12.],
[ 10., 10.],
[ 5., 5.],
[ 1., 0.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.]],
[[ 12., 12.],
[ 7., 7.],
[ 6., 6.],
[ 1., 0.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.]]],
[[[ 8., 8.],
[ 6., 6.],
[ 4., 4.],
[ 3., 3.],
[ 2., 2.],
[ 0., 0.],
[ 0., 0.],
[ -4., -4.]],
[[ 6., 3.],
[ 3., 0.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[ -6., -9.]],
[[ 6., 6.],
[ 6., 3.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[ -6., -6.]]],
[[[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[-12., -12.]],
[[ 0., 1.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[-12., -11.]],
[[ 2., 2.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[-10., -10.]]],
[[[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[-12., -12.]],
[[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[-12., -12.]],
[[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[-12., -12.]]]])
While this works in the case above, it is not always practical, especially if there are more than 4 cases that need to be stacked together, which brings me to my question:
How can I do this with broadcasting? Below are my specific questions.
I have tried a few things.
Let's focus on the first step, getting the booleans via suits == np.arange(4) (the second step is just a multiplication with the cards, which needs to be broadcast the same way as the suits). My understanding is that we want to add a dimension to the suits array, so shouldn't we signal this with the three-dot notation: self.suits[...,:,:,:] == np.arange(4)? Instead, the following seems to almost work: self.suits[:,:,:,None] == np.arange(4) (except that it adds the dimension in the wrong place). The following doesn't work either: self.suits[None,:,:,:] == np.arange(4). How can I extend the array in the first dimension so that the result is the same as the stacked array above?
In what circumstances do I need the ... and when do I need the None? I would have expected to use ..., since that seems to signal that a dimension should be expanded as necessary (in this case to a size of 4). Why is that incorrect, and why is None used instead?
You are stacking the individual per-suit results along axis=0. So, when porting to a broadcasting-based solution, we can put the scalars 1, 2, 3, 4 into a 4D array in which every axis is a singleton dimension (a dim of length 1) except the first one. There are different ways to create such a 4D array. One way would be: np.arange(1,5)[:,None,None,None], where we create a 1D array with np.arange and simply append three singleton dims as the last three with np.newaxis/None.
We then perform an equality comparison of this 4D array against b (the suits array), which internally broadcasts the elements of b along the last three dims. Then we multiply the result with a (the cards array), as in the original code, and get the desired output.
Thus, the implementation would be -
out = a*(b == np.arange(1,5)[:,None,None,None])
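To connect this with the names in the question, here is a minimal, self-contained sketch (with hypothetical random data standing in for self.cards and self.suits) that also checks the broadcast result against the explicit per-suit stacking:
import numpy as np
a = np.random.randint(1, 14, size=(3, 7, 2)).astype(float)   # stand-in for self.cards
b = np.random.randint(1, 5, size=(3, 7, 2)).astype(float)    # stand-in for self.suits
out = a * (b == np.arange(1, 5)[:, None, None, None])        # shape (4, 3, 7, 2)
# identical to stacking the four per-suit arrays manually
stacked = np.stack([(b == s) * a for s in (1, 2, 3, 4)], axis=0)
assert np.array_equal(out, stacked)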
When/how to use ... (ellipsis):
We use ... (ellipsis) when adding new axes to a multi-dimensional array without having to write out a colon per existing dim. Thus, to turn a 3D array a into a 4D array with the last dim being a singleton, we could write a[:,:,:,None]. Too much typing! So we use ... instead: a[...,None]. Note that this ... notation works irrespective of the number of dimensions. If a were a 5D array and we wanted to append a new axis as the last one, we would write a[:,:,:,:,:,None], or simply with the ellipsis: a[...,None]. Neat, huh!
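In other words, ... stands for 'all the existing axes', while None is what actually inserts the new axis; where the None sits relative to the ... (or the colons) decides where the new axis goes. A short illustration with a throwaway array:
x = np.zeros((3, 7, 2))
x[..., None].shape    # (3, 7, 2, 1), same as x[:, :, :, None]
x[None, ...].shape    # (1, 3, 7, 2), same as x[None, :, :, :]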
I have a 3-D numpy array representing a model domain of 39 layers, 279 rows, 153 columns. The values in the array are either 0 or 1 and signify if the cell in the domain is inactive or active, respectively. I am trying to create a 2-D array of shape 279 rows and 153 columns where the array values equal the layer number for the uppermost active layer in the grid. Essentially, at each row, col location I want to loop through the layers to find the first one that is a 1 and not a 0 and then put that layer number in the 2-D array at that row, col location. For example:
If a four layer (layers 0-3) array looks like this:
array([[[ 0., 1., 0., 0.],
[ 1., 0., 0., 0.],
[ 1., 0., 0., 0.]],
[[ 0., 1., 1., 0.],
[ 1., 1., 0., 0.],
[ 1., 1., 0., 0.]],
[[ 0., 0., 1., 1.],
[ 0., 1., 1., 0.],
[ 0., 1., 1., 0.]],
[[ 0., 0., 1., 1.],
[ 0., 1., 1., 1.],
[ 0., 1., 1., 1.]]])
The 2-D array should look like this:
array([[ 0., 0., 1., 2.],
[ 0., 1., 2., 3.],
[ 0., 1., 2., 3.]])
If the row-col location is not active (not equal to 1) in any layer, the value in the resulting array should be 0 (like at 0,0), the same as if it were active in layer 0.
I have tried modifying a couple of solutions where the z-axis values are summed, or averaged, but can't seem to figure out how to get exactly what I am looking for.
You could try numpy.argmax:
import numpy as np
a = np.array([[[ 0., 1., 0., 0.],
[ 1., 0., 0., 0.],
[ 1., 0., 0., 0.]],
[[ 0., 1., 1., 0.],
[ 1., 1., 0., 0.],
[ 1., 1., 0., 0.]],
[[ 0., 0., 1., 1.],
[ 0., 1., 1., 0.],
[ 0., 1., 1., 0.]],
[[ 0., 0., 1., 1.],
[ 0., 1., 1., 1.],
[ 0., 1., 1., 1.]]])
np.argmax(a, 0)
array([[0, 0, 1, 2],
[0, 1, 2, 3],
[0, 1, 2, 3]])
This works because argmax returns the index of the first occurrence of the maximum value along the given axis (in this case axis 0).
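As the question notes, a cell that is never active should also come out as 0, which argmax already gives, since the maximum of an all-zero column sits at index 0. If you ever needed to distinguish 'never active' from 'active in layer 0', one hedged option is to combine it with np.any, for example marking never-active cells with a sentinel value of -1:
top_layer = np.argmax(a, 0)
ever_active = a.any(axis=0)                       # False where the cell is 0 in every layer
top_layer = np.where(ever_active, top_layer, -1)  # hypothetical sentinel for never-active cells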
I would like to create a square numpy array such that it starts counting from the diagonal.
Do you know a one-liner for that?
Example with 5x5:
array([[ 1., 2., 3., 4., 5.],
[ 0., 1., 2., 3., 4.],
[ 0., 0., 1., 2., 3.],
[ 0., 0., 0., 1., 2.],
[ 0., 0., 0., 0., 1.]])
In [49]: np.identity(5).cumsum(axis=1).cumsum(axis=1)
Out[49]:
array([[ 1., 2., 3., 4., 5.],
[ 0., 1., 2., 3., 4.],
[ 0., 0., 1., 2., 3.],
[ 0., 0., 0., 1., 2.],
[ 0., 0., 0., 0., 1.]])
>>> mat = np.vstack([np.concatenate((np.zeros(i), np.arange(1, 5 - i + 1))) for i in range(5)])
>>> mat
array([[1., 2., 3., 4., 5.],
[0., 1., 2., 3., 4.],
[0., 0., 1., 2., 3.],
[0., 0., 0., 1., 2.],
[0., 0., 0., 0., 1.]])
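Another possible one-liner, a hedged sketch based on broadcasting plus np.triu (not taken from the answers above): the subtraction broadcasts a row of counts against a column of offsets, and np.triu zeroes everything below the diagonal.
>>> np.triu(np.arange(1, 6) - np.arange(5)[:, None])
array([[1, 2, 3, 4, 5],
[0, 1, 2, 3, 4],
[0, 0, 1, 2, 3],
[0, 0, 0, 1, 2],
[0, 0, 0, 0, 1]])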
I have a piece of code:
la=[0,0,0,0,0,0,1,1,1,1]
onehot = tf.one_hot(la, depth=2) #[[1,0],[1,0],[1,0],[1,0],[1,0],[1,0],[0,1],[0,1],[0,1],[0,1]]
image_batch,labels_batch=tf.train.batch([resized_image,onehot],batch_size=2,num_threads=1)
When I run
print(s.run([tf.shape(image_batch),labels_batch]))
it batches all the labels at once, like
[array([ 2, 50, 50, 3]), array([[[ 1., 0.],
[ 1., 0.],
[ 1., 0.],
[ 1., 0.],
[ 1., 0.],
[ 1., 0.],
[ 0., 1.],
[ 0., 1.],
[ 0., 1.],
[ 0., 1.]],
[[ 1., 0.],
[ 1., 0.],
[ 1., 0.],
[ 1., 0.],
[ 1., 0.],
[ 1., 0.],
[ 0., 1.],
[ 0., 1.],
[ 0., 1.],
[ 0., 1.]]], dtype=float32)]
it should output something like
[array([ 2, 50, 50, 3]), array([[ 1., 0.],
[ 1., 0.]], dtype=float32)]
Shouldn't it? After all, the batch size is 2, so it should take 2 images and their corresponding labels at a time.
I'm new to CNNs and machine learning. Thanks in advance.
According to the TensorFlow documentation for tf.train.batch (https://www.tensorflow.org/api_docs/python/tf/train/batch):
Since enqueue_many=False by default and your input onehot has the shape [10, 2], the output (here labels_batch) gets the shape [batch_size, 10, 2], i.e. the whole label tensor is treated as a single example.
If enqueue_many=True, each row of onehot is treated as a separate example, and the output (here labels_batch) becomes [batch_size, 2].
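A minimal sketch of the enqueue_many=True variant (hedged; it assumes a hypothetical resized_images tensor of shape [10, 50, 50, 3], one image per label, because with enqueue_many=True every input tensor must carry a leading example dimension):
image_batch, labels_batch = tf.train.batch(
    [resized_images, onehot],   # both tensors have a leading dimension of 10 examples
    batch_size=2,
    num_threads=1,
    enqueue_many=True)          # slice along the first dimension instead of enqueuing whole tensors
# labels_batch now has shape [2, 2]: two one-hot labels per batch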
Hope this helps.
Is there a higher (than two) dimensional equivalent of diag?
L = [...] # some arbitrary list.
A = np.diag(L)
will create a diagonal 2-d matrix shape=(len(L), len(L)) with elements of L on the diagonal.
I'd like to do the equivalent of:
length = len(L)
A = np.zeros((length, length, length))
for i in range(length):
    A[i][i][i] = L[i]
Is there a slick way to do this?
Thanks!
You can use diag_indices to get the indices to be set. For example,
x = np.zeros((3,3,3))
L = np.arange(6,9)
x[np.diag_indices(3,ndim=3)] = L
gives
array([[[ 6., 0., 0.],
[ 0., 0., 0.],
[ 0., 0., 0.]],
[[ 0., 0., 0.],
[ 0., 7., 0.],
[ 0., 0., 0.]],
[[ 0., 0., 0.],
[ 0., 0., 0.],
[ 0., 0., 8.]]])
Under the hood diag_indices is just the code Jaime posted, so which to use depends on whether you want it spelled out in a numpy function, or DIY.
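For reference, np.diag_indices(3, ndim=3) returns one copy of arange(3) per axis, which is the same index tuple idea used in the fancy-indexing answer below:
>>> np.diag_indices(3, ndim=3)
(array([0, 1, 2]), array([0, 1, 2]), array([0, 1, 2]))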
You can use fancy indexing:
In [2]: a = np.zeros((3,3,3))
In [3]: idx = np.arange(3)
In [4]: a[tuple([idx] * 3)] = 1
In [5]: a
Out[5]:
array([[[ 1., 0., 0.],
[ 0., 0., 0.],
[ 0., 0., 0.]],
[[ 0., 0., 0.],
[ 0., 1., 0.],
[ 0., 0., 0.]],
[[ 0., 0., 0.],
[ 0., 0., 0.],
[ 0., 0., 1.]]])
For a more general approach, you could set the diagonal of an arbitrarily sized array by doing something like:
def set_diag(arr, values):
    # one index array [0, 1, ..., n-1] per axis, where n is the shortest axis length
    idx = np.arange(np.min(arr.shape))
    arr[tuple([idx] * arr.ndim)] = values
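A small usage sketch of the helper above, with hypothetical values:
A = np.zeros((4, 4, 4))
set_diag(A, np.arange(1, 5))   # now A[i, i, i] == i + 1 for i in 0..3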
I have just found a problem and I don't know if it is meant to be this way or if I am just doing something wrong. When I use boolean (logical) indexing on a numpy matrix to change all the values that are, say, equal to 1, all other matrices that refer to this matrix are also modified.
In [1]: import numpy as np
In [2]: from numpy import matrix as mtx
In [3]: A=mtx(np.eye(6))
In [4]: A
Out[4]:
matrix([[ 1., 0., 0., 0., 0., 0.],
[ 0., 1., 0., 0., 0., 0.],
[ 0., 0., 1., 0., 0., 0.],
[ 0., 0., 0., 1., 0., 0.],
[ 0., 0., 0., 0., 1., 0.],
[ 0., 0., 0., 0., 0., 1.]])
In [5]: B=A
In [6]: C=B
In [7]: D=C
In [8]: A[A==1]=5
In [9]: A
Out[9]:
matrix([[ 5., 0., 0., 0., 0., 0.],
[ 0., 5., 0., 0., 0., 0.],
[ 0., 0., 5., 0., 0., 0.],
[ 0., 0., 0., 5., 0., 0.],
[ 0., 0., 0., 0., 5., 0.],
[ 0., 0., 0., 0., 0., 5.]])
In [10]: B
Out[10]:
matrix([[ 5., 0., 0., 0., 0., 0.],
[ 0., 5., 0., 0., 0., 0.],
[ 0., 0., 5., 0., 0., 0.],
[ 0., 0., 0., 5., 0., 0.],
[ 0., 0., 0., 0., 5., 0.],
[ 0., 0., 0., 0., 0., 5.]])
In [11]: C
Out[11]:
matrix([[ 5., 0., 0., 0., 0., 0.],
[ 0., 5., 0., 0., 0., 0.],
[ 0., 0., 5., 0., 0., 0.],
[ 0., 0., 0., 5., 0., 0.],
[ 0., 0., 0., 0., 5., 0.],
[ 0., 0., 0., 0., 0., 5.]])
In [12]: D
Out[12]:
matrix([[ 5., 0., 0., 0., 0., 0.],
[ 0., 5., 0., 0., 0., 0.],
[ 0., 0., 5., 0., 0., 0.],
[ 0., 0., 0., 5., 0., 0.],
[ 0., 0., 0., 0., 5., 0.],
[ 0., 0., 0., 0., 0., 5.]])
Can anyone tell me what I am doing wrong? Is this a bug?
This is not a bug. Writing B = A in Python does not copy anything; it just makes B and A names for the same object. You need to copy the matrix explicitly.
>>> import numpy as np
>>> from numpy import matrix as mtx
>>> A = mtx(np.eye(6))
>>> B = A.copy()
>>> C = A
#Check memory locations.
>>> id(A)
19608352
>>> id(C)
19608352 #Same object as A
>>> id(B)
19607992 #Different object than A
>>> A[A==1] = 5
>>> B #B is a different object than A
matrix([[ 1., 0., 0., 0., 0., 0.],
[ 0., 1., 0., 0., 0., 0.],
[ 0., 0., 1., 0., 0., 0.],
[ 0., 0., 0., 1., 0., 0.],
[ 0., 0., 0., 0., 1., 0.],
[ 0., 0., 0., 0., 0., 1.]])
>>> C #C is the same object as A
matrix([[ 5., 0., 0., 0., 0., 0.],
[ 0., 5., 0., 0., 0., 0.],
[ 0., 0., 5., 0., 0., 0.],
[ 0., 0., 0., 5., 0., 0.],
[ 0., 0., 0., 0., 5., 0.],
[ 0., 0., 0., 0., 0., 5.]])
The same issue can be seen with python list:
>>> A = [5,3]
>>> B = A
>>> B[0] = 10
>>> A
[10, 3]
Note that this is different from numpy returning a view, as in this case:
>>> A = mtx(np.eye(6))
>>> B = A[0] #B is a view and now points to the first row of A
>>> id(A)
28088720
>>> id(B) #Different objects!
28087568
#B is a separate Python object, but its data still points to A's first row (it is a view)
>>> B
matrix([[ 1., 0., 0., 0., 0., 0.]])
>>> B *= 5 #In-place multiplication updates B's data, which is also A's first row
>>> A
matrix([[ 5., 0., 0., 0., 0., 0.],
[ 0., 1., 0., 0., 0., 0.],
[ 0., 0., 1., 0., 0., 0.],
[ 0., 0., 0., 1., 0., 0.],
[ 0., 0., 0., 0., 1., 0.],
[ 0., 0., 0., 0., 0., 1.]])
As the view B points to the first row of A, A is changed. Now let's force a copy.
>>> B = B*10 #Rebinds B to a new array in a different chunk of memory
>>> A
matrix([[ 5., 0., 0., 0., 0., 0.],
[ 0., 1., 0., 0., 0., 0.],
[ 0., 0., 1., 0., 0., 0.],
[ 0., 0., 0., 1., 0., 0.],
[ 0., 0., 0., 0., 1., 0.],
[ 0., 0., 0., 0., 0., 1.]])
>>> B
matrix([[ 50., 0., 0., 0., 0., 0.]])
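If you are ever unsure whether two names refer to the same underlying data, a quick check (hedged; available in reasonably recent numpy) is np.shares_memory, or inspecting a suspected view's .base attribute:
>>> np.shares_memory(A, A[0])      # a row slice is a view into A's data
True
>>> np.shares_memory(A, A.copy())  # an explicit copy owns its own data
False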