I want to have a 2D matrix and then fill its elements with different values. I know that I need to create the matrix first with the following definition:
Matrix = np.zeros(10,10)
Now my question is how I can set an element of this matrix to a value, let's say set element [4][7] to 5. Thanks
Be careful, because the right syntax for a 10x10 matrix filled with zeros is Matrix = np.zeros((10, 10)): the shape is passed as a single tuple. Then, on a separate line, you can simply write Matrix[4][7] = 5 (or, more idiomatically, Matrix[4, 7] = 5). I advise you to read a tutorial or an introductory book on Python.
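A minimal runnable sketch of the corrected code:

import numpy as np

# np.zeros takes the shape as a single tuple argument
Matrix = np.zeros((10, 10))

# Set a single element; Matrix[4, 7] is the idiomatic NumPy form,
# though chained indexing Matrix[4][7] also works for assignment
Matrix[4, 7] = 5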
I am working with multidimensional arrays whose number of axes changes dynamically. Now I want to select elements of the array along a specific axis.
For example, if I have a 3-dimensional array, I want to pick the elements like this
b = a[:, :, 1]
Now my problem is that after one iteration of the code the same array becomes 4-dimensional, and again I want to pick the elements like this
b = a[:,:,1,:]
Thus I am looking for a general solution to pick all elements along the 3rd axis of the array. This would be very simple if I had to choose along the first axis, where a[1] gives me a[1, :, :, :], but I am not aware how to do this for other axes.
Edit:
Also, I would be interested in a solution where the axis of interest also changes; for example, with the same code the next iteration would give
b = a[:,:,:,1]
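For reference, a general way to express this in NumPy is np.take, or building the index tuple by hand; a minimal sketch with a hypothetical 4-D array:

import numpy as np

a = np.random.rand(2, 3, 4, 5)   # hypothetical array; ndim may vary

# Pick index 1 along axis 2, regardless of the number of dimensions
b = np.take(a, 1, axis=2)        # same as a[:, :, 1, :] here

# Equivalent with an explicit index tuple, handy when the axis changes;
# trailing axes are filled with full slices automatically
axis = 3
idx = (slice(None),) * axis + (1,)
b = a[idx]                       # same as a[:, :, :, 1]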
I am trying to achieve something very simple in TensorFlow (not native Python, NumPy, or pandas) which can be done in either of the following ways:
1. Have 2 separate arrays/tensors with different sizes. Each element holds two values: a comparing-value and a weight. We want to compare the comparing-values in both tensors and multiply their corresponding weights.
2. Have the comparing-values and weights as separate arrays. Then compare the comparing-values, get the indices of the matches, use the indices to find the elements in the weight vectors, and multiply them.
In short I want to find indices of matching elements in both the tensors.
The closest solution I could find is to convert them to sets, but it does not give the exact index of the element.
I was able to achieve what I wanted using Pandas:
matched = pd.Index(v1).intersection(pd.Index(v2))
and native Python:
ind_v1 = [i for i, item in enumerate(v1_1) if item in v2_1]
ind_v2 = [i for i, item in enumerate(v2_1) if item in v1_1]
I wish to have the same in TensorFlow.
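One way to get the matching indices directly in TensorFlow is a broadcast comparison followed by tf.where; a minimal sketch with hypothetical 1-D tensors:

import tensorflow as tf

# Hypothetical data: comparing-values and their corresponding weights
v1 = tf.constant([3, 7, 5, 9])
w1 = tf.constant([0.1, 0.2, 0.3, 0.4])
v2 = tf.constant([5, 3, 8])
w2 = tf.constant([1.0, 2.0, 3.0])

# Compare every element of v1 against every element of v2; tf.where on
# the resulting boolean matrix yields (index_in_v1, index_in_v2) pairs
pairs = tf.where(tf.equal(v1[:, None], v2[None, :]))
ind_v1, ind_v2 = pairs[:, 0], pairs[:, 1]

# Gather the matching weights and multiply them elementwise
products = tf.gather(w1, ind_v1) * tf.gather(w2, ind_v2)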
Given two matrices A and B with dimensions (x, y, z) and (y, x, z) respectively, how does one take the dot product over the first two dimensions of the two matrices? The result should have dimensions (x, x, z).
Thanks!
Use np.einsum with literally the same string expression -
np.einsum('xyz,yiz->xiz',a,b) # a,b are input arrays
Note that we have used yiz as the string notation for the second array and not yxz, as that i is supposed to be a new dimension in the output array and is not to be aligned with the first axis of the first array for which we have already assigned x. The dimensions that are to be aligned are given the same string notation.
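A quick runnable check of the shapes, with hypothetical sizes x=2, y=3, z=4:

import numpy as np

a = np.random.rand(2, 3, 4)   # shape (x, y, z)
b = np.random.rand(3, 2, 4)   # shape (y, x, z)

out = np.einsum('xyz,yiz->xiz', a, b)
print(out.shape)              # (2, 2, 4), i.e. (x, x, z)

# Same result computed slice by slice along z
ref = np.stack([a[:, :, k] @ b[:, :, k] for k in range(4)], axis=-1)
assert np.allclose(out, ref)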
I'm afraid I can't describe the problem well, so I drew a sketch of it. Anyway, what I need is to find the max values along the 0th axis of a NumPy ndarray, e.g. one of shape (5, 5, 3), together with their corresponding "layer numbers", and then use the "layer numbers" to create a new array of shape (1, 5, 3). Hope I'm giving a clear description here... thanks a lot.
If you check the documentation of np.max, you'll see it takes an axis argument:
a.max(axis=0)
But that won't help you yet. However, there's a function argmax that gives you the indices of the maxima along a given axis:
a.argmax(axis=...)
So, let's find your first (5, 5) array: it's a[..., 0]. You can find the positions of the maxima per row (or per column) with a[..., 0].argmax(axis=1) (or axis=0), and use those indices to look up the corresponding values in the other layers.
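Putting it together for the whole array, a minimal sketch (np.take_along_axis requires NumPy 1.15+):

import numpy as np

a = np.random.rand(5, 5, 3)

# "Layer numbers": index of the maximum along the 0th axis, shape (5, 3)
layers = a.argmax(axis=0)

# Use the indices to pull the max values back out; the result has
# shape (1, 5, 3) and equals a.max(axis=0, keepdims=True)
picked = np.take_along_axis(a, layers[None, :, :], axis=0)
assert np.allclose(picked, a.max(axis=0, keepdims=True))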
I'm using one-vs-all to do 21-class SVM classification.
I want the label -1 to mean "not in this class" and the label 1 to mean "indeed in this class" for each of the 21 kernels.
I've generated my pre-computed kernels and my test vectors using this standard.
Using easy.py everything went well for 20 of the classes, but for one of them the labels were switched, so that all the inputs that should have been labelled 1 for being in the class were instead labelled -1, and vice versa.
The difference in that class was that the first vector in the pre-computed kernel was labelled 1, while in all the other kernels the first vector was labelled -1. This suggests that LibSVM relabels all of my vectors.
Is there a way to prevent this or a simple way to work around it?
You already discovered that libsvm uses the label +1 for whatever label it encounters first.
The reason is that it allows arbitrary labels and maps them to +1 and -1 according to the order in which they appear in the label vector.
So you can either check this directly, or look at the model returned by libsvm.
It contains an entry called Label, a vector holding the labels in the order in which libsvm encountered them. You can also use this information to switch the sign of your scores.
If during training libsvm encounters label A first, then during prediction libsvm will output positive decision values for objects it assigns label A and negative values for the other label.
So if you use label 1 for the positive class and 0 for the negative class, then to obtain the right output values you should do the following trick (MATLAB):
% test_data.y contains 0s and 1s
[labels, ~, values] = svmpredict(test_data.y, test_data.X, model, ' ');
if (model.Label(1) == 0)  % check which label libsvm encountered first
    values = -values;
end