I am calculating a weighted average with numpy:
np.average(df['column1'], weights=df['column2'])
I received this error: Weights sum to zero, can't be normalized
Is there an argument I can use to solve this problem?
Can you try:
(df['column1'].values * df['column2'].values).sum() / df['column2'].sum()
?
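For the original error itself, a minimal sketch (assuming a pandas DataFrame df with the same column names as in the question) that guards against an all-zero weight column before calling np.average:

import numpy as np
import pandas as pd

# Hypothetical data: column2 holds the weights and may sum to zero.
df = pd.DataFrame({"column1": [1.0, 2.0, 3.0], "column2": [0.0, 0.0, 0.0]})

weight_sum = df["column2"].sum()
if weight_sum != 0:
    result = np.average(df["column1"], weights=df["column2"])
else:
    # Fall back to an unweighted mean (or np.nan, depending on what you need)
    # when the weights cannot be normalized.
    result = df["column1"].mean()

print(result)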
My code:
import numpy as np
from sklearn import linear_model
from sklearn.metrics import mean_squared_error

diabetes_x=np.array([[1],[2],[3]])
diabetes_x_train=diabetes_x
diabetes_x_test=diabetes_x
diabetes_y_train=np.array([3,2,4])
diabetes_y_test=np.array([3,2,4])
model=linear_model.LinearRegression()
model.fit(diabetes_x_train,diabetes_y_train)
diabetes_y_predict=model.predict(diabetes_x_test)
print("Mean Squared error is :",mean_squared_error(diabetes_y_test,diabetes_y_predict))
print("weights : ",model.coef_)
print("intercept : ",model.intercept_)
In this code we pass diabetes_x as a 2-D array, but diabetes_y_train and diabetes_y_test are 1-D arrays. Why is that? Can someone please explain the roles of diabetes_x and diabetes_y?
In machine learning terminology, X is the input variable and y is the output variable.
Suppose there is a dataset with 5 columns where the last column is the result. The input then consists of all columns except the last, and the last column is used to check whether the learned mapping is correct after training, or during validation to calculate the error. As for the shapes: scikit-learn expects X to be 2-D with shape (n_samples, n_features), even when there is only one feature, while y is 1-D with one target value per sample. That is why diabetes_x is a column of single-element rows and diabetes_y_train is a flat array.
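A minimal sketch of those expected shapes (using a small made-up array rather than the actual diabetes data):

import numpy as np
from sklearn.linear_model import LinearRegression

# X: 2-D, shape (n_samples, n_features) -- here 3 samples, 1 feature.
X = np.array([[1], [2], [3]])
# y: 1-D, shape (n_samples,) -- one target value per sample.
y = np.array([3, 2, 4])

model = LinearRegression()
model.fit(X, y)

print(X.shape, y.shape)              # (3, 1) (3,)
print(model.coef_, model.intercept_)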
Is there a way I can calculate the inverse of an m×n non-square matrix using numpy? Using la.inv(S) gives me the error ValueError: expected square matrix.
You are probably looking for np.linalg.pinv.
For a non-square m×n matrix, we can use np.linalg.pinv(S), where S is the matrix you want to pass; it computes the Moore-Penrose pseudo-inverse.
For a square matrix we use np.linalg.inv(S). The inverse of a matrix is such that, when multiplied by the original matrix, it yields the identity matrix.
(Note: np is numpy.)
You can also apply np.linalg.inv to a non-square matrix, but to avoid the error you first have to slice S down to a square submatrix.
For more details on np.linalg.pinv : https://numpy.org/doc/stable/reference/generated/numpy.linalg.pinv.html
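A minimal sketch (with a made-up 3×2 matrix) showing the pseudo-inverse and the defining property it satisfies:

import numpy as np

# Hypothetical non-square matrix, shape (3, 2).
S = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

S_pinv = np.linalg.pinv(S)  # shape (2, 3)

# The Moore-Penrose pseudo-inverse satisfies S @ S_pinv @ S == S
# (up to floating-point error), even though S has no true inverse.
print(np.allclose(S @ S_pinv @ S, S))   # True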
I am trying to solve the following problem: I have a function of k (given only as an image in the question), where I0(k) is itself defined by another image.
In each step of my code I have a known, different angle for the cosine, and each time I want to know for which k the function is maximal. Mathematically, I would calculate the derivative, set it equal to zero, and solve the equation. But how can that be implemented with TensorFlow? What is the best way to solve this problem?
I would like to calculate the number of nonzero values in the weights of a neural network.
I tried the following code, but I got a ValueError, presumably because the arrays have different shapes.
h = model.get_weights()  # returns a list of numpy arrays
merged_h = []
for l in h:
    merged_h += l
nzcounts = np.count_nonzero(merged_h)
ValueError: operands could not be broadcast together with shapes (0,) (3,3,3,32)
I wonder if there are other ways to compute the number of nonzero elements in the output of get_weights()? Thank you!
Essentially, the problem is that model.get_weights() returns a list of arrays of different shapes. I think the easiest way is to apply np.count_nonzero() to each of those arrays independently and then sum the results:
np.sum([np.count_nonzero(x) for x in model.get_weights()])
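A self-contained sketch of that one-liner in context, using a tiny made-up Keras model rather than the asker's:

import numpy as np
import tensorflow as tf

# Tiny illustrative model; any Keras model with weights would do.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="relu", input_shape=(3,)),
    tf.keras.layers.Dense(1),
])

# Count the nonzero entries per weight array, then sum across arrays.
nonzero = int(np.sum([np.count_nonzero(w) for w in model.get_weights()]))
print("nonzero weights:", nonzero)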
Is there an easy way to calculate the relative gradient error in TensorFlow? All that is available is tf.test.compute_gradient_error, but it computes the absolute gradient error, not the relative error. Of course there are methods that compute the numeric and theoretical Jacobians, but they are private.
I found the answer myself: tf.test.compute_gradient returns two Jacobians, so I can use them to find the relative gradient error. For example, if I use the L-infinity norm, I can take the result of tf.test.compute_gradient_error and divide it by the maximum absolute element of both Jacobians.
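A minimal sketch of that idea using the TF 2.x form of tf.test.compute_gradient (the function f being checked is just illustrative):

import numpy as np
import tensorflow as tf

# Illustrative function whose gradient we want to check.
def f(x):
    return tf.reduce_sum(tf.sin(x) ** 2)

x = tf.constant([[0.1, 0.2], [0.3, 0.4]], dtype=tf.float64)

# Returns (theoretical Jacobians, numeric Jacobians), one entry per input.
theoretical, numeric = tf.test.compute_gradient(f, [x])

# L-infinity relative error: largest absolute difference, divided by the
# largest absolute entry appearing in either Jacobian.
abs_err = np.max(np.abs(theoretical[0] - numeric[0]))
scale = max(np.max(np.abs(theoretical[0])), np.max(np.abs(numeric[0])))
print("relative gradient error:", abs_err / scale)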