a = np.arange(3*4*5*6).reshape((3,4,5,6))
print(np.dot(a, a))
This always gives a ValueError: shapes (3,4,5,6) and (3,4,5,6) not aligned: 6 (dim 3) != 5 (dim 2)
I don't understand what is wrong.
Thanks for your Help :)
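For reference, np.dot on N-dimensional arrays contracts the last axis of the first argument with the second-to-last axis of the second, so np.dot(a, a) requires a.shape[-1] == a.shape[-2], which (3,4,5,6) does not satisfy. A minimal sketch of shapes that do align (the second array here is made up for illustration):

```python
import numpy as np

a = np.arange(3*4*5*6).reshape((3, 4, 5, 6))
b = np.arange(3*4*6*5).reshape((3, 4, 6, 5))  # last two axes swapped

# np.dot contracts a's last axis (length 6) with b's second-to-last axis (length 6);
# the result shape is a.shape[:-1] + b.shape[:-2] + b.shape[-1:]
out = np.dot(a, b)
print(out.shape)  # (3, 4, 5, 3, 4, 5)
```

If a stack of ordinary matrix products is what you want instead, np.matmul (the @ operator) broadcasts over the leading axes rather than forming this full outer combination.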
Related
I have a ndarray with shape (x,y,d).
How can I get the number of d dimensional non zero vectors (out of the x*y total d dimensional vectors)?
I tried np.count_nonzero but I don't think it has the option to do what I described.
Actually, I think something like this works (iterating over every d-dimensional vector):
sum(np.any(v) for row in p for v in row)
(p is the ndarray)
You need to apply np.count_nonzero to the last axis (d). It will return an x * y array with the count of non-zero elements in the d dimension. If the count is 0 then it is a zero vector, so you just need to count the number of elements in the x * y array which are not equal to 0.
np.sum(np.apply_along_axis(np.count_nonzero, 2, your_arr) != 0)
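A small sketch of this approach (the array shape and values here are made up for illustration), together with an equivalent formulation using np.any that avoids apply_along_axis:

```python
import numpy as np

# a 2 x 2 grid of 3-dimensional vectors; one of them is the zero vector
arr = np.array([[[1, 0, 0], [0, 0, 0]],
                [[0, 2, 0], [3, 0, 4]]])

# count non-zero entries along the last axis, then count the non-zero counts
n = np.sum(np.apply_along_axis(np.count_nonzero, 2, arr) != 0)
print(n)  # 3

# equivalent and vectorized: a vector is non-zero iff any of its components is
n2 = np.count_nonzero(np.any(arr, axis=-1))
print(n2)  # 3
```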
The following throws error
oness = np.ones((100000, 8))
np.concatenate(oness, oness)
np.concatenate needs brackets (a single sequence of arrays) to work; otherwise it throws:
oness = np.ones((100000, 8))
---> np.concatenate(oness, oness)
<__array_function__ internals> in concatenate(*args, **kwargs)
The following works:
oness = np.ones((100000, 8))
np.concatenate([oness, oness])
Here's the definition for concatenate:
numpy.concatenate((a1, a2, ...), axis=0, out=None)
When you write np.concatenate(oness, oness), the second oness is interpreted as the input to axis, which leads to the error. But when you write np.concatenate([oness, oness]) or np.concatenate((oness, oness)), the inputs are correctly interpreted as a1, a2 and are concatenated along the default axis=0.
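To illustrate the axis parameter with the shapes from the example above:

```python
import numpy as np

oness = np.ones((100000, 8))

stacked_rows = np.concatenate([oness, oness])          # default axis=0: stack vertically
stacked_cols = np.concatenate([oness, oness], axis=1)  # axis=1: join along columns

print(stacked_rows.shape)  # (200000, 8)
print(stacked_cols.shape)  # (100000, 16)
```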
I have been trying to implement the square non-linearity (SQNL) activation function as a custom activation function for a Keras model. It's the 10th function on this list: https://en.wikipedia.org/wiki/Activation_function.
I tried using the Keras backend, but I got nowhere with the multiple if/else statements I require, so I also tried the following:
import tensorflow as tf
def square_nonlin(x):
    orig = x
    x = tf.where(orig > 2.0, tf.ones_like(x), x)
    x = tf.where(0.0 <= orig <= 2.0, (x - tf.math.square(x)/4), x)
    x = tf.where(-2.0 <= orig < 0, (x + tf.math.square(x)/4), x)
    return tf.where(orig < -2.0, -1, x)
As you can see, there are 4 different clauses I need to evaluate. But when I try to compile the Keras model, I still get the error:
Using a `tf.Tensor` as a Python `bool` is not allowed
Could anyone help me get this working in Keras? Thanks a lot.
I started digging into TensorFlow just a week ago and am actively playing around with different activation functions, so I think I know what two of your problems are. In your second and third assignments you have compound conditionals; you need to put them under tf.logical_and. The other problem is that the last tf.where on the return line returns -1, which is a scalar, not the vector TensorFlow expects. I haven't tried the function with Keras, but in my "activation function" tester this code works.
def square_nonlin(x):
    orig = x
    x = tf.where(orig > 2.0, tf.ones_like(x), x)
    x = tf.where(tf.logical_and(0.0 <= orig, orig <= 2.0), (x - tf.math.square(x)/4.), x)
    x = tf.where(tf.logical_and(-2.0 <= orig, orig < 0), (x + tf.math.square(x)/4.), x)
    return tf.where(orig < -2.0, 0*x - 1.0, x)
As I said, I'm new at this, so to "vectorize" -1, I multiplied the x vector by 0 and subtracted 1.0, which produces an array filled with -1 of the right shape. Perhaps one of the more seasoned TensorFlow practitioners can suggest the proper way to do that.
Hope this helps.
BTW, tf.greater is equivalent to tf.__gt__, which means that orig > 2.0 expands under the covers in Python to tf.greater(orig, 2.0).
Just a follow-up: I tried it with the MNIST demo in Keras and the activation function works as coded above.
UPDATE:
The less hacky way to "vectorize" -1 is to use the tf.ones_like function, so replace the last line with
return tf.where(orig < -2.0, -tf.ones_like(x), x)
for a cleaner solution.
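Since the answer above verifies the function inside TensorFlow, here is the same piecewise SQNL logic sketched in plain NumPy as a quick sanity check of the expected values (this is only an illustration, not the Keras code):

```python
import numpy as np

def square_nonlin_np(x):
    """NumPy version of the SQNL piecewise definition, for sanity checking."""
    x = np.asarray(x, dtype=float)
    return np.select(
        [x > 2.0, x >= 0.0, x >= -2.0],   # conditions checked in order; first match wins
        [np.ones_like(x), x - x**2 / 4.0, x + x**2 / 4.0],
        default=-1.0,                      # everything remaining is x < -2.0
    )

# values at -3, -1, 0, 1, 3: saturates at -1 and 1, quadratic in between
print(square_nonlin_np([-3.0, -1.0, 0.0, 1.0, 3.0]))
```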
I have some shapes which include custom property data I would like to extract.
At the moment I do this using a loop, which seems to be quite slow. Therefore I hope there is a possibility to get all the data at once.
Code:
For n = 0 To ActiveDocument.Pages(Pages).Shapes.Item(Index).RowCount(visSectionProp) - 1
    x = ActiveDocument.Pages(Pages).Shapes.Item(Index).CellsSRC(visSectionProp, n, visCustPropsValue).Formula
    shape_props(i + shapecount_old, n) = x
Next
Does anyone have an idea?
Thank you in advance!
I am trying to do some calculations with arrays.
e.g. I want to solve Ax = y, so I use the following code, where A is a square matrix and y is a column vector. In VBA, A is a two-dimensional array and y is a one-dimensional one. However, this code does not work...
x = WorksheetFunction.MMult(WorksheetFunction.MInverse(A), y)
Where did I get wrong? Thanks!
You could be committing one or more of many mistakes:
Arrays not declared as Variant (most worksheet functions won't work if the data type is something other than Variant).
Dimensions of A and y don't match up as they need to for matrix multiplication. In particular, it won't work if y's size is (1,2) instead of (2,1) as in the example below.
Etc. It could be anything, really; you don't tell us what error message you get.
Here's an example that works:
Dim A As Variant
Dim y As Variant
Dim x As Variant
ReDim y(1 To 2, 1 To 1)
y(1, 1) = 2
y(2, 1) = 3
ReDim A(1 To 2, 1 To 2)
A(1, 1) = 3
A(2, 1) = 1
A(1, 2) = 4
A(2, 2) = 2
x = WorksheetFunction.MMult(WorksheetFunction.MInverse(A), y)
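As a cross-check of the numbers in the VBA example above, the same solve in NumPy (Python used here only to verify the result; np.linalg.solve is generally preferable to forming the inverse explicitly):

```python
import numpy as np

# same A and y as in the VBA example above
A = np.array([[3.0, 4.0],
              [1.0, 2.0]])
y = np.array([[2.0],
              [3.0]])

x_via_inverse = np.linalg.inv(A) @ y  # mirrors MMult(MInverse(A), y)
x_via_solve = np.linalg.solve(A, y)   # numerically preferable

# both give x = (-4, 3.5), and A @ x recovers y
print(x_via_solve.ravel())
```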
Let matrix A (3 x 3) be an array in Range("A1:C3"), matrix y (3 x 1) be an array in Range("E1:E3"), and matrix x (3 x 1) be an array in Range("G1:G3"). Then you can try this simple program:
Range("G1:G3") = WorksheetFunction.MMult(WorksheetFunction.MInverse(Range("A1:C3")), Range("E1:E3"))
By using the same procedure, you can find the result of multiplying an (n x m) matrix with an (m x q) matrix (the inner dimensions must match). Of course, for clarity you should declare the variables first. I hope this answer can help you.