I have a 32 bit uint scalar, how do i bit shift it without upcasting the dtype?
x = np.uint32(123456789)
x << 11
Output (dtype int64):
252839503872
Expected output:
3731400704
It is possible to get my desired output by doing np.uint32((x << 11) & 0xFFFFFFFF), but the syntax feels superfluous for such an easy operation.
Both arguments have to have the desired dtype:
In [80]: np.left_shift(x,np.uint32(11))
Out[80]: 3731400704
In [81]: x<<np.uint32(11)
Out[81]: 3731400704
x << 11 is easy to write, but the dtype of the implicit np.array(11) (int64 by default) controls the result.
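A minimal sketch of this point. Note that NumPy 2.0's revised promotion rules (NEP 50) treat a plain Python 11 as weakly typed, so behaviour there can differ from the int64 promotion shown in the question; keeping both operands uint32 works under either regime:

```python
import numpy as np

x = np.uint32(123456789)

# Give the shift amount the same dtype so nothing is promoted wider;
# the result wraps modulo 2**32
y = x << np.uint32(11)
print(y, y.dtype)  # 3731400704 uint32

# Equivalent explicit masking, as in the question
z = np.uint32((int(x) << 11) & 0xFFFFFFFF)
print(z)  # 3731400704
```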
You can also use np.uint32(x << 11); converting the wider result back to uint32 produces the expected output.
If I do the following:
A = [
2 1 3 0 0;
1 1 2 0 0;
0 6 0 2 1;
6 0 0 1 1;
0 0 -20 3 2
]
b = [10; 8; 0; 0; 0]
println(A\b)
The output is:
[8.000000000000002, 12.0, -6.000000000000001, -23.999999999999975, -24.000000000000043]
However, I would prefer it look similar to the way Numpy outputs the result of the same problem (EDIT: preferably keeping a trailing zero and the commas, though):
[ 8. 12. -6. -24. -24.]
Is there an easy way to do this? I could write my own function to do this, of course, but it would be pretty sweet if I could just set some formatting flag instead.
Thanks!
The standard way to do it is to change the IOContext:
julia> println(IOContext(stdout, :compact=>true), A\b)
[8.0, 12.0, -6.0, -24.0, -24.0]
You can write your own function, e.g. (I am not trying to be fully general here, but rather to show you the idea):
printlnc(x) = println(IOContext(stdout, :compact=>true), x)
and then just call printlnc in your code.
You could change the REPL behavior in Julia by overriding the Base.show method for floats. For example:
Base.show(io::IO, f::Float64) = print(io, rstrip(string(round(f, digits=7)),'0') )
Now you have:
julia> println(A\b)
[8., 12., -6., -24., -24.]
As noted by @DNF, Julia uses commas in vectors. If you want a horizontal vector (which is in fact a 1×n matrix) you need to transpose:
julia> (A\b)'
1×5 adjoint(::Vector{Float64}) with eltype Float64:
8. 12. -6. -24. -24.
julia> println((A\b)')
[8. 12. -6. -24. -24.]
NumPy lies to you: it just hides the digits when printing. To check that it only manipulates the printed output, do print(A @ X - b) and see the result.
print(A @ X - b)
[ 0.00000000e+00 0.00000000e+00 0.00000000e+00 -3.55271368e-15 0.00000000e+00]
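For comparison, the same check sketched in NumPy (np.linalg.solve and the variable names are my choice; the matrix is the one from the question):

```python
import numpy as np

A = np.array([[2, 1, 3, 0, 0],
              [1, 1, 2, 0, 0],
              [0, 6, 0, 2, 1],
              [6, 0, 0, 1, 1],
              [0, 0, -20, 3, 2]], dtype=float)
b = np.array([10., 8., 0., 0., 0.])

X = np.linalg.solve(A, b)
print(X)          # compact display hides the rounding error
print(A @ X - b)  # the residual reveals it
```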
Julia, on the other hand, makes this clear up front. If you do the same in Julia, you get the same result (I use Float64, as NumPy does):
julia> X = A \ b;
julia> Float64.(A) * X - b
5-element Vector{Float64}:
0.0
0.0
0.0
-3.552713678800501e-15
0.0
You can, however, round to remove the unnecessary digits, similar to NumPy's compact display:
julia> round.(X, digits=7)
5-element Vector{Float64}:
8.0
12.0
-6.0
-24.0
-24.0
This is much better than the "ugly" 8. 12. -6. -24. -24.
I have just made the following mistake:
a = np.array([0,3,2, 1])
a[0] = .001
I was expecting 0 to be replaced by .001 (and the dtype of my numpy array to automatically switch from int to float). However, print (a) returns:
array([0, 3, 2, 1])
Can somebody explain why numpy is doing that? I am confused because multiplying my array of integers by a floating point number will automatically change dtype to float:
b = a*.1
print (b)
array([0. , 0.3, 0.2, 0.1])
Is there a way to constrain numpy to systematically treat integers as floating-point numbers, in order to prevent this (and without systematically converting my numpy arrays first using .astype(float))?
First let's look at the following three rules, as NumPy applies them:
1. In the assignment x[0] = y, y is cast to the dtype of x; the dtype of x does not change.
2. Multiplying a float and an int results in a float.
3. In the assignment x = y, x takes the dtype of y.
When you do a = np.array([0,3,2,1]) followed by a[0] = .001, then since a has an integer dtype, by rule 1 the value .001 is cast to int (truncated to 0) and the dtype of a (and of a[0]) remains unchanged.
In the case of b = a*.1, by rule 2 the result of a*.1 has dtype float (i.e. dtype(int * float) = float), and by rule 3, b takes that float dtype, so print(b) gives array([0. , 0.3, 0.2, 0.1]).
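A short sketch of these rules in action (a and b as in the question; c is an extra illustrative array):

```python
import numpy as np

a = np.array([0, 3, 2, 1])
a[0] = .001              # rule 1: .001 is cast to a's int dtype -> truncated to 0
print(a)                 # [0 3 2 1]

b = a * .1               # rule 2: int * float -> float
print(b.dtype)           # float64

c = np.array([0, 3, 2, 1], dtype=float)  # request a float dtype up front
c[0] = .001              # now the assignment keeps the fraction
print(c[0])              # 0.001
```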
As @hpaulj mentioned in a comment, a = np.array([1,2,3], float) is the closest thing to automatic float array creation. But yes, this is essentially the same as having to use .astype(float).
I cannot see the need for the separate mechanism you require. Can you explain further why you'd like a way other than using .astype(float)?
numpy.array(value) works if value is an int, float or complex. The result seems to be a shapeless array (numpy.array(value).shape returns ()).
Reshaping it, as in numpy.array(value).reshape(1), works fine, and numpy.array(value).reshape(1).squeeze() reverses this, again resulting in a shapeless array.
What is the rationale behind this behavior? Which use-cases exist for this behaviour?
When you create a zero-dimensional array like np.array(3), you get an object that behaves as an array in 99.99% of situations. You can inspect the basic properties:
>>> x = np.array(3)
>>> x
array(3)
>>> x.ndim
0
>>> x.shape
()
>>> x[None]
array([3])
>>> type(x)
numpy.ndarray
>>> x.dtype
dtype('int32')
So far so good. The logic behind this is simple: you can process any array-like object the same way, regardless of whether it is a number, a list or an array, just by wrapping it in a call to np.array.
One thing to keep in mind is that when you index an array, the index tuple must have ndim or fewer elements. So you can't do:
>>> x[0]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IndexError: too many indices for array
Instead, you have to use a zero-sized tuple (since x[] is invalid syntax):
>>> x[()]
3
You can also use the array as a scalar instead:
>>> y = x + 3
>>> y
6
>>> type(y)
numpy.int32
Adding two scalars produces a scalar instance of the dtype, not another array. That being said, you can use y from this example in exactly the same way you would x, 99.99% of the time, since NumPy scalars support most of the ndarray interface. It does not matter that 3 is a Python int, since np.add will wrap it in an array regardless. y = x + x would yield identical results.
One difference between x and y in these examples is that x is not officially considered to be a scalar:
>>> np.isscalar(x)
False
>>> np.isscalar(y)
True
The indexing issue can potentially throw a monkey wrench into your plans to index any array-like object. You can easily get around it by supplying ndmin=1 as an argument to the constructor, or by using a reshape:
>>> x1 = np.array(3, ndmin=1)
>>> x1
array([3])
>>> x2 = np.array(3).reshape(-1)
>>> x2
array([3])
I generally recommend the former method, as it requires no prior knowledge of the dimensionality of the input.
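The reshape/squeeze round trip from the question, together with ndmin=1, in one short sketch:

```python
import numpy as np

x = np.array(3)
print(x.shape)             # ()
print(x[()])               # 3 -- a zero-length index tuple extracts the value

x1 = np.array(3, ndmin=1)  # always at least 1-D
print(x1.shape)            # (1,)
print(x1.squeeze().shape)  # () -- squeeze goes back to 0-D
```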
Further reading:
Why are 0d arrays in Numpy not considered scalar?
I know that a numpy array has an attribute called shape that returns (No. of rows, No. of columns); shape[0] gives you the number of rows and shape[1] gives you the number of columns.
a = numpy.array([[1,2,3,4], [2,3,4,5]])
a.shape
>> (2, 4)
a.shape[0]
>> 2
a.shape[1]
>> 4
However, if my array has only one row, then shape returns (No. of columns,) and shape[1] will be out of range. For example:
a = numpy.array([1,2,3,4])
a.shape
>> (4,)
a.shape[0]
>> 4  # this is the number of columns
a.shape[1]
>> Error: index out of range
Now how do I get the number of rows of an numpy array if the array may have only one row?
Thank you
The concept of rows and columns applies when you have a 2D array. However, the array numpy.array([1,2,3,4]) is a 1D array and so has only one dimension; therefore shape rightly returns a single-element tuple.
For a 2D version of the same array, consider the following instead:
>>> a = numpy.array([[1,2,3,4]]) # notice the extra square braces
>>> a.shape
(1, 4)
Rather than converting this to a 2D array, which may not be an option every time, one could either check the len() of the tuple returned by shape or just catch the IndexError:
import numpy
a = numpy.array([1,2,3,4])
print(a.shape)
# (4,)
print(a.shape[0])
try:
    print(a.shape[1])
except IndexError:
    print("only 1 column")
Or you could just try and assign this to a variable for later use (or return or what have you) if you know you will only have 1 or 2 dimension shapes:
try:
    shape = (a.shape[0], a.shape[1])
except IndexError:
    shape = (1, a.shape[0])
print(shape)
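Another option, not mentioned above, is np.atleast_2d, which promotes a 1-D array to a single row; rows_cols is a hypothetical helper name:

```python
import numpy as np

def rows_cols(a):
    """Return (rows, cols), treating a 1-D array as a single row."""
    return np.atleast_2d(a).shape

print(rows_cols(np.array([1, 2, 3, 4])))      # (1, 4)
print(rows_cols(np.array([[1, 2, 3, 4],
                          [2, 3, 4, 5]])))    # (2, 4)
```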
I am not sure how to put this question into a title, but I will show an example of what I need help with in TensorFlow.
For an example:
matrix_1 shape = [4,2]
matrix_2 shape = [4,1]
matrix_1 * matrix_2
[[1,2],
[3,4],
[5,6],
[7,8]]
*
[[0.1],
[0.2],
[0.3],
[0.4]]
= [[0.1,0.2],
[0.6,0.8],
[1.5,1.8],
[2.8,3.2]]
Is there any algorithm to achieve this?
Thank you
This is the error that I am getting from the simplified problem example above:
ValueError: Dimensions must be equal, but are 784 and 100 for 'mul_13' (op: 'Mul') with input shapes: [100,784], [100]
The standard tf.multiply(matrix_1, matrix_2) operation (or the shorthand syntax matrix_1 * matrix_2) will perform exactly the computation that you want on matrix_1 and matrix_2.
However, it looks like the error message you are seeing is because matrix_2 has shape [100], whereas it must be [100, 1] to get the elementwise broadcasting behavior. Use tf.reshape(matrix_2, [100, 1]) or tf.expand_dims(matrix_2, 1) to convert it to the correct shape.
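The fix can be sketched with NumPy, which follows the same broadcasting rule as TensorFlow here (m1 and m2 are illustrative names; the values are from the example above):

```python
import numpy as np

m1 = np.array([[1, 2], [3, 4], [5, 6], [7, 8]], dtype=float)  # shape (4, 2)
m2 = np.array([0.1, 0.2, 0.3, 0.4])                           # shape (4,)

# m1 * m2 would fail: the trailing dimensions 2 and 4 do not match.
# Reshaping m2 to (4, 1) broadcasts it across each row instead:
out = m1 * m2.reshape(4, 1)   # or equivalently m2[:, None]
print(out)
```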