Why is the precision of numpy output different each time? - numpy

Run the following program:
import numpy as np

for i in range(10):
    a = np.random.uniform(0, 1)
    print(a)
We have the result:
0.4418517709510906
0.05536715253773261
0.44633855235431785
0.3143041997189251
0.16175184090609163
0.8822875281567105
0.11367473012241913
0.9951703577237277
0.009103257465210124
0.5185580156093157
Why does the number of digits in each output differ? Sometimes a value is printed with 16 decimal places, sometimes with 18. Why does this happen?
Also, how can I control the precision of the output, i.e. print exactly 15 decimal places every time?
Edit: I try to use np.set_printoptions(precision=15)
np.set_printoptions(precision=15)

for i in range(10):
    a = np.random.uniform(0, 1)
    print(a)
But the output is:
0.3908531691561824
0.6363290508517755
0.3484260990246082
0.23792451272035053
0.5776808805593472
0.3631616619602701
0.878754651138258
0.6266540814279749
0.8309347174000745
0.5763464514883537
This still doesn't get the result I want. The result I want is something like below:
0.390853169156182
0.636329050851775
0.348426099024608
0.237924512720350
0.577680880559347
0.363161661960270
0.878754651138258
0.626654081427974
0.830934717400074
0.576346451488353

print(a) prints the shortest decimal string that round-trips to the same float64 value as a, which is why the number of digits varies from line to line.
Example:
import numpy as np

a = 0.392820481778549002
b = 0.392820481778549
# View the raw bits of each float64 to show the two literals are the same value.
a_bits = np.asarray(a).view(np.int64).item()
b_bits = np.asarray(b).view(np.int64).item()
print(f"{a:.18f}", hex(a_bits))
print(f"{b:.18f}", hex(b_bits))
print(a == b)
Result:
0.392820481778549002 0x3fd923f8849c0570
0.392820481778549002 0x3fd923f8849c0570
True
Note that np.set_printoptions only affects how numpy arrays are printed, not plain Python scalars, which is why your edit had no effect. You can use the f"{a:.18f}" syntax to get fixed-width output for scalars; the equivalent for numpy arrays is np.set_printoptions(precision=18, floatmode="fixed").
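For the 15-decimal output the question asks for, a plain format spec is enough; the seeded generator below is just to make the example reproducible:

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded so the run is reproducible
for _ in range(10):
    a = rng.uniform(0, 1)
    print(f"{a:.15f}")  # always exactly 15 digits after the decimal point
```

Note that `:.15f` rounds the last printed digit rather than truncating it.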

Related

Weird numpy matrix values

When I want to calculate the determinant of a matrix using np.linalg.det(mat1), or calculate its inverse, it gives weird output values. For example it gives 1.11022302e-16 instead of 0.
I tried to round the number for the determinant, but I couldn't do the same for the matrix elements.
The computation is done in floating point, which is inexact, so multiplication and division can produce results that are very close to zero but not exactly equal to it.
You can define a delta that decides whether a value is close enough, and compare the absolute distance between the result and the expected value against it.
Maybe like this:
import math
import numpy as np

def clean_det(mat, delta=0.0001):
    res = np.linalg.det(mat)
    if abs(math.floor(res) - res) < delta:
        return math.floor(res)
    if abs(math.ceil(res) - res) < delta:
        return math.ceil(res)
    return res
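The same idea can be vectorized for the matrix elements themselves with np.round and np.where; `snap_to_int` and the 1e-9 tolerance below are illustrative choices, not part of the original answer:

```python
import numpy as np

def snap_to_int(arr, delta=1e-9):
    """Replace entries that are within delta of an integer by that integer."""
    rounded = np.round(arr)
    return np.where(np.abs(arr - rounded) < delta, rounded, arr)

mat = np.array([[2.0, 1.0], [1.0, 1.0]])
inv = np.linalg.inv(mat)   # may carry tiny floating-point noise
print(snap_to_int(inv))    # noise near an integer is snapped to it
```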

Strange roots `using numpy.roots`

Is there something wrong in the evaluation of the polynomial (1 - alpha*z)**9 using numpy? For alpha = 3/sqrt(2), my list of coefficients is given by the array
psi_t0 = [1.0, -19.0919, 162.0, -801.859, 2551.5, -5412.55, 7654.5, -6958.99, 3690.56, -869.874]
According to the numpy documentation, I have to reverse this array (highest power first) in order to compute the zeros, i.e.
psi_t0 = psi_t0[::-1]
Thus giving
a = np.roots(psi_t0)
[0.62765842+0.06979364j 0.62765842-0.06979364j 0.52672941+0.14448097j 0.52672941-0.14448097j 0.42775926+0.13031547j 0.42775926-0.13031547j 0.36690056+0.07504044j 0.36690056-0.07504044j 0.34454214+0.j]
which is completely wrong, since the roots should all equal sqrt(2)/3.
As you take the 9th power you create a very "wide" zero: if you step eps away from the true root and evaluate, you get something of O(eps^9). In view of that, numerical inaccuracies are only to be expected.
>>> np.set_printoptions(4)
>>> print(C)
[-8.6987e+02 3.6906e+03 -6.9590e+03 7.6545e+03 -5.4125e+03 2.5515e+03
-8.0186e+02 1.6200e+02 -1.9092e+01 1.0000e+00]
>>> np.roots(C)
array([0.4881+0.0062j, 0.4881-0.0062j, 0.4801+0.0154j, 0.4801-0.0154j,
0.4681+0.0172j, 0.4681-0.0172j, 0.458 +0.011j , 0.458 -0.011j ,
0.4541+0.j ])
>>> np.polyval(C,_)
array([1.4622e-13+6.6475e-15j, 1.4622e-13-6.6475e-15j,
1.2612e-13+1.5363e-14j, 1.2612e-13-1.5363e-14j,
1.0270e-13+1.3600e-14j, 1.0270e-13-1.3600e-14j,
1.1346e-13+9.7179e-15j, 1.1346e-13-9.7179e-15j,
1.0936e-13+0.0000e+00j])
As you can see the roots numpy returns are "good" in that the polynomial evaluates to something pretty close to zero at these points.
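The ill-conditioning is easy to reproduce with a monic polynomial built from an exact 9-fold root. The coefficients of (x - 0.5)**9 are even exactly representable in float64, so all the error below comes from the root finder itself:

```python
import numpy as np

coeffs = np.poly([0.5] * 9)   # coefficients of (x - 0.5)**9, highest power first
r = np.roots(coeffs)

# The computed roots cluster around 0.5 but are perturbed on the order of
# eps**(1/9) ~ 1e-2, far more than machine epsilon: that is how
# ill-conditioned a 9-fold root is.
print(np.max(np.abs(r - 0.5)))

# Yet the polynomial still evaluates to ~0 at every computed root.
print(np.max(np.abs(np.polyval(coeffs, r))))
```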

Numpy returning False even though both arrays are the same?

From my understanding of numpy, the np.equal(x, prod) command compares the arrays element by element and returns True for each pair that is equal. But every time I execute the command, it returns False for the first comparison. On the other hand, if I copy-paste the two arrays into the command, it returns True for both, as you can see in the screenshot. So why is there a difference between the two?
You generally cannot compare floating-point numbers for exact equality, because they are only approximations. Two hard-coded literals compare equal, since both are rounded to a double in exactly the same way. But once you apply mathematical operations to them, rounding errors accumulate and an exact equality check is no longer reliable.
For example, this
a = 0
for i in range(10):
    a += 1/10
print(a)
print(a == 1)
will give you 0.9999999999999999 and False, even though mathematically (1/10) * 10 = 1.
To compare floating-point values, you need to compare the two values against a small delta value. In other words, check if they're just a really small value apart. For example
a = 0
for i in range(10):
    a += 1/10

delta = 0.00000001
print(a)
print(abs(a - 1) < delta)
will give you True.
For numpy, you can use numpy.isclose to get a mask or numpy.allclose if you only want a True or False value.
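Applied to arrays, a minimal sketch (x and prod here are made-up stand-ins for the question's variables):

```python
import numpy as np

x = np.array([1.0, 0.3])
prod = np.array([sum(0.1 for _ in range(10)), 0.3])  # first entry is 0.999...9

print(np.equal(x, prod))      # [False  True] -- exact comparison fails
print(np.isclose(x, prod))    # [ True  True] -- elementwise tolerance
print(np.allclose(x, prod))   # True          -- single verdict for the array
```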

tf.round() to a specified precision

tf.round(x) rounds the values of x to integer values.
Is there any way to round to, say, 3 decimal places instead?
You can do it easily like this, as long as you don't risk reaching numbers that are too large:
def my_tf_round(x, decimals=0):
    multiplier = tf.constant(10**decimals, dtype=x.dtype)
    return tf.round(x * multiplier) / multiplier
Note: the value of x * multiplier should not exceed 2^32, so this method should not be used to round very large numbers.
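The same multiply-round-divide trick can be sketched in plain NumPy to see the arithmetic without a TensorFlow session; `round_to` is an illustrative name, and note that np.round (like tf.round) rounds halves to the nearest even integer:

```python
import numpy as np

def round_to(x, decimals=0):
    # Shift the wanted digits left of the decimal point,
    # round to the nearest integer, then shift back.
    multiplier = 10.0 ** decimals
    return np.round(x * multiplier) / multiplier

print(round_to(np.array([0.78969, 1.23456]), 2))  # [0.79 1.23]
```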
gdelab's solution is a good one: multiplying shifts the required decimal digits to the left of the point (0.78969 * 100 gives 78.969), tf.round turns that into 79, and dividing by 100 again gives 0.79. There is another workaround I would like to share with the community:
you can grab the NumPy array from the tensor, apply NumPy's round method, and then convert the result back to a tensor.
import numpy as np
import tensorflow as tf

# Create a tensor
x = tf.random.normal((3, 3), mean=0, stddev=1)
x = tf.cast(x, tf.float64)

# Grab the NumPy array from the tensor, round it with NumPy,
# then convert the result back to a tensor
value = np.round(x.numpy(), 3)
result = tf.convert_to_tensor(value, dtype=tf.float64)

python pandas: how to format big numbers in powers of ten in latex

I'm using pandas to generate some large LaTex tables with big/small numbers:
df = pd.DataFrame(np.array(outfile),columns=['Halo','$r_{v}$','etc'])
df.to_latex("uvFlux_table_{:.1f}.tex".format(z))
where "outfile" is just a table of numbers (3 columns). How can I get the numbers in outfile to be formatted like
$1.5x10^{12}$ & $1.5x10^{12}$ & $1.5x10^{-12}$
-- like you'd see in a scientific publication -- instead of the default
0.15e13 & 0.15e13 & 0.15e-11
?
Defining
def format_tex(float_number):
    exponent = np.floor(np.log10(float_number))
    mantissa = float_number / 10**exponent
    mantissa_format = str(mantissa)[0:3]
    # The backslash must be doubled: in a plain string "\t" is a tab escape.
    return "${0}\\times10^{{{1}}}$".format(mantissa_format, str(int(exponent)))
you can applymap that function on a dataframe (and apply on a series)
df = pd.DataFrame({'col':[12345.1200000,100000000]})
df.applymap(lambda x:format_tex(x))
This already gives TeX output in Jupyter notebooks. Note that escaping may be tricky here. Are there other, faster solutions?
Thanks #Quickbeam2k1 for the answer. I've expanded to handle 0 and negative numbers:
# Define function for string formatting of scientific notation
def exp_tex(float_number):
    """
    Returns a string representation of the scientific
    notation of the given number formatted for use with
    LaTeX or Mathtext.
    """
    if float_number == 0.0:
        return "$0.0$"
    neg = float_number < 0.0
    exponent = np.floor(np.log10(abs(float_number)))
    mantissa = abs(float_number) / 10**exponent
    mantissa_format = str(mantissa)[0:3]
    if neg:
        mantissa_format = "-" + mantissa_format
    return "${0}\\times10^{{{1}}}$".format(mantissa_format, str(int(exponent)))
I think it might be a better idea not to re-invent the wheel here, and use the float formatting tools that python gives us:
def float_exponent_notation(float_number, precision_digits, format_type="g"):
    """
    Returns a string representation of the scientific
    notation of the given number formatted for use with
    LaTeX or Mathtext, with `precision_digits` digits of
    mantissa precision, printing a normal decimal if an
    exponent isn't necessary.
    """
    e_float = "{0:.{1:d}{2}}".format(float_number, precision_digits, format_type)
    if "e" not in e_float:
        return "${}$".format(e_float)
    mantissa, exponent = e_float.split("e")
    cleaned_exponent = exponent.strip("+")
    return "${0} \\times 10^{{{1}}}$".format(mantissa, cleaned_exponent)
This version is safe for zeros, negatives, etc. It rounds properly (which none of the others do!). It also gives you the option of using the "g" format code, which will display a number using the decimal if the decimal point would be within the precision you've set.
Also, you can pass these as formatters to the .to_latex method if you also pass escape=False. I have a formatter wrapper that does type checks and escapes if it hits a string.