How do I perform high-precision calculations in numpy? By high precision I mean 100 significant decimal digits.
Numpy doesn't support arbitrary-precision floating point. You'll want to use decimal from the standard library, or a third-party library like mpmath. The decimal module is implemented in C (via libmpdec) on modern CPythons, and mpmath can use gmpy2 as a backend, so both can be reasonably fast.
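For example, here is a minimal sketch of both options; the precision knobs (getcontext().prec for decimal, mp.dps for mpmath) are each library's standard setting:

    from decimal import Decimal, getcontext
    getcontext().prec = 100          # 100 significant decimal digits
    print(Decimal(1) / Decimal(7))   # 0.142857... carried out to 100 digits

    from mpmath import mp, mpf, sqrt
    mp.dps = 100                     # decimal places of working precision
    print(sqrt(mpf(2)))              # sqrt(2) to 100 digits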
Is there a standard text representation for floating-point numbers that is supported by the most popular languages?
What is the standard for representing infinities and NaNs?
There isn't a general consensus, unfortunately.
However, there seems to be some convergence on hexadecimal notation for floats. See pp. 57-58 of http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf (the C99 draft standard's description of hexadecimal floating constants).
The advantage of this notation is that you can precisely represent the value of the float as represented by the machine without worrying about any loss of precision. See this page for examples: https://www.exploringbinary.com/hexadecimal-floating-point-constants/
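Several languages expose this notation directly; in Python, for instance, float.hex() and float.fromhex() round-trip a double exactly:

    x = 0.1
    s = x.hex()                   # '0x1.999999999999ap-4'
    print(float.fromhex(s) == x)  # True -- no precision lost in the text form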
Note that NaN and infinity values are not supported by hexadecimal floats. There seems to be no general consensus on how to write these; most languages don't even allow writing them as literal constants, so you have to resort to expressions such as 0/0 or 1/0 instead.
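Languages that do let you construct them usually go through strings or library constants rather than literals; Python is one example:

    import math
    print(float("inf"), float("-inf"), float("nan"))  # built from strings, not literals
    print(math.isnan(float("nan")))                   # True; NaN != NaN, so test with isnan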
Since you tagged this question with serialization, I'd recommend simply serializing the bit pattern of the float value as hex. That costs you 8 characters for single precision and 16 characters for double precision (64 and 128 bits on the wire, assuming 8 bits per character). Perhaps not the most compact encoding, but it guarantees you can encode every possible value, including infinities and NaNs, and transmit it exactly.
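As a sketch of the bit-pattern approach in Python, using the standard struct module (">d" is a big-endian double; use ">f" for single precision):

    import struct

    def encode(x):
        return struct.pack(">d", x).hex()    # 16 hex characters for a double

    def decode(s):
        (x,) = struct.unpack(">d", bytes.fromhex(s))
        return x

    s = encode(0.1)
    print(s)                 # '3fb999999999999a'
    print(decode(s) == 0.1)  # True -- bit-exact round-trip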
Do modern GPUs optimize multiplication by powers of 2 into a bit shift? For example, suppose I do the following in a shader:

    float t = 0.0;
    t *= 16.0;
    t *= 17.0;
Is it possible the first multiplication will run faster than the second?
Floating-point multiplication cannot be done by a bit shift. However, in theory, floating-point multiplication by a power-of-2 constant can be optimized: a floating-point value is normally stored in the form S * M * 2^E, where S is the sign, M is the mantissa, and E is the exponent. Multiplying by a power-of-2 constant can therefore be done by adding to (or subtracting from) the exponent field alone, without modifying the other parts. But in practice, I would bet that on GPUs a generic multiply instruction is always used.
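You can see the exponent-only adjustment exposed in software, e.g. Python's math.ldexp, which computes x * 2**n exactly and is typically implemented by adjusting the exponent field:

    import math
    x = 3.25
    print(math.ldexp(x, 4))            # 52.0 == 3.25 * 2**4
    print(x * 16 == math.ldexp(x, 4))  # True -- same result as a plain multiply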
I had an interesting observation regarding power-of-2 constants while studying the disassembly output of PVRShaderEditor (PowerVR GPUs). I noticed that a certain range of power-of-2 constants ([2^(-16), 2^10] in my case) uses a special notation, e.g. C65, implying they are predefined. Arbitrary constants such as 3.0 or 2.3, by contrast, use shared-register notation (e.g. SH12), which implies they are stored as uniforms and probably incur some setup cost. Thus power-of-2 constants may yield some optimization benefit, at least on some hardware.
When I decrease the value of a coefficient in my code, something stops working. Could a division by zero be happening without an error message? Can this be solved by increasing the number of significant digits?
How can I increase the number of significant digits in numpy? Thank you.
Numpy does not support arbitrary precision (see the NumPy documentation on data types for the scalar types it does provide). Note also that numpy floating-point division by zero produces inf or nan with a RuntimeWarning rather than raising an error, which may be what you're seeing.
Consider using the fractions module or another library with arbitrary precision, as sketched below.
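A minimal example of exact arithmetic with the standard fractions module; no rounding happens at any step, so a coefficient can shrink arbitrarily without the result collapsing to zero:

    from fractions import Fraction
    a = Fraction(1, 10**40)  # a tiny coefficient, represented exactly
    b = Fraction(3, 7)
    print(a * b)             # 3/70000000000000000000000000000000000000000, still exact
    print(float(a * b))      # converted (and rounded) only at the very end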
I have a database that is storing amounts and being displayed in a gridview. I have an amount that is input as 3,594,879.59 and when I look in the gridview I am getting 3,594,880.00.
The SQL money type is the default; nothing was done in SQL when creating the table to customize it. In LINQ, I am casting the amount to a float.
What is causing this to happen? It only happens on big numbers (e.g. I put 1.5 in the db and 1.5 shows in the gridview).
Cast the SQL money type to the CLR type decimal. Decimal is a floating-point numeric type that uses a base-10 internal representation, so it can represent any decimal number of up to 28-29 significant digits without approximation.
It's slower than float, and you're trading range for precision, but for anything involving money, use decimal to avoid approximation errors.
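The answer above is about .NET's decimal, but the same base-10-versus-base-2 point is easy to demonstrate with Python's decimal module:

    from decimal import Decimal
    print(0.1 + 0.2 == 0.3)                                    # False -- binary float artifacts
    print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))   # True  -- exact base-10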
EDIT: As for "why is this happening" - two reasons. Firstly, floating-point numbers use a base-2 internal representation, in which it is impossible to represent some decimal fractions exactly. Secondly, the reason floating-point numbers are called floating-point is that instead of using a fixed precision for the integer part and a fixed precision for the fractional part, they offer a continuous trade-off between magnitude and precision. Numbers where the integral part is relatively small - like 1.5 - allow the majority of the internal representation to be assigned to the fractional part, and so provide much greater accuracy. As the magnitude of the integral part increases, the bits that were previously used for precision are now needed to store the larger integer value and so the accuracy of the fractional part is compromised.
Very, very crudely, it's like having ten digits and you can put the decimal point wherever you like, so for small values, you can represent very accurate fractions:
1.0000000123
but for larger values, you don't have nearly so much fractional precision available:
1234567890.2
For details of how this actually works, check out the IEEE 754 standard.
If the destination is a standard 32-bit float, then you are getting exactly what you should. Try keeping it as money, or change it to a scaled integer or a double-precision (64-bit) floating point.
A 32-bit float has six to seven significant figures of precision. 64-bit floats have just under 16 digits of precision.
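You can reproduce the loss by round-tripping the value through a 32-bit float, e.g. in Python with the struct module; near 3.6 million, adjacent float32 values are 0.25 apart:

    import struct
    x = 3594879.59
    (f32,) = struct.unpack("f", struct.pack("f", x))  # force 32-bit precision
    print(f32)  # 3594879.5 -- the cents are lost; display rounding can then show 3594880.00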
How can I do accurate decimal number arithmetic since using floats is not reliable?
I still want to return the answer to a textField.
You can either use integers (e.g. counting cents instead of dollars), or use NSDecimalNumber.
Store your number multiplied by some power of ten of your choice, chosen by the amount of precision you need to the right of the decimal point; call these scaled numbers. Convert entered values that need this precision into scaled numbers. Addition is easy: just add. Multiplication is slightly harder: multiply the two scaled numbers, then divide by the scale factor. Division is the reverse: multiply the dividend by the scale factor first, then divide (dividing first would discard the fractional part in integer arithmetic). The only inconvenience is that you'll have to write your own numeric input/output conversion routines, as in the sketch below.
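Here is a minimal sketch of the scheme in Python, with a scale factor of 100 (two decimal places); the helper names are illustrative, and negative amounts and overflow checks are left out:

    SCALE = 100  # 10**2 -> two digits to the right of the decimal point

    def to_scaled(s):
        # Parse a non-negative decimal string like "3.59" into a scaled int (359).
        whole, _, frac = s.partition(".")
        frac = (frac + "00")[:2]          # pad/truncate to exactly two digits
        return int(whole or "0") * SCALE + int(frac)

    def mul(a, b):
        return (a * b) // SCALE           # multiply, then rescale back down

    def div(a, b):
        return (a * SCALE) // b           # pre-scale up, then divide

    def to_str(a):
        return f"{a // SCALE}.{a % SCALE:02d}"

    total = to_scaled("1.10") + to_scaled("2.05")  # addition is plain int addition
    print(to_str(total))                                       # 3.15
    print(to_str(mul(to_scaled("1.50"), to_scaled("2.00"))))   # 3.00
    print(to_str(div(to_scaled("3.00"), to_scaled("2.00"))))   # 1.50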