Procedure for arithmetic (division) in Tcl - scripting

I need to divide two numbers (which can be floating point) in Tcl and check whether one is an exact multiple of the other.
!($x % $y) doesn't work because the % operator expects integer operands.

Many floating point numbers used on computers are just an approximation of the specified value. So expecting to be able to check if one value is an exact multiple of another value will likely lead to disappointment.
For example: expr {fmod(1, 0.1)} => 0.09999999999999995 because 0.1 cannot be represented exactly in binary floating point format.
I'm afraid you will have to reconsider your requirements.
See also https://en.wikipedia.org/wiki/Floating-point_arithmetic
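If a check within a tolerance is good enough, you can compare against the nearest integer multiple instead of testing for an exact remainder of zero. A minimal sketch in Python (for illustration; the same arithmetic translates directly to Tcl's expr with round() and abs()). The tolerance value is an assumption you would tune to your data:

import math

def is_near_multiple(x, y, rel_tol=1e-9):
    # Instead of requiring fmod(x, y) == 0 exactly, find the nearest
    # integer multiple of y and ask whether x is close to it.
    if y == 0:
        return x == 0
    n = round(x / y)                                   # nearest candidate multiple
    return math.isclose(x, n * y, rel_tol=rel_tol, abs_tol=0.0)

print(is_near_multiple(1.0, 0.1))    # True, even though fmod(1, 0.1) != 0
print(is_near_multiple(1.05, 0.1))   # False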

Related

Round SQL String Data to correct decimal place, then return string data without floating point errors

We are trying to implement a reporting system using software that queries our SQL database. Due to a variety of circumstances, we have a need to round data within the SQL queries. Our goal is to avoid floating point errors, unwanted trailing zeros, and complexity of nested functions (if possible).
The incoming data is always type nvarchar(...) and needs to remain in a string format, which is causing problems for us. Here is an example of what I mean (tested using w3schools.com):
SELECT
STR(235.415, 10, 2) AS StringValue1,
STR('235.415', 10, 2) AS StringValue2,
STR(ROUND(235.415, 2),10,2) AS RoundValue1,
STR(ROUND('235.415', 2),10,2) AS RoundValue2,
STR(CAST('235.415' As NUMERIC(8,2)),10,2) As CastValue1
And, the result:
I know that the issue is a conversion to a floating point data type when handling strings. I think the last option, i.e. casting to numeric, is the answer to my issue. However, I can't tell whether this output is correct because the CAST guarantees there will not be an error, or because I got lucky in this specific instance.
Is there any type of SQL round function (or combination of functions) that takes string input, outputs string data, and doesn't involve floating point arithmetic? -- Thanks in advance!
NUMERIC/DECIMAL and MONEY don't use floating point arithmetic. They are in fact integers with a fixed decimal point.
Be aware that if you have large sums or do many calculations with these values, your rounding error can get pretty big, pretty fast. So it is advisable to take a moment to think about the precision with which you store each value and about when you want to round.
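Not T-SQL, but as an illustration of what "integers with a fixed decimal point" means in practice, here is a sketch using Python's decimal module: the string is parsed straight into a scaled integer, so rounding it and printing it back as a string never touches binary floating point. (ROUND_HALF_UP is an assumption here; pick whichever rounding mode matches your reporting rules.)

from decimal import Decimal, ROUND_HALF_UP

value = Decimal('235.415')                       # parsed digit by digit, no float involved
rounded = value.quantize(Decimal('0.01'), rounding=ROUND_HALF_UP)
print(str(rounded))                              # '235.42'
print(value.as_tuple())                          # DecimalTuple(sign=0, digits=(2, 3, 5, 4, 1, 5), exponent=-3)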

How to find the simplest human-readable float string which would yield the same bytes when converted back to float?

For most numbers, we know there will be some precision error with any floating point value. For a 32-bit float, that works out to be roughly 6 significant digits which will be accurate before you can expect to start seeing incorrect values.
I'm trying to store a human-readable value which can be read back in to recreate a bit-accurate copy of the serialized value.
For example, the value 555.5555 is stored as 555.55548095703125; but when I serialize 555.55548095703125, I could theoretically serialize it as anything in the range (555.5554504395, 555.555511475) (exclusive) and still get the same byte pattern. (Actually, probably that's not the exact range, I just don't know that there's value in calculating it more accurately at the moment.)
What I'd like is to find the most human-readable string representation for the value -- which I imagine would be the fewest digits -- which will be deserialized as the same IEEE float.
This is exactly a problem which was initially solved in 1990 with an algorithm the creators called "Dragon": https://dl.acm.org/citation.cfm?id=93559
There is a more modern technique from 2018 which is notably faster, called "Ryu" (Japanese for "dragon"): https://dl.acm.org/citation.cfm?id=3192369
The GitHub for the library is here: https://github.com/ulfjack/ryu
According to their readme:
Ryu generates the shortest decimal representation of a floating point
number that maintains round-trip safety. That is, a correct parser can
recover the exact original number. For example, consider the binary
64-bit floating point number 00111110100110011001100110011010. The
stored value is exactly 0.300000011920928955078125. However, this
floating point number is also the closest number to the decimal number
0.3, so that is what Ryu outputs.
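For doubles, Python's built-in repr already produces the shortest round-tripping string, so you can see the effect directly; for a 32-bit float you can brute-force the same idea with the standard struct module. A rough sketch (the helper names are mine, and the digit search is a simple stand-in for what Dragon/Ryu do efficiently):

import struct
from decimal import Decimal

def float32_bits(x):
    # Bit pattern x gets when stored as an IEEE 754 binary32.
    return struct.unpack('>I', struct.pack('>f', x))[0]

def shortest_float32_repr(x):
    # Shortest decimal string (fewest significant digits) that parses back
    # to the same binary32 bit pattern; 9 digits always suffice for binary32.
    target = float32_bits(x)
    for digits in range(1, 10):
        candidate = '%.*g' % (digits, x)
        if float32_bits(float(candidate)) == target:
            return candidate
    return repr(x)

stored = struct.unpack('>f', struct.pack('>f', 555.5555))[0]
print(Decimal(stored))                 # 555.55548095703125 (the exact stored value)
print(shortest_float32_repr(stored))   # 555.5555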

What is a good rule-of-thumb floating point comparison method selector?

I'm testing some bits of code, a number of which involve computation using floating-point values - often very large numbers of them. I have some generic (C++-templated, but it doesn't really matter for the sake of this question) code which compares my outputs, be they scalars or arrays, against their expected values.
I'm faced with the problem of choosing a precision threshold, at least for the two C/C++ floating-point types float and double, for the various functions I'm testing. As is well known, there is no one-size-fits-all approach to comparing floating-point values, nor a single precision threshold that follows solely from the data type: relative vs. absolute error, numerous operations which may greatly magnify floating-point rounding errors, computations which are supposed to arrive at 0 so you can't really normalize by the expected value, etc.
What is a generally-reasonable approach/algorithm/rule-of-thumb for choosing a comparison method (and equality thresholds) for floating point values?
I like the approach used in googletest, e.g. EXPECT_DOUBLE_EQ(a,b) and EXPECT_FLOAT_EQ(a,b): the numbers are approximately equal if they are within 4 units in the last place (4 ULPs). To do this, you
convert the sign-magnitude bit pattern to an offset (biased) representation
subtract the two results as though they were integers
check that the difference is <= 4.
This automatically scales for magnitude and relaxes to absolute near zero.
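Roughly what that looks like outside of googletest, sketched in Python with the struct module (64-bit doubles here; NaN and infinity handling is omitted, and the bias trick mirrors my understanding of googletest's internal conversion):

import struct

SIGN_BIT = 1 << 63
MASK_64 = (1 << 64) - 1

def biased_bits(x):
    # Reinterpret the double's sign-magnitude bit pattern as an unsigned
    # "biased" integer, so adjacent representable doubles differ by exactly 1.
    bits = struct.unpack('>Q', struct.pack('>d', x))[0]
    if bits & SIGN_BIT:                     # negative: two's-complement negate
        return (~bits + 1) & MASK_64
    return bits | SIGN_BIT                  # positive: shift above all negatives

def almost_equal(a, b, max_ulps=4):
    # Approximately equal if the values are within max_ulps representable
    # doubles of each other (scales with magnitude automatically).
    return abs(biased_bits(a) - biased_bits(b)) <= max_ulps

print(almost_equal(0.1 + 0.2, 0.3))    # True  (they differ by 1 ULP)
print(almost_equal(1.0, 1.0 + 1e-7))   # False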
There is no generally-reasonable approach :-(
One important property of numbers is that the set of numbers can be divided into equivalence classes where all members of the same equivalence class are "equal" in some sense and all members of two different equivalence classes are "not equal". That property is essential for sorting algorithms and hashing.
If you take double with 53 bit mantissa, and just replace the last bits of the mantissa with zeroes, then you still have equivalence classes, and sorting / hashing will work just fine. On the other hand, two numbers can be arbitrarily close together and still compare equal with this method.
The other method is having an algorithm that decides if two numbers are "possibly equal". You can base everything else on this. For example, a is "definitely greater" than b if a > b and a is not "possibly equal" to b. a is "possibly greater" than b if a > b or a is "possibly equal" to b.
Sorting is problematic. You could have a "possibly equal" to b, and b "possibly equal" to c, but a is not "possibly equal" to c.
If you use double with a 53-bit mantissa, then it is unlikely that two unrelated numbers are equal within even 45 bits. So you could quite reasonably check whether the absolute value of the difference is less than the absolute value of the larger number divided by 2^45. Your mileage will vary considerably. What matters is whether you think 0 should compare equal to very small numbers or not.
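As a sketch of that last suggestion (Python for illustration; 45 bits is a ballpark figure, not a universal constant):

def possibly_equal(a, b, agree_bits=45):
    # "Possibly equal" if a and b agree to roughly agree_bits bits, i.e. the
    # difference is small relative to the larger magnitude. Note that under
    # this rule 0 is never "possibly equal" to any nonzero number.
    return abs(a - b) <= max(abs(a), abs(b)) / 2**agree_bits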

Convert negative decimal to binary in T-SQL

I have tried to find information on how to do this, but what I found did not help.
With T-SQL, I want to convert a negative decimal value to binary and convert it back.
Sample value: -9223372036854775543
If I convert this value to binary with a calculator, the result is:
1000000000000000000000000000000000000000000000000000000100001001
and converting it back to decimal works fine.
How can I do a conversion like this with a T-SQL (SQL Server 2008) script or function?
I have spent a long time trying to find information on how to do this.
Anyone who knows about this, please help.
There is no built-in functionality.
For INT and BIGINT you can use CONVERT(VARCHAR(100), CAST(3 AS VARBINARY(100)), 2) to get the hex representation as a string. Then you can do a simple search-and-replace, as every hex digit represents exactly 4 binary digits. However, with values outside of the BIGINT range there is no standard as to how they are represented internally. You might get the right result or not, and that behavior might even change between versions.
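The hex-to-binary step is purely mechanical, since each hex digit expands to a fixed 4-bit group. A small sketch (in Python rather than T-SQL, just to show the digit replacement; 0x8000000000000109 is the 64-bit two's-complement pattern of the sample value):

# Map each hex digit to its fixed 4-bit expansion.
HEX_TO_BITS = {d: format(int(d, 16), '04b') for d in '0123456789ABCDEF'}

def hex_to_binary(hex_string):
    return ''.join(HEX_TO_BITS[d] for d in hex_string.upper())

print(hex_to_binary('8000000000000109'))
# 1000000000000000000000000000000000000000000000000000000100001001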
There is also no standard as to how negative numbers are represented. Most implementations of integers use the two's-complement representation. In that representation the topmost bit indicates the sign of the number. How many bits you have is a matter of convention and fully dependent on your environment.
In mathematics -3 would be -11 in binary and not 11111101.
To solve your problem you can either use a CLR function or you go through your number the old fashioned way:
Is it odd? -> output a 1
Is it even? -> output a 0
integer divide by 2
repeat until the value is 0
This will give you the digits in opposite order, so you have to flip the result. To get the two's-complement representation of a negative number n, calculate -1-n (a non-negative value), convert the result to binary using the above algorithm but with inverted digits (0 instead of 1 and vice versa), and after flipping the digits into the right order prepend enough 1s to fill your "box".
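Here is that algorithm sketched in Python for illustration (in T-SQL you would express the same loop with a WHILE over the value); the 64-bit width is an assumption matching the BIGINT example above:

def to_twos_complement_binary(n, bits=64):
    # Repeated odd/even test and integer division by 2; for negative n,
    # convert -1-n instead and invert every digit, then pad with 1s.
    if n >= 0:
        m, invert = n, False
    else:
        m, invert = -1 - n, True
    digits = []
    while m > 0:
        bit = m % 2                      # odd -> 1, even -> 0
        digits.append(str(1 - bit if invert else bit))
        m //= 2
    digits.reverse()                     # digits came out least-significant first
    pad = '1' if invert else '0'
    return pad * (bits - len(digits)) + ''.join(digits)

def from_twos_complement_binary(s):
    # Convert back: reinterpret the string as a signed two's-complement value.
    value = int(s, 2)
    if s[0] == '1':
        value -= 1 << len(s)
    return value

print(to_twos_complement_binary(-9223372036854775543))
# 1000000000000000000000000000000000000000000000000000000100001001
print(from_twos_complement_binary('1000000000000000000000000000000000000000000000000000000100001001'))
# -9223372036854775543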

Why do I see -0.000000000000001 in an Access query?

I have an SQL query:
SELECT Sum(Field1), Sum(Field2), Sum(Field1)+Sum(Field2)
FROM Table
GROUP BY DateField
HAVING Sum(Field1)+Sum(Field2)<>0;
The problem is that sometimes the sum of Field1 and Field2 is something like 9.5 - 10.3, and the result is -0.800000000000001. Could anybody explain why this happens and how to solve it?
Why this happens
The float and double types store numbers in base 2, not in base 10. Sometimes, a number can be exactly represented in a finite number of bits.
9.5 → 1001.1
And sometimes it can't.
10.3 → 1010.0 1001 1001 1001 1001 1001 1001 1001 1001...
In the latter case, the number will get rounded to the closest value that can be represented as a double:
1010.0100110011001100110011001100110011001100110011010 base 2
= 10.300000000000000710542735760100185871124267578125 base 10
When the subtraction is done in binary, you get:
-0.11001100110011001100110011001100110011001100110100000
= -0.800000000000000710542735760100185871124267578125
Output routines will usually hide most of the "noise" digits.
Python 3.1 rounds it to -0.8000000000000007
SQLite 3.6 rounds it to -0.800000000000001.
printf %g rounds it to -0.8.
Note that, even on systems that display the value as -0.8, it's not the same as the best double approximation of -0.8, which is:
- 0.11001100110011001100110011001100110011001100110011010
= -0.8000000000000000444089209850062616169452667236328125
So, in any programming language using double, the expression 9.5 - 10.3 == -0.8 will be false.
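The same arithmetic reproduced in Python, just to make the values above concrete (Decimal(x) shows the exact value of the stored double):

from decimal import Decimal

print(9.5 - 10.3)              # -0.8000000000000007
print(9.5 - 10.3 == -0.8)      # False
print(Decimal(9.5 - 10.3))     # -0.800000000000000710542735760100185871124267578125
print(Decimal(-0.8))           # -0.8000000000000000444089209850062616169452667236328125
print(round(9.5 - 10.3, 4))    # -0.8  (rounding for display hides the noise)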
The decimal non-solution
With questions like these, the most common answer is "use decimal arithmetic". This does indeed get better output in this particular example. Using Python's decimal.Decimal class:
>>> Decimal('9.5') - Decimal('10.3')
Decimal('-0.8')
However, you'll still have to deal with
>>> Decimal(1) / 3 * 3
Decimal('0.9999999999999999999999999999')
>>> Decimal(2).sqrt() ** 2
Decimal('1.999999999999999999999999999')
These may be more familiar rounding errors than the ones binary numbers have, but that doesn't make them less important.
In fact, binary fractions are more accurate than decimal fractions with the same number of bits, because of a combination of:
The hidden bit unique to base 2, and
The suboptimal radix economy of decimal.
It's also much faster (on PCs) because it has dedicated hardware.
There is nothing special about base ten. It's just an arbitrary choice based on the number of fingers we have.
It would be just as accurate to say that a newborn baby weighs 0x7.5 lb (in more familiar terms, 7 lb 5 oz) as to say that it weighs 7.3 lb. (Yes, there's a 0.2 oz difference between the two, but it's within tolerance.) In general, decimal provides no advantage in representing physical measurements.
Money is different
Unlike physical quantities which are measured to a certain level of precision, money is counted and thus an exact quantity. The quirk is that it's counted in multiples of 0.01 instead of multiples of 1 like most other discrete quantities.
If your "10.3" really means $10.30, then you should use a decimal number type to represent the value exactly.
(Unless you're working with historical stock prices from the days when they were in 1/16ths of a dollar, in which case binary is adequate anyway ;-) )
Otherwise, it's just a display issue.
You got an answer correct to 15 significant digits. That's correct for all practical purposes. If you just want to hide the "noise", use the SQL ROUND function.
I'm certain it is because the float data type (aka Double or Single in MS Access) is inexact. It is not like decimal which is a simple value scaled by a power of 10. If I'm remembering correctly, float values can have different denominators which means that they don't always convert back to base 10 exactly.
The cure is to change Field1 and Field2 from float/single/double to decimal or currency. If you give examples of the smallest and largest values you need to store, including the smallest and largest fractions needed such as 0.0001 or 0.9999, we can possibly advise you better.
Be aware that versions of Access before 2007 can have problems with ORDER BY on decimal values. Please read the comments on this post for some more perspective on this. In many cases, this would not be an issue for people, but in other cases it might be.
In general, float should be used for values that can end up being extremely small or large (smaller or larger than a decimal can hold). You need to understand that float trades some precision for a much larger range. That is, a decimal will overflow or underflow where a float can just keep on going. But the float only has a limited number of significant digits, whereas a decimal's digits are all significant.
If you can't change the column types, then in the meantime you can work around the problem by rounding your final calculation. Don't round until the very last possible moment.
Update
A criticism has been leveled at my recommendation to use decimal: not the point about unexpected ORDER BY results, but that float is overall more accurate with the same number of bits.
No contest to this fact. However, I think it is more common for people to be working with values that are in fact counted or are expected to be expressed in base ten. I see questions over and over in forums about what's wrong with their floating-point data types, and I don't see these same questions about decimal. That means to me that people should start off with decimal, and when they're ready for the leap to how and when to use float they can study up on it and start using it when they're competent.
In the meantime, while it may be a tad frustrating to have people always recommending decimal when you know it's not as accurate, don't let yourself get divorced from the real world where having more familiar rounding errors at the expense of very slightly reduced accuracy is of value.
Let me point out to my detractors that the example
Decimal(1) / 3 * 3 yielding 1.999999999999999999999999999
is, in what should be familiar words, "correct to 27 significant digits" which is "correct for all practical purposes."
So if we have two ways of doing what is practically speaking the same thing, and both of them can represent numbers very precisely out to a ludicrous number of significant digits, and both require rounding but one of them has markedly more familiar rounding errors than the other, I can't accept that recommending the more familiar one is in any way bad. What is a beginner to make of a system that can perform a - a and not get 0 as an answer? He's going to get confusion, and be stopped in his work while he tries to fathom it. Then he'll go ask for help on a message board, and get told the pat answer "use decimal". Then he'll be just fine for five more years, until he has grown enough to get curious one day and finally studies and really grasps what float is doing and becomes able to use it properly.
That said, in the final analysis I have to say that slamming me for recommending decimal seems just a little bit off in outer space.
Last, I would like to point out that the following statement is not strictly true, since it overgeneralizes:
The float and double types store numbers in base 2, not in base 10.
To be accurate, most modern systems store floating-point data types with a base of 2. But not all! Some use or have used base 10. For all I know, there are systems which use base 3 which is closer to e and thus has a more optimal radix economy than base 2 representations (as if that really mattered to 99.999% of all computer users). Additionally, saying "float and double types" could be a little misleading, since double IS float, but float isn't double. Float is short for floating-point, but Single and Double are float(ing point) subtypes which connote the total precision available. There are also the Single-Extended and Double-Extended floating point data types.
It is probably an effect of floating point number implementations. Sometimes numbers cannot be exactly represented, and sometimes the result of operations is slightly off what we may expect for the same reason.
The fix would be to use a rounding function on the values to cut off the extraneous digits. Like this (I've simply rounded to 4 decimal places, but of course you should use whatever precision is appropriate for your data):
SELECT Sum(Field1), Sum(Field2), Round(Sum(Field1)+Sum(Field2), 4)
FROM Table
GROUP BY DateField
HAVING Round(Sum(Field1)+Sum(Field2), 4)<>0;