Is there a way to format a number to use commas but not limit the number of decimals? - vba

I am using this
Format(Evaluate(strg), "standard")
but it's not exactly what I want.
If no decimal part is needed, I don't want to see ".00" tacked onto the end.
If the decimal part goes out further than 2 digits, I would like to see the full accuracy, not just 2 digits.
I like all the commas for 1000's.

Related

Double in MariaDB

I have created a column with
day_endnav double precision
When I insert the number 58.320856084100 into the database, it is stored as 58.3208560841.
The two zeros at the end are removed.
Is there any way to tell MariaDB to keep what is entered as it is, and not round off or remove the zeros at the end?
The two zeros were not "removed". DOUBLE has 53 significant bits, which is about 16 significant decimal digits. The display of the number probably decided they were irrelevant. What tool displayed them?
Whether you insert 58.320856084100 or 58.32085608410000000000000, you will get the same value stored into DOUBLE.
Trailing zeros (at least after the decimal point) have no mathematical meaning to FLOAT or DOUBLE. If you have some meaning, then I guess you need to store it as a string, or DECIMAL.
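As a quick demonstration of that point outside the database, here is a minimal C sketch (the literals are the ones from the question) showing that both spellings parse to the very same double:

#include <stdio.h>
#include <stdlib.h>

/* Trailing zeros carry no information in a binary double:
   both literals parse to exactly the same bit pattern. */
int main(void) {
    double a = strtod("58.320856084100", NULL);
    double b = strtod("58.3208560841", NULL);
    printf("%s\n", a == b ? "same value" : "different");  /* prints: same value */
    printf("%.17g\n", a);  /* the value a DOUBLE actually holds */
    return 0;
}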
DECIMAL(mm, 12) will store and display 58.320856084100 (if mm >= 14). However, DECIMAL is "fixed-point". That is, DECIMAL(20,12) will always have exactly 12 decimal places, no more, no fewer.
Please state your goal; maybe I have not touched on that point yet.

Convert negative decimal to binary in T-SQL

I have tried to find information on how to do this,
but what I found did not help.
With T-SQL, I want to convert a negative decimal to binary
and convert it back.
Sample value: -9223372036854775543
When I convert this value to binary with a calculator, the result is
1000000000000000000000000000000000000000000000000000000100001001
and converting it back to decimal works fine.
How can I convert like this with a T-SQL (SQL Server 2008) script/function?
I have spent a long time looking for information on how to do this.
Anyone who knows about this, please help.
There is no built-in functionality.
For INT and BIGINT you can use CONVERT(VARCHAR(100), CAST(3 AS VARBINARY(100)), 2) to get the hex representation as a string. Then you can do a simple search-and-replace, as every hex digit represents exactly 4 binary digits. However, with values outside of the BIGINT range there is no standard as to how they are represented internally. You might get the right result or not, and that behavior might even change between versions.
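To illustrate that search-and-replace in a self-contained way, here is a small C sketch of the nibble expansion. The hex string 8000000000000109 is an assumption: it is what the sample value should come back as, given the usual 64-bit two's-complement layout.

#include <stdio.h>

/* Expand a hex string into binary: every hex digit maps to
   exactly 4 binary digits, so a lookup table does all the work. */
int main(void) {
    const char *hex = "8000000000000109";  /* assumed CONVERT output for -9223372036854775543 */
    static const char *nibble[16] = {
        "0000", "0001", "0010", "0011", "0100", "0101", "0110", "0111",
        "1000", "1001", "1010", "1011", "1100", "1101", "1110", "1111"
    };
    for (const char *p = hex; *p; p++) {
        int v = (*p <= '9') ? *p - '0' : *p - 'A' + 10;  /* assumes uppercase hex digits */
        fputs(nibble[v], stdout);
    }
    putchar('\n');  /* prints the 64-digit binary string from the question */
    return 0;
}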
There is also no standard as to how negative numbers are represented. Most implementations of integers use the two's-complement representation. In that representation the topmost bit indicates the sign of the number. How many bits you have is a matter of convention and fully dependent on your environment.
In mathematics, -3 would be -11 in binary, not 11111101.
To solve your problem you can either use a CLR function or go through your number the old-fashioned way:
Is it odd? -> output a 1
Is it even? -> output a 0
integer divide by 2
repeat until the value is 0
This will give you the digits in opposite order, so you have to flip the result. To get the two's-complement representation of a negative number n, calculate -n - 1 (the magnitude minus one), convert the result to binary using the above algorithm but with inverted digits (0 instead of 1 and vice versa), and after flipping the digits into the right order, prepend enough 1s to fill your "box". A C sketch of the whole procedure follows.
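Here is a minimal C sketch of that procedure; it illustrates the algorithm, not the T-SQL itself (a T-SQL version would loop the same way, appending characters to a VARCHAR):

#include <stdio.h>
#include <stdint.h>

/* Convert a signed 64-bit value to its two's-complement binary string
   using only odd/even tests and integer division by 2.
   For negative n we convert -n - 1 and complement every digit. */
static void to_binary(int64_t n, char out[65]) {
    char digits[64];
    int count = 0;
    int negative = (n < 0);
    uint64_t m = negative ? (uint64_t)(-(n + 1)) : (uint64_t)n;
    while (m > 0) {
        int bit = (int)(m % 2);                   /* odd -> 1, even -> 0 */
        digits[count++] = negative ? ('1' - bit)  /* complemented digit */
                                   : ('0' + bit);
        m /= 2;                                   /* integer divide by 2 */
    }
    for (int i = 0; i < 64; i++)                  /* flip the order and fill the "box" */
        out[i] = (i < 64 - count) ? (negative ? '1' : '0')
                                  : digits[63 - i];
    out[64] = '\0';
}

int main(void) {
    char buf[65];
    to_binary(-9223372036854775543LL, buf);
    printf("%s\n", buf);  /* matches the calculator result from the question */
    return 0;
}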

MATLAB dealing with approximation -- singles to doubles

I am pulling financial data into MATLAB from SQL, where it is unfortunately stored as a REAL (which is an approximate data type).
For example, a value got loaded into SQL as "96.194", which is the correct value (it could have any number of decimals, 1-5). I know that in SQL it is stored as something like 96.19400024 because it is an approximation, but SQL Server somehow knows to display it as 96.194.
When I pull it into MATLAB, it gets pulled in as 96.194, which is what I want. Unfortunately, it turns out it's not actually 96.194, as demonstrated:
>>price
price =
96.194
>> price==96.194
ans =
0
>> class(price)
ans =
single
>> double(price)
ans =
96.1940002441406
So my question is, is there a way to convert a single to a double exactly as it appears as a single (i.e. truncate all the decimals which are the approximation)? Note: I cannot just round it because I don't know how many decimals it's supposed to have.
The vpa function lets you specify a number of significant (nonzero) digits that is different from the current digits setting. For example:
vpa(price, num_of_digits_required)
or in your case:
vpa(double(price),7)
(6 or 8 significant digits will yield the same result)
Edit
To use vpa you'll need the Symbolic Math Toolbox, there are alternatives found on the web, such as this FEX file.
Single precision floating point values have only about 7 digits of precision (23 bit fractional component, log10(2^24) ≈ 7.225 decimal digits) so you could round off all but the 7 most significant digits.
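The underlying idea, round-tripping the value through a decimal string with 7 significant digits, can be sketched outside MATLAB as well. A minimal C illustration, using the value from the question:

#include <stdio.h>
#include <stdlib.h>

/* Round a single to its 7 most significant decimal digits by
   printing it with %.7g and parsing the string back as a double. */
int main(void) {
    float price = 96.194f;                 /* actually 96.19400024... as a single */
    char buf[32];
    snprintf(buf, sizeof buf, "%.7g", price);
    double exact = strtod(buf, NULL);
    printf("%s -> %.15g\n", buf, exact);   /* prints: 96.194 -> 96.194 */
    return 0;
}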

Print a number in decimal

Well, it is a low-level question.
Suppose I store a number (of course, the computer stores the number in binary format).
How can I print it in decimal format? It is obvious in a high-level program: just print it and the library does it for you.
But how about in a very low-level situation where I don't have this library?
I can only tell it what 'character' to output. How do I convert the number into decimal characters?
I hope you understand my question. Thank you.
There are two ways of printing decimals - on CPUs with division/remainder instructions (modern CPUs are like that) and on CPUs where division is relatively slow (8-bit CPUs of 20+ years ago).
The first method is simple: int-divide the number by ten, and store the sequence of remainders in an array. Once you have divided the number all the way down to zero, print the remainders starting from the back, adding the ASCII code of zero ('0') to each remainder.
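A minimal C sketch of this first method, with putchar standing in for "output one character":

#include <stdio.h>

/* Print an unsigned value in decimal: collect the remainders
   (the digits, right to left), then emit them in reverse. */
void print_decimal(unsigned int n) {
    char digits[10];                             /* enough for 32-bit values */
    int count = 0;
    do {
        digits[count++] = (char)('0' + n % 10);  /* next remainder */
        n /= 10;
    } while (n > 0);
    while (count > 0)
        putchar(digits[--count]);                /* print back to front */
}

int main(void) {
    print_decimal(1234);                         /* prints 1234 */
    putchar('\n');
    return 0;
}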
The second method relies on a lookup table of powers of ten. You define an array of numbers like this:
int pow10[] = {10000, 1000, 100, 10, 1};
Then you start with the largest power, and see if you can subtract it from the number at hand. If you can, keep subtracting it, and keep the count. Once you cannot subtract it without going negative, print the count plus the ASCII code of zero, and move on to the next smaller power of ten.
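A matching C sketch of this second, division-free method; the table covers values up to 99999, and leading zeros are suppressed:

#include <stdio.h>

/* Division-free printing: repeatedly subtract powers of ten
   and count how many times each one fits. */
void print_decimal(int n) {
    static const int pow10[] = {10000, 1000, 100, 10, 1};
    int started = 0;                     /* suppress leading zeros */
    for (int i = 0; i < 5; i++) {
        int count = 0;
        while (n >= pow10[i]) {          /* subtract while we can */
            n -= pow10[i];
            count++;
        }
        if (count > 0 || started || pow10[i] == 1) {
            putchar('0' + count);
            started = 1;
        }
    }
}

int main(void) {
    print_decimal(1234);                 /* prints 1234 */
    putchar('\n');
    return 0;
}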
If it's an integer: divide by ten, getting both the quotient and the remainder. Repeat the process on the quotient until it is zero. The remainders give you the decimal digits from right to left. Add 48 for the ASCII representation.
Basically, you want to transform a number (stored in some arbitrary internal representation) into its decimal representation. You can do this with a few simple mathematical operations. Let's assume that we have a positive number, say 1234.
number mod 10 gives you a value between 0 and 9 (4 in our example), which you can map to a character¹. This is the rightmost digit.
Divide by 10, discarding the remainder (an operation commonly called "integer division"): 1234 → 123.
number mod 10 now yields 3, the second-to-rightmost digit.
continue until number is zero.
Footnotes:
¹ This can be done with a simple switch statement with 10 cases. Of course, if your character set has the characters 0..9 in consecutive order (like ASCII), '0' + number suffices.
It doesn't matter what the number system is: decimal, binary, octal. Say I have the decimal value 123 on a decimal computer; I would still need to convert that value to three characters to display them. Let's assume ASCII format. By looking at an ASCII table we know the answer we are looking for: 0x31, 0x32, 0x33.
If you divide 123 by 10 using integer math you get 12. Multiply 12 * 10 and you get 120; the difference is 3, your least significant digit. We go back to the 12 and divide that by 10, giving 1. 1 times 10 is 10, and 12 - 10 is 2, our next digit. We take the 1 that is left over, divide by 10, and get zero, so we know we are done. The digits we found, in order, are 3, 2, 1. Reverse the order: 1, 2, 3. Add or OR 0x30 to each to convert them from integers to ASCII.
Change that to use a variable instead of 123, and it works in any numbering system you like, so long as the type has enough digits to do this kind of work.
You can go the other way too: divide by 100...000 (whatever the largest decimal you can store or intend to find) and work your way down. In this case the first non-zero result comes with the divide by 100, giving 1. Save the 1; 1 times 100 = 100, and 123 - 100 = 23. Now divide by 10: this gives 2. Save the 2; 2 times 10 is 20, and 23 - 20 = 3. When you get to the divide by 1 you are done; save that value as your ones digit.
Here is another: given a number of seconds to convert to, say, hours, minutes, and seconds, you can divide by 60 and save the result as a; subtract the original number minus (a * 60), giving your remainder, which is seconds; save that. Now take a and divide by 60, saving that as b: this is your number of hours. Subtract a - (b * 60); this remainder is minutes; save that. Done: hours, minutes, seconds (a C sketch follows). You can then divide the hours by 24 to get days if you want, and then divide the days by 7 if you want weeks.
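That seconds example as a short C sketch, using only the divide / multiply-back / subtract steps described above:

#include <stdio.h>

/* Split a count of seconds into hours, minutes, and seconds. */
int main(void) {
    int total = 7384;               /* example input */
    int a = total / 60;             /* total minutes */
    int seconds = total - a * 60;   /* remainder: seconds */
    int b = a / 60;                 /* hours */
    int minutes = a - b * 60;       /* remainder: minutes */
    printf("%d:%02d:%02d\n", b, minutes, seconds);  /* prints 2:03:04 */
    return 0;
}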
A comment about divide instructions was brought up. Divides are very expensive, and most processors do not have one. Expensive in the sense that a single-clock divide costs you gates and power; if you do the divide over many clocks, you might as well just do a software divide and save the gates. For the same reason most processors don't have an FPU: gates and power (gates mean larger chips, more expensive chips, lower yield, etc.). It is not a case of modern vs. old, or 64-bit vs. 8-bit, or anything like that; it is an engineering and business trade-off. The 8088/86 has a divide with a remainder, for example (it also has a BCD add). The gates/size, if spent on a single instruction, might be better served elsewhere. Multiply falls into the same category: not as bad, but it can be. If operand sizes are not done right, you can make either instruction (family) not as useful to a programmer.
Which brings up another point: I can't find the link right now, but there is a way to avoid divides when converting a number to a string of decimal digits: you can multiply by .1 using fixed point. I also can't find the quote about real programmers not needing floating point, which relates to keeping track of the decimal point yourself; it's the slide-rule vs. calculator thing. I believe the link to the article on dividing by 10 using a multiply is somewhere on Stack Overflow.

SQL Server Rounding Issue

I'm using SQL Server 2005, and I'm using the ROUND T-SQL function to round a decimal column value. But it seems that the rounded value is incorrect.
PRINT ROUND(1890.124854, 2) => 1890.120000
As shown, the ROUND function is returning 1890.12, whereas it should be 1890.13. Has anyone encountered this, and what would be the correct way of rounding so that I get the expected value 1890.13?
Thanks.
ROUND() is working as it was intended to. You specified rounding to 2 decimal places, and that's what you got.
Returns a numeric value, rounded to the specified length or precision.
Rounding means that a digit of 5 or above goes up to the nearest value, and a digit less than 5 goes down; the third decimal of 1890.124854 is 4, so 1890.12 is the correct result.
so,
PRINT ROUND(1890.125000, 2)
produces 1890.130000
Whereas
PRINT ROUND(1890.124999, 2)
produces 1890.120000
Your rounding issue is related to the rounding algorithm used by SQL Server. I believe SQL Server uses the "Round to Even" (sometimes known as Banker's Rounding) algorithm.
In Banker's Rounding, a digit gets rounded down if the digit to the right of it is less than five, or rounded up if the digit to the right of it is greater than five.
If the digit to the right of it is equal to five, then the digit to the left of the five is rounded to the nearest even number.
In your example of 1890.124854, as the rounding begins at the right-most digit and works to the left, the 8 causes the 4 to the left of it to get rounded up to 5. That five has an even number (2) to the left of it, so, since it is already even, it is left alone. Thus, rounding to two decimal places should yield 1890.12.
However, if your example were instead 1890.134854, then as the rounding works from right to left, the 8 rounds the 4 up to 5, and the 3 next to the 5 then gets rounded up to the next even number, which is 4. Rounding to two decimal places should then yield 1890.14.
The theory is that 1890.125 is no closer to 1890.12 than to 1890.13; it is exactly in between. Therefore, always rounding the digit to the left of a 5 upward would give an undesired upward bias that can skew calculations toward an artificially high result. This upward bias becomes more exaggerated in complex calculations, or in those involving multiple iterations where a five as the least significant digit is encountered numerous times. In general calculations, however, the digit to the left of a 5 is statistically just as likely to be odd as even. Because of this, rounding to the even number keeps calculations statistically close to the true mean of the rounded numbers.
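Round-half-to-even happens to be the default IEEE 754 rounding mode as well, so the behavior is easy to demonstrate outside SQL Server. A short C illustration, using halfway values that are exactly representable in binary:

#include <stdio.h>
#include <math.h>

/* rint() uses the current FP rounding mode, which defaults to
   round-half-to-even: one halfway case rounds down, the other up. */
int main(void) {
    printf("%.2f\n", rint(1890.125 * 100) / 100);  /* 1890.12 (2 is already even) */
    printf("%.2f\n", rint(1890.375 * 100) / 100);  /* 1890.38 (7 rounds up to even 8) */
    return 0;
}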
These days, almost everything uses this "Round to Even" algorithm. Many years ago, I used to develop in a programming language that didn't: it used the more "traditional" rounding, where everything to the left of a 5 got rounded up, regardless of being odd or even. We ran into the biasing problem I mentioned above.