This code:
DECLARE @remise decimal(10,2)
set @remise = 10 / 100
select @remise
results in 0.00. Why doesn't it result in 0.10, which I have been expecting?
You have two integers, 10 and 100, so the division operator performs integer division. Integer division discards the remainder and cannot produce a fractional result, so 10 / 100 evaluates to 0.
It doesn't matter that you're subsequently planning to store the result in a decimal(10,2).
One way to solve it is to make one of the division's operands non-integer. If you're not using literals, a common trick is to multiply one of the inputs by 1.0:
DECLARE @remise decimal(10,2)
set @remise = (10 * 1.0) / 100
select @remise
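An equivalent fix (a sketch, using an explicit CAST instead of the * 1.0 trick) is to convert one operand to decimal before dividing:

```sql
DECLARE @remise decimal(10,2)
-- CAST one operand so the division is done in decimal, not integer, arithmetic
SET @remise = CAST(10 AS decimal(10,2)) / 100
SELECT @remise   -- 0.10
```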
Is this SQL Server?
10 and 100 are both integers. The integer division 10 / 100 results in 0. If you put that 0 into a decimal variable, the zero still stays zero.
You want
DECLARE @remise decimal(10,2)
set @remise = 10.0 / 100.0
select @remise
Related
I have been calculating integer percentages with various numbers, but each time I get a floor-rounded result. select 13*100/60 gives me 21, while the actual value is 21.66, so a round function should give 22, but I always get 21, whatever the numbers.
I am using SQL Server 2017. Please help.
This is because you are dividing ints rather than decimal numbers. Integer division returns an integer.
Try the following instead (noting the .0 on the end of the 60):
SELECT 13 * 100 / 60.0
Making one of the operands non-integer forces the whole expression to be evaluated as a decimal (in T-SQL, a literal like 60.0 is a decimal, not a float).
Output:
21.666666
Incidentally, if you are working with variables and one of them is a FLOAT, it will automatically produce the output you expect:
DECLARE @A FLOAT
DECLARE @B INT
DECLARE @C INT
SET @A = 13
SET @B = 100
SET @C = 60
SELECT @A * @B / @C
Output:
21.6666666666667
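To get the rounded integer the question asked for (22 rather than 21), one sketch is to do the division in decimal first and only then apply ROUND:

```sql
-- Divide in decimal first, then round to 0 decimal places
SELECT ROUND(13 * 100 / 60.0, 0);              -- 22.000000
SELECT CAST(ROUND(13 * 100 / 60.0, 0) AS int); -- 22
```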
This post has the following code:
DECLARE @A DECIMAL(3, 0), @B DECIMAL(18, 0), @F FLOAT
SET @A = 3
SET @B = 3
SET @F = 3
SELECT 1 / @A * 3.0, 1 / @B * 3.0, 1 / @F * 3.0
SELECT 1 / @A * 3 , 1 / @B * 3 , 1 / @F * 3
Using float, the expression evaluates to 1. Using decimal, it evaluates to a string of 9s after the decimal point. Why does float yield the more accurate answer in this case? I thought decimal was the more accurate/exact type, per Difference between numeric, float and decimal in SQL Server and Use Float or Decimal for Accounting Application Dollar Amount?
The decimal values you have declared are fixed precision with a scale of 0: no digits after the decimal point. This affects the calculations.
SQL Server has a rather complex set of rules for the precision and scale of arithmetic expressions involving decimal operands; the details are in the documentation. You also need to take into account that numeric constants such as 3.0 are themselves decimal, not float.
Also, in the end you need to convert back to a decimal with the precision you want. Once you do, you may find that float and decimal give equivalent results.
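A sketch of that last point: casting both results back to the same target precision makes them agree (the target type decimal(10,2) here is an arbitrary choice):

```sql
DECLARE @A DECIMAL(3, 0), @F FLOAT
SET @A = 3
SET @F = 3

-- The intermediate results differ (0.9999990 vs 1), but after the
-- final cast both come back as 1.00
SELECT CAST(1 / @A * 3.0 AS decimal(10,2)) AS from_decimal,
       CAST(1 / @F * 3.0 AS decimal(10,2)) AS from_float
```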
DECLARE @a int;
DECLARE @b int;
SET @a = 9;
SET @b = 2;
SELECT CEILING (@a/@b);
It returns 4 instead of 5. Why?
Edit: I want the smallest integer greater than or equal to the quotient whenever the quotient is not a whole number.
Try:
SELECT CEILING (@a/CAST(@b AS float))
And consider NULLIF(@b,0) too, to avoid a divide-by-zero error.
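Putting both suggestions together, a sketch that is also safe when @b can be zero:

```sql
DECLARE @a int = 9, @b int = 2;
-- NULLIF turns a zero divisor into NULL, so the whole expression
-- becomes NULL instead of raising a divide-by-zero error
SELECT CEILING(@a / CAST(NULLIF(@b, 0) AS float));  -- 5
```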
After dividing 9 by 2, the fractional part is truncated, leaving the integer part, 4; it is not rounded to 5. Try:
SELECT 9/2
The result is 4, and CEILING(4) = 4.
To get the next integer up, declare the variables with a data type that can hold a fractional part: NUMERIC, FLOAT, or REAL.
SQL Server does integer division. So 9/2 = 4 in SQL Server.
Taking the ceiling of an integer is the same integer.
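A common integer-only idiom for ceiling division, assuming both operands are positive, is to add divisor minus one before dividing:

```sql
DECLARE @a int = 9, @b int = 2;
-- (9 + 2 - 1) / 2 = 10 / 2 = 5; integer truncation now works in our favor
SELECT (@a + @b - 1) / @b;  -- 5
```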
The database I am using is SQL Server 2005. I am trying to round values DOWN to the nearest .05 (nickel).
So far I have:
SELECT ROUND(numberToBeRounded / 5, 2) * 5
which almost works - what I need is for the expression, when numberToBeRounded is 1.99, to evaluate to 1.95, not 2.
Specify a non-zero value for a third parameter to truncate instead of round:
SELECT ROUND(numberToBeRounded / 5, 2, 1) * 5
Note: Truncating rounds toward zero, rather than down, but that only makes a difference if you have negative values. To round down even for negative values you can use the floor function, but then you can't specify number of decimals so you need to multiply instead of dividing:
SELECT FLOOR(numberToBeRounded * 20) / 20
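For example, with the questioner's value of 1.99 (and a negative value to show where FLOOR differs from truncation):

```sql
SELECT FLOOR(1.99 * 20) / 20;    -- rounds down to 1.95
SELECT FLOOR(-1.99 * 20) / 20;   -- rounds down to -2.00 (truncation would give -1.95)
```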
If your data type is numeric (the ISO name for decimal) or money, you can round towards zero quite easily, to any particular "unit", thus:
declare @value money = 123.3499
declare @unit money = 0.05
select value = @value ,
       rounded_towards_zero = @value - ( @value % @unit )
And it works regardless of the sign of the value itself, though the unit to which you're rounding should be positive.
123.3499 -> 123.3000
-123.3499 -> -123.3000
In SQL Server I declare a decimal variable, then set it from an equation. The decimal variable loses precision. However, if I just select the equation, the precision is intact. How do I set the decimal variable without losing precision?
SQL:
DECLARE @_oDiscount decimal(10,2)
SET @_oDiscount = CAST(9.99 AS decimal(10,2)) * CAST(.5 AS float)
SELECT @_oDiscount AS DecimalVariable, CAST(9.99 AS decimal(10,2)) * CAST(.5 AS float) AS Equation
OUTPUT:
DecimalVariable | Equation
-------------------------------
5.00 | 4.995
Well, YOU'VE defined the decimal to hold only 2 digits after the decimal point!
Therefore, the result of the calculation, 4.995, is rounded to 5.00 when it is assigned.
If you use DECIMAL(10,4) instead, there is no loss of precision:
DECLARE @_oDiscount decimal(10,4)
SET @_oDiscount = CAST(9.99 AS decimal(10,4)) * CAST(0.5 AS DECIMAL(10,4))
SELECT @_oDiscount
--> returns 4.9950
Also: I would recommend avoiding FLOAT whenever possible!
And furthermore, there's really no need for all those casts. Just use
SET @_oDiscount = 9.99 * 0.5
and you'll get just the same results.
The DECIMAL(p, s) defines how precise the decimal value will be: p (precision) stands for the total number of digits, while s (scale) stands for the number of digits after the decimal point.
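For example (a sketch), assigning a value with more fractional digits than the scale allows causes rounding on assignment:

```sql
DECLARE @d decimal(5,2)    -- up to 5 digits total, 2 after the decimal point
SET @d = 123.456           -- too many fractional digits for scale 2
SELECT @d                  -- 123.46: rounded to the declared scale
```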