When I try to get the difference between two apparently equal numbers, I get a number different than zero.
product_price      min_product_price    price_dif
40,609756097561    40,609756097561      -2,1316282072803E-14
I understand this can be a difficult question to answer without all the queries that lead to this, but I'll try to explain:
product_price comes straight from the ERP database.
min_product_price is obtained with a
MIN(ItemSellingPrices.UnitPrice) as min_product_price
together with a group by clause. Shouldn't this mean the numbers are the same?
I have no experience with this kind of issue, so I apologize if this is too basic.
Looks like a floating point issue.
If you're storing money values in float or real columns, try using a decimal/numeric data type instead.
For storing 12 decimal values, you could use decimal(18, 12), for example.
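To see the difference in behavior, here is a minimal sketch (the literals are illustrative) comparing float and decimal arithmetic in T-SQL:

DECLARE @a FLOAT = 0.1, @b FLOAT = 0.2;
SELECT @a + @b - 0.3e0 AS float_residue;   -- a tiny non-zero value, not exactly 0

DECLARE @c DECIMAL(18, 12) = 0.1, @d DECIMAL(18, 12) = 0.2;
SELECT @c + @d - 0.3 AS decimal_residue;   -- exactly 0

With float, 0.1 and 0.2 have no exact binary representation, so the sum carries a tiny error; with decimal, the values are stored exactly and the difference is zero.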
Very new and learning SQL. Trying to calculate a percentage from two columns as such:
Select (total_deaths/total_cases)*100 AS death_percentage
From covid_deaths
I'm getting the column, but it's showing as an integer and all values are zero.
I've tried using CAST to make it a decimal, but I don't have the syntax right. Very noob question, but it seems simple enough. Do I have to declare the numeric type of all calculated columns?
In addition to the answer linked by Stefan Zivkovik in a comment above, it may be good to handle division by zero. Even if you don't ever anticipate total_cases will be zero, someone may reuse this part of the code (for instance, if total_cases is later broken into subcategories).
You probably also want to ROUND to a certain number of decimal places:
SELECT
    CASE WHEN total_cases > 0 THEN
        ROUND((total_deaths::NUMERIC / total_cases) * 100, 1)
    END AS death_percentage
FROM covid_deaths
By not specifying an ELSE clause, the column will be null when total_cases is zero. If this doesn't work for your purposes, you could specify another default value (like zero) with ELSE.
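A more compact way to get the same null-on-zero behavior (assuming Postgres, given the :: cast syntax above) is NULLIF, which turns a zero denominator into NULL so the division itself yields NULL:

SELECT
    ROUND((total_deaths::NUMERIC / NULLIF(total_cases, 0)) * 100, 1) AS death_percentage
FROM covid_deaths;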
I'm pulling in some external data into my MSSQL server. Several columns of incoming data are marked as 'number' (it's a json file). It's millions of rows in size and many of the columns appear to be decimal (18,2) like 23.33. But I can't be sure that it will always be like that, in fact a few have been 23.333 or longer numbers like 23.35555555 which will mess up my import.
So my question is given a column is going to have some kind of number imported into it, but I can't be sure really how big or how many decimal places it's going to have... do I have to resort to making my column a varchar or is there a very generic number kind of column I'm not thinking of?
Is there a max size decimal, sort of like using VARCHAR(8000) or VARCHAR(MAX) ?
Update:
This is the 'data type' of number that I'm pulling in:
https://dev.socrata.com/docs/datatypes/number.html#
Looks like it can be pretty much any number, as per their writing:
"Numbers are arbitrary precision, arbitrary scale numbers."
The way I handle things like this is to import the raw data into a staging table in a varchar(max) column.
Then I use TRY_PARSE() or TRY_CONVERT() when moving it to the desired datatype in my final destination table.
The point here is that the shape of the incoming data shouldn't determine the datatype you use. The datatype should be determined by the usage of the data once it's in your table. And if the incoming data doesn't fit, there are ways of making it fit.
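As a sketch of that approach (table and column names here are illustrative), land the raw text first, then convert:

CREATE TABLE dbo.Staging_Import (RawPrice VARCHAR(MAX));
-- ... bulk-load the JSON values into the staging table ...

-- Move to the destination, converting to the type the data's usage demands.
INSERT INTO dbo.FinalTable (Price)
SELECT TRY_CONVERT(DECIMAL(38, 10), RawPrice)
FROM dbo.Staging_Import;

-- TRY_CONVERT returns NULL on failure, so bad rows are easy to find:
SELECT RawPrice
FROM dbo.Staging_Import
WHERE RawPrice IS NOT NULL
  AND TRY_CONVERT(DECIMAL(38, 10), RawPrice) IS NULL;

DECIMAL(38, 10) is just an example; 38 is the maximum precision SQL Server allows, and the scale should be chosen to suit the data's use.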
What do those numbers represent? If they are just values for display, you could set float as the datatype and you're good to go.
But if they are coordinates or currencies or anything you need for absolutely precise calculations, float can give rounding problems. In that case, set your desired minimum precision with decimal and simply drop whatever is beyond it.
For instance, if most of the numbers have two decimals, you could go with 3 or 4 decimal places to be safe; anything beyond that will be cut off.
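One caveat worth checking: in SQL Server, converting to a decimal rounds rather than truncates. If you genuinely want to cut off the extra digits, ROUND with a non-zero third argument truncates instead. A small sketch:

SELECT CONVERT(DECIMAL(18, 4), 23.35555555) AS rounded,    -- rounds up to 23.3556
       ROUND(23.35555555, 4, 1)             AS truncated;  -- cuts off at 23.3555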
I am trying to replicate tables from a remote SQL 2000 database into my local SQL 2012 instance.
As a quick way of checking for values which have changed, I am using the "UNION ALL...GROUP BY" technique found on Simple Talk (scroll about half-way down).
Unfortunately, the remote data types are set as REAL, and as this is an approximate data type, the technique is not very reliable: it finds differences where I don't want it to (even though those differences do exist computationally).
I have tried using CONVERT to change the values to a NUMERIC (exact) data type. However, different columns have different numbers of decimal places and finding a one size fits all solution is proving difficult.
One thing I noticed is that if I run the following query (TimeID is an INT and Value1 is a REAL):
SELECT [TimeID],
       [Value1],
       CONVERT(DECIMAL(19,10), [Value1]) AS [CONV19,10],
       CONVERT(DECIMAL(19,3),  [Value1]) AS [CONV19,3],
       CONVERT(DECIMAL(19,4),  [Value1]) AS [CONV19,4]
FROM [DATABASE].[SCHEMA].[TABLE]
WHERE [TimeID] = 12345
I get the following results:
[TimeID]   [Value1]   [CONV19,10]       [CONV19,3]   [CONV19,4]
12345      1126.089   1126.0885009766   1126.089     1126.0885
Note that SQL Server Management Studio displays Value1 to 3 decimal places when in its native format (i.e. without me converting it).
So my question is: how does SSMS know that it should be displayed to 3 decimal places? How does it know that 1126.0885 is not the actual number stored, but instead is 1126.089?
Ideally I'd like to understand its algorithm so I can replicate it to convert my data to the correct number of decimal places.
This won't answer your question, but it will give you a starting point to answer it yourself.
First read this:
http://msdn.microsoft.com/en-us/library/ms187912.aspx
Notably, "The behavior of float and real follows the IEEE 754 specification on approximate numeric data types."
Now read this:
http://en.wikipedia.org/wiki/IEEE_floating_point
So now you should know how float/real numbers are stored and why they are "approximate" numbers.
As for how SSMS "knows" how many decimals a real/float has, I don't really know, but it is presumably something to do with the IEEE 754 specification.
A simple script to demonstrate this is:
DECLARE @MyNumber FLOAT(24) = 1.2345;
SELECT @MyNumber,
       CONVERT(NUMERIC(19, 4), @MyNumber),
       CONVERT(NUMERIC(19, 10), @MyNumber),
       CONVERT(NUMERIC(19, 14), @MyNumber);
I don't know if this is the case, but I suspect that SSMS is using .NET numeric string formatting.
I was having a similar situation, I simply wanted to SELECT into a VARCHAR the exact same thing that SSMS was displaying in the query results grid.
In the end I got what I wanted with the FORMAT function, using the General format specifier.
For example:
DECLARE @Floats TABLE([FloatColumn] FLOAT);
INSERT INTO @Floats
VALUES
    (123.4567),
    (1.23E-7),
    (PI());
SELECT
    Number = f.FloatColumn,
    Text = FORMAT(f.FloatColumn, 'G'),
    IncorrectText = CONVERT(NVARCHAR(50), f.FloatColumn)
FROM @Floats f;
I have to give the disclaimer that I don't know if this will work as desired in all cases, but it worked for everything I needed it to.
I'm sure this is very useful after six years.
I don't have a field as such, but I am making a new field which is the result of dividing an existing field, i.e. cost/1.15.
Is there a way to restrict the result of this calculation to two decimal places?
You could change the column type to a NUMERIC(p, 2) where p is the precision, especially if it is money (I'm guessing from cost that it might be money).
Also making a column which is derived from another column is generally a bad idea as the two can get out of sync. Consider making a view instead.
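A sketch of the view approach (the table, column, and view names here are assumed):

CREATE VIEW dbo.ProductCostsView AS
SELECT cost,
       CONVERT(NUMERIC(10, 2), cost / 1.15) AS net_cost  -- derived, always in sync
FROM dbo.Products;

The derived value is computed on every read, so it can never drift out of sync with cost the way a stored column could.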
Sounds like you need the ROUND() function.
Eg. ROUND(cost/1.15, 2)
I am working on a legacy ASP application. I am attempting to insert a value (40.33) into a field in SQL Server 2000 that happens to be a float type. Every place I can see (via some logging) in the application is sending 40.33 to the Stored Procedure. When I run SQL Profiler against the database while the call is happening, the value that I see in the trace is 4.033000183105469e+001
Where is all the extra garbage coming from (the 183105469)?
Why is it that when I pass in 40, or 40.25 there is nothing extra?
Is this just one of the weird side effects of using float? When I am writing something I normally use money or decimal or something else, so I'm not that familiar with the float datatype.
Yes, this is a weird, although well-known, side effect of using FLOAT.
In Microsoft SQL Server, you should use exact numeric datatypes such as NUMERIC, DECIMAL, MONEY or SMALLMONEY if you need exact numerics with scale.
Do not use FLOAT.
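You can reproduce what the trace showed with a small sketch. The value is stored as REAL (single precision) and widened to FLOAT (double precision) on the way to the profiler, which exposes the digits beyond what REAL can actually represent:

DECLARE @v REAL = 40.33;
SELECT @v                           AS as_real,        -- displayed as 40.33
       CONVERT(FLOAT, @v)           AS widened,        -- the "garbage" digits appear here
       CONVERT(DECIMAL(10, 2), @v)  AS exact_decimal;  -- 40.33, stored exactly

Values like 40 or 40.25 show nothing extra because they are exactly representable in binary (sums of powers of two), whereas 40.33 is not.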
I think this is probably just a precision issue - the 0.33 part of the number can't be represented exactly in binary - this is probably the closest you can get to it.
The problem is that floats are not 100% accurate. If you need your numbers to be exact (especially when dealing with monetary values), you should use a decimal type.