If I have a column with datatype decimal(p,s), what is the standard for the expected result's precision and scale when I execute the AVG aggregate?
i.e. result = select avg(decimal(p,s)) from table1;
What must the resulting decimal precision and scale be?
Some links from existing databases:
1. https://docs.oracle.com/javadb/10.6.2.1/ref/rrefsqlj36146.html#rrefsqlj36146
2. https://learn.microsoft.com/en-us/sql/t-sql/functions/avg-transact-sql?view=sql-server-2017
But I am unable to see any common standard followed here. So is there a well-accepted standard for this in SQL, or is it always implementation-specific?
Personally I am not aware of a standard; I have always adjusted based on the level of detail required of the column.
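In practice you can simply ask the engine what it returns. A small sketch, assuming SQL Server and a hypothetical table t with a column c declared as decimal(5,2):
-- Inspect the type that AVG actually returns (t and c are hypothetical)
SELECT
    SQL_VARIANT_PROPERTY(CAST(AVG(c) AS sql_variant), 'BaseType')  AS result_type,
    SQL_VARIANT_PROPERTY(CAST(AVG(c) AS sql_variant), 'Precision') AS result_precision,
    SQL_VARIANT_PROPERTY(CAST(AVG(c) AS sql_variant), 'Scale')     AS result_scale
FROM t;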
If we specify a column as decimal(5,2), that means we have a precision of 5 and a scale of 2.
If I understand it correctly, precision is the maximum number of digits in a number and scale is the maximum number of digits that can appear to the right of the decimal point.
By this logic, if I try to insert 100.999 it should fail, as the scale of this number is 3, which is not allowed. But when I use online editors such as:
https://www.tutorialspoint.com/execute_sql_online.php
https://www.mycompiler.io/new/sql
and run the following queries:
CREATE TABLE nd (nv decimal(5, 2));
INSERT INTO nd VALUES(100.999);
INSERT INTO nd VALUES(1001.2);
INSERT INTO nd VALUES(10011.2);
INSERT INTO nd VALUES(100111.299999);
Select * from nd;
I get the following output:
100.999
1001.2
10011.2
100111.299999
Can anyone explain why this is the case?
The only database that would not complain and run your code is SQLite, because in SQLite there is no DECIMAL data type.
So you may think that by defining a column as decimal(5, 2) you are defining the column to be numeric with a precision of 5 and a scale of 2, but all of that is simply ignored.
For SQLite you are just defining a column with NUMERIC affinity and nothing more.
The proper data type that you should have used is REAL and there is no way to define precision and scale (unless you use more complex constraints).
You can find all about SQLite's data types and affinities here: Datatypes In SQLite Version 3
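A small illustration of that point (a sketch; typeof() is a built-in SQLite function):
-- SQLite does not enforce the declared type; it only assigns NUMERIC affinity
CREATE TABLE nd (nv decimal(5, 2));
INSERT INTO nd VALUES (100111.299999);   -- accepted, stored as a REAL
INSERT INTO nd VALUES ('not a number');  -- also accepted, stored as TEXT
SELECT nv, typeof(nv) FROM nd;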
Precision is the number of digits in a number. Scale is the number of digits to the right of the decimal point in a number. For example, the number 123.45 has a precision of 5 and a scale of 2.
Those online SQL executors probably use SQLite, which ignores decimal and the corresponding scale and precision, as forpas said.
If you try to execute the queries in a database like PostgreSQL, you'll get the error you expected: a numeric field overflow exception.
Try executing your queries in such an engine; hopefully you'll then understand what is happening.
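For reference, a minimal sketch of how PostgreSQL treats the same column definition (excess fractional digits are rounded to the declared scale, while an integer part that does not fit raises the overflow error):
-- PostgreSQL, numeric(5,2): at most 3 digits before and 2 after the decimal point
CREATE TABLE nd (nv numeric(5, 2));
INSERT INTO nd VALUES (100.999);  -- accepted: rounded to 101.00
INSERT INTO nd VALUES (1001.2);   -- ERROR: numeric field overflow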
I have an issue with the ROUND and TRUNC functions in BigQuery standard SQL.
For ROUND I expected 3953.7, 3053.67, 3053.667, but the f1_ and f2_ results are different. Is it a bug?
For TRUNC I expected 3.195, 3.195, 3.1955, 3.1965, 3.1945.
The f1_ and f3_ results are different. Is it my fault?
ROUND() rounds a numeric field to the specified number of decimals.
There is a limitation of floating-point values: they are binary representations and cannot precisely represent every decimal fraction (see here).
In the case of SELECT ROUND(3053.665, 2) you receive 3053.66. You can work around it by using ROUND(value + 0.005, 2), which gives you 3053.67.
Anyway, if you care about precise decimal results, you should use the NUMERIC type. The following query gives the results you expect:
SELECT ROUND(3953.65,1), ROUND(numeric '3053.665',2), ROUND(numeric '3053.6665',3)
For TRUNC(), the following query gives the results you expect:
SELECT TRUNC(3.1955,3), TRUNC(numeric '3.195',3), TRUNC(3.1955,4), TRUNC(numeric '3.1965',4), TRUNC(3.1945,4)
BigQuery parses fractional numbers as floating point by default for better performance, while other databases parse fractional numbers as NUMERIC by default. This means the other databases would interpret TRUNC(3.03,2) in the same way BigQuery interprets TRUNC(numeric '3.03',2).
I hope this helps you.
This is due to the fact that, in BigQuery, numbers are stored as floating-point values by default.
You can find more information about how these work on Wikipedia, but the main idea is that some numbers are not stored exactly as written but as the closest approximation their representation allows. If 3.03 is internally represented as 3.0299999999..., then truncating it gives 3.02.
The same thing happens with the round function: if 3053.665 is internally stored as 3053.6649999999..., the result of rounding it will be 3053.66.
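You can see the approximation yourself with BigQuery's FORMAT function and a printf-style specifier (a minimal sketch):
-- Print the FLOAT64 value that actually backs the literal
SELECT FORMAT('%.20f', 3053.665) AS as_float64;  -- slightly below 3053.665, hence ROUND gives 3053.66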
If you specify it to be stored as NUMERIC, it then works as "expected":
select trunc(numeric '3.195', 3)
gives 3.195 as the result.
You can find more information about Numeric Types in the official BigQuery Documentation.
In Oracle, when you use the NUMBER data type without specifying precision and scale (i.e. just NUMBER instead of something like NUMBER(18,2)), it stores the value as given.
From the manual:
If a precision is not specified, the column stores values as given
Now I want to know if there is a way to make the NUMERIC data type in SQL Server do the same. I have to use NUMERIC and not DECIMAL or another data type, and I am not allowed to specify the precision or scale, because I have no way to test whether the data will cause errors, since I have no access to it. I just know that the data did not cause any trouble in our Oracle database, which uses only the NUMBER datatype without any specification.
No, numeric needs a precision and scale and has defaults if none are set. It is as simple as that.
https://learn.microsoft.com/en-us/sql/t-sql/data-types/decimal-and-numeric-transact-sql?view=sql-server-2017
Quote:
decimal[ (p[ ,s] )] and numeric[ (p[ ,s] )] Fixed precision and scale
numbers. When maximum precision is used, valid values are from - 10^38
+1 through 10^38 - 1. The ISO synonyms for decimal are dec and dec(p, s). numeric is functionally equivalent to decimal.
As so often, the documentation is your friend.
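A small illustration of those defaults (assuming SQL Server, where numeric with no arguments is numeric(18, 0)):
-- numeric without precision/scale defaults to numeric(18, 0), so the fraction is rounded away
DECLARE @n numeric = 123.456;
SELECT @n AS default_numeric;                          -- 123
SELECT CAST(123.456 AS numeric(18, 3)) AS explicit;    -- 123.456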
Currently I have a working To_Char in Oracle:
To_Char($Num,'FM' || RPAD(RPAD(LPAD(LPAD('.',least($intmaxlength,$intminlength)+1,'0'),$intmaxlength+1,'9'),$intmaxlength+1+$decminlength,'0'),$intmaxlength+1+$decmaxlength,'9'))
My goal is to convert a number to a string, fitting it to four parameters for the integer and decimal parts.
I would like to add minimum and maximum precision. For example, the integers to the left of the decimal point in 1234567.89 should have a minimum of 1 but a maximum of five (so the extra integers would be trimmed). In addition, I'd like to do the same for scale - the decimals to the right, by setting a minimum of two decimal places and a maximum of four. These numbers are just examples, the numbers will be updated dynamically.
I have minimal experience with MSSQL, but from what I can see some equivalent functions, like LEAST, are missing compared to Oracle.
Here are the string functions for MSSQL:
https://msdn.microsoft.com/en-us/library/ms181984.aspx
For LEAST I don't think there is any direct equivalent.
But I found this: getting-the-minimum-of-two-values-in-sql
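A minimal T-SQL sketch of that workaround using a CASE expression (note that SQL Server 2022 and later do ship a built-in LEAST):
-- Substitute for LEAST(@a, @b)
DECLARE @a int = 3, @b int = 7;
SELECT CASE WHEN @a <= @b THEN @a ELSE @b END AS least_value;  -- 3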
I am using PostgreSQL. A table in my database has a column of type REAL. When a real value has more than seven zeros, PostgreSQL stores and displays it as, for example, 1e+007. When it is retrieved by a query, the value is also returned as 1e+007, but I need the value as 10000000. What is the solution?
You'll have more problems than that if you use floating point numbers for things they aren't suitable for, including pretty much anything where you care about the exact presentation of the number.
I would recommend that you use NUMERIC, a base-10 (decimal) number data type that lets you control precision and scale. See Numeric types. NUMERIC is slower to perform calculations with and consumes more storage so it isn't always the right answer, but it's ideal for a great many applications.
You can use floats; it's just harder, because you can't safely compare for exact equality, you have to use rounding and formatting functions for display, etc.
Example:
regress=> select '1.2'::float8 - '1.0'::float8;
?column?
----------------------
0.199999999999999956
(1 row)
regress=> select '1.2'::numeric - '1.0'::numeric;
?column?
----------
0.2
(1 row)
Another common solution to problems like these is to use an application-defined fixed-point representation. If you need (say) 2 decimal places, you store the number 2000.11 as 200011 in the database, multiplying by 100. This is a common technique in financial applications, though nowadays it's more common to use proper decimal data types.
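A minimal sketch of that idea, with a hypothetical payments table storing cents in a bigint:
-- Store cents as an integer; convert only for display
CREATE TABLE payments (id serial PRIMARY KEY, amount_cents bigint NOT NULL);
INSERT INTO payments (amount_cents) VALUES (200011);           -- represents 2000.11
SELECT round(amount_cents / 100.0, 2) AS amount FROM payments; -- 2000.11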
Use SQL CAST. It works for me in DB2:
select cast(cast(1e+007 as real) as decimal (15,2)) from sysibm.sysdummy1;
10000000.00
You can set the decimal places to whatever you need, e.g. (15,0).
select to_char(1e+007::real, '9999999999')
More details in the manual: http://www.postgresql.org/docs/current/static/functions-formatting.html