How to store NUMERIC value as given in SQL Server

In Oracle, when you use the NUMBER data type without specifying precision and scale, i.e. plain NUMBER instead of something like NUMBER(18,2), it stores the value as given.
From the manual
If a precision is not specified, the column stores values as given
Now I want to know whether there is a way to make the NUMERIC data type in SQL Server do the same. I have to use NUMERIC, not DECIMAL or any other data type, and I am not allowed to specify the precision or scale, because I cannot test whether the data that will be used would cause errors: I have no access to the data. I only know that the data did not cause any trouble in our Oracle database, which uses only the NUMBER data type without any specifications.

No, numeric needs a precision and scale and has defaults if none are set. Simple as that.
https://learn.microsoft.com/en-us/sql/t-sql/data-types/decimal-and-numeric-transact-sql?view=sql-server-2017
Quote:
decimal[(p[,s])] and numeric[(p[,s])]: fixed precision and scale numbers. When maximum precision is used, valid values are from -10^38 + 1 through 10^38 - 1. The ISO synonyms for decimal are dec and dec(p, s). numeric is functionally equivalent to decimal.
As is often the case, the documentation is your friend.
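A minimal sketch of what the defaults mean in practice (SQL Server assumed; the temporary table and values are illustrative): NUMERIC with no arguments behaves like NUMERIC(18, 0), so fractional digits are rounded away rather than stored as given.
CREATE TABLE #defaults (n NUMERIC);       -- same as NUMERIC(18, 0)
INSERT INTO #defaults VALUES (123.456);
SELECT n FROM #defaults;                  -- returns 123: the default scale of 0 discards the fraction
DROP TABLE #defaults;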

Related

Difference between Numeric and Decimal in SQL

What's the difference between numeric[(p[,s])] and decimal[(p[,s])] as SQL datatype?
NUMERIC(p, s) takes two arguments: precision (p) and scale (s). The NUMERIC data type enforces exactly the precision and scale that you have specified.
On the other hand, DECIMAL(p, s) takes the same two arguments, but with the DECIMAL data type the implementation is allowed to use a precision greater than the one you supplied, so this data type can give you a little more flexibility.
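In SQL Server the two are functionally equivalent, as the documentation quoted above states. A small side-by-side sketch (table and values are illustrative):
CREATE TABLE #cmp (a NUMERIC(5, 2), b DECIMAL(5, 2));
INSERT INTO #cmp VALUES (123.45, 123.45);
SELECT a, b FROM #cmp;                    -- both columns return 123.45
DROP TABLE #cmp;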

Unable to understand precision and scale of numeric datatype in DBMS

If we specify a column as decimal(5,2), that means the precision is 5 and the scale is 2.
If I understand it correctly, precision is the maximum number of digits in a number, and scale is the maximum number of digits that can appear to the right of the decimal point.
So by this logic, if I try to insert 100.999 it should fail, since the scale of that number is 3, which is not allowed. But when I use online editors such as:
https://www.tutorialspoint.com/execute_sql_online.php
https://www.mycompiler.io/new/sql
and running following queries:
CREATE TABLE nd (nv decimal(5, 2));
INSERT INTO nd VALUES(100.999);
INSERT INTO nd VALUES(1001.2);
INSERT INTO nd VALUES(10011.2);
INSERT INTO nd VALUES(100111.299999);
Select * from nd;
This gives me output as:
100.999
1001.2
10011.2
100111.299999
Can anyone explain why this is the case?
The only database that would not complain and run your code is SQLite, because in SQLite there is no DECIMAL data type.
So you may think that by defining a column as decimal(5, 2) you are defining a numeric column with precision 5 and scale 2, but all of that is simply ignored.
For SQLite you are just defining a column with NUMERIC affinity and nothing more.
The proper data type that you should have used is REAL and there is no way to define precision and scale (unless you use more complex constraints).
You can find all about SQLite's data types and affinities here: Datatypes In SQLite Version 3
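A short sketch of that behavior (SQLite assumed; typeof() reports the storage class actually used for the value):
CREATE TABLE nd (nv decimal(5, 2));
INSERT INTO nd VALUES (100.999);
SELECT nv, typeof(nv) FROM nd;            -- 100.999 | real  (the declared precision and scale are ignored)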
Precision is the number of digits in a number. Scale is the number of digits to the right of the decimal point in a number. For example, the number 123.45 has a precision of 5 and a scale of 2.
Those online SQL executors probably use SQLite, which ignores DECIMAL and its declared scale and precision, as @forpas said.
If you try to execute the queries in a database such as PostgreSQL, you'll get the error you expected: a numeric field overflow (values whose integer part exceeds the declared precision are rejected, while excess fractional digits are simply rounded to the declared scale).
Try executing your queries in such an engine; hopefully that will make clear what is happening.
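A sketch of the PostgreSQL behavior (illustrative; the exact error text may differ between versions):
CREATE TABLE nd (nv decimal(5, 2));
INSERT INTO nd VALUES (100.999);          -- accepted: the excess scale is rounded, stored as 101.00
INSERT INTO nd VALUES (1001.2);           -- ERROR: numeric field overflow
INSERT INTO nd VALUES (10011.2);          -- ERROR: numeric field overflow
SELECT * FROM nd;                         -- 101.00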

Float type storing values in format "2.46237846387469E+15"

I have a table ProductAmount with columns
Id [BIGINT]
Amount [FLOAT]
Now, when I pass a value from my form to the table, it gets stored in the format 2.46237846387469E+15, whereas the actual value was 2462378463874687. Any ideas why this value is being converted and how to stop it?
It is not being converted. That is what the floating point representation is. What you are seeing is the scientific/exponential format.
I am guessing that you don't want to store the data that way. You can alter the column to use a fixed format representation:
alter table ProductAmount alter column Amount decimal(20, 0);
This assumes that you do not want any decimal places. You can read more about decimal formats in the documentation.
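A small sketch of the effect (SQL Server assumed; the table and value are illustrative):
CREATE TABLE ProductAmount (Id BIGINT, Amount FLOAT);
INSERT INTO ProductAmount VALUES (1, 2462378463874687);
SELECT Amount FROM ProductAmount;         -- displayed as 2.46237846387469E+15
ALTER TABLE ProductAmount ALTER COLUMN Amount DECIMAL(20, 0);
SELECT Amount FROM ProductAmount;         -- displayed as 2462378463874687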
I would strongly discourage you from using float unless:
You have a real floating point number (say an expected value from a statistical calculation).
You have a wide range of values (say, 0.00000001 to 1,000,000,000,000,000).
You only need a fixed number of digits of precision over a wide range of magnitudes.
Floating point numbers are generally not needed for general-purpose and business applications.
The value gets stored in a binary format, because this is what you specified by requesting FLOAT as the data type for the column.
The value that you store in the field is represented exactly, because 64-bit FLOAT uses 52 bits to represent the mantissa*. Even though you see 2.46237846387469E+15 when selecting the value back, it's only the presentation that is slightly off: the actual value stored in the database matches the data that you inserted.
But I want to store 2462378463874687 as a value in my db
You are already doing that. This is the exact value stored in the field. You just cannot see it, because the query tool in SQL Server Management Studio formats it using scientific notation. When you do any computations on the value, or read it back into a double field in your program, you will get 2462378463874687 back.
If you would like to see the exact number in your select query in SQL Management Studio, use CONVERT:
CONVERT (VARCHAR(50), float_field, 128) -- See note below
Note 1: 128 is a deprecated format. It will work with SQL Server 2008, which is one of the tags of your question, but in SQL Server 2016 and above you need to use 3 instead.
Note 2: Since the name of the column is Amount, good chances are that you are looking for a different data type. Look into decimal data types, which provide a much better fit for representing monetary amounts.
* 2462378463874687 is right on the border for exact representation, because it uses all 52 bits of mantissa.
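A brief sketch showing that the stored value is exact (SQL Server assumed; the CAST to DECIMAL is used here only to force a non-scientific rendering):
DECLARE @amount FLOAT = 2462378463874687;
SELECT @amount;                           -- shown as 2.46237846387469E+15
SELECT CAST(@amount AS DECIMAL(20, 0));   -- shown as 2462378463874687, no digits lost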

float or double precision

When I tell PostgreSQL to create a column as float, the result I get is always "double precision".
Is it the same?
As Damien quoted from the documentation:
PostgreSQL also supports the SQL-standard notations float and float(p) for specifying inexact numeric types.
Here, p specifies the minimum acceptable precision in binary digits.
PostgreSQL accepts float(1) to float(24) as selecting the real type,
while float(25) to float(53) select double precision.
Values of p outside the allowed range draw an error.
float with no precision specified is taken to mean double precision.
PostgreSQL, like other databases, supports the SQL standard by supplying an appropriate data type when a certain SQL standard type is requested. Since real or double precision fit the bill here, they are taken instead of creating new redundant types.
The disadvantage is that the data type of a column may read differently from what you requested, but as long as it handles your data the way it should, is that really a problem?
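A quick sketch of that mapping (PostgreSQL assumed; table and column names are illustrative):
CREATE TABLE float_demo (a float(24), b float(25), c float);
SELECT column_name, data_type
FROM information_schema.columns
WHERE table_name = 'float_demo';
-- a | real
-- b | double precision
-- c | double precision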

Float type in MySQL

I have a MySQL table with a column of type float(10, 6).
If I insert 30.064742 into the column, the value stored in the database is 30.064741.
Why?
Floating-point numbers imply a certain amount of imprecision. Use a DECIMAL column if you need to be certain to retain every digit.
It's a general problem of rounding numbers to a precision that can be stored in the database. Floats round to multiples of powers of two. If you want something that is easier to think about, you can use the DECIMAL type, which rounds to multiples of powers of ten.
More details in the documentation for numeric types:
When such a column is assigned a value with more digits following the decimal point than are allowed by the specified scale, the value is converted to that scale. (The precise behavior is operating system-specific, but generally the effect is truncation to the allowable number of digits.)
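A small sketch of the difference (MySQL assumed; the FLOAT(M,D) syntax is deprecated in recent MySQL versions but still accepted):
CREATE TABLE amounts (f FLOAT(10, 6), d DECIMAL(10, 6));
INSERT INTO amounts VALUES (30.064742, 30.064742);
SELECT f, d FROM amounts;
-- f: 30.064741 (approximately; the nearest representable binary float)
-- d: 30.064742 (stored exactly as given)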