SQL Server 2005 Data Types

What is the difference between real, float, decimal and money, and most importantly, when would I use each? As I understand it, real and float are approximate types, meaning they don't store the exact value. Why would you ever want that?
Thanks

The real and float numeric types are useful for handling the very wide range of values encountered with physical dimensions or mathematical results.
The loss of precision they incur, for example when adding values of very different magnitudes such as 0.00002468 + 1.23E9 (i.e. 1,230,000,000), is typically acceptable for practical uses. This is a small price to pay for the relatively compact storage requirements of these floating point types.
The decimal and money types do not cover such a broad range (though they cover ranges beyond most typical accounting applications), and do not exhibit this lossy rounding behavior.
See the SQL Server documentation for details. The following table gives indicative precision, range and storage requirements for the various types.
Type         Max value                    Precision(*)   Storage
money        +/-922,337,203,685,477.58    4              8 bytes
smallmoney   +/-214,748.36                4              4 bytes
decimal      varies (as defined)          varies         5 to 17 bytes
real         +/-3.4 * 10^38               7 digits       4 bytes
float(53)    +/-1.7 * 10^308              15 digits      8 bytes (float can also be declared with lower precision; float(24) is the same as real)
(*) precision : For the "exact" types, this is the number of digits after the decimal point. For the "lossy" reals and floats, this is the number of significant digits.
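To make the lossy behavior concrete, here is a small sketch in T-SQL using the example above (the exact display may vary by client):

select cast(1.23e9 as real)  + cast(0.00002468 as real)  as sum_real,   -- 1.23E+09: the small addend is lost entirely
       cast(1.23e9 as float) + cast(0.00002468 as float) as sum_float;  -- ~1230000000.0000247: float's 15 digits preserve it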

Money is an exact data type: it stores every value in its range exactly, at a fixed granularity of four decimal places. You would generally use it when you want to store monetary values without the rounding errors that IEEE 754 floating point introduces. Decimal is similarly an exact data type that is not lossy up to a number of decimal places you specify. Real is equivalent to float(24).
To be clear, precision loss can still occur with division, but the other basic mathematical operations do not incur precision loss for the money and decimal types.
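For example, a quick sketch of the division caveat in T-SQL:

select cast(1 as money) / 3 as money_third;  -- 0.3333: the repeating remainder is cut off at money's four decimal places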
See here for an explanation of the various Transact-SQL data types.

Related

Why when converting SQL Real to Numeric does the scale slightly increase?

I'm storing a value (0.15) as a Real datatype in a Quantity field in SQL.
Just playing around, when I cast as numeric, there are some very slight changes to scale.
I'm unsure why this occurs, and why these particular numbers?
select CAST(Quantity AS numeric(18,18)) -- Quantity being 0.15
returns
0.150000005960464480
Real and float are approximate numeric types, not exact ones. If you need exact values, use DECIMAL.
The benefit of the approximate types is that they can store very large numbers in fewer storage bytes.
https://learn.microsoft.com/en-us/sql/t-sql/data-types/float-and-real-transact-sql?view=sql-server-2017
PS: Numeric and decimal are synonymous.
PS2: See Eric Postpischil's clarifying comment below:
"Float and real represent a number as a significand multiplied by a power of two. decimal represents a number as a significand multiplied by a power of ten. Both means of representation are incapable of representing all real numbers, and both means of representation are subject to rounding errors. As I wrote, dividing 1 by 3 in a decimal format will have a rounding error"

Proper Data Type in SQL Server to store Scientific Notation value? (Ex. 10^3)

Say I have test results values for a lab procedure that come in as values like 10^3. What would be the best way to store this in SQL Server? I would think that since this is numerical data it would be improper to just store it as string text and then program around calculating the data value from the string.
If you want to use your data in numeric calculations, it is probably best to represent it using one of SQL Server's native numeric data types. Since you show scientific notation, you will likely want either REAL or FLOAT.
REAL gives you roughly 7 decimal digits of precision and FLOAT about 15 (at least as they are normally used). You can actually specify reduced precision for FLOAT, but in practice most people just use REAL in that case. REAL takes 4 bytes of storage; FLOAT requires 8.
The other numeric types are for fixed decimal point arithmetic.
Numbers in scientific notation like this have three pieces of information:
The significand
The precision of the significand
The exponent of 10
Presuming we want to keep all this information as exact as possible, it may be best to store these in three non-floating point columns (floating-point values are inexact):
DECIMAL significand
INT precision (# of decimal places)
INT exponent
The downside to the approach of separating these parts out, of course, is that you'll have to put the values back together when doing calculations -- but by doing that you'll know the correct number of significant figures for the result. Storing these three parts will also take up 25 bytes per value (17 for the DECIMAL, and 4 each for the two INTs), which may be a concern if you're storing a very large quantity of values.
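A minimal sketch of such a table (the names and the DECIMAL precision here are illustrative assumptions):

CREATE TABLE LabResult (
    significand DECIMAL(18,9) NOT NULL,  -- the digits of the measurement
    sig_figs    INT NOT NULL,            -- how many of those digits are significant
    exponent    INT NOT NULL             -- the power of ten
);

-- 1.03 * 10^3, known to 3 significant figures:
INSERT INTO LabResult (significand, sig_figs, exponent) VALUES (1.03, 3, 3);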
Update per explanatory comments:
Given that your goal is to store an exponent from 1-8, you really only need to store the exponent, since you know the base is always 10. Therefore, if your value is always going to be a whole number, you can just use a single INT column; if it will have decimal places, you can use a FLOAT or REAL per Gary Walker's answer, or use a DECIMAL to store a precise decimal to a specified number of places.
If you specify a DECIMAL, you can provide two arguments in the column type; the first is the total number of digits to be stored, while the second is the number of digits to the right of the decimal point. So if your values are going to be accurate to the tenths place, you might create a column of DECIMAL(2,1). SQL Server MSDN documentation: DECIMAL and NUMERIC types
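For instance, a column accurate to the tenths place might be declared like this (the table and column names are hypothetical):

CREATE TABLE TestResult (result_value DECIMAL(2,1));  -- holds -9.9 through 9.9, exact to one decimal place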

Which datatype should be used for currency?

Seems like the money type is discouraged, as described here.
My application needs to store currency; which datatype should I use? Numeric, money or float?
Your source is in no way official. It dates to 2011 and I don't even recognize the authors. If the money type were officially "discouraged", PostgreSQL would say so in the manual, which it doesn't.
For a more official source, read this thread in pgsql-general (from just this week!), with statements from core developers including D'Arcy J.M. Cain (original author of the money type) and Tom Lane.
Related answer (and comments!) about improvements in recent releases:
Jasper Report: unable to get value for field 'x' of class 'org.postgresql.util.PGmoney'
Basically, money has its (very limited) uses. The Postgres Wiki suggests largely avoiding it, except for those narrowly defined cases. The advantage over numeric is performance.
decimal is just an alias for numeric in Postgres, and is widely used for monetary data, being an "arbitrary precision" type. The manual:
The type numeric can store numbers with a very large number of digits. It is especially recommended for storing monetary amounts and other quantities where exactness is required.
Personally, I like to store currency as an integer representing Cents if fractional Cents never occur (basically where money makes sense). That's more efficient than any other of the mentioned options.
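A minimal sketch of the integer-Cents approach (hypothetical table, PostgreSQL syntax):

CREATE TABLE payment (
    id           bigserial PRIMARY KEY,
    amount_cents bigint NOT NULL          -- $12.34 is stored as 1234
);

SELECT amount_cents / 100.0 AS dollars FROM payment;  -- convert only for display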
Use numeric with a forced precision of 2 decimal places. Never use float or float-like datatypes to represent currency, because if you do, people are going to be unhappy when the financial report's bottom-line figure is off by plus or minus a few dollars.
The money type is just left in for historical reasons as far as I can tell.
Take this as an example: 1 Iranian Rial equals 0.000030 United States Dollars. If you use fewer than 5 fractional digits then 1 IRR will be rounded to 0 USD after conversion. I know we're splitting rials here, but I think that when dealing with money you can never be too safe.
Your choices are:
bigint : store the amount in cents. This is what EFTPOS transactions use.
decimal(12,2) : store the amount with exactly two decimal places. This what most general ledger software uses.
float : terrible idea - inadequate accuracy. This is what naive developers use.
Option 2 is the most common and easiest to work with. Make the precision (12 in my example, meaning 12 digits in all) as large or small as works best for you.
Note that if you are aggregating multiple transactions that were the result of a calculation (e.g. involving an exchange rate) into a single value that has business meaning, the precision should be higher to provide an accurate macro value; consider using something like decimal(18, 8) so the sum is accurate and the individual values can be rounded to cent precision for display.
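For instance, a sketch of that approach (table and column names are assumptions):

CREATE TABLE txn (amount decimal(18,8) NOT NULL);            -- converted amounts kept at full scale
SELECT CAST(SUM(amount) AS decimal(12,2)) AS total FROM txn; -- aggregate first, round to cents last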
Use a 64-bit integer stored as bigint
Store amounts in the smallest currency unit (cents), or use a larger multiplier to create bigger integers if cents are not granular enough. I recommend something like micro-dollars, where dollars are divided by 1 million.
For example: $5,123.56 can be stored as 5123560000 microdollars.
Simple to use and compatible with every language.
Enough precision to handle fractions of a cent.
Works for very small per-unit pricing (like ad impressions or API charges).
Smaller data size for storage than strings or numerics.
Easy to maintain accuracy through calculations and apply rounding at the final output.
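A sketch of the idea (hypothetical table; assuming 1 dollar = 1,000,000 microdollars as above):

CREATE TABLE price (amount_micro bigint NOT NULL);
INSERT INTO price VALUES (5123560000);                  -- $5,123.56
SELECT amount_micro / 1000000.0 AS dollars FROM price;  -- 5123.560000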
I keep all of my monetary fields as:
numeric(15,6)
It seems excessive to have that many decimal places, but if there's even the slightest chance you will have to deal with multiple currencies, you'll need that much precision for converting. No matter what I present to a user, I always store in US Dollars. That way I can readily convert to any other currency, given the conversion rate for the day involved.
If you never do anything but one currency, the worst thing here is that you wasted a bit of space to store some zeroes.
Use BigInt to store currency as a positive integer representing the monetary value in the smallest currency unit (e.g., 100 cents to store $1.00, or 100 to store ¥100, the Japanese yen being a zero-decimal currency). This is what Stripe does, one of the most important financial service companies for global ecommerce.
Source: see "Zero-decimal currencies" at https://stripe.com/docs/currencies
This is not a direct answer, but an example of why float is not the best data type for currency.
Because of the way floating point is represented internally, it is more susceptible to round off errors.
In our own decimal system, you'll get round-off errors whenever you divide by anything whose prime factors include something other than 2 or 5, the prime factors of 10. In binary, the only usable prime factor is 2, so even "clean" decimals, such as 0.2 (1/5), are at risk.
You can see this if you try the following:
select
    0.1::float + 0.2::float     as floats,   -- 0.30000000000000004
    0.1::numeric + 0.2::numeric as numerics  -- 0.3
;
That’s the sort of thing that drives auditors round the bend.
My personal recommendation is decimal, with the precision chosen according to your needs. Decimal with scale = 0 can be an option if you want to store an integer number of currency minor units (e.g. cents) and you have trouble handling decimals in your programming language.
To find out the needed precision you need to consider the following:
The types of currencies you support (they can have different numbers of decimals). Cryptocurrencies have up to 18 decimals (ETH). The number of decimals can change over time due to inflation.
Storing prices of small units of goods (perhaps as a result of conversion from another currency), or keeping accumulators (e.g. accumulating a 10% fee from 1-cent transactions until the sum reaches a whole cent), can require more decimals than the currency itself defines.
Storing an integer number of minimal units can lead to the need to rescale all stored values in the future if the precision has to change. If you use decimals, that is much easier.
Note that you also need to find the corresponding data type in the programming language you use.
More details and caveats in the article.

Difference between numeric, float and decimal in SQL Server

What are the differences between numeric, float and decimal datatypes and which should be used in which situations?
For any kind of financial transaction (e.g. for salary field), which one is preferred and why?
use the float or real data types only if the precision provided by decimal (up to 38 digits) is insufficient.
Approximate numeric data types (see table 3.3) do not store the exact values specified for many numbers; they store an extremely close approximation of the value. (Technet)
Avoid using float or real columns in WHERE clause search conditions, especially the = and <> operators. It is best to limit float and real columns to > or < comparisons. (Technet)
so generally, choosing decimal as your data type is the best bet if
your number can fit in it (decimal's maximum precision is 38 digits)
the smaller storage space (and possibly better calculation speed) of float is not important to you
exact numeric behavior is required, such as in financial applications, in operations involving rounding, or in equality checks (Technet)
Exact Numeric Data Types decimal and numeric - MSDN
numeric = decimal (5 to 17 bytes)
will map to Decimal in .NET
both have (18, 0) as the default (precision, scale) in SQL Server
scale = maximum number of decimal digits that can be stored to the right of the decimal point.
money (8 bytes) and smallmoney (4 bytes) are also exact data types, map to Decimal in .NET, and have 4 decimal places (MSDN)
Approximate Numeric Data Types float and real - MSDN
real (4 byte)
will map to Single in .NET
The ISO synonym for real is float(24)
float (8 byte)
will map to Double in .NET
All exact numeric types always produce the same result, regardless of which kind of processor architecture is being used or the magnitude of the numbers
The parameter supplied to the float data type defines the number of bits that are used to store the mantissa of the floating point number.
Approximate numeric data types usually use less storage and can be faster (up to 20x), but you should also consider how they convert in .NET
What is the difference between Decimal, Float and Double in C#
Decimal vs Double Speed
SQL Server - .NET Data Type Mappings (From MSDN)
Main source: MCTS Self-Paced Training Kit (Exam 70-433): Microsoft® SQL Server® 2008 Database Development - Chapter 3 - Tables, Data Types, and Declarative Data Integrity, Lesson 1 - Choosing Data Types (Guidelines), Page 93
Guidelines from MSDN: Using decimal, float, and real Data
The default maximum precision of numeric and decimal data types is 38. In Transact-SQL, numeric is functionally equivalent to the decimal data type. Use the decimal data type to store numbers with decimals when the data values must be stored exactly as specified.

The behavior of float and real follows the IEEE 754 specification on approximate numeric data types. Because of the approximate nature of the float and real data types, do not use these data types when exact numeric behavior is required, such as in financial applications, in operations involving rounding, or in equality checks. Instead, use the integer, decimal, money, or smallmoney data types. Avoid using float or real columns in WHERE clause search conditions, especially the = and <> operators. It is best to limit float and real columns to > or < comparisons.
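A small demonstration of why equality checks on float are risky (a T-SQL sketch):

DECLARE @f float, @d decimal(9,1);
SET @f = 0.1;
SET @d = 0.1;
SELECT CASE WHEN @f * 3 = 0.3 THEN 'equal' ELSE 'not equal' END AS float_check,   -- 'not equal': 0.1 * 3 is 0.30000000000000004 in binary floating point
       CASE WHEN @d * 3 = 0.3 THEN 'equal' ELSE 'not equal' END AS decimal_check; -- 'equal': decimal arithmetic is exact here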
They Differ in Data Type Precedence
Decimal and Numeric are the same functionally but there is still data type precedence, which can be crucial in some cases.
SELECT SQL_VARIANT_PROPERTY(CAST(1 AS NUMERIC) + CAST(1 AS DECIMAL),'basetype')
The resulting data type is numeric because it takes data type precedence.
Exhaustive list of data types by precedence:
Reference link
Not a complete answer, but a useful link:
"I frequently do calculations against decimal values. In some cases casting decimal values to float ASAP, prior to any calculations, yields better accuracy. "
http://sqlblog.com/blogs/alexander_kuznetsov/archive/2008/12/20/for-better-precision-cast-decimals-before-calculations.aspx
The case for Decimal
What is the underlying need?
It arises from the fact that computers ultimately represent numbers internally in binary format. That leads, inevitably, to rounding errors.
Consider this:
0.1 (decimal, or "base 10") = 0.00011001100110011... (binary, or "base 2")
The ellipsis [...] means the expansion goes on forever: there is an infinitely repeating pattern ('0011').
So, at some point the computer has to round that value. This leads to accumulated errors deriving from the repeated use of inexactly stored numbers.
Say that you want to store financial amounts (which are numbers that may have a fractional part). First of all, you cannot use integers obviously (integers don't have a fractional part).
From a purely mathematical point of view, the natural tendency would be to use a float. But in a computer, a float carries only a limited number of significant digits: its "mantissa" (or significand) is of fixed size. That leads to rounding errors.
To overcome this, computers offer specific datatypes that avoid binary rounding errors for decimal numbers. These are the data types that should absolutely be used to represent financial amounts. They typically go by the name of Decimal (that's the case in C#, for example), or DECIMAL in most databases.
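The accumulation of binary rounding error is easy to demonstrate (a T-SQL sketch):

DECLARE @sum float, @i int;
SET @sum = 0;
SET @i = 1;
WHILE @i <= 10
BEGIN
    SET @sum = @sum + 0.1;  -- 0.1 has no exact binary representation
    SET @i = @i + 1;
END;
SELECT @sum;                -- 0.9999999999999999, not 1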
Float is an approximate-number data type, which means that not all values in its range can be represented exactly.
Decimal/numeric is a fixed-precision data type, which means that all values in its range can be represented exactly, with a given precision and scale. You can use decimal for storing money.
Converting from decimal or numeric to float can cause some loss of precision. For the decimal and numeric data types, SQL Server considers each specific combination of precision and scale to be a different data type: DECIMAL(4,2) and DECIMAL(6,4), for example, are different data types. This means that 11.22 and 11.2222 have different types, while this is not the case for float: as FLOAT(6), 11.22 and 11.2222 are the same data type.
You can also use the money data type for storing money. It is a native data type with 4 decimal digits of precision. Most experts prefer this data type for storing money.
Decimal has a fixed precision while float has variable precision.
EDIT (failed to read the entire question):
Float(53) (the default for float) is a double-precision (64-bit) floating point number in SQL Server; real is a synonym for float(24), a single-precision (32-bit) floating point number. Double precision is a good combination of precision and simplicity for a lot of calculations. You can create a very high precision number with decimal (up to 136 bits of storage), but you also have to be careful that you define your precision and scale correctly so that it can contain all your intermediate calculations to the necessary number of digits.
Although the question didn't include the MONEY data type, some people coming across this thread might be tempted to use MONEY for financial calculations.
Be wary of the MONEY data type: it is of limited precision.
There is a lot of good information about it in the answers to this Stack Overflow question:
Should you choose the MONEY or DECIMAL(x,y) datatypes in SQL Server?

Storing money in a decimal column - what precision and scale?

I'm using a decimal column to store money values on a database, and today I was wondering what precision and scale to use.
Since char columns of a fixed width are supposedly more efficient, I was thinking the same could be true for decimal columns. Is it?
And what precision and scale should I use? I was thinking precision 24/8. Is that overkill, not enough or ok?
This is what I've decided to do:
Store the conversion rates (when applicable) in the transaction table itself, as a float
Store the currency in the account table
The transaction amount will be a DECIMAL(19,4)
All calculations using a conversion rate will be handled by my application so I keep control of rounding issues
I don't think a float for the conversion rate is an issue, since it's mostly for reference, and I'll be casting it to a decimal anyway.
Thank you all for your valuable input.
If you are looking for a one-size-fits-all, I'd suggest DECIMAL(19, 4) is a popular choice (a quick Google bears this out). I think this originates from the old VBA/Access/Jet Currency data type, being the first fixed point decimal type in the language; Decimal only came in 'version 1.0' style (i.e. not fully implemented) in VB6/VBA6/Jet 4.0.
The rule of thumb for storage of fixed point decimal values is to store at least one more decimal place than you actually require to allow for rounding. One of the reasons for mapping the old Currency type in the front end to DECIMAL(19, 4) type in the back end was that Currency exhibited bankers' rounding by nature, whereas DECIMAL(p, s) rounded by truncation.
An extra decimal place in storage for DECIMAL allows a custom rounding algorithm to be implemented rather than taking the vendor's default (and bankers' rounding is alarming, to say the least, for a designer expecting all values ending in .5 to round away from zero).
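For illustration, SQL Server's default ROUND goes half away from zero, which is exactly what the extra stored decimal place lets you override (a T-SQL sketch):

SELECT ROUND(CAST(2.345 AS decimal(19,4)), 2);  -- 2.3500: half away from zero; bankers' rounding would give 2.34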
Yes, DECIMAL(24, 8) sounds like overkill to me. Most currencies are quoted to four or five decimal places. I know of situations where a decimal scale of 8 (or more) is required but this is where a 'normal' monetary amount (say four decimal places) has been pro rata'd, implying the decimal precision should be reduced accordingly (also consider a floating point type in such circumstances). And no one has that much money nowadays to require a decimal precision of 24 :)
However, rather than a one-size-fits-all approach, some research may be in order. Ask your designer or domain expert about accounting rules which may be applicable: GAAP, EU, etc. I vaguely recall some EU intra-state transfers with explicit rules for rounding to five decimal places, therefore using DECIMAL(p, 6) for storage. Accountants generally seem to favour four decimal places.
PS Avoid SQL Server's MONEY data type because it has serious issues with accuracy when rounding, among other considerations such as portability etc. See Aaron Bertrand's blog.
Microsoft and language designers chose banker's rounding because hardware designers chose it [citation?]. It is enshrined in the Institute of Electrical and Electronics Engineers (IEEE) standards, for example. And hardware designers chose it because mathematicians prefer it. See Wikipedia; to paraphrase: The 1906 edition of Probability and Theory of Errors called this 'the computer's rule' ("computers" meaning humans who perform computations).
We recently implemented a system that needs to handle values in multiple currencies and convert between them, and figured out a few things the hard way.
NEVER USE FLOATING POINT NUMBERS FOR MONEY
Floating point arithmetic introduces inaccuracies that may not be noticed until they've screwed something up. All values should be stored as either integers or fixed-decimal types, and if you choose to use a fixed-decimal type then make sure you understand exactly what that type does under the hood (ie, does it internally use an integer or floating point type).
When you do need to do calculations or conversions:
Convert values to floating point
Calculate new value
Round the number and convert it back to an integer
When converting a floating point number back to an integer in step 3, don't just cast it - use a math function to round it first. This will usually be round, though in special cases it could be floor or ceil. Know the difference and choose carefully.
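For instance, here is how the three choices differ when converting to integer cents (a SQL sketch; the principle is the same in any language):

SELECT ROUND(12.345 * 100, 0) AS rounded,  -- 1235: nearest cent (half away from zero)
       FLOOR(12.349 * 100)    AS floored,  -- 1234: always down
       CEILING(12.341 * 100)  AS ceiled;   -- 1235: always up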
Store the type of a number alongside the value
This may not be as important for you if you're only handling one currency, but it was important for us in handling multiple currencies. We used the 3-character code for a currency, such as USD, GBP, JPY, EUR, etc.
Depending on the situation, it may also be helpful to store:
Whether the number is before or after tax (and what the tax rate was)
Whether the number is the result of a conversion (and what it was converted from)
Know the accuracy bounds of the numbers you're dealing with
For real-world values, you want to be as precise as the smallest unit of the currency. This means you have no values smaller than a cent, a penny, a yen, a fen, etc. Don't store values with higher accuracy than that for no reason.
Internally, you may choose to deal with smaller values, in which case that's a different type of currency value. Make sure your code knows which is which and doesn't get them mixed up. Avoid using floating point values even here.
Adding all those rules together, we decided on the following rules. In running code, currencies are stored using an integer for the smallest unit.
class Currency {
    String code;        // e.g. "USD"
    int value;          // amount in the smallest unit, e.g. 2500 = $25.00
    boolean converted;  // true if this value is the result of a currency conversion
}

class Price {
    Currency grossValue;
    Currency netValue;
    Tax taxRate;
}
In the database, the values are stored as a string in the following format:
USD:2500
That stores the value of $25.00. We were able to do that only because the code that deals with currencies doesn't need to be within the database layer itself, so all values can be converted into memory first. Other situations will no doubt lend themselves to other solutions.
And in case I didn't make it clear earlier, don't use float!
When handling money in MySQL, use DECIMAL(13,2) if you know the precision of your money values, or use DOUBLE if you just want a quick, good-enough approximate value.
So if your application needs to handle money values up to a trillion dollars (or euros or pounds), then this should work:
DECIMAL(13, 2)
Or, if you need to comply with GAAP then use:
DECIMAL(13, 4)
The money datatype on SQL Server has four digits after the decimal.
From SQL Server 2000 Books Online:
Monetary data represents positive or negative amounts of money. In Microsoft® SQL Server™ 2000, monetary data is stored using the money and smallmoney data types. Monetary data can be stored to an accuracy of four decimal places. Use the money data type to store values in the range from -922,337,203,685,477.5808 through +922,337,203,685,477.5807 (requires 8 bytes to store a value). Use the smallmoney data type to store values in the range from -214,748.3648 through 214,748.3647 (requires 4 bytes to store a value). If a greater number of decimal places are required, use the decimal data type instead.
4 decimal places would give you the accuracy to store the world's smallest currency sub-units. You can take it down further if you need micropayment (nanopayment?!) accuracy.
I too prefer DECIMAL to DBMS-specific money types; you're safer keeping that kind of logic in the application, IMO. Another approach along the same lines is simply to use a [long] integer, with formatting into ¤unit.subunit (¤ = currency symbol) for human readability done at the application level.
If you were using IBM Informix Dynamic Server, you would have a MONEY type which is a minor variant on the DECIMAL or NUMERIC type. It is always a fixed-point type (whereas DECIMAL can be a floating-point type). You can specify a precision from 1 to 32 and a scale from 0 to 32 (defaulting to a precision of 16 and a scale of 2). So, depending on what you need to store, you might use DECIMAL(16,2), still big enough to hold the US Federal Deficit to the nearest cent, or you might use a smaller range, or more decimal places.
Sometimes you will need to go to less than a cent, and there are international currencies that use very large denominations. For example, you might charge your customers 0.088 cents per transaction. In my Oracle database the columns are defined as NUMBER(20,4).
If you're going to be doing any sort of arithmetic operations in the DB (multiplying out billing rates and so on), you'll probably want a lot more precision than people here are suggesting, for the same reasons that you'd never want to use anything less than a double-precision floating point value in application code.
I would think that for the most part, your or your client's requirements should dictate what precision and scale to use. For example, for the e-commerce website I am working on that deals with money in GBP only, I have been required to keep it to DECIMAL(6, 2).
A late answer here, but I've used
DECIMAL(13,2)
which I'm right in thinking should allow up to 99,999,999,999.99.