What's the best data type to store data of a bank? [duplicate] - sql

This question already has answers here:
Which datatype should be used for currency?
Currency modeling in database
I was wondering: what's the best data type for transaction amounts (in euros) at a bank?
Example:
Person "A" sends 120.59 euros to "B".
What's the best data type to store this value (120.59) in a database?
The transaction amount is positive, has 2 digits after the decimal point, and will be used in calculations afterwards (sums of amounts, averages, variance, standard deviation, etc.).
Is it okay to use REAL? Is DECIMAL OK?

You do not want to store monetary amounts using floating point numbers.
You want to store them using fixed point -- that is, numeric/decimal. For your example, it would be something like numeric(10, 2). However, you might want fractions of a cent for some reason, so a larger precision and scale such as numeric(20, 4) is a good idea.
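As an illustration (the table and column names are made up, and the syntax is PostgreSQL-flavoured, though numeric/decimal works much the same in most engines), a column for the example above might look like this:

create table transfer (
    id         bigserial primary key,
    amount_eur numeric(20, 4) not null check (amount_eur > 0)  -- fixed point, never float
);

-- aggregates stay exact with numeric
select sum(amount_eur) as total_eur,
       avg(amount_eur) as avg_eur
from transfer;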

Related

More precision needed from SQL Server Money data type

I am working on an old SQL Server database which stores numeric values in the MONEY datatype. This has been fine for years, but now, for some currency rate conversions, we need up to 10 decimal places. We are exploring a possible conversion from the MONEY datatype to DECIMAL.
I see that a MONEY field is equivalent to DECIMAL(19, 4). Would it be safe to just use a broader DECIMAL(25, 10) to accommodate 10 decimal digits?
And if we want to ensure more room for future requests, what would be the limit beyond which values would no longer fit the Classic ASP application built on the database (which uses the Double datatype)?
Thanks
You should define the type as decimal(25,10), which could hold the federal deficit to an accuracy of 10 decimal digits (25 digits in all, with ten after the decimal place).
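The change itself could look something like this sketch (the table and column names are hypothetical; restate the column's existing NULLability, since ALTER COLUMN resets it). Because MONEY is equivalent to DECIMAL(19, 4), which also allows 15 digits before the decimal point, existing values convert without overflow - you only gain the extra fractional digits:

alter table dbo.CurrencyRates
    alter column ConversionRate decimal(25, 10) not null;  -- adjust NOT NULL/NULL to match the existing column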

Decimal(19,4) or Decimal(19,2) - which should I use?

This sounds like a silly question, but I've noticed that in a lot of table designs for e-commerce related projects I almost always see decimal(19, 4) being used for currency.
Why the 4 on scale? Why not 2?
Perhaps I'm missing a potential calculation issue down the road?
First off - you are receiving some incorrect advice from other answers. Observe the following (64-bit OS on 64-bit architecture):
declare @op1 decimal(18,2) = 0.01
       ,@op2 decimal(18,2) = 0.01;
select result = @op1 * @op2;
result
---------.---------.---------.---------
0.0001
(1 row(s) affected)
Note the number of dashes underneath the title - 39 in all. (I changed every tenth to a period to aid counting.) That is precisely enough for 38 digits (the maximum allowable, and the default on a 64-bit CPU) plus a decimal point on display. Although both operands were declared as decimal(18,2), the calculation was performed, and reported, in the decimal(38,4) datatype. (I am running SQL 2012 on a 64-bit machine - some details may vary based on machine architecture and OS.)
Therefore, it is clear that no precision is being lost. On the contrary, only overflow can occur, not precision loss. This is a direct consequence of all calculations on decimal operands being performed as integer arithmetic. You will occasionally see artifacts of this in IntelliSense when the type of an intermediate field of decimal type is reported as int instead.
Consider the example above. The two operands are both of type decimal(18,2) and are stored as being integers of value 1, with a scale of 2. When multiplied the product is still 1, but the scale is evaluated by adding the scales, to create a result of integer value 1 and scale 4, which is a value of 0.0001 and of type decimal(18,4), stored as an integer with value 1 and scale 4.
Read that last paragraph again.
Rinse and repeat once more.
In practice, on a 64-bit machine and OS, this is actually stored and carried forward as type decimal(38,4), because the calculations are being done on a CPU where the extra bits are free.
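If you want to see how your own instance types such an intermediate result, SQL_VARIANT_PROPERTY will report it (a quick check; the column aliases are made up):

declare @op1 decimal(18,2) = 0.01, @op2 decimal(18,2) = 0.01;
select sql_variant_property(@op1 * @op2, 'BaseType')  as base_type,
       sql_variant_property(@op1 * @op2, 'Precision') as result_precision,
       sql_variant_property(@op1 * @op2, 'Scale')     as result_scale;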
To return to your question - All major currencies of the world (that I am aware of) only require 2 decimal places, but there are a handful where 4 are required, and there are financial transactions such as currency transactions and bond sales where 4 decimal places are mandated by law. When devising the money datatype Microsoft appears to have opted for the maximum scale that might be required rather than the normal scale required. Given how few transactions, and corporations, actually require precision greater than 19 digits this seems eminently sensible.
If you have:
A high expectation of only dealing with major currencies (which at the current time only require 2 digits of scale); and
No expectation of dealing with transactions that are mandated by law to require 4 digits of scale
then you would be safe to use type decimal with scale 2 (such as decimal(19,2) or decimal(18,2) or decimal(38,2)) instead of money. This will ease some of your conversions and, given the assumptions above, have no cost. A typical case where these assumptions are met is a GL or subledger accounting system tracking transactions to the penny. However, a stock- or bond-trading system would not meet these assumptions because 4 digits of scale are mandated by law in those cases.
A way to distinguish the two cases is whether transactions are reported in cents or percents, which only require 2 digits of scale, or in basis points which require 4 digits of scale.
If you are at all unsure as to which case applies to your programming circumstance, consult your Controller or Director of Finance as to the legal and GAAP requirements for your application. (S)he will be able to give you definitive advice.
In SQL, the 19 is the precision (the total number of digits) and the 4 is the scale (the number of digits after the decimal point).
If you only allow 2 decimals and you store, say, the result of a calculation that produces more than 2 decimals, there is "no way" to keep those additional decimals.
Some currencies operate with more than 2 decimals.
Use the data type decimal, not money.
Things like gas prices would use the extra "scale" positions. You've seen gas at $1.959 per gallon, right?
When you use decimal, it's up to you how you define it according to your business requirements.
But when you use the Money data type in SQL Server, by default it stores values with 4 decimal places.
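A trivial way to see that default (SQL Server; the cast just makes the stored scale visible):

declare @gas money = 1.959;
select cast(@gas as decimal(19,4)) as stored_value;  -- 1.9590: money always carries four decimal places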
Although the OP's question is about the scale, let's dwell on why 19 is a popular precision for decimal on SQL Server.
According to this document, this is how much storage a decimal uses:
Precision    Storage bytes
1 - 9        5
10 - 19      9
20 - 28      13
29 - 38      17
So a precision of 1 uses as much space as a precision of 9, and 10 uses as much as 19.
In a real-world scenario 9 can easily be too little for money, especially if you opt for a scale of 4, leaving you between -99999.9999 and 99999.9999.
But 19 is plenty for any imaginable case, which is why SQL Server's money data type uses it.
One can use 28 or 38 to prevent errors at conversions in case some erroneous data hides in the database.
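To confirm those sizes on a given instance, DATALENGTH reports the bytes a value occupies (a small check; the output should match the table above, but verify on your version):

declare @p9  decimal(9, 2)  = 12345.67,
        @p19 decimal(19, 4) = 12345.6789,
        @p38 decimal(38, 4) = 12345.6789;
select datalength(@p9)  as bytes_precision_9,   -- expect 5
       datalength(@p19) as bytes_precision_19,  -- expect 9
       datalength(@p38) as bytes_precision_38;  -- expect 17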

Decimal vs Money datatype

The Money datatype has been used for columns like VendorHours, OverTime and Expenses in one of our tables.
I am designing another table which is related to this table and will have similar columns, but I am thinking about using the decimal datatype instead of money, as decimal is more precise.
Later I found out that the money datatype is being used because it takes 8 bytes, whereas decimal would use 10 for 10-19 precision.
Columns like Visit Hour and OverTime would fit into a decimal with precision 9 and would take only 5 bytes. So is it a good idea to use decimal(9,2) instead of money?
I will be doing a lot of calculation on those fields inside the stored procedure for reports.
It depends... If you're not going to do anything to the values other than add or subtract - no problems. If you're going to do anything else (like multiply or divide), you'll need at least 4 decimal places to do the calculations without losing overall accuracy.
MONEY more accurately represents the real world situation, where each value is rounded to the nearest cent as calculated, then the average is again rounded. In a long calculation chain, the difference can wind up being considerably larger than one cent ... but due to the business-logic constraint that all intermediate values contain non-fractional cents, the MONEY result will be accurate, whereas the "more precise" DECIMAL will not.
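A small comparison (SQL Server) makes the difference concrete - how the two types carry the result of a division:

declare @m money         = 100,
        @d decimal(19,4) = 100;
select @m / 3 as money_result,    -- money keeps only 4 decimal places: 33.3333
       @d / 3 as decimal_result;  -- decimal division carries a much wider intermediate scale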

Which datatype should be used for currency?

Seems like Money type is discouraged as described here.
My application needs to store currency, which datatype shall I be using? Numeric, Money or FLOAT?
Your source is in no way official. It dates to 2011 and I don't even recognize the authors. If the money type was officially "discouraged" PostgreSQL would say so in the manual - which it doesn't.
For a more official source, read this thread in pgsql-general (from just this week!), with statements from core developers including D'Arcy J.M. Cain (original author of the money type) and Tom Lane:
Related answer (and comments!) about improvements in recent releases:
Jasper Report: unable to get value for field 'x' of class 'org.postgresql.util.PGmoney'
Basically, money has its (very limited) uses. The Postgres Wiki suggests to largely avoid it, except for those narrowly defined cases. The advantage over numeric is performance.
decimal is just an alias for numeric in Postgres, and widely used for monetary data, being an "arbitrary precision" type. The manual:
The type numeric can store numbers with a very large number of digits.
It is especially recommended for storing monetary amounts and other
quantities where exactness is required.
Personally, I like to store currency as integer representing Cents if fractional Cents never occur (basically where money makes sense). That's more efficient than any other of the mentioned options.
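A minimal sketch of that integer-cents approach in Postgres (table and column names are made up):

create table payment (
    id           bigserial primary key,
    amount_cents bigint not null       -- whole cents only; fractional cents cannot occur
);

-- convert to the main unit only when displaying
select id,
       amount_cents / 100.0 as amount
from payment;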
Numeric with a forced precision of 2 decimal places. Never use float or a float-like datatype to represent currency, because if you do, people are going to be unhappy when the financial report's bottom-line figure is off by plus or minus a few dollars.
The money type is just left in for historical reasons as far as I can tell.
Take this as an example: 1 Iranian Rial equals 0.000030 United States Dollars. If you use fewer than 5 fractional digits then 1 IRR will be rounded to 0 USD after conversion. I know we're splitting rials here, but I think that when dealing with money you can never be too safe.
Your choices are:
bigint : store the amount in cents. This is what EFTPOS transactions use.
decimal(12,2) : store the amount with exactly two decimal places. This is what most general ledger software uses.
float : terrible idea - inadequate accuracy. This is what naive developers use.
Option 2 is the most common and easiest to work with. Make the precision (12 in my example, meaning 12 digits in all) as large or small as works best for you.
Note that if you are aggregating multiple transactions that were the result of a calculation (e.g. involving an exchange rate) into a single value that has business meaning, the precision should be higher to provide an accurate macro value; consider using something like decimal(18, 8) so the sum is accurate and the individual values can be rounded to cent precision for display.
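A sketch of that idea (PostgreSQL-flavoured; the table and column names are illustrative) - store the calculated amounts at the higher scale and round only when reporting:

create table trade (
    id           bigserial primary key,
    amount_local numeric(12, 2) not null,   -- what was actually charged, to the cent
    fx_rate      numeric(18, 8) not null,
    amount_usd   numeric(18, 8) not null    -- amount_local * fx_rate, kept at full scale
);

-- aggregate at full scale, round once at the end for reporting
select round(sum(amount_usd), 2) as total_usd
from trade;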
Use a 64-bit integer stored as bigint
Store amounts in the smallest currency unit (cents), or use a bigger multiplier to create larger integers if cents are not granular enough. I recommend something like microdollars, where a dollar is divided into 1,000,000 units.
For example: $5,123.56 can be stored as 5123560000 microdollars (a conversion sketch follows this list).
Simple to use and compatible with every language.
Enough precision to handle fractions of a cent.
Works for very small per-unit pricing (like ad impressions or API charges).
Smaller data size for storage than strings or numerics.
Easy to maintain accuracy through calculations and apply rounding at the final output.
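The microdollar conversion itself is plain integer arithmetic; a sketch (using the 1,000,000 multiplier from the example above, rounding only when converting back for display):

-- store: dollars -> microdollars
select cast(round(5123.56 * 1000000, 0) as bigint) as amount_micro;   -- 5123560000

-- display: microdollars -> dollars, rounding to cents only at the end
select round(5123560000 / 1000000.0, 2) as amount_display;            -- 5123.56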
I keep all of my monetary fields as:
numeric(15,6)
It seems excessive to have that many decimal places, but if there's even the slightest chance you will have to deal with multiple currencies, you'll need that much precision for converting. No matter what I'm presenting to a user, I always store in US dollars. That way I can readily convert to any other currency, given the conversion rate for the day involved.
If you never do anything but one currency, the worst thing here is that you wasted a bit of space to store some zeroes.
Use BigInt to store currency as a positive integer representing the monetary value in the smallest currency unit (e.g., 100 cents to store $1.00, or 100 to store ¥100, the Japanese yen being a zero-decimal currency). This is what Stripe does - one of the most important financial service companies for global ecommerce.
Source: see "Zero-decimal currencies" at https://stripe.com/docs/currencies
This is not a direct answer, but an example of why float is not the best data type for currency.
Because of the way floating point is represented internally, it is more susceptible to round off errors.
In our own decimal system, you'll get round-off errors whenever you divide by anything other than 2 or 5, which are the factors of 10. In binary it's only 2 and not 5, so even "clean" decimals, such as 0.2 (1/5), are at risk.
You can see this if you try the following:
select
    0.1::float + 0.2::float     as floats,   -- 0.30000000000000004
    0.1::numeric + 0.2::numeric as numerics  -- 0.3
;
That’s the sort of thing that drives auditors round the bend.
My personal recommendation is decimal, with the precision set according to your needs. Decimal with a scale of 0 (no fractional digits) can be an option if you want to store the integer number of currency minor units (e.g. cents) and you have trouble handling decimals in your programming language.
To find out the precision you need, consider the following:
Types of currencies you support (they can have different numbers of decimals). Cryptocurrencies have up to 18 decimals (ETH). The number of decimals can change over time due to inflation.
Storing prices of small units of goods (perhaps as a result of conversion from another currency), or having accumulators (accumulating a 10% fee from 1-cent transactions until the sum reaches 1 cent), can require more decimals than are defined for a currency.
Storing an integer number of minimal units can lead to the need to rescale values in the future if you need to change the precision. If you use decimals, it's much easier.
Note that you also need to find the corresponding data type in the programming language you use.
More details and caveats in the article.

Storing and computing with real numbers up to an arbitrary precision in vb.net [duplicate]

This question already has answers here:
.NET Framework Library for arbitrary digit precision
How can I store a real number, e.g. root 2 or one third, up to an arbitrary precision (the precision I need is infinite precision) in vb.net?
I would like to be able to store real numbers and perform operations on them (i.e. root 2 times root 2) without losing any accuracy - i.e. storing 1/3 would return the value 1/3 if I needed to retrieve this value.
I was thinking of using a fractal encoding but I am unsure as to the best way to do this.
Storage capacity is not an issue, I just need the real numbers to be 100% accurate.
Will that be a single real number there, or does it need to be an arbitrary number of (almost) arbitrary figures? (Sorry for the "answer" - for some reason I can't add comments now...)