Keep original amount format after ratio multiplication - sql

I have a problem with format conversion in SQL after a ratio multiplication.
I have several amounts in this form:
00000000008846
00000000002258
00000000000003
00000000006088
00000000696714
00000000636292
00000000043845
For each amount, I have a ratio currency in this form:
000000875000
000001030000
000001512000
000001480000
000000980000
000001950000
What I want to do is, after multiplying each amount by its currency ratio, to get back the original amount format.
Currently, I get numbers like this after multiplying:
9531000000
8846000000
2258000000
3000000
6088000000
738516840000
655380760000
What I want is a 14-digit number like the original amount:
00000000009531
00000000008846
00000000002258
00000000000003
00000000006088
00000000738517
00000000655381
You can see the result is rounded for the last 2.
How can this be done?

You'll have to convert your results back to the VARCHAR2 data type, either by
to_char(:your_result_value,'fm00000000000000')
or by
lpad(:your_result_value, 14, '0')
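Putting it together, a minimal sketch, assuming the products carry six implied decimal places (which matches your sample data, where 738516840000 / 1000000 = 738516.84 rounds to 738517):
SELECT TO_CHAR(ROUND(738516840000 / 1000000), 'fm00000000000000') FROM dual;
-- 00000000738517
or, equivalently:
SELECT LPAD(ROUND(738516840000 / 1000000), 14, '0') FROM dual;
-- 00000000738517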
Enjoy.

Related

how to extract thousands and units from a number

I have a number, for example 1550, and I need to get thousands and units from this number.
To extract thousand I am using the following formula:
Select TRUNC(1550/1000) FROM DUAL
I will get 1, now I need to get 550 from the above number.
What will be the best formula to get the remaining units from the amount? Please also consider that the amount can be 550, 12501, 50, etc.
Thanks.
You are looking for the mod() function:
select mod(1550, 1000)
The specific operation is called the modulus, and it calculates the remainder. This can be a little tricky if you have negative numbers. Do you want mod(-1, 5) to be -1 or 4?
Depending on what you want, you can also calculate the value directly:
select 1550 - floor(1550/1000)*1000
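Combining both parts, a minimal sketch in Oracle syntax (as used in the question):
SELECT TRUNC(1550/1000) AS thousands,
       MOD(1550, 1000)  AS units
FROM DUAL;
-- thousands = 1, units = 550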

How does the Average function work in relational databases?

I'm trying to find the geometric average of values from a table with millions of rows. For those that don't know, to find the geometric average you multiply all the values together and then take the Nth root, where N is the number of values.
You probably already see the problem: the running product will quickly exceed the system maximum. I found a great solution that uses the natural log.
http://timothychenallen.blogspot.com/2006/03/sql-calculating-geometric-mean-geomean.html
However, that got me to wonder: wouldn't the same problem apply to the arithmetic mean? If you have N records, and N is very large, the running sum can also exceed the system maximum.
So how do RDBMSs calculate averages during queries?
I don't know an exact implementation for arithmetic mean in an RDBMS, nor did you specify one in your original question. But the RDBMS does not need to sum a million rows in a column in order to obtain the arithmetic mean. Consider the following summation:
sum = (x1 + x2 + x3 + ... + x1000000)
Then the mean can be written as
mean = sum / N = (x1 + x2 + x3 + ... + x1000000) / N, for N = 1,000,000
But this expression can be broken up into pieces like this:
mean = [(x1 + x2 + x3) / N ] + [(x4 + x5 + x6) / N] + ...
In other words, the RDBMS can simply scan down the million rows in a column and find the mean section by section, without running the risk of an overflow. And since each number in the column is presumably within range for the type storing it, there is no chance of the mean value itself overflowing.
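As a hedged illustration of that section-by-section idea (the table, values, and chunk column are invented for the example; SQL Server/PostgreSQL-style syntax):
WITH t(x, chunk) AS (
  SELECT 10.0, 1 UNION ALL SELECT 20.0, 1 UNION ALL
  SELECT 30.0, 2 UNION ALL SELECT 40.0, 2
)
SELECT SUM(partial) AS mean   -- (10+20)/4 + (30+40)/4 = 7.5 + 17.5 = 25
FROM (
  SELECT SUM(x) / (SELECT COUNT(*) FROM t) AS partial
  FROM t
  GROUP BY chunk
) AS chunks;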
Most databases don't support a product() function the way they support an average.
However, you can do what you want with logs. The product (simplified) is like:
select exp(sum(ln(x))) as product
The average would be:
select power(exp(sum(ln(x))), 1.0 / count(*)) as geoaverage
or
select EXP(AVG(LN(x))) as geoaverage
The LN() function might be LOG() on some platforms...
These are schematics. The functions for exp() and ln() and power() vary, depending on the database. Plus, if you have to take into account zero or negative numbers, the logic is more complicated.
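As a hedged sanity check (PostgreSQL-style syntax; Oracle would need FROM dual in the inline view, and SQL Server spells LN as LOG), the geometric mean of 2 and 8 should be 4:
SELECT EXP(AVG(LN(x))) AS geoaverage
FROM (SELECT 2 AS x UNION ALL SELECT 8 AS x) t;
-- 4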
Very easy to check. For example, SQL Server 2008.
DECLARE @T TABLE(i int);
INSERT INTO @T(i) VALUES
(2147483647),
(2147483647);
SELECT AVG(i) FROM @T;
result
(2 row(s) affected)
Msg 8115, Level 16, State 2, Line 7
Arithmetic overflow error converting expression to data type int.
There is no magic. Column type is int, server adds values together using internal variable of the same type int and intermediary result exceeds range for int.
You can run a similar check for any other DBMS that you use. Different engines may behave differently, but I would expect all of them to stick to the original type of the column. For example, averaging two int values 100 and 101 may result in 100 or 101 (still int), but never 100.5.
For SQL Server this behavior is documented. I would expect something similar for all other engines:
AVG () computes the average of a set of values by dividing the sum of
those values by the count of nonnull values. If the sum exceeds the
maximum value for the data type of the return value an error will be
returned.
So, you have to be careful when calculating simple average as well, not just product.
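A common mitigation, sketched here as an assumption rather than taken from the quoted documentation, is to widen the type yourself so the internal sum runs in the larger type (reusing the @T table variable from above):
SELECT AVG(CAST(i AS bigint)) FROM @T;
-- 2147483647, no overflow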
Here is an extract from the SQL-92 Standard:
6) Let DT be the data type of the < value expression >.
9) If SUM or AVG is specified, then:
a) DT shall not be character string, bit string, or datetime.
b) If SUM is specified and DT is exact numeric with scale S, then the
data type of the result is exact numeric with implementation-defined
precision and scale S.
c) If AVG is specified and DT is exact numeric, then the data type of
the result is exact numeric with implementation-defined precision not
less than the precision of DT and implementation-defined scale not
less than the scale of DT.
d) If DT is approximate numeric, then the data type of the result is
approximate numeric with implementation-defined precision not less
than the precision of DT.
e) If DT is interval, then the data type of the result is interval
with the same precision as DT.
So, DBMS can convert int to larger type when calculating AVG, but it has to be an exact numeric type, not floating-point. In any case, depending on the values you can still get arithmetic overflow.
Some DBMS — specifically, the Informix DBMS — convert from an INT type to a floating point type to do the calculation:
SQL[2148]: create table t(i int);
SQL[2149]: insert into t values(214748347);
SQL[2150]: insert into t values(214748347);
SQL[2151]: insert into t values(214748347);
SQL[2152]: select avg(i) from t;
214748347.0
SQL[2153]: types on;
SQL[2154]: select i from t;
INTEGER
214748347
214748347
214748347
SQL[2155]: select avg(i) from t;
DECIMAL(32)
214748347.0
SQL[2156]:
Similarly with other types. This can still end with an overflow under some circumstances; you then get a runtime error. However, it is rather seldom that you exceed the precision — it typically takes a very large number of rows for the sum to exceed the limits, even if you're counting the US deficit over the next century in atto-Zimbabwean dollars circa 2009.

wrong calculation of power query - 101

My data in the table is:
2.8202148
1.810577904
4.399182566
78.56037454
4.62585733
3.905997503
3.877795355
A normal sum gives the result as 99.9999999954482,
but in a pivot table (Power Query) it gives 101! Somehow...
Any suggestions?
Thanks,
If I round those numbers to their nearest integer and then sum them, I get 101 (3 + 2 + 4 + 79 + 5 + 4 + 4 = 101). You've probably set something up to use integers instead of floating point numbers. Change to floating point and you should be fine.

Oracle TO_CHAR Format Mask for displaying both integral numbers and floating point numbers

I'm trying to find the correct Oracle format mask to display numbers on an Apex page in a report in a certain way.
Most of the times these numbers are integers but sometimes these numbers can be floating point numbers.
Let's say I have the following three queries:
Query 1
SELECT TO_CHAR(1, '<Format Mask>', 'NLS_NUMERIC_CHARACTERS = '',.''') FROM DUAL;
Query 2
SELECT TO_CHAR(0.1, '<Format Mask>', 'NLS_NUMERIC_CHARACTERS = '',.''') FROM DUAL;
Query 3
SELECT TO_CHAR(0.01, '<Format Mask>', 'NLS_NUMERIC_CHARACTERS = '',.''') FROM DUAL;
Now I want to use one single format mask which will give me the following results:
Result 1
1
Result 2
0,1
Result 3
0,01
Can anyone provide me with the correct format mask to achieve this?
I've tried a format mask like FM990D999 but it leaves me with a comma trailing the 1 in Query 1.
There are ways to alter your column value in the query while still retaining (some of) the functionality in the report(s). However, with multiple such columns and multiple reports you might find there is a lot of overhead for little gain.
Look at this post on the OTN forums: order by date in IR
The issue is much the same: the data in the column represents a date but is actually not a date. This post contains a solution to use in apex < 4.2.
From 4.2 onwards you have a better option called the HTML expression.
Again, linked from OTN: Re: Report formatting/sorting issue
Quoted from linked post, user fac586
Include both variance and abs(variance) in the query:
SELECT region,
       estimate,
       actual,
       (estimate - actual) AS variance,
       ABS(estimate - actual) AS abs_variance,
       (CASE
          WHEN (estimate - actual) >= 0 THEN 'green'
          WHEN (estimate - actual) < 0 THEN 'red'
          ELSE NULL
        END) AS variance_color
FROM expenses
And the HTML Expression for the "variance" column is:
<span style="color: #VARIANCE_COLOR#; font-weight: bold;">#ABS_VARIANCE#</span>
Hide the #VARIANCE_COLOR# and #ABS_VARIANCE# columns.
#ABS_VARIANCE# is the value shown in the column, but the sort is
performed in the underlying SQL using the original variance value.
This is much like Alex suggested but is a bit more work: formatting in the source, adding an HTML expression, hiding the other columns.
I suppose it depends on how far you want to drive it. Why not just apply the format to the column through its attributes?
Also be aware it is possible to use string substitution syntax in those fields. You could have a couple application items containing format masks, and then reference the correct mask in the format mask field.
Eg:
Application item AI_FORMAT_MASK1 has a value FM9990D00.
In the format mask field you can then use &AI_FORMAT_MASK1.
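If a single expression in the source query is acceptable, a hedged alternative (not from the linked threads, just a sketch) is to keep the FM mask and trim the trailing decimal separator it leaves behind:
SELECT RTRIM(TO_CHAR(1, 'FM990D999', 'NLS_NUMERIC_CHARACTERS = '',.'''), ',') FROM DUAL;   -- 1
SELECT RTRIM(TO_CHAR(0.1, 'FM990D999', 'NLS_NUMERIC_CHARACTERS = '',.'''), ',') FROM DUAL; -- 0,1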

What should be the best way to store a percent value in SQL-Server?

I want to store a value that represents a percentage in SQL Server; what should be the preferred data type?
You should use decimal(p,s) in 99.9% of cases.
Percent is only a presentation concept: 10% is still 0.1.
Simply choose precision and scale for the highest expected values/desired decimal places when expressed as real numbers. You can have p = s for values < 100% and simply decide based on decimal places.
However, if you do need to store 100% or 1, then you'll need p = s+1.
This then allows up to 9.xxxxxx or 9xx.xxxx%, so I'd add a check constraint to cap it at 1 if that is all I need.
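A minimal sketch of that approach (table, column, and constraint names are made up):
CREATE TABLE rates
( rate decimal(5,4) NOT NULL              -- p = s + 1, so 1.0000 (100%) fits
 ,CONSTRAINT chk_rate CHECK (rate BETWEEN 0 AND 1)
);
INSERT INTO rates(rate) VALUES (0.1);     -- 10%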
decimal(p, s) and numeric(p, s)
p (precision):
The maximum total number of decimal digits that will be stored (both to the left and to the right of the decimal point)
s (scale):
The number of decimal digits that will be stored to the right of the decimal point (-> s defines the number of decimal places)
0 <= s <= p.
p ... total number of digits
s ... number of digits to the right of the decimal point
p-s ... number of digits to the left of the decimal point
Example:
CREATE TABLE dbo.MyTable
( MyDecimalColumn decimal(5,2)
,MyNumericColumn numeric(10,5)
);
INSERT INTO dbo.MyTable VALUES (123, 12345.12);
SELECT MyDecimalColumn, MyNumericColumn FROM dbo.MyTable;
Result:
MyDecimalColumn: 123.00 (p=5, s=2)
MyNumericColumn: 12345.12000 (p=10, s=5)
link: msdn.microsoft.com
I agree, DECIMAL is where you should store this type of number. But to make the decision easier, store it as a percentage of 1, not as a percentage of 100. That way you can store exactly the number of decimal places you need regardless of the "whole" number. So if you want 6 decimal places, use DECIMAL(9, 8) and for 23.346435%, you store 0.23346435. Changing it to 23.346435% is a display problem, not a storage problem, and most presentation languages / report writers etc. are capable of changing the display for you.
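For example, a hedged display-side sketch (SQL Server 2012+ FORMAT; the culture argument is an assumption to pin the output):
SELECT FORMAT(0.23346435, 'P6', 'en-US');
-- 23.346435%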
I think decimal(p, s) should be used, where s represents the number of decimal places you need for the percentage.
The 'p' only needs to be one digit larger than 's': each digit to the left of the decimal point represents one hundred percent, so with p = s + 1 you can store values up to just under 1000%.
but SQL doesn't allow the 'p' to be smaller than the s.
Examples:
28.2656579879% should be decimal(13, 12) and should be stored as 0.282656579879
128.2656579879% should be decimal(13, 12) and should be stored as 1.282656579879
28% should be stored in decimal(3,2) as 0.28
128% should be stored in decimal(3,2) as 1.28
Note: if you know that you're not going to reach 100% (i.e. your value will always be less than 100%), then use decimal(s, s); if it can reach or exceed 100%, use decimal(s + 1, s).
And so on
The datatype of the column should be decimal.