I'm doing some integration between SAP and a custom application. The calculated values for 'Total' and 'Price after Discount' are off about half of the time.
Given 'Quantity', 'Unit Price', and 'Discount %', how are these calculations formulated?
This is the formula I've been using to get 'Total' so far and it doesn't always match up:
Let R = Round to two decimal places away from zero
Total = R(Quantity * Unit Price) - R(R(Quantity * Unit Price) * R(Discount / 100))
But as you can see, plugging in the first line of my data (Quantity: 11217, Unit Price: 0.3, Discount: 65) gives 1177.78, which doesn't match SAP's value.
What tweak should I make to my formula so that it consistently matches SAP's 'Total' and 'Price after Discount'?
After a lot of trial and error, I finally figured out SAP's algorithm for 'Total':
Let R = Round to two decimal places away from zero
Total = R(Quantity * Unit Price * (100 - Discount%) / 100)
I had to take out the rounding at each intermediate step and avoid computing 'Original Price - Discount'; SAP evaluates the discounted total as one expression and rounds only once at the end.
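Here's a minimal Python sketch of both formulas for illustration (Decimal with ROUND_HALF_UP stands in for "round half away from zero", which Python's built-in round does not do); it reproduces the mismatch for the example above:

from decimal import Decimal, ROUND_HALF_UP

def r(x):
    # round to two decimal places, ties away from zero
    return x.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

qty, unit_price, discount = Decimal(11217), Decimal("0.3"), Decimal(65)

# my original formula: round at every step, then subtract the discount amount
gross = r(qty * unit_price)
mine = gross - r(gross * r(discount / 100))

# SAP's algorithm: compute exactly, round once at the end
sap = r(qty * unit_price * (100 - discount) / 100)

print(mine, sap)  # 1177.78 1177.79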
I am allocating a single unit across multiple rows using a calculation and storing the results in a table. I then sum() the allocations, and the sums come out as numbers that are not whole. What is going on is that some of the allocations end up as repeating decimals, and the sum of those doesn't add back up to the whole number (à la 1/3 + 1/3 + 1/3 != 1).
I have tried casting the numbers to different formats; however, Athena keeps rounding the decimals at some arbitrary precision, which causes the problem.
I would like the sum of the allocations to equal the sum of the original units.
My database is AWS Athena, which I understand uses the Presto SQL language.
Example of my allocation:
case
    when count_of_visits = 1 then 1
    when count_of_visits = 2 then 0.5
    when count_of_visits >= 3 then
        case
            when visit_seq_number = min_visit_seq_number then 0.4
            when visit_seq_number = max_visit_seq_number then 0.4
            else 0.2 / (count_of_visits - 2)
        end
    else 0
end as u_shp_alloc_leads
In this allocation, the first and last visits get 40% each and all visits in between split the remaining 20%.
A unit allocated across 29 visits ends up dividing the 20% by 27, which is 0.00740 repeating. The table stores 0.007407407407407408, and when I sum the numbers the result is 1.0000000000000004. I would like the result to be 1.
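You can reproduce the effect outside Athena; a quick Python sketch of the 29-visit case shows the same kind of residual (the exact last digits depend on summation order):

allocs = [0.4, 0.4] + [0.2 / 27] * 27  # first, last, and 27 middle visits
print(0.2 / 27)     # 0.007407407407407408 -- the repeating decimal, truncated
print(sum(allocs))  # something like 1.0000000000000004, not exactly 1.0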
This is a limitation of databases, or computers in general: when you work with fractions like that, some rounding will always take place.
I would apply a reasonable degree of rounding to the n-th decimal on the sums you retrieve from your table; that will simply cut off these residual decimals at the end.
If that's not sufficient for you, one way to at least theoretically keep full precision is to store the numerator and denominator separately in two columns. Computing sum(numerator_column / denominator_column) would show the same rounding effects, so summing the numbers has to be a little more complicated, like this:
SELECT sum(CAST(numerator_sum AS double) / denominator) -- cast to avoid integer division
FROM (
    SELECT
        denominator,
        sum(numerator) AS numerator_sum
    FROM your_allocation_table
    GROUP BY denominator
)
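The same idea as a small Python sketch (with hypothetical rows, mirroring the GROUP BY trick): summing the integer numerators per denominator first keeps the arithmetic exact, so the allocations add back up to the original unit:

from fractions import Fraction
from collections import defaultdict

# (numerator, denominator) rows: 2/5 for first and last visit, 1/135 for each middle visit
rows = [(2, 5), (2, 5)] + [(1, 135)] * 27

sums = defaultdict(int)
for num, den in rows:
    sums[den] += num  # exact integer addition per denominator

total = sum(Fraction(n, d) for d, n in sums.items())
print(total)  # 1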
I was automating trading software using an AHK script. During price submission the app does not accept in-between values, say between 1.50 and 1.55; it only accepts multiples of 0.05 because its tick size is 0.05, so I have to do the price conversion in the AHK script.
You can just use
Round(N / 0.05) * 0.05
Example
Round(1.53 / 0.05) * 0.05 ; Returns 1.55
Explanation
I'm going to explain with 0.01 because it's easier to visualize, but it applies to any step. The number to round is 1.234; anything after the 3 needs to be discarded in the rounding process.
If we divide 1.234 by 0.01 (i.e. multiply it by 100), we get 123.4. Note how the decimal point is now after the 3, so we can just use Round.
After rounding, the number is 123. Now we just scale it back down by multiplying by 0.01, which gives 1.23.
Formatting
As 0x464e pointed out in the comments, due to how floating-point numbers work, the result may be imprecise. So if you're going to convert the values to strings, do so using Format("{:.2f}", number).
The 2 is the minimum number of decimal places needed to accurately represent any multiple of 0.05. Incidentally, it can be calculated with:
Ceil(Log(1 / 0.05)) ; Returns 2
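For illustration, here is the same divide-round-multiply trick in Python; note how the raw float result can be off in its last bits for some inputs, which is why the string-formatting step matters:

step = 0.05
print(round(1.53 / step) * step)            # 1.55
print(round(2.30 / step) * step)            # 2.3000000000000003 -- float artifact
print(f"{round(2.30 / step) * step:.2f}")   # 2.30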
The following AHK function Roundoff rounds a price to its nearest tick-size multiple and returns it.
Roundoff(price) ; round off to the nearest 5 paisa value
{
    price := Round(price, 2) ; rounded to 2 decimal places
    priceNoDecimal := Floor(price)
    decimalPart := price - priceNoDecimal
    decimalPart := Floor(Round(decimalPart * 100, 2)) ; paise as an integer (0-99)
    numerator := Floor(decimalPart / 5)
    remainder := Mod(decimalPart, 5)
    if (remainder > 2) ; 3 or 4 paise above a multiple of 5: round up
        retval := (numerator * 5) + 5
    else ; 0, 1 or 2 paise above: round down
        retval := (numerator * 5)
    retval := Round(priceNoDecimal + (retval / 100), 2)
    return retval
}
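For example, with the 0.05 tick size from the question, Roundoff(1.53) returns 1.55 and Roundoff(1.52) returns 1.50 (whether you see trailing zeros depends on AHK's float formatting).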
I have a column in a database table called CONDITION_PERCENT.
It has values like:
1
0.1
0.01
Which are meant to represent:
100%
10%
1%
I suspect that referring to these values as percentages might be incorrect. Calling them percentages certainly seems to be misleading my users (everyone expects the values to be fractions of 100, not fractions of 1).
What is the proper name for percentage values that are a fraction of 1?
From the examples you have given, the term you are looking for is "Decimal Fraction", i.e.
"a fraction whose denominator is some power of 10, usually indicated by a dot (decimal point or point) written before the numerator: as 0.4 = 4/10; 0.126 = 126/1000."
I am looking for a popularity algorithm that calculates based on 'views' and 'likes'.
It seems the answer is to use the Lower bound of Wilson score confidence interval for a Bernoulli parameter and the algorithm is provided here:
http://www.evanmiller.org/how-not-to-sort-by-average-rating.html
The algorithm is provided in several forms on that page - mathematical formula, Ruby and SQL.
I need an SQL version. Unfortunately, the SQL form given on that website differs from the other two: it calculates from both positive and negative votes, while the Ruby version needs only pos (the number of positive votes) and n (the total number of votes).
I am looking for an SQL statement (Postgres compatible) to calculate based on positive votes only, and I will count 'views' as my n total number of votes.
(I did think I could treat positive + negative as n in their SQL, but then I am puzzled by what to do with SQRT((positive * negative) / (positive + negative) + 0.9604))
THe "algorithm" is simply taking the lower bound of the confidence interval of a ratio.
If you have only positive votes, then just use the number of positive votes. The purpose of what you reference is to balance positive votes, negative votes, and total votes. You don't need any such balancing, because positive votes = total votes.
If you have the total number of votes and the positive votes, then you can use:
SELECT widget_id,
       ((positive + 1.9208) / (positive + negative) -
        1.96 * SQRT((positive::numeric * negative) / (positive + negative) + 0.9604) /
        (positive + negative)) / (1 + 3.8416 / (positive + negative))
       AS ci_lower_bound
FROM (SELECT w.*, (total - positive) AS negative
      FROM widgets w
     ) t -- Postgres requires an alias on a derived table
WHERE positive + negative > 0
ORDER BY ci_lower_bound DESC;
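If it helps to sanity-check the SQL, here is the same lower bound as a small Python sketch (following the Ruby version on Evan Miller's page, with pos = likes and n = views as you describe):

import math

def ci_lower_bound(pos, n, z=1.96):
    # lower bound of the Wilson score interval for a Bernoulli parameter
    if n == 0:
        return 0.0
    phat = pos / n
    return (phat + z * z / (2 * n)
            - z * math.sqrt((phat * (1 - phat) + z * z / (4 * n)) / n)) / (1 + z * z / n)

print(ci_lower_bound(30, 1000))  # e.g. 30 likes out of 1000 views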
By the way, I'm not sure that the Wilson correction gives any better results than a one-standard-deviation lower bound on the positive score:
SELECT widget_id, positive::numeric / total - sqrt(positive::numeric * negative / total) / total
I have a space ship and want to calculate how long it takes to turn 180 degrees. This is my current code to turn the ship:
.msngFacingDegrees = .msngFacingDegrees + .ROTATION_RATE * TV.TimeElapsed
My current .ROTATION_RATE is 0.15, but it will change.
I have tried:
Math.Ceiling(.ROTATION_RATE * TV.TimeElapsed / 180)
But I always get an answer of 1. Please help.
To explain why you get 1 all the time:
Math.Ceiling simply rounds up to the next integer, so the value you pass it must always be between 0 and 1.
Rearranging your expression gives TV.TimeElapsed = 180 / .ROTATION_RATE. With a .ROTATION_RATE of 0.15, TV.TimeElapsed needs to reach 1200 before your overall function returns > 1.
Is it possible that you're always looking at elapsed times less than this threshold?
Going further and suggesting what your expression should be is harder - it's not completely clear without more context.
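If the goal is just the duration of a 180-degree turn at a constant rate, the arithmetic is angle / rate; a minimal sketch (in Python for brevity, assuming TV.TimeElapsed and ROTATION_RATE use the same time unit):

ROTATION_RATE = 0.15  # degrees per unit of elapsed time

def time_to_turn(degrees, rate=ROTATION_RATE):
    # at a constant rotation rate, time = angle / rate
    return degrees / rate

print(time_to_turn(180))  # 1200.0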