I am programming in Solidity and trying to write a formula to calculate a compounding amount, I can write the formula in something like Excel as:
1 * (1 + 0.0025) ^ 2212
However, this obviously doesn't work in Solidity, so I tried:
uint rate = 25;
return 1 * (1 + (rate / 10000)) ** 2212;
Which still doesn't work; see the explanation in the edit below.
Anyone have some guidance on how I can get this working?
Thanks!
EDIT: by "doesn't work", I mean that Solidity does not support floating point numbers in any form, so 25 / 10000 returns 0 (not 0.0025), which then results in the whole equation returning 1 (the real answer is about 250).
My assumption is that the only way I can really do this is by re-working this equation into something that doesn't involve decimal places (or very large numbers, because raising to a power quickly gets out of hand).
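One common workaround, which is what Solidity fixed-point libraries such as ABDKMath64x64 or PRBMath do under the hood, is to scale everything by a large integer factor and use exponentiation by squaring so intermediate values stay bounded. Here is a minimal sketch of the idea in Python for readability; the names SCALE, mul_fixed and pow_fixed are illustrative, not a real library API, and the same loop translates directly to uint256 arithmetic:

SCALE = 10**18  # fixed-point scale: 1.0 is represented as 10**18

def mul_fixed(a: int, b: int) -> int:
    # multiply two fixed-point values and rescale the result
    return a * b // SCALE

def pow_fixed(base: int, exp: int) -> int:
    # exponentiation by squaring on fixed-point integers, so we never
    # compute a huge power directly and rescale after every multiply
    result = SCALE  # 1.0 in fixed-point
    while exp > 0:
        if exp & 1:
            result = mul_fixed(result, base)
        base = mul_fixed(base, base)
        exp >>= 1
    return result

rate = 25 * SCALE // 10000              # 0.0025 in fixed-point
amount = pow_fixed(SCALE + rate, 2212)  # (1 + 0.0025) ** 2212
print(amount / SCALE)                   # roughly 250, matching the Excel result

Each mul_fixed truncates, so a production library with proper rounding will be slightly more accurate, but the result stays within a tiny fraction of the exact value.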
println(log(it.toDouble(), 10.0).toInt()+1) // n1
println(log10(it.toDouble()).toInt() + 1) // n2
I had to count the "length" of a number in base n for needs unrelated to the question, and stumbled upon a bug (or rather, unexpected behavior): for it == 1000 these two functions give different results.
n1(1000) = 3,
n2(1000) = 4.
Checking values before conversion to int resulted in:
n1_double(1000) = 3.9999999999999996,
n2_double(1000) = 4.0
I understand that some floating point arithmetic magic is involved, but what is especially weird to me is that for 100, 10000 and other inputs that I checked, n1 == n2.
What is special about it == 1000? How do I ensure that log gives me the intended result (4, not 3.99..)? Right now I can't even figure out which cases I need to double-check, since it is not all powers of 10; it is 1000 (and probably some other numbers) specifically.
I looked into the implementations of log() and log10(); log is implemented as
if (base <= 0.0 || base == 1.0) return Double.NaN
return nativeMath.log(x) / nativeMath.log(base) //log() here is a natural logarithm
while log10 is implemented as
return nativeMath.log10(x)
I suspect the division in the first case is the reason for the error, but I can't figure out why it causes an error only in specific cases.
I also found this question:
Python math.log and math.log10 giving different results
But I already know that one is more precise than the other. However, there is no log10 analogue for an arbitrary base n, so I'm curious about the reason WHY it is specifically 1000 that goes wrong.
PS: I understand there are ways of calculating the length of a number without floating-point arithmetic and base-n logarithms, but at this point it is scientific curiosity.
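For what it's worth, the behavior is not Kotlin-specific. A small Python sketch of the same two counting approaches reproduces it, since math.log(x, base) also computes a quotient of natural logarithms internally:

import math

for n in (100, 1000, 10000):
    n1 = int(math.log(n, 10)) + 1  # change-of-base quotient, like Kotlin's log(x, 10.0)
    n2 = int(math.log10(n)) + 1    # dedicated base-10 logarithm
    print(n, n1, n2)               # 1000 prints "1000 3 4"; 100 and 10000 agree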
but I can't figure out why it causes an error only in specific cases.
return nativeMath.log(x) / nativeMath.log(base)
//log() here is a natural logarithm
Consider x = 1000 and nativeMath.log(x). The natural logarithm is not exactly representable. It is near
6.90775527898213_681... (Double answer)
6.90775527898213_705... (closer answer)
Consider base = 10 and nativeMath.log(base). The natural logarithm is not exactly representable. It is near
2.302585092994045_901... (Double)
2.302585092994045_684... (closer answer)
The only exactly correct nativeMath.log(x) for a finite x is when x == 1.0.
The quotient of the division of 6.90775527898213681... / 2.302585092994045901... is not exactly representable. It is near 2.9999999999999995559...
The conversion of the quotient to text is not exact.
So we have 4 computation errors, with the system giving us a close (rounded) result at each step.
Sometimes these rounding errors cancel out in a way we find acceptable and the value of "3.0" is reported. Sometimes not.
Performed with higher-precision math, it is easy to see that log(1000) came out less than the higher-precision answer and that log(10) came out more. These two round-off errors in opposite directions for a division compounded, leaving the quotient extra off (low): 1 ULP lower than hoped.
When log(x, 10) is computed for another power-of-10 x, and log(x) is slightly more than the higher-precision answer, I'd expect the quotient to result in a 1 ULP error less often. Perhaps it will be 50/50 over all powers of 10.
log10(x) is designed to compute the logarithm in a different fashion, exploiting that the base is 10.0 and certainly exact for powers-of-10.
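Both round-off directions can be inspected directly. A short Python illustration, using Decimal(float) to display the exact value stored in each double (the digits match the analysis above):

import math
from decimal import Decimal

print(Decimal(math.log(1000)))        # 6.90775527898213681...: rounded below the true value
print(Decimal(math.log(10)))          # 2.302585092994045901...: rounded above the true value
print(math.log(1000) / math.log(10))  # 2.9999999999999996: 1 ULP below 3.0
print(math.log10(1000))               # 3.0: exact, because log10 exploits the base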
I am looking for a way to map a minimum necessary duty cycle in an optimization model.
After several attempts, however, I have now reached the end of my knowledge and hope for some inspiration here.
The idea is that a binary variable mdl.ontime is set so that the sum of successive ontime values is greater than or equal to the minimum duty cycle:
def ontime(mdl, t):
    min_on_time = 3  # minimum on time in h
    if t < min_on_time:
        return mdl.ontime[t] == 0
    return sum(mdl.ontime[t - i] for i in range(min_on_time)) >= min_on_time
That works so far, as long as the variable mdl.ontime is not actually linked to anything else in the model.
Then I tried three different constraints to make that link; unfortunately, they all gave the same result: CPLEX only finds infeasible results.
The first variant was:
def flag(mdl, t):
    return mdl.ontime[t] + (mdl.production[t] >= 0.1) >= 2
So if mdl.ontime is 1 and mdl.production is greater than or equal to 0.1 (the assumption is just exact enough), the sum should be greater than or equal to 2: a logical AND written as an addition term.
The second attempt was quite similar to the first:
def flag(mdl, t):
    return mdl.ontime[t] >= (mdl.production[t] >= 0.1)
If mdl.ontime is 1, it should be greater than or equal to the result of comparing mdl.production with 0.1.
And the third, with a big-M variable:
def flag(mdl, t):
    bigM = 10**6
    return mdl.ontime[t] * bigM >= mdl.production[t]
bigM should be large enough in my case...
None of them work at all, and I have no idea why CPLEX reports that there is only an infeasible solution.
Basically the model runs if I don't consider the ontime-integration.
Do you guys have any more ideas how I could implement this?
Many greetings,
Mathias
It isn't really clear what the desired relationship is between your variables/constraints. That said, I don't think this is legal. I'm surprised that it isn't popping an error... and if it isn't popping an error, I'm pretty sure it isn't doing what you think:
def flag(mdl, t):
    return mdl.ontime[t] + (mdl.production[t] >= 0.1) >= 2
You are essentially burying an inferred binary variable in there with the test on mdl.production, which isn't going to work, I believe. You probably need to introduce another variable or such.
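For illustration, here is a minimal Pyomo sketch of the usual formulation: introduce the binary explicitly, tie it to production with a big-M constraint (an assumed upper bound max_prod plays the role of big-M), and enforce the minimum up-time against the switch-on event. The horizon and all names here are assumptions for the sketch, not taken from the original model:

import pyomo.environ as pyo

mdl = pyo.ConcreteModel()
mdl.T = pyo.RangeSet(0, 23)  # assumed hourly horizon
mdl.production = pyo.Var(mdl.T, within=pyo.NonNegativeReals)
mdl.ontime = pyo.Var(mdl.T, within=pyo.Binary)

max_prod = 100.0  # assumed upper bound on production (the big-M)

# production can only be nonzero while the unit is flagged on
def link_rule(m, t):
    return m.production[t] <= max_prod * m.ontime[t]
mdl.link = pyo.Constraint(mdl.T, rule=link_rule)

# if the unit switches on at t, it must stay on for min_on hours
min_on = 3
def min_up_rule(m, t):
    if t < 1 or t > max(m.T) - min_on + 1:
        return pyo.Constraint.Skip
    switch_on = m.ontime[t] - m.ontime[t - 1]
    return sum(m.ontime[t + i] for i in range(min_on)) >= min_on * switch_on
mdl.min_up = pyo.Constraint(mdl.T, rule=min_up_rule)

The link constraint forces ontime[t] to 1 whenever production[t] is positive, which is what the (mdl.production[t] >= 0.1) test was trying to express; a comparison between a variable and a number cannot be used as a 0/1 value inside another Pyomo expression.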
I am using Firebird 3.0.4 (both on Windows and Linux), and I have the following procedure that clearly demonstrates my problem with floating point numbers, and that also demonstrates a possible workaround:
create or alter procedure test_float returns (res double precision,
                                              res1 double precision,
                                              res2 double precision)
as
  declare variable z1 double precision;
  declare variable z2 double precision;
  declare variable z3 double precision;
begin
  z1 = 15;
  z2 = 1.1;
  z3 = 0.49;
  res = z1 * z2 * z3; /* one expects res to be 8.085, but internally, inside the
                         procedure, it is represented as 8.084999999999.
                         The procedure-internal representation is repaired when
                         res is sent to the output of the procedure, but the
                         procedure-internal representation (which is wrong)
                         impacts the further calculations */
  res1 = round(res, 2);
  res2 = round(round(res, 8), 2);
  suspend;
end
One can see the result of the procedure with:
select proc.res, proc.res1, proc.res2
from test_float proc
The result is
RES      RES1    RES2
8,085    8,08    8,09
But one would expect RES1 to be 8.09.
One can clearly see that the internal representation of res contains 8.0849999 (e.g. one can assign res to an exception message and then raise that exception); it is repaired during output, but it leads to failed calculations when such a variable is used in further calculations.
RES2 demonstrates the repair: I can always apply ROUND(..., 8) to fix the internal representation. I am ready to go with this solution, but my question is: is this an acceptable workaround (when the outer ROUND uses strictly fewer than 5 decimal places), or is there a better one?
All my tests pass with this workaround, but the feeling is bad.
Of course, I know the minimum that every programmer should know about floats (there is an article about that), and I know that one should not use double for business calculations.
This is an inherent problem with calculating with floating point numbers, and is not specific to Firebird. The problem is that the calculation of 15 * 1.1 * 0.49 using double precision numbers is not exactly 8.085. In fact, if you would do 8.085 - RES, you'd get a value that is (approximately) 1.776356839400251e-015 (although likely your client will just present it as 0.00000000).
You would get similar results in different languages. For example, in Java
DecimalFormat df = new DecimalFormat("#.00");
df.format(15 * 1.1 * 0.49);
will also produce 8.08 for exactly the same reason.
Also, if you would change the order of operations, you would get a different result. For example using 15 * 0.49 * 1.1 would produce 8.085 and round to 8.09, so the actual results would match your expectations.
Given that round itself also returns a double precision value, this isn't really a good way to handle it in your SQL code: because of how floating point numbers work, the value rounded to a higher number of decimals might still be slightly less than what you'd expect, so the double round may still fail for some numbers even if the presentation in your client 'looks' correct.
If you purely want this for presentation purposes, it might be better to do this in your frontend, but alternatively you could try tricks like adding a small value and casting to decimal, for example something like:
cast(RES + 1e-10 as decimal(18,2))
However, this still has rounding issues, because it is impossible to distinguish between values that genuinely are 8.08499999999 (and should be rounded down to 8.08) and values where the result of the calculation just happens to be 8.08499999999 in floating point while it would be 8.085 in exact numerics (and therefore needs to be rounded up to 8.09).
In a similar vein, you could try double casting to decimal (eg cast(cast(res as decimal(18,3)) as decimal(18,2))), or casting to decimal and then rounding (eg round(cast(res as decimal(18,3)), 2)). This would be a bit more consistent than double rounding, because the first cast converts to exact numerics, but again it has the same downside as mentioned above.
Although you don't want to hear this answer, if you want exact numeric semantics, you shouldn't be using floating point types.
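The effect is easy to reproduce in a few lines; for example, in Python (the printed values are what a typical IEEE-754 double platform produces):

from decimal import Decimal

x = 15 * 1.1 * 0.49
print(x)             # 8.084999999999999: slightly below 8.085
print(round(x, 2))   # 8.08

y = 15 * 0.49 * 1.1  # different evaluation order
print(y)             # 8.085: lands on the double nearest to 8.085
print(round(y, 2))   # 8.09

d = Decimal('15') * Decimal('1.1') * Decimal('0.49')
print(d)             # 8.085: exact numerics avoid the problem entirely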
I have a report that presents information, and I'm getting inconsistent results from what appears to be some issue with a SQL view, or possibly a SQL function nested within the view. I've tried finding a way to debug the SQL view; however, it looks like SSMS will only debug stored procedures, so I'm not really sure how to step through and see what is happening. It really has me stumped, and I can't help but wonder if it isn't a rounding issue.
GetItemAverageCost RETURNS DECIMAL(12,2), and the data type of sitli.QuantityIssuedAtStockUOM is System.Int64 / bigint (sidenote: I'm confused about why LINQPad shows two data types for that column; in the tree on the left, after expanding the sitli table and hovering over QuantityIssuedAtStockUOM, the balloon BigInt NOT NULL pops up, but when I Take(100) and hover over the column in the result set, it says System.Int64). Anyroad, here is the COALESCE expression:
COALESCE((dbo.GetItemAverageCost(ItemModel.IDItemModel)*sitli.QuantityIssuedAtStockUOM) / ISNULL(NULLIF(ItemModel.UOMFactor, 0),1),0) -- 259.73
--ROUND(COALESCE((dbo.GetItemAverageCost(ItemModel.IDItemModel)*sitli.QuantityIssuedAtStockUOM) / ISNULL(NULLIF(ItemModel.UOMFactor, 0),1),0),2) -- 259.73
--COALESCE(ROUND((dbo.GetItemAverageCost(ItemModel.IDItemModel)*sitli.QuantityIssuedAtStockUOM) / ISNULL(NULLIF(ItemModel.UOMFactor,2), 0),1),0) -- 259.70
--COALESCE((ROUND(dbo.GetItemAverageCost(ItemModel.IDItemModel),2)*sitli.QuantityIssuedAtStockUOM) / ISNULL(NULLIF(ItemModel.UOMFactor, 0),1),0) -- 259.73
original / wrong coalesce:
COALESCE(dbo.GetItemAverageCost(ItemModel.IDItemModel)*sitli.QuantityIssuedAtStockUOM,0)
I'm not sure what else to include, but I haven't found many resources online that offer insight into this kind of a situation. Many thanks in advance for your time.
EDIT: GetItemAverageCost:
ALTER FUNCTION GetItemAverageCost
(
    @IDItemModel varchar(8000)
)
RETURNS DECIMAL(16,4)
--RETURNS DECIMAL(12,2)
AS
BEGIN
    RETURN
    (
        SELECT
            COALESCE(AVG(poli.UnitPrice), 0) as AvgCost
            -- COALESCE(ROUND(AVG(poli.UnitPrice),0),2) as AvgCost 260.00
        FROM ItemModel im
        LEFT JOIN VendorItem vi
            ON im.IDItemModel = vi.IDItemModel
        JOIN POLineItem poli
            ON vi.IDVendorItem = poli.IDVendorItem
        WHERE
            im.IDItemModel = @IDItemModel
        GROUP BY
            im.IDItemModel,
            im.ItemNumber
    )
END
To fix: have your function return DECIMAL(16,4) instead of DECIMAL(12,2), and then ROUND to two decimals after multiplying by the quantity.
"When a given report is run, there are no errors thrown. But the calculations are off. For example a part number 12 shows a quantity of 24 were issued at a cost of $259.73. However, each part costs $10.82 so the calculation should be $259.68. I'm not sure where the difference of 5 cents is coming from. The $259.73 is the result of the COALESCE function above. Hopefully this makes sense"
Run the SQL only for part 12, independent of the function, and you'll see the average is 10.822083333333333333333333333333 (10.82 plus 5/24 of a cent).
24 * unit price = $259.73
unit price = 259.73 / 24 = $10.82 plus 5/24 of a cent
You'll see the variance is $0.05:
($10.82 + 5/24 of a cent) * 24 = $259.73
$10.82 * 24 = $259.68
That difference of 5 cents doesn't spread evenly across the 24 units; hence the rounding error when using your function.
When you go to the store and buy something, it's always an amount to the whole penny; when you go to the gas station, they charge to the nearest .00001 cents (or, in your case, 4 decimals).
The rounding, when working in fractions of pennies, isn't done until you multiply by the quantity or actual cash needs to change hands. If it is done too early, you get the rounding errors you are seeing.
That way you eliminate over- or under-charging rounding errors, and at most you'll charge a fraction of a penny more or less than you should.
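With the numbers from this example, the effect of where the rounding happens is easy to demonstrate; a Python sketch using the quoted prices:

avg = 259.73 / 24                    # 10.822083333...: the unrounded unit cost
print(round(round(avg, 2) * 24, 2))  # 259.68: unit cost rounded to pennies first
print(round(avg * 24, 2))            # 259.73: rounded only after multiplying by quantity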
Okay, so many thanks to all who helped along the way. There were a couple of issues preventing me from getting the correct answer. For one thing, I was working with the incorrect expression for much of the time. Secondly, after I figured out which expression to use, it was a matter of placing the ROUND function in the correct place.
So, the expression I should have been using to get my average cost is:
COALESCE(dbo.GetItemAverageCost(Item.IDItemModel) / ISNULL(NULLIF(UOMFactor, 0),1),0)
When I moved this into the WorkOrderItemInstructionPartCosts view, my report produced $10.82. Then I added *sitli.QuantityIssuedAtStockUOM to the line and got $259.73. Finally I applied the ROUND function to the COALESCE expression, and voila! The correct value ($259.68) is produced.
The final line looks like this:
ROUND(COALESCE(dbo.GetItemAverageCost(ItemModel.IDItemModel) / ISNULL(NULLIF(UOMFactor, 0),1),0),2)*sitli.QuantityIssuedAtStockUOM
Once again, thank you to all who helped me in the effort to resolve this and sorry for not having accurate information to begin with.
Best,
Jonathan
I am doing some calculations in vb.net, this is my equation:
rms = (20 * (Math.Log(rms / 0.7746))) 'also tried (Math.Log10(rms / 0.7746))
I have tried various ways of writing this, including separating the calculation into several steps. However, the final result is quite far out.
I have tried declaring my variable 'rms' as a decimal and a double. It does contain decimal places.
In Excel, I have tried the same calculation using this formula:
=20*(LOG(C2/0.7746)) ' where C2 is the RMS value
And the results are consistent with a website I used to check, as well as my pocket calculator.
I have also tried rounding the number to 3 decimal places:
rms = Math.Round(rms, 3)
This too has a minimal effect on the final result.
I can only assume it's the 'operator precedence' in VB but I'm struggling to work this one out.
Any help greatly appreciated as always, thanks.
After a marathon debugging session I found an error in my code.
I have a routine that uses the voltage at a given frequency to "normalise" all the plots I do to 0dBu.
My normalisation routine was broken. Badly.
And finally, to get the correct output from the log maths, I had to change the calculation back to its original form.
It was originally rms = (Math.Log10(rms / 0.7746) * 20).
In trying to find the issue, I had changed it to rms = (20 * (Math.Log(rms / 0.7746))), which yields a different (and incorrect) result. The culprit is not the order of operations but the logarithm itself: Math.Log is the natural logarithm, while Math.Log10 is base 10, which is what Excel's LOG computes by default.
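The difference between the two lines is easy to verify; a Python sketch with a hypothetical RMS input of 1.0 V:

import math

rms = 1.0  # hypothetical RMS voltage in volts
print(20 * math.log(rms / 0.7746))    # natural log: about 5.11, the wrong value
print(20 * math.log10(rms / 0.7746))  # base-10 log: about 2.22 dBu, matching Excel's LOG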
In any case - it's fixed now.
Thanks to all who responded.