Big loop in Solidity

I calculate the price with this loop:
uint tokenPrice = 1e15;
uint tokenPriceStep = 1e11; // price increase per whole token sold
uint sendValue = 3e18;      // 3 ETH
uint balance = 0;
do {
    balance += 1e18;        // credit one whole token (18 decimals)
    sendValue -= tokenPrice;
    tokenPrice += tokenPriceStep;
} while (sendValue > tokenPrice);
if (sendValue > 0) {
    balance += sendValue * 1e18 / tokenPrice; // fractional remainder
    tokenPrice += (sendValue * 1e18 / tokenPrice) * tokenPriceStep / 1e18;
    sendValue = 0;
}
But this costs too much gas, and Remix just crashes when I run it. What should I do? I need the loop to behave like the table below: the first column is the step number, the second is my remaining value (my money), the third is my token balance, and the fourth is the price (which grows by 1 here, but could be any other step).
0 - 3000.00000 -> 0.00000 -> 100.00000
1 - 2900.00000 -> 1.00000 -> 101.00000
2 - 2799.00000 -> 2.00000 -> 102.00000
3 - 2697.00000 -> 3.00000 -> 103.00000
4 - 2594.00000 -> 4.00000 -> 104.00000
5 - 2490.00000 -> 5.00000 -> 105.00000
6 - 2385.00000 -> 6.00000 -> 106.00000
7 - 2279.00000 -> 7.00000 -> 107.00000
8 - 2172.00000 -> 8.00000 -> 108.00000
9 - 2064.00000 -> 9.00000 -> 109.00000
10 - 1955.00000 -> 10.00000 -> 110.00000
11 - 1845.00000 -> 11.00000 -> 111.00000
12 - 1734.00000 -> 12.00000 -> 112.00000
13 - 1622.00000 -> 13.00000 -> 113.00000
14 - 1509.00000 -> 14.00000 -> 114.00000
15 - 1395.00000 -> 15.00000 -> 115.00000
16 - 1280.00000 -> 16.00000 -> 116.00000
17 - 1164.00000 -> 17.00000 -> 117.00000
18 - 1047.00000 -> 18.00000 -> 118.00000
19 - 929.00000 -> 19.00000 -> 119.00000
20 - 810.00000 -> 20.00000 -> 120.00000
21 - 690.00000 -> 21.00000 -> 121.00000
22 - 569.00000 -> 22.00000 -> 122.00000
23 - 447.00000 -> 23.00000 -> 123.00000
24 - 324.00000 -> 24.00000 -> 124.00000
25 - 200.00000 -> 25.00000 -> 125.00000
26 - 75.00000 -> 26.00000 -> 126.00000
27 - 0.00000 -> 26.59524 -> 126.59524
Please help, I don't know what to do!

I think you can do it like this, without a loop:

uint times = sendValue / tokenPrice; // roughly how many times the loop would run
balance += times * 1e18;
tokenPrice += times * tokenPriceStep;
sendValue -= times * tokenPrice;

After that, the rest is the same. Note that this is an approximation: it settles every step at the final price rather than the rising price the loop charges; an exact closed form is sketched below.
You should always try to avoid loops in your contract since it is very easy to run out of gas or sometimes even exceed the block's gas limit.
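For an exact, loop-free version: after n whole tokens the loop has spent cost(n) = n*tokenPrice + tokenPriceStep*n*(n-1)/2 (an arithmetic series), so n can be computed directly with a single integer square root. A minimal sketch, assuming Solidity ^0.8 with checked arithmetic; the function names and the Babylonian sqrt helper are illustrative, not part of the question's contract:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

library PriceMath {
    // Total spend for n whole tokens at start price p and step s:
    //   cost(n) = n*p + s*n*(n-1)/2   (arithmetic series)
    function cost(uint256 n, uint256 p, uint256 s) internal pure returns (uint256) {
        return n * p + s * n * (n - 1) / 2;
    }

    // Largest n with cost(n) <= v, i.e. the positive root of
    //   s*n^2 + (2p - s)*n - 2v = 0, rounded down.
    function wholeSteps(uint256 v, uint256 p, uint256 s) internal pure returns (uint256 n) {
        uint256 b = 2 * p - s; // assumes 2*p >= s
        n = (sqrt(b * b + 8 * s * v) - b) / (2 * s);
        // the integer sqrt rounds down, so n can be low by one step
        if (cost(n + 1, p, s) <= v) n += 1;
    }

    // Babylonian integer square root
    function sqrt(uint256 x) internal pure returns (uint256 y) {
        uint256 z = (x + 1) / 2;
        y = x;
        while (z < y) {
            y = z;
            z = (x / z + z) / 2;
        }
    }
}

With the table's numbers (price 100, step 1, value 3000) this gives n = 26 and a remainder of 3000 - cost(26) = 75, matching rows 26 and 27 above; the remainder is then converted to a fractional balance exactly as in the question's final if block.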

Related

Prime numbers in Idris

In Idris 0.9.17.1, with inspiration from https://wiki.haskell.org/Prime_numbers, I've written the following code for generating prime numbers:
module Main

concat : List a -> Stream a -> Stream a
concat [] ys = ys
concat (x :: xs) ys = x :: (concat xs ys)

generate : (Num a, Ord a) => (start : a) -> (step : a) -> (max : a) -> List a
generate start step max =
  if (start < max) then start :: generate (start + step) step max else []

mutual
  sieve : Nat -> Stream Int -> Int -> Stream Int
  sieve k (p :: ps) x = concat start (sieve (k + 1) ps (p * p)) where
    fs : List Int
    fs = take k (tail primes)
    start : List Int
    start = [n | n <- generate (x + 2) 2 (p * p - 2), all (\i => (n `mod` i) /= 0) fs]

  primes : Stream Int
  primes = 2 :: 3 :: sieve 0 (tail primes) 3

main : IO ()
main = do
  printLn $ take 10 primes
In the REPL, if I write take 10 primes, it correctly shows [2, 3, 5, 11, 13, 17, 19, 29, 31, 37] : List Int. But if I try :exec, nothing happens, and if I compile and execute the program I get Segmentation fault: 11. Can someone help me debug this problem?
Your concat function can be made lazy to fix this. Just change its type to

concat : List a -> Lazy (Stream a) -> Stream a

and it will work.
Note: to get all primes, also change the < inside the generate function into <= (currently some are missing, e.g. 7 and 23).
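Putting both changes together, against Idris 0.9.x (the rest of the program is unchanged):

-- concat's second argument is now Lazy, so building the result no
-- longer forces the infinite recursive sieve stream eagerly
concat : List a -> Lazy (Stream a) -> Stream a
concat [] ys = ys
concat (x :: xs) ys = x :: concat xs ys

-- <= instead of <, so the upper bound itself can be generated
generate : (Num a, Ord a) => (start : a) -> (step : a) -> (max : a) -> List a
generate start step max =
  if start <= max then start :: generate (start + step) step max else []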

Spin random error, bakery lock

I've created a bakery lock using Spin
 1   int n = 3;
 2   int choosing[4];  // initially 0
 3   int number[4];    // initially 0
 4
 5   active [3] proctype p()
 6   {
 7
 8       choosing[_pid] = 1;
 9       int max = 0;
10       int i = 0;
11
12       do
13       :: (number[i] > max) -> max = number[i];
14       :: i++;
15       :: (i == n) -> break;
16       od;
17
18       number[_pid] = max + 1;
19       choosing[_pid] = 0;
20
21       int j = 0;
22
23       do
24       :: (j == n) -> break;
25       :: do
26          :: (choosing[j] == 0) -> break;
27          od;
28       :: if
29          :: (number[j] == 0) -> j++;
30          :: (number[j] > number[_pid]) -> j++;
31          :: ((number[j] == number[_pid]) && (j > _pid)) -> j++;
32          fi;
33       od;
34
35       number[_pid] = 0
36
37   }
When I test it, I get an error: pan:1: assertion violated - invalid array index (at depth 5). When I run the trail, I get this back:
1: proc 2 (p) Bakery_lock.pml:8 (state 1) [choosing[_pid] = 1]
2: proc 2 (p) Bakery_lock.pml:10 (state 2) [max = 0]
2: proc 2 (p) Bakery_lock.pml:12 (state 3) [i = 0]
3: proc 2 (p) Bakery_lock.pml:14 (state 6) [i = (i+1)]
4: proc 2 (p) Bakery_lock.pml:14 (state 6) [i = (i+1)]
5: proc 2 (p) Bakery_lock.pml:14 (state 6) [i = (i+1)]
spin: indexing number[3] - size is 3
spin: Bakery_lock.pml:13, Error: indexing array 'number'
6: proc 2 (p) Bakery_lock.pml:13 (state 4) [((number[i]>max))]
Can anyone tell me why it skips this line: (i == n) -> break;?
It doesn't 'skip' that line. In a do, Spin nondeterministically selects among the options that are executable, and the verifier explores every possible choice. Your i++ option has no guard, so it is always executable; the verifier therefore also explores runs in which i++ fires when (i == n) already holds, after which evaluating number[i] indexes past the end of the array. The fix is to guard the increment, and the comparison as well, since otherwise number[i] is still evaluated once i reaches n:
do
:: (i < n) && (number[i] > max) -> max = number[i];
:: (i < n) -> i++;
:: (i == n) -> break;
od;
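The second loop in the question has the same problem: its options evaluate choosing[j] and number[j] with no bound check, so the verifier can index out of bounds again once j reaches n. A sketch of the same guard applied there, restructured so that every array access sits behind j < n (this also sequences the choosing-wait before the number check, which is what the bakery algorithm intends):

do
:: (j == n) -> break;
:: (j < n) ->
   do
   :: (choosing[j] == 0) -> break;
   od;
   if
   :: (number[j] == 0) -> j++;
   :: (number[j] > number[_pid]) -> j++;
   :: ((number[j] == number[_pid]) && (j > _pid)) -> j++;
   fi;
od;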

To memoize or not to memoize

... that is the question. I have been working on an algorithm which takes an array of vectors as input, and part of the algorithm repeatedly picks pairs of vectors and evaluates a function of these two vectors, which doesn't change over time. Looking at ways to optimize the algorithm, I thought this would be a good case for memoization: instead of recomputing the same function value over and over again, cache it lazily and hit the cache.
Before jumping to code, here is the gist of my question: the benefits I get from memoization depend on the number of vectors, which I think is inversely related to the number of repeated calls, and in some circumstances memoization completely degrades performance. So is my situation simply a poor fit for memoization? Am I doing something wrong, and are there smarter ways to optimize for my situation?
Here is a simplified test script, which is fairly close to the real thing:
open System
open System.Diagnostics
open System.Collections.Generic

let size = 10       // observations
let dim = 10        // features per observation
let runs = 10000000 // number of function calls

let rng = new Random()
let clock = new Stopwatch()

let data =
    [| for i in 1 .. size ->
        [ for j in 1 .. dim -> rng.NextDouble() ] |]

let testPairs = [| for i in 1 .. runs -> rng.Next(size), rng.Next(size) |]

let f v1 v2 = List.fold2 (fun acc x y -> acc + (x - y) * (x - y)) 0.0 v1 v2

printfn "Raw"
clock.Restart()
testPairs |> Array.averageBy (fun (i, j) -> f data.[i] data.[j]) |> printfn "Check: %f"
printfn "Raw: %i" clock.ElapsedMilliseconds
I create a list of random vectors (data), a random collection of indexes (testPairs), and run f on each of the pairs.
Here is the memoized version:
let memoized =
    let cache = new Dictionary<(int*int), float>(HashIdentity.Structural)
    fun key ->
        match cache.TryGetValue(key) with
        | true, v -> v
        | false, _ ->
            let v = f data.[fst key] data.[snd key]
            cache.Add(key, v)
            v

printfn "Memoized"
clock.Restart()
testPairs |> Array.averageBy (fun (i, j) -> memoized (i, j)) |> printfn "Check: %f"
printfn "Memoized: %i" clock.ElapsedMilliseconds
Here is what I am observing:
* when size is small (10), memoization runs about twice as fast as the raw version,
* when size is large (1000), memoization takes about 15x more time than the raw version,
* when f is costly, memoization improves things.
My interpretation is that when the size is small, we have more repeat computations, and the cache pays off. (With size = 10 there are only 100 distinct pairs for 10 million calls, so each cached value is reused roughly 100,000 times; with size = 1000 there are a million distinct pairs, so each value is reused only about 10 times.)
What surprised me was the huge performance hit for larger sizes, and I am not certain what is causing it. I know I could improve the dictionary access a bit, with a struct key for instance, but I didn't expect the "naive" version to behave so poorly.
So: is there something obviously wrong with what I am doing? Is memoization the wrong approach for my situation, and if so, is there a better approach?
I think memoization is a useful technique, but it is not a silver bullet. It is very useful in dynamic programming where it reduces the (theoretical) complexity of the algorithm. As an optimization, it can (as you would probably expect) have varying results.
In your case, the cache is certainly more useful when the number of observations is smaller (and f is more expensive computation). You can add simple statistics to your memoization:
let stats = ref (0, 0) // count cache misses & hits

let memoized =
    let cache = new Dictionary<(int*int), float>(HashIdentity.Structural)
    fun key ->
        let (mis, hit) = !stats
        match cache.TryGetValue(key) with
        | true, v -> stats := (mis, hit + 1); v // increment hit count
        | false, _ ->
            stats := (mis + 1, hit) // increment miss count
            let v = f data.[fst key] data.[snd key]
            cache.Add(key, v)
            v
For small size, the numbers I get are something like (100, 999900) so there is a huge benefit from memoization - the function f is computed 100x and then each result is reused 9999x.
For big size, I get something like (632331, 1367669), so f is called many times and each result is reused just twice. In that case, the overhead of allocating and doing lookups in the (big) hash table is much bigger.
As a minor optimization, you can pre-allocate the Dictionary and write new Dictionary<_, _>(10000, HashIdentity.Structural), but that does not seem to help much in this case.
To make this optimization efficient, I think you would need to know more about the memoized function. In your example, the inputs are quite regular, so there is probably no point in memoization, but if you know that the function is called more often with certain argument values, you can perhaps memoize only for those common arguments.
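For illustration, here is a minimal sketch of that idea, reusing f and data from the question's script; the isCommon predicate is hypothetical and stands for whatever domain knowledge identifies the frequently recurring pairs:

// Memoize only the pairs the caller expects to recur often;
// rare pairs skip the dictionary and just pay for f itself.
let memoizeCommon (isCommon : int * int -> bool) =
    let cache = new Dictionary<(int*int), float>(HashIdentity.Structural)
    fun key ->
        if isCommon key then
            match cache.TryGetValue(key) with
            | true, v -> v
            | false, _ ->
                let v = f data.[fst key] data.[snd key]
                cache.Add(key, v)
                v
        else
            f data.[fst key] data.[snd key]

This keeps the dictionary small, so the common lookups stay cheap even when the space of possible arguments is large.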
Tomas's answer is great on when you should use memoization. Here's why memoization is going so slowly in your case.
It sounds like you're testing in Debug mode. Run your test again in Release and you should get a faster result for memoization. Tuples can cause a large performance hit in Debug mode. I added a hashed version for comparison, along with some micro-optimizations.
Release
Raw
Check: 1.441687
Raw: 894
Memoized
Check: 1.441687
Memoized: 733
memoizedHash
Check: 1.441687
memoizedHash: 552
memoizedHashInline
Check: 1.441687
memoizedHashInline: 493
memoizedHashInline2
Check: 1.441687
memoizedHashInline2: 385
Debug
Raw
Check: 1.409310
Raw: 797
Memoized
Check: 1.409310
Memoized: 5190
memoizedHash
Check: 1.409310
memoizedHash: 593
memoizedHashInline
Check: 1.409310
memoizedHashInline: 497
memoizedHashInline2
Check: 1.409310
memoizedHashInline2: 373
Source
open System
open System.Diagnostics
open System.Collections.Generic

let size = 10       // observations
let dim = 10        // features per observation
let runs = 10000000 // number of function calls

let rng = new Random()
let clock = new Stopwatch()

let data =
    [| for i in 1 .. size ->
        [ for j in 1 .. dim -> rng.NextDouble() ] |]

let testPairs = [| for i in 1 .. runs -> rng.Next(size), rng.Next(size) |]

let f v1 v2 = List.fold2 (fun acc x y -> acc + (x - y) * (x - y)) 0.0 v1 v2

printfn "Raw"
clock.Restart()
testPairs |> Array.averageBy (fun (i, j) -> f data.[i] data.[j]) |> printfn "Check: %f"
printfn "Raw: %i\n" clock.ElapsedMilliseconds

let memoized =
    let cache = new Dictionary<(int*int), float>(HashIdentity.Structural)
    fun key ->
        match cache.TryGetValue(key) with
        | true, v -> v
        | false, _ ->
            let v = f data.[fst key] data.[snd key]
            cache.Add(key, v)
            v

printfn "Memoized"
clock.Restart()
testPairs |> Array.averageBy (fun (i, j) -> memoized (i, j)) |> printfn "Check: %f"
printfn "Memoized: %i\n" clock.ElapsedMilliseconds

let memoizedHash =
    let cache = new Dictionary<int, float>(HashIdentity.Structural)
    fun key ->
        match cache.TryGetValue(key) with
        | true, v -> v
        | false, _ ->
            let i = key / size
            let j = key % size
            let v = f data.[i] data.[j]
            cache.Add(key, v)
            v

printfn "memoizedHash"
clock.Restart()
testPairs |> Array.averageBy (fun (i, j) -> memoizedHash (i * size + j)) |> printfn "Check: %f"
printfn "memoizedHash: %i\n" clock.ElapsedMilliseconds

let memoizedHashInline =
    let cache = new Dictionary<int, float>(HashIdentity.Structural)
    fun key ->
        match cache.TryGetValue(key) with
        | true, v -> v
        | false, _ ->
            let i = key / size
            let j = key % size
            let v = f data.[i] data.[j]
            cache.Add(key, v)
            v

printfn "memoizedHashInline"
clock.Restart()
let mutable total = 0.0
for i, j in testPairs do
    total <- total + memoizedHashInline (i * size + j)
printfn "Check: %f" (total / float testPairs.Length)
printfn "memoizedHashInline: %i\n" clock.ElapsedMilliseconds

printfn "memoizedHashInline2"
clock.Restart()
let mutable total2 = 0.0
let cache = new Dictionary<int, float>(HashIdentity.Structural)
for i, j in testPairs do
    let key = (i * size + j)
    match cache.TryGetValue(key) with
    | true, v -> total2 <- total2 + v
    | false, _ ->
        let i = key / size
        let j = key % size
        let v = f data.[i] data.[j]
        cache.Add(key, v)
        total2 <- total2 + v
printfn "Check: %f" (total2 / float testPairs.Length)
printfn "memoizedHashInline2: %i\n" clock.ElapsedMilliseconds

Console.ReadLine() |> ignore

Why is `logBase 10 x` slower than `log x / log 10`, even when specialized?

solrize in #haskell asked a question about one version of this code, and I tried some other cases and was wondering what was going on. On my machine the "fast" code takes ~1 second and the "slow" code ~1.3-1.5 seconds (everything is compiled with ghc -O2).
import Data.List

log10 :: Double -> Double
--log10 x = log x / log 10 -- fast
--log10 = logBase 10 -- slow
--log10 = barLogBase 10 -- fast
--log10 = bazLogBase 10 -- fast
log10 = fooLogBase 10 -- see below

class Foo a where
    fooLogBase :: a -> a -> a

instance Foo Double where
    --fooLogBase x y = log y / log x -- slow
    fooLogBase x = let lx = log x in \y -> log y / lx -- fast

barLogBase :: Double -> Double -> Double
barLogBase x y = log y / log x

bazLogBase :: Double -> Double -> Double
bazLogBase x = let lx = log x in \y -> log y / lx

main :: IO ()
main = print . foldl' (+) 0 . map log10 $ [1..1e7]
I'd have hoped that GHC would be able to turn logBase x y into exactly the same thing as log y / log x when specialised. What's going on here, and what would be the recommended way of using logBase?
As always, look at the Core.
Fast (1.563s):

-- note: top-level constant, referred to by the specialised fooLogBase
Main.main_lx :: GHC.Types.Double
Main.main_lx =
  case GHC.Prim.logDouble# 10.0 of { r ->
  GHC.Types.D# r
  }

Main.main7 :: GHC.Types.Double -> GHC.Types.Double
Main.main7 =
  \ (y :: GHC.Types.Double) ->
    case y of _ { GHC.Types.D# y# ->
    case GHC.Prim.logDouble# y# of { r0 ->
    case Main.main_lx of { GHC.Types.D# r ->
    case GHC.Prim./## r0 r of { r1 ->
    GHC.Types.D# r1
    }}}}
Slow (2.013s):

-- simpler, but recomputes log 10 each time
Main.main7 =
  \ (y_ahD :: GHC.Types.Double) ->
    case y_ahD of _ { GHC.Types.D# x_aCD ->
    case GHC.Prim.logDouble# x_aCD of wild1_aCF { __DEFAULT ->
    case GHC.Prim.logDouble# 10.0 of wild2_XD9 { __DEFAULT ->
    case GHC.Prim./## wild1_aCF wild2_XD9 of wild3_aCz { __DEFAULT ->
    GHC.Types.D# wild3_aCz
    }}}}
In the fast version, log 10 is computed once and shared (the static argument is applied only once). In the slow version, it is recomputed on each call.
You can follow this line of reasoning to produce even better versions:
-- 1.30s
lx :: Double
lx = log 10
log10 :: Double -> Double
log10 y = log y / lx
main :: IO ()
main = print . foldl' (+) 0 . map log10 $ [1..1e7]
And, using array fusion, you can remove the penalty of the compositional style:
import qualified Data.Vector.Unboxed as V
lx :: Double
lx = log 10
log10 :: Double -> Double
log10 y = log y / lx
main :: IO ()
main = print . V.sum . V.map log10 $ V.enumFromN 1 (10^7)
Cutting the cost by 3x
$ time ./A
6.5657059080059275e7
real 0m0.672s
user 0m0.000s
sys 0m0.000s
This is as good as writing it by hand. The version below, for example, offers no benefit over the correctly written version above:

{-# LANGUAGE MagicHash #-}
import GHC.Exts
import qualified Data.Vector.Unboxed as V

lx :: Double
lx = D# (logDouble# 10.0##)

log10 :: Double -> Double
log10 (D# y) = case lx of
    D# d# -> D# (logDouble# y /## d#)

main :: IO ()
main = print . V.sum . V.map log10 $ V.enumFromN 1 (10^7)
Another missed optimization: dividing by a constant (log 10) should be replaced with multiplying by the reciprocal.
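A sketch of that idea; a compiler can't make this substitution for you, because x / c and x * recip c may round differently in the last bit, so it has to be written explicitly:

import Data.List (foldl')

-- precompute the reciprocal once; every call then multiplies,
-- which is typically cheaper than a floating-point division
rlx :: Double
rlx = recip (log 10)

log10 :: Double -> Double
log10 y = log y * rlx

main :: IO ()
main = print . foldl' (+) 0 . map log10 $ [1..1e7]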

Erlang binary optimization known integer

Can this be further optimized?

Binary = <<"2345", 1, "restofmessageexistshere">>

get_integer_value(Binary) ->
    [Num, _, LastRest] = integer_value(Binary),
    [Num, LastRest].

integer_value(<<1, Rest/binary>>) -> [0, 1, Rest];
integer_value(<<H:8, Rest/binary>>) ->
    % io:format("~n~p~n", [Rest]),
    [Num, Exp, LastRest] = integer_value(Rest),
    [(H - 48) * Exp + Num, Exp * 10, LastRest].

Expected result: [2345, <<"restofmessageexistshere">>]
You could use a function like the following one:

integer_value(Bin) ->
    integer_value(Bin, 0).

integer_value(<<Char, Tail/binary>>, Acc) when (Char >= $0) and (Char =< $9) ->
    integer_value(Tail, Acc * 10 + (Char - $0));
integer_value(<<1, Tail/binary>>, Acc) ->
    [Acc, Tail];
integer_value(Bin, _Acc) ->
    %% Throw an exception if the argument is not in the correct format
    erlang:error(badarg, [Bin]).

If you call integer_value(<<"2345", 1, "restofmessageexistshere">>) you'll get [2345, <<"restofmessageexistshere">>].
This function solves your problem, but as the previous poster said, you might want to explain what you want to do to make sure that this is the best solution for your problem.
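For reference, here is the answer's function packaged as a compilable module (the module name int_val is illustrative). Since the thread is about binary optimization, note that the single accumulator pass lets the compiler reuse the binary match context across the recursive calls; compiling with the bin_opt_info option makes it report, per binary match, whether that optimization applied:

-module(int_val).
-export([integer_value/1]).

%% Parse the leading ASCII digits left to right into an accumulator,
%% stop at the 1-byte terminator, return the value and the tail.
integer_value(Bin) ->
    integer_value(Bin, 0).

integer_value(<<Char, Tail/binary>>, Acc) when (Char >= $0) and (Char =< $9) ->
    integer_value(Tail, Acc * 10 + (Char - $0));
integer_value(<<1, Tail/binary>>, Acc) ->
    [Acc, Tail];
integer_value(Bin, _Acc) ->
    erlang:error(badarg, [Bin]).

%% 1> c(int_val, [bin_opt_info]).
%% 2> int_val:integer_value(<<"2345", 1, "restofmessageexistshere">>).
%% [2345,<<"restofmessageexistshere">>]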