I'm trying to write a function that computes a sum of n cubes:
1^3 + 2^3 + 3^3 + ... + n^3 = sum
My function should receive a sum and return n, or -1 if no such n exists.
Some examples:
(find-n 9) ; should return 2 because 1^3 + 2^3 = 9
(find-n 100) ; should return 4 because 1^3 + 2^3 + 3^3 + 4^3 = 100
(find-n 10) ; should return -1
After some work I made these two functions:
; aux function
(defn exp-3 [base] (apply *' (take 3 (repeat base))))
; main function
(defn find-n [m]
(loop [sum 0
actual-base 0]
(if (= sum m)
actual-base
(if (> sum m)
-1
(recur (+' sum (exp-3 (inc actual-base))) (inc actual-base))))))
These functions work correctly, but they take too long when the numbers involved are big (BigInts), for example:
(def sum 1025247423603083074023000250000N)
(time (find-n sum))
; => "Elapsed time: 42655.138544 msecs"
; => 45001000
I'm asking this question to get advice on how I can make this function faster.
This is all about algebra, and little to do with Clojure or programming. Since this site does not support mathematical typography, let's express it in Clojure.
Define
(defn sigma [coll] (reduce + coll))
and
(defn sigma-1-to-n [f n]
(sigma (map f (rest (range (inc n))))))
or, equivalently,
(defn sigma-1-to-n [f n]
  (->> n inc range rest (map f) sigma))
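A quick sanity check of these helpers:
(sigma-1-to-n identity 4)   ; => 10
(sigma-1-to-n #(* % % %) 4) ; => 100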
Then the question is, given n, to find i such that (= (sigma-1-to-n #(* % % %) i) n).
The key to doing this quickly is Faulhaber's formula for cubes. It tells us that the following are equal, for any natural number i:
(#(*' % %) (sigma-1-to-n identity i))
(sigma-1-to-n #(* % % %) i)
(#(*' % %) (/ (*' i (inc i)) 2))
So, to be a sum of cubes, the number must be a perfect square whose square root is the sum of the first so many natural numbers.
To find out whether a whole number is a perfect square, we take its approximate floating-point square root, and see whether squaring the nearest integer recovers our whole number:
(defn perfect-square-root [n]
(let [candidate (-> n double Math/sqrt Math/round)]
(when (= (*' candidate candidate) n)
candidate)))
This returns nil if the argument is not a perfect square.
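For example:
(perfect-square-root 49) ; => 7
(perfect-square-root 50) ; => nil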
Now that we have the square root, we have to determine whether it is the sum of a range of natural numbers: in ordinary algebra, is it j(j + 1) / 2 for some natural number j?
We can use a similar trick to answer this question directly.
j (j + 1) = (j + 1/2)^2 - 1/4
So the following function returns the number of successive numbers that add up to the argument, if there is one:
(defn perfect-sum-of [n]
(let [j (-> n (*' 2)
(+ 1/4)   ; 2n = j(j+1) = (j + 1/2)^2 - 1/4
double
Math/sqrt
(- 0.5)
Math/round)]
(when (= (/ (*' j (inc j)) 2) n)
j)))
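For example:
(perfect-sum-of 10) ; => 4, since 1 + 2 + 3 + 4 = 10
(perfect-sum-of 11) ; => nil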
We can combine these to do what you want:
(defn find-n [big-i]
{:pre [(integer? big-i) ((complement neg?) big-i)]}
(let [sqrt (perfect-square-root big-i)]
(and sqrt (perfect-sum-of sqrt))))
(def sum 1025247423603083074023000250000N)
(time (find-n sum))
"Elapsed time: 0.043095 msecs"
=> 45001000
(Notice that this timing is about twenty times faster than an earlier run of this same version, probably because HotSpot has by now optimized find-n, which is thoroughly exercised by the testing appended below.)
This is obviously a lot faster than the original.
Caveat
I was concerned that the above procedure might produce false negatives (it will never produce a false positive) on account of the finite precision of floating point. However, testing suggests that the procedure is unbreakable for the sort of number the question uses.
A Java double has 52 explicit significand bits, roughly 15.6 decimal digits of precision. The concern is that with numbers much bigger than that, the procedure may miss the exact integer solution, since the rounding can only be as accurate as the floating-point value it starts from.
However, the procedure solves the example of a 31 digit integer correctly. And testing with many (ten million!) similar numbers produces not one failure.
To test the solution, we generate a (lazy) sequence of [limit cube-sum] pairs:
(defn generator [limit cube-sum]
(iterate
(fn [[l cs]]
(let [l (inc l)
cs (+' cs (*' l l l))]
[l cs]))
[limit cube-sum]))
For example,
(take 10 (generator 0 0))
=> ([0 0] [1 1] [2 9] [3 36] [4 100] [5 225] [6 441] [7 784] [8 1296] [9 2025])
Now we start with the given example, try the next ten million cases, and remove the ones that work:
(remove (fn [[l cs]] (= (find-n cs) l)) (take 10000000 (generator 45001000 1025247423603083074023000250000N)))
=> ()
They all work. No failures. Just to make sure our test is valid:
(remove (fn [[l cs]] (= (find-n cs) l)) (take 10 (generator 45001001 1025247423603083074023000250000N)))
=>
([45001001 1025247423603083074023000250000N]
[45001002 1025247514734170359564546262008N]
[45001003 1025247605865263720376770289035N]
[45001004 1025247696996363156459942337099N]
[45001005 1025247788127468667814332412224N]
[45001006 1025247879258580254440210520440N]
[45001007 1025247970389697916337846667783N]
[45001008 1025248061520821653507510860295N]
[45001009 1025248152651951465949473104024N]
[45001010 1025248243783087353664003405024N])
All ought to fail, and they do.
Just avoiding the apply (not really all that fast in CLJ) gives you a 4x speedup:
(defn exp-3 [base]
(*' base base base))
And another 10%:
(defn find-n [m]
(loop [sum 0
actual-base 0]
(if (>= sum m)
(if (= sum m) actual-base -1)
(let [nb (inc actual-base)]
(recur (+' sum (*' nb nb nb)) nb)))))
The following algorithmic approach relies on one simple formula: the sum of the cubes of the first N natural numbers is (N*(N+1)/2)^2.
(defn sum-of-cube
"(n*(n+1)/2)^2"
[n]
(let [n' (/ (*' n (inc n)) 2)]
(*' n' n')))
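A quick check: (sum-of-cube 4) evaluates to 100, i.e. 1 + 8 + 27 + 64.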
(defn find-nth-cube
[n]
((fn [start end prev]
(let [avg (bigint (/ (+' start end) 2))
cube (sum-of-cube avg)]
(cond (== cube n) avg
(== cube prev) -1
(> cube n) (recur start avg cube)
(< cube n) (recur avg end cube))))
1 n -1))
(time (find-nth-cube 1025247423603083074023000250000N))
"Elapsed time: 0.355177 msecs"
=> 45001000N
We want to find the number N such that the sum of the cubes of 1..N is some number X. To find whether such an N exists, we can binary-search some range for it, applying the above formula to each candidate and checking whether the result equals X. This approach works because sum-of-cube is monotonically increasing: any value f(n) that is too large means we must look for a smaller n, and any value f(n) that is too small means we must look for a larger n.
We choose a larger-than-necessary but easy and safe range of 1 to X. We will know that the number exists if our formula applied to a candidate number yields X. If it does not, we continue the binary search until either we find the number or we have tried the same number twice, which indicates that the number does not exist.
Bounded by log N iterations, it takes only about a millisecond to handle a number as large as 1E100 (a googol), so this algorithmic approach is very efficient.
You may want to use some mathematical tricks.
(a-k)^3 + (a+k)^3 = 2a^3+(6k^2)a
So, a sum like:
(a-4)^3+(a-3)^3+(a-2)^3+(a-1)^3+a^3+(a+1)^3+(a+2)^3+(a+3)^3+(a+4)^3
= 9a^3+180a
(the calculation checks out: in each pair (a-k)^3 + (a+k)^3 the odd powers of k cancel, leaving 9a^3 + 6a(1^2 + 2^2 + 3^2 + 4^2) = 9a^3 + 180a).
Using this equation, instead of incrementing by 1 every time, you can jump by 9 (or by any 2k + 1 you like), and check for the exact number whenever you hit a number bigger than n; see the sketch below.
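Here is a sketch of that jumping idea in Clojure (find-n-blocks and the block size of 9 are my own choices, following the equation above): add whole blocks of nine consecutive cubes at a time, then finish cube by cube once a block reaches or overshoots m.
(defn find-n-blocks [m]
  (loop [sum 0, a 5]                          ; a = center of the next block 1..9, 10..18, ...
    (let [block (+' (*' 9 a a a) (*' 180 a))  ; sum of the 9 cubes centered on a
          sum'  (+' sum block)]
      (if (< sum' m)
        (recur sum' (+ a 9))
        ;; the answer, if any, lies inside this block: finish cube by cube
        (loop [s sum, b (- a 4)]
          (cond (= s m) (dec b)
                (> s m) -1
                :else   (recur (+' s (*' b b b)) (inc b))))))))
;; (find-n-blocks 9)   ; => 2
;; (find-n-blocks 100) ; => 4
;; (find-n-blocks 10)  ; => -1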
Another way to improve is to keep a table of n and sum values: do a batch of computations once, and have find-n consult the table afterwards. A sketch of that follows.
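A sketch of the table idea (the bound of 100000 and the names are my own; this pays off only when the expected n values are bounded and find-n is called many times):
(def cube-sums
  ;; cube-sums[i] = 1^3 + ... + (i+1)^3, built once up front
  (vec (reductions +' (map #(*' % % %) (range 1 100001)))))

(defn find-n-table [m]
  (let [i (java.util.Collections/binarySearch cube-sums m compare)]
    (if (neg? i) -1 (inc i))))
;; (find-n-table 9)   ; => 2
;; (find-n-table 100) ; => 4
;; (find-n-table 10)  ; => -1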
Related
I'm summing a long list of Ratios in Clojure, something like:
(defn sum-ratios
[n]
(reduce
(fn [total ind]
(+
total
(/
(inc (rand-int 100))
(inc (rand-int 100)))))
(range 0 n)))
The runtime for various n is:
n = 10^4 ...... 41 ms
n = 10^6 ...... 3.4 s
n = 10^7 ...... 36 s
The (less precise) alternative is to sum these values as doubles:
(defn sum-doubles
[n]
(reduce
(fn [total ind]
(+
total
(double
(/
(inc (rand-int 100))
(inc (rand-int 100))))))
(range 0 n)))
The runtime for this version is:
n = 10^4 ...... 8.8 ms
n = 10^6 ...... 350 ms
n = 10^7 ...... 3.4 s
Why is it significantly slower to sum Ratios? I'm guessing that it has to do with finding the least common multiple of the denominators of the Ratios being summed, but does anybody know specifically which algorithm Clojure uses to sum Ratios?
Let's follow what happens when you + two Ratios, which is what happens at each step in the reduction. We start at the two-arity version of +:
([x y] (. clojure.lang.Numbers (add x y)))
This takes us to Numbers.add(Object, Object):
return ops(x).combine(ops(y)).add((Number)x, (Number)y);
ops looks at the class of the first operand and will find RatioOps. This leads to the RatioOps.add function:
final public Number add(Number x, Number y){
Ratio rx = toRatio(x);
Ratio ry = toRatio(y);
Number ret = divide(ry.numerator.multiply(rx.denominator)
.add(rx.numerator.multiply(ry.denominator))
, ry.denominator.multiply(rx.denominator));
return normalizeRet(ret, x, y);
}
So here's your algorithm. There are five BigInteger operations here (three multiplies, one add, one divide):
(yn*xd + xn*yd) / (xd*yd)
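As a concrete illustration (my own worked example, not from the Clojure source), adding 1/6 and 1/4 this way:
; yn*xd + xn*yd = (1 * 6) + (1 * 4) = 10
; xd*yd         =  6 * 4            = 24
; divide then reduces 10/24 by their gcd, 2:
(+ 1/6 1/4) ; => 5/12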
You can see how multiply is implemented; it alone is not trivial, and you can examine the others for yourself.
Sure enough, the divide function involves finding the gcd between the two numbers so it can be reduced:
static public Number divide(BigInteger n, BigInteger d){
if(d.equals(BigInteger.ZERO))
throw new ArithmeticException("Divide by zero");
BigInteger gcd = n.gcd(d);
if(gcd.equals(BigInteger.ZERO))
return BigInt.ZERO;
n = n.divide(gcd);
d = d.divide(gcd);
...
}
The gcd function creates two new MutableBigInteger objects.
Computationally, it's expensive, as you can see from all of the above. However, don't discount the cost of the extra incidental object creation (as in gcd above): that is often the more expensive part, since it involves non-cached memory access.
The double conversion is not free, FWIW, as it involves a division of two newly-created BigDecimals.
You really need a profiler to see exactly where the cost is. But hopefully the above gives a little context.
I am currently learning Lisp by going through some of the problems on the Project Euler site. One of the problems asks this:
The prime factors of 13195 are 5, 7, 13 and 29.
What is the largest prime factor of the number 600851475143 ?
I have scraped together Lisp code that does this. However, for numbers with 9+ digits it is very slow: most of the time I never get a solution, whereas for 8 digits it takes about 4-5 seconds. What's more, I sometimes get a "HEAP exceeded" error.
My question is: am I doing something wrong in terms of running the code (I use Aquamacs)? What are some ways this code can be optimized to be better suited for the task at hand? More importantly, how can the "exceeded HEAP" crashes be avoided?
Code:
(defun potential-factors (number)
(loop for x from 1 to (ceiling (/ number 2))
for y = x
collect y))
(defun factors (number)
(let ((prime-factors '()))
(loop for x in (potential-factors number)
do (if (= (mod number x) 0)
(setq prime-factors (cons x prime-factors))))
prime-factors))
(defun is-prime (n &optional (d (- n 1)))
(if (/= n 1)
(or (= d 1)
(and (/= (rem n d) 0)
(is-prime n (- d 1)))) ()))
(defun problem-3 (number)
(last (sort (remove-if-not #'is-prime (factors number) :from-end t) #'<)))
The problem is that potential-factors creates a list of all the numbers between 1 and n/2. That list takes a huge amount of memory and causes the program to crash. The good news is that you don't need to accumulate these numbers in a list; you can simply use one number at a time. In factors, replace the line (loop for x in (potential-factors number) with (loop for x from 1 to (ceiling (/ number 2))
That should do the trick.
I'm no mathematician, but another thought: the insight about dividing n by 2 is that factors come in pairs. A is a factor of N only if A times B is N, so B has to be at least 2. But that logic can be extended, right? What about dividing by 3? Once you've checked whether 3 is a factor, there is no point in checking any number greater than N/3. The same for 4, and so on. The observation is that you really only need to check pairs where A is less than or equal to B, so what is the limit of that? Well, if A = B, then A times B = A times A, which means that in that case A is the square root of N. So I would think you only need to check up as high as the square root of N, instead of all the way up to N / 2.
But I'm no mathematician.
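That intuition is correct. Here is a sketch of it (in Clojure, since this page mixes several Lisps; the function name is my own): trial division that divides out each factor as it is found never needs to test a divisor larger than the square root of what remains.
(defn largest-prime-factor [n]
  (loop [n n, d 2]
    (cond
      (> (* d d) n)     n                     ; nothing <= sqrt(n) divides n: n is prime
      (zero? (mod n d)) (recur (quot n d) d)  ; divide d out and try it again
      :else             (recur n (inc d)))))
;; (largest-prime-factor 13195)        ; => 29
;; (largest-prime-factor 600851475143) ; => 6857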
What do you think the result of (potential-factors 600851475143) is? How long would it take to compute a result and how much memory would the result need?
I have been asked to translate a couple of C functions to Scheme for an assignment. My professor only very briefly skimmed over how Scheme works, and I am finding it difficult to understand. I want to create a function that checks which number is greater than the other, and then keeps checking every time you input a new number. The issue I am having is with variable declaration: I don't understand how you assign a value to an id.
(define max 1)
(define (x x)
(let maxfinder [(max max)]
(if (= x 0)
0
(if (> max x)
max
((= max x) maxfinder(max))))))
The trouble I keep running into is that I want to initialize max as a constant and modify x. In my mind this is set up as an infinite loop with an exit when x = 0. If max > x, which it should not be the first time through, then set max equal to x and return x. I don't know what to do with the constant max. I need it to be a local variable. Thanks
Parenthesis use is very strict. Aside from special forms, parentheses are used to call procedures, e.g. (> max x) calls the procedure > with arguments max and x. ((if (> x 3) - +) 6 x) is an example where the if form returns a procedure and the result is then called.
((= max x) ...) evaluates (= max x), and since the result is not a procedure, the call will fail.
maxfinder without parentheses is just a procedure object.
(max) won't work since max is a number, not a procedure.
As for your problem: you add the extra variables you need to change as bindings in the named let. E.g., here is a procedure that takes a number n and makes a list of the numbers 1 to n:
(define (make-numbered-list n)
(let loop ((n n) (acc '()))
(if (zero? n)
acc
(loop (- n 1) (cons n acc)))))
Local variables are just locally bound symbols. This can be rewritten
(define (make-numbered-list n)
(define (loop n acc)
(if (zero? n)
acc
(loop (- n 1) (cons n acc))))
(loop n '()))
Unlike Algol dialects such as C, you don't mutate variables in a loop; you use recursion to alter them.
Good luck
If I understand you correctly, you are looking for the equivalent of a C function's static variable. In Scheme you get this with a closure.
Here's an example implementation of a function you feed numbers to, and which will always return the current maximum:
(define maxfinder
(let ((max #f)) ; "static" variable, initialized to False
(lambda (n) ; the function that is defined
(when (or (not max) (< max n)) ; if no max yet, or new value > max
(set! max n)) ; then set max to new value
max))) ; in any case, return the current max
then
> (maxfinder 1)
1
> (maxfinder 10)
10
> (maxfinder 5)
10
> (maxfinder 2)
10
> (maxfinder 100)
100
So this will work, but provides no mechanism to reuse the function in a different context. The following more generalised version instantiates a new function on every call:
(define (maxfinder)
(let ((max #f)) ; "static" variable, initialized to False
(lambda (n) ; the function that is returned
(when (or (not max) (< max n)) ; if no max yet, or new value > max
(set! max n)) ; then set max to new value
max))) ; in any case, return the current max
use like this:
> (define max1 (maxfinder)) ; instantiate a new maxfinder
> (max1 1)
1
> (max1 10)
10
> (max1 5)
10
> (max1 2)
10
> (max1 100)
100
> (define max2 (maxfinder)) ; instantiate a new maxfinder
> (max2 5)
5
Define a function to determine the maximum between two numbers:
(define (max x y)
(if (> x y) x y))
Define a function to 'end'
(define end? zero?)
Define a function to loop until end? computing max
(define (maximizing x)
(let ((input (begin (display "number> ") (read))))
(cond ((not (number? input)) (error "needed a number"))
((end? input) x)
(else (maximizing (max x input))))))
Kick it off:
> (maximizing 0)
number> 4
number> 1
number> 7
number> 2
number> 0
7
I have a simple search function for a property that I am interested in, and a proof that the function is correct. I want to evaluate the function and use the correctness proof to get the theorem for the original property. Unfortunately, evaluation in Coq is very slow. As a trivial example, consider looking for square roots:
(*
Coq 8.4
A simple example to demonstrate searching.
Timings are rough and approximate.
*)
Require Import Peano_dec.
Definition SearchSqrtLoop :=
fix f i n := if eq_nat_dec (i * i) n
then i
else match i with
| 0 => 0 (* ~ Square n \/ n = 0 *)
| S j => f j n
end
.
Definition SearchSqrt n := SearchSqrtLoop n n.
(*
Compute SearchSqrt 484.
takes about 30 seconds.
*)
Theorem sqrt_484a : SearchSqrt 484 = 22.
apply eq_refl. (* 100 seconds *)
Qed. (* 50 seconds *)
Theorem sqrt_484b : SearchSqrt 484 = 22.
vm_compute. (* 30 seconds *)
apply eq_refl.
Qed. (* 30 seconds *)
Theorem sqrt_484c (a : nat) : SearchSqrt 484 = 22.
apply eq_refl. (* 100 seconds *)
Qed. (* 50 seconds *)
Theorem sqrt_484d (a : nat) : SearchSqrt 484 = 22.
vm_compute. (* 60 seconds *)
apply eq_refl.
Qed. (* 60 seconds *)
Now try the corresponding function in Python:
def SearchSqrt(n):
for i in range(n, -1, -1):
if i * i == n:
return i
return 0
or slightly more literally
def SearchSqrtLoop(i, n):
if i * i == n:
return i
if i == 0:
return 0
return SearchSqrtLoop(i - 1, n)
def SearchSqrt(n):
return SearchSqrtLoop(n, n)
The function is nearly instant in Python, but takes minutes in Coq, depending on exactly how you call it. Also curious is that putting an extra variable in makes vm_compute take twice as long.
I understand that everything is done symbolically in Coq, and is thus slow, but it would be very useful if I could directly evaluate simple functions. Is there a way to do it? Just using native integers instead of the linked-list-like unary representation would probably help a lot.
You'll get a speedup if you use binary arithmetic instead of unary arithmetic. Take a look at NArith and ZArith.
http://coq.inria.fr/library/
You'll also get a speedup if you run your code on OCaml, Haskell, or Scheme instead.
http://coq.inria.fr/refman/Reference-Manual025.html
There is a much more efficient function for finding square roots in the standard library (sqrt):
The following square root function is linear (and tail-recursive). With Peano representation, we can't do better. For faster algorithm, see Psqrt/Zsqrt/Nsqrt...
Require Import Coq.Init.Nat.
Theorem sqrt_484a_v2 : sqrt 484 = 22.
Time apply eq_refl.
Time Qed.
As Time tells us it works much faster (around 200 times faster than sqrt_484a).
The reason for this performance difference lies in the fact that SearchSqrt squares its first argument on each iteration, which is an expensive operation.
sqrt's implementation, on the other hand, is based on the Odd Number Theorem: 1 + 3 + 5 + ... + (2k - 1) = k^2. One just needs to count how many successive odd numbers can be fitted into the input argument n; that count is the integer square root of n. E.g. 22 = (1 + 3 + 5 + 7) + 6, which means four odd intervals (of lengths 1, 3, 5, and 7) fit in 22, so sqrt 22 = 4 (and we are not interested in the residual value 6).
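To make that counting idea concrete, here is the same algorithm sketched in Clojure (my own rendering of the idea, not the Coq standard library's code):
(defn isqrt [n]
  ;; 1 + 3 + 5 + ... + (2k - 1) = k^2, so count how many successive
  ;; odd numbers fit into n; that count is the integer square root.
  (loop [k 0, odd 1, r n]
    (if (< r odd)
      k
      (recur (inc k) (+ odd 2) (- r odd)))))
;; (isqrt 22)  ; => 4
;; (isqrt 484) ; => 22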
So, I'm a hobbyist who's trying to work through SICP (it's free!), and there is an example procedure in the first chapter that is meant to count the possible ways to make change with American coins: (change-maker 100) => 292. It's implemented something like:
(define (change-maker amount)
(define (coin-value n)
(cond ((= n 1) 1)
((= n 2) 5)
((= n 3) 10)
((= n 4) 25)
((= n 5) 50)))
(define (iter amount coin-type)
(cond ((= amount 0) 1)
((or (= coin-type 0) (< amount 0)) 0)
(else (+ (iter amount
(- coin-type 1))
(iter (- amount (coin-value coin-type))
coin-type)))))
(iter amount 5))
Anyway, this is a tree-recursive procedure, and the author "leaves as a challenge" finding an iterative procedure that solves the same problem (i.e. in fixed space). I have not had luck figuring this out or finding an answer after getting frustrated. I'm wondering if it's a brain fart on my part, or if the author's screwing with me.
The simplest / most general way to eliminate recursion, in general, is to use an auxiliary stack -- instead of making the recursive calls, you push their arguments into the stack, and iterate. When you need the result of the recursive call in order to proceed, again in the general case, that's a tad more complicated because you're also going to have to be able to push a "continuation request" (that will come off the auxiliary stack when the results are known); however, in this case, since all you're doing with all the recursive call results is a summation, it's enough to keep an accumulator and, every time you get a number result instead of a need to do more call, add it to the accumulator.
However, this, per se, is not fixed space, since that stack will grow. So another helpful idea is: since this is a pure function (no side effects), any time you find yourself having computed the function's value for a certain set of arguments, you can memoize the arguments-result correspondence. This will limit the number of calls. Another conceptual approach that leads to much the same computations is dynamic programming [[aka DP]], though with DP you often work bottom-up "preparing results to be memoized", so to speak, rather than starting with a recursion and working to eliminate it.
Take bottom-up DP on this function, for example. You know you'll repeatedly end up with "how many ways to make change for amount X with just the smallest coin" (as you whittle things down to X with various coin combinations from the original amount), so you start computing those values with a simple iteration: f(X) = 1 if X is exactly divisible by the smallest-coin value value, else 0; here, value is 1, so f(X) = 1 for all X > 0. Now you continue by computing a new function g(X), the ways to make change for X with the two smallest coins: again a simple iteration for increasing X, with g(X) = f(X) + g(X - value) for the value of the second-smallest coin. It is a simple iteration because, by the time you're computing g(X), you've already computed and stored f(X) and all g(Y) for Y < X; of course, g(X) = 0 for all X < 0, and g(0) = 1. And again for h(X), the ways to make change for X with the three smallest coins: h(X) = g(X) + h(X - value), where value is now the third-smallest coin. From then on you won't need f(X) any more, so you can reuse that space. All told, this needs space 2 * amount: not "fixed space" yet, but getting closer...
To make the final leap to "fixed space", ask yourself: do you need to keep around all values of two arrays at each step (the one you last computed and the one you're currently computing), or, only some of those values, by rearranging your looping a little bit...?
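One answer to that question, as a sketch in Clojure (my own rendering, not SICP's): a single array swept left to right once per coin suffices, because by the time slot X is updated, slot X - value already holds the new function's value while slot X still holds the old one.
(defn count-change [coins n]
  (let [ways (long-array (inc n))]   ; ways[x] = ways to make x with coins so far
    (aset ways 0 1)                  ; one way to make 0: use no coins
    (doseq [c coins
            x (range c (inc n))]     ; left-to-right sweep per coin
      (aset ways x (+ (aget ways x)            ; ways that don't use coin c
                      (aget ways (- x c)))))   ; plus ways that use one more c
    (aget ways n)))
;; (count-change [1 5 10 25 50] 100) ; => 292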
Here is my version of the function, using dynamic programming. A vector of size n+1 is initialized to 0, except that the 0'th item is initially 1. Then for each possible coin (the outer do loop), each vector element (the inner do loop) starting from the k'th, where k is the value of the coin, is incremented by the value at the current index minus k.
(define (counts xs n)
(let ((cs (make-vector (+ n 1) 0)))
(vector-set! cs 0 1)
(do ((xs xs (cdr xs)))
((null? xs) (vector-ref cs n))
(do ((x (car xs) (+ x 1))) ((< n x))
(vector-set! cs x (+ (vector-ref cs x)
(vector-ref cs (- x (car xs)))))))))
> (counts '(1 5 10 25 50) 100)
292
You can run this program at http://ideone.com/EiOVY.
So, in this thread, the original asker of the question comes up with a sound answer via modularization. I would suggest, however, that his code can easily be optimized once you notice that cc-pennies is entirely superfluous (and by extension, so is cc-nothing).
See, the problem with the way cc-pennies is written is that, because there's no lower denomination to recurse into, all it does by mimicking the structure of the higher-denomination procedures is iterate down from (- amount 1) to 0, and it does this every time cc-nickels passes it an amount. So, on the first pass, if you try 1 dollar, you get an amount of 100, so (- amount 1) evaluates to 99, which means you'll undergo 99 superfluous cycles of the cc-pennies and cc-nothing loop. Then nickels will pass 95 as the amount, so you get 94 more wasted cycles, and so on. And that's all before you even move up the tree to dimes, quarters, or half-dollars.
By the time you get to cc-pennies, you already know you just want to up the accumulator by one, so I'd suggest this improvement:
(define (count-change-iter amount)
(cc-fifties amount 0))
(define (cc-fifties amount acc)
(cond ((= amount 0) (+ 1 acc))
((< amount 0) acc)
(else (cc-fifties (- amount 50)
(cc-quarters amount acc)))))
(define (cc-quarters amount acc)
(cond ((= amount 0) (+ 1 acc))
((< amount 0) acc)
(else (cc-quarters (- amount 25)
(cc-dimes amount acc)))))
(define (cc-dimes amount acc)
(cond ((= amount 0) (+ 1 acc))
((< amount 0) acc)
(else (cc-dimes (- amount 10)
(cc-nickels amount acc)))))
(define (cc-nickels amount acc)
(cond ((= amount 0) (+ 1 acc))
((< amount 0) acc)
(else (cc-nickels (- amount 5)
(cc-pennies amount acc)))))
(define (cc-pennies amount acc)
(+ acc 1))
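A quick check (my own): (count-change-iter 100) should return 292, agreeing with the tree-recursive original.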
Hope you found this useful.
The solution I came up with is to keep a count of each type of coin you're using in a 'purse'.
The main loop works like this: denom is the current denomination, changed is the total value of coins in the purse, given is the amount of change I need to make, and clear-up-to takes all the coins smaller than a given denomination out of the purse.
#lang scheme
(define (sub changed denom)
(cond
((> denom largest-denom)
combinations)
((>= changed given)
(inc-combinations-if (= changed given))
(clear-up-to denom)
(jump-duplicates changed denom)) ;checks that clear-up-to had any effect.
(else
 (add-to-purse denom)
 (sub (purse-value) 0))))
(define (jump-duplicates changed denom)
(define (iter peek denom)
(cond
((> (+ denom 1) largest-denom)
combinations)
((= peek changed)
(begin
(clear-up-to (+ denom 1))
(iter (purse-value) (+ denom 1))))
(else
(sub peek (+ denom 1)))))
(iter (purse-value) denom))
After reading Alex Martelli's answer I came up with the purse idea, but only just got around to making it work.
You can solve it iteratively with dynamic programming in pseudo-polynomial time.