I have a question about the difference between two solutions to a problem. The problem asks to transform a list to a truncated list like so:
?- reduce([a,a,a,b,b,c,c,b,b,d,d],Z).
Z = [a,b,c,b,d].
This first solution needs an extra step that reverses the list:
reduce([X|Xs],Z) :-
    reduce(X,Xs,Y,[X]),
    reverse(Y,Z).

reduce(X,[L|Ls],Y,List) :-
    (  X=L
    -> reduce(X,Ls,Y,List)
    ;  reduce(L,Ls,Y,[L|List])
    ).
reduce(_,[],Y,Y).
The second solution does not require reverse/2:
reduced([X|Xs],Result) :-
    reduced(Xs,List),
    List=[A|_],
    (  A=X
    -> Result=List
    ;  Result=[X|List]
    ),
    !.
reduced(Result,Result).
What are the optimization considerations when performing recursion before or after a series of statements? Does the order of the conditions matter? My inclination is to think that doing all the recursion upfront is the way to go, especially because building the list backwards is necessary here.
When you optimize anything, make sure to measure first! (most of us tend to forget this....)
When you optimize Prolog, look out for the following:
Tail recursion tends to do better (so there goes your "before or after series of statements" question);
Avoid creating choice points you don't need (this depends on the Prolog implementation);
Use an optimal algorithm (as in, don't traverse a list twice if you don't have to).
A solution that is "optimized" for a more or less standard Prolog implementation will look maybe a bit different. I will name it list_uniq (in analogy to the command-line uniq tool):
list_uniq([], []). % Base case
list_uniq([H|T], U) :-
    list_uniq_1(T, H, U). % Helper predicate

list_uniq_1([], X, [X]).
list_uniq_1([H|T], X, U) :-
    (  H == X
    -> list_uniq_1(T, X, U)
    ;  [X|U1] = U,
       list_uniq_1(T, H, U1)
    ).
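With it, the query from the question succeeds deterministically (no choice point is left):
?- list_uniq([a,a,a,b,b,c,c,b,b,d,d], U).
U = [a, b, c, b, d].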
It is different from reduce0/2 by @CapelliC because it uses lagging (the previous element is passed along as a separate argument) to avoid the inherent non-determinism of [X|Xs] and [X,X|Xs] in the first argument.
Now to the claim that it is "optimized":
It traverses the list exactly once (no need for reversing);
It is tail-recursive;
It does not create and discard choice points.
You will get the same 12 inferences as @CapelliC, and if you then use a somewhat longer list, you will start to see differences:
?- length(As, 100000), maplist(=(a), As),
   length(Bs, 100000), maplist(=(b), Bs),
   length(Cs, 100000), maplist(=(c), Cs),
   append([As, Bs, Cs, As, Cs, Bs], L),
   time(list_uniq(L, U)).
% 600,006 inferences, 0.057 CPU in 0.057 seconds (100% CPU, 10499893 Lips)
As = [a, a, a, a, a, a, a, a, a|...],
Bs = [b, b, b, b, b, b, b, b, b|...],
Cs = [c, c, c, c, c, c, c, c, c|...],
L = [a, a, a, a, a, a, a, a, a|...],
U = [a, b, c, a, c, b].
The same query with reduce0, reduce1, reduce2 from @CapelliC's answer:
% reduce0(L, U)
% 600,001 inferences, 0.125 CPU in 0.125 seconds (100% CPU, 4813955 Lips)
% reduce1(L, U)
% 1,200,012 inferences, 0.393 CPU in 0.394 seconds (100% CPU, 3050034 Lips)
% reduce2(L, U)
% 2,400,004 inferences, 0.859 CPU in 0.861 seconds (100% CPU, 2792792 Lips)
So, creating and discarding choice points with cuts (!) has a price, too.
However, list_uniq/2, as it stands, can be wrong for queries where the first argument is not ground:
?- list_uniq([a,B], [a,b]).
B = b. % OK
?- list_uniq([a,A], [a]).
false. % WRONG!
reduce0/2 and reduce1/2 can be wrong, too:
?- reduce0([a,B], [a,b]).
false.
?- reduce1([a,B], [a,b]).
false.
As for reduce2/2, I am not sure about this one:
?- reduce2([a,A], [a,a]).
A = a.
Instead, using the definition of if_/3 from this answer:
list_uniq_d([], []). % Base case
list_uniq_d([H|T], U) :-
    list_uniq_d_1(T, H, U). % Helper predicate

list_uniq_d_1([], X, [X]).
list_uniq_d_1([H|T], X, U) :-
    if_(H = X,
        list_uniq_d_1(T, H, U),
        ( [X|U1] = U,
          list_uniq_d_1(T, H, U1)
        )
    ).
With it:
?- list_uniq_d([a,a,a,b], U).
U = [a, b].
?- list_uniq_d([a,a,a,b,b], U).
U = [a, b].
?- list_uniq_d([a,A], U).
A = a,
U = [a] ;
U = [a, A],
dif(A, a).
?- list_uniq_d([a,A], [a]).
A = a ;
false. % Dangling choice point
?- list_uniq_d([a,A], [a,a]).
false.
?- list_uniq_d([a,B], [a,b]).
B = b.
It takes longer, but the predicate seems to be correct.
With the same query as used for the other timings:
% 3,000,007 inferences, 1.140 CPU in 1.141 seconds (100% CPU, 2631644 Lips)
Profiling seems the easiest way to answer efficiency questions:
% my own
reduce0([], []).
reduce0([X,X|Xs], Ys) :- !, reduce0([X|Xs], Ys).
reduce0([X|Xs], [X|Ys]) :- reduce0(Xs, Ys).
% your first
reduce1([X|Xs],Z) :- reduce1(X,Xs,Y,[X]), reverse(Y,Z).

reduce1(X,[L|Ls],Y,List) :-
    (  X=L
    -> reduce1(X,Ls,Y,List)
    ;  reduce1(L,Ls,Y,[L|List])
    ).
reduce1(_,[],Y,Y).
% your second
reduce2([X|Xs],Result) :-
    reduce2(Xs,List),
    List=[A|_],
    (  A=X
    -> Result=List
    ;  Result=[X|List]
    ),
    !.
reduce2(Result,Result).
SWI-Prolog offers time/1:
4 ?- time(reduce0([a,a,a,b,b,c,c,b,b,d,d],Z)).
% 12 inferences, 0.000 CPU in 0.000 seconds (84% CPU, 340416 Lips)
Z = [a, b, c, b, d].
5 ?- time(reduce1([a,a,a,b,b,c,c,b,b,d,d],Z)).
% 19 inferences, 0.000 CPU in 0.000 seconds (90% CPU, 283113 Lips)
Z = [a, b, c, b, d] ;
% 5 inferences, 0.000 CPU in 0.000 seconds (89% CPU, 102948 Lips)
false.
6 ?- time(reduce2([a,a,a,b,b,c,c,b,b,d,d],Z)).
% 12 inferences, 0.000 CPU in 0.000 seconds (83% CPU, 337316 Lips)
Z = [a, b, c, b, d].
Your second predicate performs like mine, while the first one seems to leave a choice point...
The order of conditions is of primary importance, given the resolution strategy Prolog implements. In naive implementations, like my IL, tail recursion optimization was recognized only when the recursive call was the last one and was preceded by a cut, just to be sure it's deterministic...
This answer is a direct follow-up to @Boris's answer.
To estimate the runtime we can expect once if_/3 is compiled, I made list_uniq_e/2, which is just like @Boris's list_uniq_d/2 but with the if_/3 compiled manually.
list_uniq_e([], []). % Base case
list_uniq_e([H|T], U) :-
    list_uniq_e_1(T, H, U). % Helper predicate

list_uniq_e_1([], X, [X]).
list_uniq_e_1([H|T], X, U) :-
    =(H,X,Truth),
    list_uniq_e_2(Truth,H,T,X,U).

list_uniq_e_2(true ,H,T,_, U   ) :- list_uniq_e_1(T,H,U).
list_uniq_e_2(false,H,T,X,[X|U]) :- list_uniq_e_1(T,H,U).
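As a sanity check, list_uniq_e/2 should give the same answers as list_uniq_d/2 on partially instantiated lists; for example:
?- list_uniq_e([a,A], U).
A = a,
U = [a] ;
U = [a, A],
dif(A, a).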
Let's compare the runtimes (SWI-Prolog 7.3.1, Intel Core i7-4700MQ 2.4GHz)!
First up, list_uniq_d/2:
% 3,000,007 inferences, 0.623 CPU in 0.623 seconds (100% CPU, 4813150 Lips)
Next up, list_uniq_e/2:
% 2,400,003 inferences, 0.132 CPU in 0.132 seconds (100% CPU, 18154530 Lips)
For the sake of completeness, here are reduce0/2, reduce1/2, and reduce2/2:
% 600,002 inferences, 0.079 CPU in 0.079 seconds (100% CPU, 7564981 Lips)
% 600,070 inferences, 0.141 CPU in 0.141 seconds (100% CPU, 4266842 Lips)
% 600,001 inferences, 0.475 CPU in 0.475 seconds (100% CPU, 1262018 Lips)
Not bad! And... this is not the end of the line, as far as optimizing if_/3 is concerned :)
Hoping this is an even better follow-up to @Boris's answer than my last try!
First, here's @Boris's code again (100% original):
list_uniq_d([], []). % Base case
list_uniq_d([H|T], U) :-
    list_uniq_d_1(T, H, U). % Helper predicate

list_uniq_d_1([], X, [X]).
list_uniq_d_1([H|T], X, U) :-
    if_(H = X,
        list_uniq_d_1(T, H, U),
        ( [X|U1] = U,
          list_uniq_d_1(T, H, U1)
        )
    ).
Plus some more code for benchmarking:
bench(P_2) :-
    length(As, 100000), maplist(=(a), As),
    length(Bs, 100000), maplist(=(b), Bs),
    length(Cs, 100000), maplist(=(c), Cs),
    append([As, Bs, Cs, As, Cs, Bs], L),
    time(call(P_2,L,_)).
Now, let's introduce module re_if:
:- module(re_if, [if_/3, (=)/3, expand_if_goals/0]).

:- dynamic expand_if_goals/0.

trusted_truth(_=_). % we need not check truth values returned by (=)/3

=(X, Y, R) :- X == Y, !, R = true.
=(X, Y, R) :- ?=(X, Y), !, R = false. % syntactically different
=(X, Y, R) :- X \= Y, !, R = false. % semantically different
=(X, Y, R) :- R == true, !, X = Y.
=(X, X, true).
=(X, Y, false) :- dif(X, Y).
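% For example, the reified equality can be queried directly:
% ?- =(X, a, Truth).
% X = a, Truth = true ;
% Truth = false, dif(X, a).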
:- meta_predicate if_(1,0,0).

if_(C_1,Then_0,Else_0) :-
    call(C_1,Truth),
    functor(Truth,_,0), % safety check
    (  Truth == true -> Then_0
    ;  Truth == false , Else_0
    ).
:- multifile system:goal_expansion/2.

system:goal_expansion(if_(C_1,Then_0,Else_0), IF) :-
    expand_if_goals,
    callable(C_1), % nonvar && (atom || compound)
    !,
    C_1 =.. Ps0,
    append(Ps0,[T],Ps1),
    C_0 =.. Ps1,
    (  trusted_truth(C_1)
    -> IF = (C_0, ( T == true -> Then_0 ; Else_0 ))
    ;  IF = (C_0, functor(T,_,0), ( T == true -> Then_0 ; T == false, Else_0 ))
    ).
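With expand_if_goals asserted, a goal such as if_(H = X, Then, Else) is thus rewritten at load time into =(H, X, T), ( T == true -> Then ; Else ) (the shorter expansion, since (=)/3 matches trusted_truth/1), so the runtime meta-call overhead of if_/3 disappears entirely.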
And now ... *drumroll* ... lo and behold :)
$ swipl
Welcome to SWI-Prolog (Multi-threaded, 64 bits, Version 7.3.3-18-gc341872)
Copyright (c) 1990-2015 University of Amsterdam, VU Amsterdam
SWI-Prolog comes with ABSOLUTELY NO WARRANTY. This is free software,
and you are welcome to redistribute it under certain conditions.
Please visit http://www.swi-prolog.org for details.
For help, use ?- help(Topic). or ?- apropos(Word).
?- compile(re_if), compile(list_uniq).
true.
?- bench(list_uniq_d).
% 2,400,010 inferences, 0.865 CPU in 0.865 seconds (100% CPU, 2775147 Lips)
true.
?- assert(re_if:expand_if_goals), compile(list_uniq).
true.
?- bench(list_uniq_d).
% 1,200,005 inferences, 0.215 CPU in 0.215 seconds (100% CPU, 5591612 Lips)
true.
Related
Let's take this function:
function gcd(a, b)
    while a ≠ b
        if a > b
            a := a − b
        else
            b := b − a
    return a
How would we code gcd/3 in pure Prolog, so that it can be inverted? The Prolog predicate should, for example, compute gcd(2,3)=1. But if we were to ask which a, b satisfy gcd(a,b)=1, the same Prolog predicate should also give us:
/* one while iteration */
2, 1
1, 2
/* two while iterations */
3, 1
2, 3
3, 2
1, 3
/* Etc... */
Prolog seems to be especially suited since it can enumerate solutions.
I first tried to literally translate the GCD function
into Prolog code. The first clause is for when a ≠ b is false,
which means we can terminate the function. Otherwise we
recurse into the two cases:
euclid(A,A,A).
euclid(A,B,R) :- A #< B, C #= B-A, euclid(A,C,R).
euclid(A,B,R) :- A #> B, C #= A-B, euclid(C,B,R).
We can test it; it seems to work fine, except that it leaves choice points.
But the choice points are kind of the price we have to pay for
using CLP(FD) and programming pure Prolog without cut:
?- euclid(17,13,X).
X = 1 ;
No
But using euclid/3 for enumeration isn't very satisfactory, since
the result covers only one execution branch of the GCD function:
?- euclid(A,B,1).
A = 1,
B = 1 ;
A = 1,
B = 2 ;
A = 1,
B = 3 ;
A = 1,
B = 4 ;
Now we can do the following and encode the path P through the GCD
function as a binary number. When the function terminates, the path is P=1.
Otherwise the low bit of P encodes which of the two remaining
clauses of the GCD was chosen:
euclid(A,A,1,A).
euclid(A,B,P,R) :- A #< B, C #= B-A, P #= 2*Q, Q #> 0, euclid(A,C,Q,R).
euclid(A,B,P,R) :- A #> B, C #= A-B, P #= 2*Q+1, Q #> 0, euclid(C,B,Q,R).
The resulting Prolog predicate is indeed bidirectional:
?- euclid(17,13,P,X).
P = 241,
X = 1 ;
No
?- euclid(A,B,241,1).
A = 17,
B = 13 ;
No
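To see the encoding at work: 241 is 11110001 in binary. Reading the bits from the least significant one upward, 1,0,0,0,1,1,1 record the seven subtraction steps of gcd(17,13), namely (17,13)→(4,13)→(4,9)→(4,5)→(4,1)→(3,1)→(2,1)→(1,1), with 1 for the A #> B clause and 0 for the A #< B clause, and the final leading 1 is the terminating first clause.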
We can also use it to enumerate arbitrarily, although only with
the help of between/3, and it is maybe not the most efficient, but it works:
?- between(1,7,P), euclid(A,B,P,1), write(B/A), nl, fail; true.
1/1
2/1
1/2
3/1
2/3
3/2
1/3
Yes
Edit 04.02.2021:
Oh, interesting, this also works. But the results come in a different order:
?- P #< 8, euclid(A,B,P,1), write(B/A), nl, fail; true.
1/1
2/1
3/1
3/2
1/2
2/3
1/3
true.
This solution uses an argument to keep track of the current tier of the loop:
gcd(A, B, G):-
    gcd(_, A, B, G).

gcd(Tier, A, B, G):-
    Tier1 #= Tier - 1,
    Tier1 #>= 0,
    zcompare(Order, A, B),
    gcd(Order, Tier1, A, B, G).

gcd(=, 0, G, G, G).
gcd(>, Tier, A, B, G):-
    A1 #= A - B,
    gcd(Tier, A1, B, G).
gcd(<, Tier, A, B, G):-
    B1 #= B - A,
    gcd(Tier, A, B1, G).
So when you want to get a tiered enumeration you may write:
?- between(1,3,Tier), gcd(Tier, A,B,1), write(B/A), nl, fail; true.
1/1
1/2
2/1
1/3
2/3
3/2
3/1
true.
My professor has given me an RSA factoring problem as an assignment. The given modulus is 30 decimal digits long. I have been searching a lot about factoring algorithms, but it has been quite a headache to choose one for my given requirements. Which algorithms give the best performance for 30-digit decimal numbers?
Note: So far I have read about the brute-force approach and the quadratic sieve. The latter is complex and the former time-consuming.
There's another method called Pollard's Rho algorithm, which is not as fast as the GNFS but is capable of factoring 30-digit numbers in minutes rather than hours.
The algorithm is very simple. It stops when it finds any factor, so you'll need to call it recursively to obtain a complete factorisation. Here's a basic implementation in Python:
def rho(n):
    def gcd(a, b):
        while b > 0:
            a, b = b, a % b
        return a

    g = lambda z: (z**2 + 1) % n  # the pseudo-random polynomial

    # Floyd cycle detection: x takes one step per iteration, y takes two.
    x, y, d = 2, 2, 1
    while d == 1:
        x = g(x)
        y = g(g(y))
        d = gcd(abs(x - y), n)

    if d == n:
        print("Can't factor this, sorry.")
        print("Try a different polynomial for g(), maybe?")
    else:
        print("%d = %d * %d" % (n, d, n // d))

rho(441693463910910230162813378557)  # = 763728550191017 * 578338290221621
Or you could just use an existing software library. I can't see much point in reinventing this particular wheel.
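That said, if you do want to push on with the recursive driver mentioned above, here is a minimal sketch of one way it could look. The names rho_factor, is_probable_prime, and factorize are my own, not library functions: rho is reworked to return a factor instead of printing, composites are split recursively, and a Miller-Rabin test decides when a piece is (probably) prime.

import random
from math import gcd

def is_probable_prime(n, rounds=20):
    # Miller-Rabin; good enough to stop the recursion.
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def rho_factor(n, c=1):
    # Same loop as rho() above, but returns a nontrivial factor,
    # retrying with the polynomial z^2 + (c+1) if the walk degenerates.
    g = lambda z: (z * z + c) % n
    x = y = 2
    d = 1
    while d == 1:
        x = g(x)
        y = g(g(y))
        d = gcd(abs(x - y), n)
    return d if d != n else rho_factor(n, c + 1)

def factorize(n):
    # Recursively split n until all pieces are (probable) primes.
    if n == 1:
        return []
    if n % 2 == 0:
        return [2] + factorize(n // 2)
    if is_probable_prime(n):
        return [n]
    d = rho_factor(n)
    return sorted(factorize(d) + factorize(n // d))

print(factorize(441693463910910230162813378557))
# expected: [578338290221621, 763728550191017]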
I'm summing a long list of Ratios in Clojure, something like:
(defn sum-ratios
  [n]
  (reduce
    (fn [total ind]
      (+ total
         (/ (inc (rand-int 100))
            (inc (rand-int 100)))))
    (range 0 n)))
The runtime for various n is:
n = 10^4 ...... 41 ms
n = 10^6 ...... 3.4 s
n = 10^7 ...... 36 s
The (less precise) alternative is to sum these values as doubles:
(defn sum-doubles
  [n]
  (reduce
    (fn [total ind]
      (+ total
         (double
           (/ (inc (rand-int 100))
              (inc (rand-int 100))))))
    (range 0 n)))
The runtime for this version is:
n = 10^4 ...... 8.8 ms
n = 10^6 ...... 350 ms
n = 10^7 ...... 3.4 s
Why is it significantly slower to sum Ratios? I'm guessing that it has to do with finding the least common multiple of the denominators of the Ratios being summed, but does anybody know specifically which algorithm Clojure uses to sum Ratios?
Let's follow what happens when you + two Ratios, which is what happens at each step in the reduction. We start at the two-arity version of +:
([x y] (. clojure.lang.Numbers (add x y)))
This takes us to Numbers.add(Object, Object):
return ops(x).combine(ops(y)).add((Number)x, (Number)y);
ops looks at the class of the first operand and will find RatioOps. This leads to the RatioOps.add function:
final public Number add(Number x, Number y){
    Ratio rx = toRatio(x);
    Ratio ry = toRatio(y);
    Number ret = divide(ry.numerator.multiply(rx.denominator)
                            .add(rx.numerator.multiply(ry.denominator)),
                        ry.denominator.multiply(rx.denominator));
    return normalizeRet(ret, x, y);
}
So here's your algorithm. There are five BigInteger operations here (three multiplies, one add, one divide):
(yn*xd + xn*yd) / (xd*yd)
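For example, with x = 1/2 and y = 1/3 this computes (1·2 + 1·3) / (2·3) = 5/6.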
You can see how multiply is implemented; it alone is not trivial, and you can examine the others for yourself.
Sure enough, the divide function involves finding the gcd between the two numbers so it can be reduced:
static public Number divide(BigInteger n, BigInteger d){
    if(d.equals(BigInteger.ZERO))
        throw new ArithmeticException("Divide by zero");
    BigInteger gcd = n.gcd(d);
    if(gcd.equals(BigInteger.ZERO))
        return BigInt.ZERO;
    n = n.divide(gcd);
    d = d.divide(gcd);
    ...
}
The gcd function creates two new MutableBigInteger objects.
Computationally, it's expensive, as you can see from all of the above. However, don't discount the cost of the extra incidental object creation (as in gcd above), as that is often more expensive still, since it involves non-cached memory access.
The double conversion is not free, FWIW, as it involves a division of two newly-created BigDecimals.
You really need a profiler to see exactly where the cost is. But hopefully the above gives a little context.
I'm unsure of the general time complexity of the following code.
Sum = 0
for i = 1 to N
    if i > 10
        for j = 1 to i do
            Sum = Sum + 1
Assuming i and j are incremented by 1.
I know that the first loop is O(n) but the second loop is only going to run when N > 10. Would the general time complexity then be O(n^2)? Any help is greatly appreciated.
Consider the definition of Big O Notation.
________________________________________________________________
Let f: ℝ → ℝ and g: ℝ → ℝ.
Then, f(x) = O(g(x))
⇔
∃ k ∈ ℝ, ∃ M ∈ ℝ with M > 0, such that ∀ x ≥ k, |f(x)| ≤ M ⋅ |g(x)|
________________________________________________________________
Which can be read less formally as:
________________________________________________________________
Let f and g be functions defined on a subset of the real numbers.
Then, f is O of g if, for big enough x's (this is what the k is for in the formal definition), there is a constant M (from the real numbers, of course) such that M times g(x) will always be greater than or equal to (really, you can just increase M and it will always be strictly greater, but I digress) f(x).
________________________________________________________________
(You may note that if a function is O(n), then it is also O(n²) and O(e^n), but of course we are usually interested in the "smallest" function g such that it is O(g). In fact, when someone says f is O of g then they almost always mean that g is the smallest such function.)
Let's translate this to your problem. Let f(N) be the amount of time your process takes to complete as a function of N. Now, pretend that addition takes one unit of time to complete (and checking the if statement and incrementing the for-loop take no time), then
f(1) = 0
f(2) = 0
...
f(10) = 0
f(11) = 11
f(12) = 23
f(13) = 36
f(14) = 50
We want to find a function g(N) such that for big enough values of N, f(N) ≤ M ⋅ g(N). We can satisfy this by g(N) = N² and M can just be 1 (maybe it could be smaller, but we don't really care). In this case, big enough means greater than 10 (of course, f is still less than M ⋅ g for N < 11).
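In fact, for N ≥ 11 we can write f in closed form: f(N) = 11 + 12 + ... + N = N(N+1)/2 − 55, and since N(N+1)/2 − 55 ≤ N² holds for every N ≥ 1, taking k = 11 and M = 1 witnesses the definition above.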
tl;dr: Yes, the general time complexity is O(n²) because Big O assumes that your N is going to infinity.
Let's assume your code is
Sum = 0
for i = 1 to N
    for j = 1 to i do
        Sum = Sum + 1
There are N(N+1)/2 sum operations in total. Your code with if i > 10 merely skips the first ten inner loops, which is 10*11/2 = 55 operations fewer. As a result, for big enough N we have
N(N+1)/2 - 55
operations. That is
O(N^2) - O(1) = O(N^2)
I'm looking for a tool for determining whether a given set of linear equations/inequalities (A) entails another given set of linear equations/inequalities (B). The return value should be either 'true' or 'false'.
To illustrate, let's look at possible instances of A and B and the expected return value of the algorithm:
A: {Z=3,Y=Z+2, X < Y} ;
B: {X<5} ;
Expected result: true
A: {Z=3,Y=Z+2, X < Y} ;
B: {X<10} ;
Expected result: true
A: {Z=3,Y=Z+2, X < Y} ;
B: {X=3} ;
Expected result: false
A: {X<=Y,X>=Y} ;
B: {X=Y} ;
Expected result: true
A: {X<=Y,X>=Y} ;
B: {X=Y, X>Z+2} ;
Expected result: false
Typically A contains up to 10 equations/inequalities, and B contains 1 or 2. All of them are linear and relatively simple. We may even assume that all variables are integers.
This task - of determining whether A entails B - is part of a bigger system and therefore I'm looking for tools/source code/packages that already implemented something like that and I can use.
Things I started to look at:
Theorem provers with algebra - Otter, EQP and Z3 (Vampire is currently unavailable for download).
Coq formal proof management system.
Linear Programming.
However, my experience with these tools is very limited and so far I haven't found a promising direction. Any guidelines and ideas from people more experienced than me will be highly appreciated.
Thank you for your time!
I think I found a working solution. The problem can be rephrased as a satisfiability question (is there an assignment of the variables that satisfies A but not B?), and it can then be solved by theorem provers such as Z3, and with some work probably also by linear programming solvers.
To illustrate, let's look at the first example I gave above:
A: {Z=3, Y=Z+2, X<Y} ;
B: {X<5} ;
Determining whether A entails B is equivalent to determining whether it is impossible for A to be true while B is false; this is a simple logical equivalence. In our case, it means that rather than checking whether the condition in B follows from the ones in A, we can check whether there is no assignment of X, Y and Z that satisfies the conditions in A but not the one in B.
Now, when phrased as an assignment problem, a theorem prover such as Z3 can be called for the task. The following code checks if the conditions in A are satisfiable while the one in B is not:
(declare-fun x () Int)
(declare-fun y () Int)
(declare-fun z () Int)
(assert (= z 3))
(assert (= y (+ z 2)))
(assert (< x y))
(assert (not (< x 5)))
(check-sat)
(get-model)
(exit)
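If you save this as, say, entail.smt2 (the file name is arbitrary) and run z3 entail.smt2, Z3 should print unsat, followed by an error for the get-model command, since no model is available in the unsatisfiable case.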
Z3 reports that there is no model that satisfies these conditions. Thus it is impossible for A to hold while B fails, and therefore we can conclude that B follows from A.