Let's take this function:
function gcd(a, b)
    while a ≠ b
        if a > b
            a := a − b
        else
            b := b − a
    return a
How would we code gcd/3 in pure Prolog, so that it can be inverted? The Prolog predicate should, for example, compute gcd(2,3)=1. But if we asked which a, b satisfy gcd(a,b)=1, the same Prolog predicate should also give us:
/* one while iteration */
2, 1
1, 2
/* two while iterations */
3, 1
2, 3
3, 2
1, 3
/* Etc... */
Prolog seems to be especially suited since it can enumerate solutions.
I first tried to literally translate the GCD function into Prolog code. The first clause is for when a ≠ b is false, which means the function can terminate. Otherwise we recurse into the two cases:
euclid(A,A,A).
euclid(A,B,R) :- A #< B, C #= B-A, euclid(A,C,R).
euclid(A,B,R) :- A #> B, C #= A-B, euclid(C,B,R).
We can test it; it seems to work fine, except that it leaves choice points. But choice points are part of the price we pay for using CLP(FD) and programming pure Prolog without cut (the clauses differ only in their guards, which first-argument indexing cannot see):
?- euclid(17,13,X).
X = 1 ;
No
But using euclid/3 for enumeration isn't very satisfactory; the result covers only one execution branch of the GCD function:
?- euclid(A,B,1).
A = 1,
B = 1 ;
A = 1,
B = 2 ;
A = 1,
B = 3 ;
A = 1,
B = 4 ;
Now we can do the following and encode a path P through the GCD function as a binary number. When the recursion terminates, the path is P=1. Otherwise we use the low bit of P to encode which of the two remaining clauses of the GCD was chosen:
euclid(A,A,1,A).
euclid(A,B,P,R) :- A #< B, C #= B-A, P #= 2*Q, Q #> 0, euclid(A,C,Q,R).
euclid(A,B,P,R) :- A #> B, C #= A-B, P #= 2*Q+1, Q #> 0, euclid(C,B,Q,R).
The resulting Prolog predicate is indeed bidirectional:
?- euclid(17,13,P,X).
P = 241,
X = 1 ;
No
?- euclid(A,B,241,1).
A = 17,
B = 13 ;
No
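As an aside, the path number is easy to decode. Here is a small sketch (path_choices/2 is a hypothetical helper, not part of the program above) that reads off the clause choices, least significant bit first, with the final 1 acting as the termination marker:

% path_choices(+P, -Choices): the clause choices encoded in path P,
% low bit first (0 = the a<b clause, 1 = the a>b clause).
path_choices(1, []).
path_choices(P, [C|Cs]) :-
    P > 1,
    C is P mod 2,
    Q is P // 2,
    path_choices(Q, Cs).

For instance, path_choices(241, Cs) gives Cs = [1,0,0,0,1,1,1]: one a>b step, three a<b steps, then three more a>b steps, matching the run of euclid(17,13,241,1).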
We can also use it to arbitrarily enumerate, although only with the help of between/3, and maybe not in the most efficient way, but it works:
?- between(1,7,P), euclid(A,B,P,1), write(B/A), nl, fail; true.
1/1
2/1
1/2
3/1
2/3
3/2
1/3
Yes
Edit 04.02.2021:
Oh, interesting, this also works. But the results come out in a different order (depth-first over the clause alternatives rather than in order of increasing P):
?- P #< 8, euclid(A,B,P,1), write(B/A), nl, fail; true.
1/1
2/1
3/1
3/2
1/2
2/3
1/3
true.
This solution uses an argument to keep track of the current tier of the loop:
gcd(A, B, G) :-
    gcd(_, A, B, G).

gcd(Tier, A, B, G) :-
    Tier1 #= Tier - 1,
    Tier1 #>= 0,
    zcompare(Order, A, B),
    gcd(Order, Tier1, A, B, G).

gcd(=, 0, G, G, G).
gcd(>, Tier, A, B, G) :-
    A1 #= A - B,
    gcd(Tier, A1, B, G).
gcd(<, Tier, A, B, G) :-
    B1 #= B - A,
    gcd(Tier, A, B1, G).
So when you want a tiered enumeration you may write:
?- between(1,3,Tier), gcd(Tier, A,B,1), write(B/A), nl, fail; true.
1/1
1/2
2/1
1/3
2/3
3/2
3/1
true.
Related
I have a question about the difference between two solutions to a problem. The problem asks to transform a list to a truncated list like so:
?- reduce([a,a,a,b,b,c,c,b,b,d,d],Z).
Z = [a,b,c,b,d].
This first solution needs an extra step that reverses the list:
reduce([X|Xs],Z) :-
    reduce(X,Xs,Y,[X]),
    reverse(Y,Z).

reduce(X,[L|Ls],Y,List) :-
    (   X=L
    ->  reduce(X,Ls,Y,List)
    ;   reduce(L,Ls,Y,[L|List])
    ).
reduce(_,[],Y,Y).
The second solution does not require reverse/2:
reduced([X|Xs],Result) :-
    reduced(Xs,List),
    List=[A|_],
    (   A=X
    ->  Result=List
    ;   Result=[X|List]
    ),
    !.
reduced(Result,Result).
What are the optimization considerations when performing recursion before or after a series of statements? Does the order of the conditions matter? My inclination is to think that doing all the recursion upfront is the way to go, especially because building the list backwards is necessary here.
When you optimize anything, make sure to measure first! (Most of us tend to forget this...)
When you optimize Prolog, look out for the following:
Tail recursion tends to do better (so there goes your "before or after a series of statements" question); see the sketch after this list;
Avoid creating choice points you don't need (this depends on the Prolog implementation);
Use an optimal algorithm (as in, don't traverse a list twice if you don't have to).
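To illustrate the first two points, here is a minimal sketch (sum_slow/2 and sum_fast/2 are made-up names, not taken from any answer here):

% Not tail-recursive: an addition stays pending after each recursive
% call, so the local stack grows with the length of the list.
sum_slow([], 0).
sum_slow([X|Xs], S) :-
    sum_slow(Xs, S0),
    S is S0 + X.

% Tail-recursive with an accumulator: the recursive call is the last
% goal, and first-argument indexing on [] vs. [_|_] leaves no
% choice points.
sum_fast(Xs, S) :-
    sum_fast_(Xs, 0, S).

sum_fast_([], S, S).
sum_fast_([X|Xs], S0, S) :-
    S1 is S0 + X,
    sum_fast_(Xs, S1, S).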
A solution that is "optimized" for a more or less standard Prolog implementation will maybe look a bit different. I will name it list_uniq (in analogy to the command-line uniq tool):
list_uniq([], []).            % Base case
list_uniq([H|T], U) :-
    list_uniq_1(T, H, U).     % Helper predicate

list_uniq_1([], X, [X]).
list_uniq_1([H|T], X, U) :-
    (   H == X
    ->  list_uniq_1(T, X, U)
    ;   [X|U1] = U,
        list_uniq_1(T, H, U1)
    ).
It is different from reduce0/2 by #CapelliC because it uses lagging (carrying the previous element along as an extra argument) to avoid the inherent non-determinism of [X|Xs] and [X,X|Xs] in the first argument: first-argument indexing cannot tell those two heads apart, so they force either a choice point or a cut.
Now to the claim that it is "optimized":
It traverses the list exactly once (no need for reversing);
It is tail-recursive;
It does not create and discard choice points.
You will get the same 12 inferences as #CapelliC, and if you then use a somewhat longer list, you will start to see differences:
?- length(As, 100000), maplist(=(a), As),
length(Bs, 100000), maplist(=(b), Bs),
length(Cs, 100000), maplist(=(c), Cs),
append([As, Bs, Cs, As, Cs, Bs], L),
time(list_uniq(L, U)).
% 600,006 inferences, 0.057 CPU in 0.057 seconds (100% CPU, 10499893 Lips)
As = [a, a, a, a, a, a, a, a, a|...],
Bs = [b, b, b, b, b, b, b, b, b|...],
Cs = [c, c, c, c, c, c, c, c, c|...],
L = [a, a, a, a, a, a, a, a, a|...],
U = [a, b, c, a, c, b].
The same query with reduce0, reduce1, reduce2 from #CapelliC's answer:
% reduce0(L, U)
% 600,001 inferences, 0.125 CPU in 0.125 seconds (100% CPU, 4813955 Lips)
% reduce1(L, U)
% 1,200,012 inferences, 0.393 CPU in 0.394 seconds (100% CPU, 3050034 Lips)
% reduce2(L, U)
% 2,400,004 inferences, 0.859 CPU in 0.861 seconds (100% CPU, 2792792 Lips)
So, creating and discarding choice points with cuts (!) has a price, too.
However, list_uniq/2, as it stands, can be wrong for queries where the first argument is not ground: the syntactic comparison H == X fails for an unbound H, so the else branch is taken even though H = X would still be possible:
?- list_uniq([a,B], [a,b]).
B = b. % OK
?- list_uniq([a,A], [a]).
false. % WRONG!
reduce0/2 and reduce1/2 can be wrong, too:
?- reduce0([a,B], [a,b]).
false.
?- reduce1([a,B], [a,b]).
false.
As for reduce2/2, I am not sure about this one:
?- reduce2([a,A], [a,a]).
A = a.
Instead, using the definition of if_/3 from this answer:
list_uniq_d([], []).            % Base case
list_uniq_d([H|T], U) :-
    list_uniq_d_1(T, H, U).     % Helper predicate

list_uniq_d_1([], X, [X]).
list_uniq_d_1([H|T], X, U) :-
    if_(H = X,
        list_uniq_d_1(T, H, U),
        (   [X|U1] = U,
            list_uniq_d_1(T, H, U1)
        )
    ).
With it:
?- list_uniq_d([a,a,a,b], U).
U = [a, b].
?- list_uniq_d([a,a,a,b,b], U).
U = [a, b].
?- list_uniq_d([a,A], U).
A = a,
U = [a] ;
U = [a, A],
dif(A, a).
?- list_uniq_d([a,A], [a]).
A = a ;
false. % Dangling choice point
?- list_uniq_d([a,A], [a,a]).
false.
?- list_uniq_d([a,B], [a,b]).
B = b.
It takes longer, but the predicate seems to be correct.
With the same query as used for the other timings:
% 3,000,007 inferences, 1.140 CPU in 1.141 seconds (100% CPU, 2631644 Lips)
Profiling seems the easiest way to answer efficiency questions:
% my own
reduce0([], []).
reduce0([X,X|Xs], Ys) :- !, reduce0([X|Xs], Ys).
reduce0([X|Xs], [X|Ys]) :- reduce0(Xs, Ys).
% your first
reduce1([X|Xs],Z) :- reduce1(X,Xs,Y,[X]), reverse(Y,Z).
reduce1(X,[L|Ls],Y,List) :-
X=L -> reduce1(X,Ls,Y,List);
reduce1(L,Ls,Y,[L|List]).
reduce1(_,[],Y,Y).
% your second
reduce2([X|Xs],Result) :-
reduce2(Xs,List),
List=[A|_],
(A=X -> Result=List;
Result=[X|List]),!.
reduce2(Result,Result).
SWI-Prolog offers time/1:
4 ?- time(reduce0([a,a,a,b,b,c,c,b,b,d,d],Z)).
% 12 inferences, 0.000 CPU in 0.000 seconds (84% CPU, 340416 Lips)
Z = [a, b, c, b, d].
5 ?- time(reduce1([a,a,a,b,b,c,c,b,b,d,d],Z)).
% 19 inferences, 0.000 CPU in 0.000 seconds (90% CPU, 283113 Lips)
Z = [a, b, c, b, d] ;
% 5 inferences, 0.000 CPU in 0.000 seconds (89% CPU, 102948 Lips)
false.
6 ?- time(reduce2([a,a,a,b,b,c,c,b,b,d,d],Z)).
% 12 inferences, 0.000 CPU in 0.000 seconds (83% CPU, 337316 Lips)
Z = [a, b, c, b, d].
Your second predicate performs like mine, while the first one seems to leave a choice point...
The order of conditions is of primary importance, given the resolution strategy Prolog implements. In naive implementations, like my IL, tail recursion optimization was recognized only when the recursive call was the last one and was preceded by a cut, just to be sure it's deterministic...
This answer is a direct follow-up to #Boris's answer.
To estimate the runtime we can expect once if_/3 is compiled, I made list_uniq_e/2, which is just like #Boris's list_uniq_d/2 but with if_/3 compiled manually:
list_uniq_e([], []).            % Base case
list_uniq_e([H|T], U) :-
    list_uniq_e_1(T, H, U).     % Helper predicate

list_uniq_e_1([], X, [X]).
list_uniq_e_1([H|T], X, U) :-
    =(H,X,Truth),
    list_uniq_e_2(Truth,H,T,X,U).

list_uniq_e_2(true ,H,T,_, U    ) :- list_uniq_e_1(T,H,U).
list_uniq_e_2(false,H,T,X,[X|U]) :- list_uniq_e_1(T,H,U).
Let's compare the runtimes (SWI-Prolog 7.3.1, Intel Core i7-4700MQ 2.4GHz)!
First up, list_uniq_d/2:
% 3,000,007 inferences, 0.623 CPU in 0.623 seconds (100% CPU, 4813150 Lips)
Next up, list_uniq_e/2:
% 2,400,003 inferences, 0.132 CPU in 0.132 seconds (100% CPU, 18154530 Lips)
For the sake of completeness reduce0/2, reduce1/2, and reduce2/2:
% 600,002 inferences, 0.079 CPU in 0.079 seconds (100% CPU, 7564981 Lips)
% 600,070 inferences, 0.141 CPU in 0.141 seconds (100% CPU, 4266842 Lips)
% 600,001 inferences, 0.475 CPU in 0.475 seconds (100% CPU, 1262018 Lips)
Not bad! And... this is not the end of the line as far as optimizing if_/3 is concerned :)
Hoping this is an even better follow-up to #Boris's answer than my last try!
First, here's #Boris's code again (100% original):
list_uniq_d([], []).            % Base case
list_uniq_d([H|T], U) :-
    list_uniq_d_1(T, H, U).     % Helper predicate

list_uniq_d_1([], X, [X]).
list_uniq_d_1([H|T], X, U) :-
    if_(H = X,
        list_uniq_d_1(T, H, U),
        (   [X|U1] = U,
            list_uniq_d_1(T, H, U1)
        )
    ).
Plus some more code for benchmarking:
bench(P_2) :-
    length(As, 100000), maplist(=(a), As),
    length(Bs, 100000), maplist(=(b), Bs),
    length(Cs, 100000), maplist(=(c), Cs),
    append([As, Bs, Cs, As, Cs, Bs], L),
    time(call(P_2, L, _)).
Now, let's introduce module re_if:
:- module(re_if, [if_/3, (=)/3, expand_if_goals/0]).

:- dynamic expand_if_goals/0.

trusted_truth(_=_).   % we need not check truth values returned by (=)/3

=(X, Y, R) :- X == Y,    !, R = true.
=(X, Y, R) :- ?=(X, Y),  !, R = false.   % syntactically different
=(X, Y, R) :- X \= Y,    !, R = false.   % semantically different
=(X, Y, R) :- R == true, !, X = Y.
=(X, X, true).
=(X, Y, false) :- dif(X, Y).

:- meta_predicate if_(1,0,0).
if_(C_1, Then_0, Else_0) :-
    call(C_1, Truth),
    functor(Truth, _, 0),   % safety check
    (   Truth == true -> Then_0
    ;   Truth == false , Else_0
    ).

:- multifile system:goal_expansion/2.
system:goal_expansion(if_(C_1,Then_0,Else_0), IF) :-
    expand_if_goals,
    callable(C_1),   % nonvar && (atom || compound)
    !,
    C_1 =.. Ps0,
    append(Ps0, [T], Ps1),
    C_0 =.. Ps1,
    (   trusted_truth(C_1)
    ->  IF = (C_0, ( T == true -> Then_0 ; Else_0 ))
    ;   IF = (C_0, functor(T,_,0), ( T == true -> Then_0 ; T == false, Else_0 ))
    ).
And now ... *drumroll* ... lo and behold:)
$ swipl
Welcome to SWI-Prolog (Multi-threaded, 64 bits, Version 7.3.3-18-gc341872)
Copyright (c) 1990-2015 University of Amsterdam, VU Amsterdam
SWI-Prolog comes with ABSOLUTELY NO WARRANTY. This is free software,
and you are welcome to redistribute it under certain conditions.
Please visit http://www.swi-prolog.org for details.
For help, use ?- help(Topic). or ?- apropos(Word).
?- compile(re_if), compile(list_uniq).
true.
?- bench(list_uniq_d).
% 2,400,010 inferences, 0.865 CPU in 0.865 seconds (100% CPU, 2775147 Lips)
true.
?- assert(re_if:expand_if_goals), compile(list_uniq).
true.
?- bench(list_uniq_d).
% 1,200,005 inferences, 0.215 CPU in 0.215 seconds (100% CPU, 5591612 Lips)
true.
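To see what the expansion buys us: since (=)/3 is listed in trusted_truth/1, each if_(H = X, Then, Else) goal in list_uniq_d_1/3 gets compiled into roughly the following (a sketch of the generated code, written out as an ordinary clause with a made-up name):

% Hypothetical hand-expanded variant of list_uniq_d_1/3, trusted case:
% the if_/3 wrapper is gone, only (=)/3 and a plain if-then-else remain.
list_uniq_x_1([], X, [X]).
list_uniq_x_1([H|T], X, U) :-
    =(H, X, Truth),
    (   Truth == true
    ->  list_uniq_x_1(T, H, U)
    ;   [X|U1] = U,
        list_uniq_x_1(T, H, U1)
    ).

This is essentially list_uniq_e/2 from the previous answer, with the auxiliary predicate replaced by an inline if-then-else.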
I have a question regarding modified condition/decision coverage that I can't figure out. So if I have the expression ((A || B) && C), and the task is to achieve 100% MC/DC with a minimal number of test cases, I break it down into two parts, with the minimal number of test cases for (A || B) and (X && C), where X stands for the result of (A || B):
(A || B) : {F, F} = F, {F, T} = T, {T, -} = T
(X && C) : {F, -} = F, {T, F} = F, {T, T} = T
The '-' means that it doesn't matter which value it has, since it won't be evaluated thanks to short-circuit evaluation.
So when I combine these I get this as my minimal set of test cases:
((A || B) && C) : {{F, F}, -} = F, {{F, T}, F} = F, {{T, -}, T} = T
But when I google it, this is also in the set: {{F, T}, T} = T. I do not agree with that, because I tested the parts of this set separately in the other tests, didn't I? So I seem to be missing what the fourth test case adds to the set; it would be great if someone could explain why I must have it.
First of all, a little explanation on MCDC:
To achieve MCDC you need to find at least one test pair for each condition in your boolean expression that fulfills the MCDC criterion. At the moment there are 3 types of MCDC defined and approved by certification bodies (e.g. the FAA):
"Unique Cause" – MCDC (original definition): Only one, specifically the influencing condition may change between the two test values of the test pair. The resulting decision, the result of the boolean expression, must also be different for the 2 test values of the test pair. All other conditions in the test pair must be fixed and unchanged.
"Masking" – MCDC”: Only one condition of the boolean expression having an influence on the outcome of the decision may change. Other conditions may change as well, but must be masked. Masked means, using the boolean logic in the expression, they will not have an influence on the outcome. If the left side of an AND is FALSE, then the complete rightside expression and sub expressions do not matter. They are masked. Masking is a relaxation of the “Unique Cause”MCDC.
"Unique Cause + Masking" – MCDC. This is a hybrid and especially needed for boolean expressions with strongly coupled conditions like for example in “ab+ac”. It is not possible to find a test pair for condition “a” the fulfills "Unique Cause" – MCDC. So, we can relax the original definition, and allow masking for strongly coupled conditions.
With these 2 additional definitions many more test pairs can be found. Please note additionally that, when using languages with a short-circuit evaluation strategy for boolean expressions (like Java, C, C++), there are even more test pairs fulfilling the MCDC criteria. It is very important to understand that a black-box view on a truth table does not allow you to find any kind of masking or short-circuit evaluation. MCDC is a structural coverage metric, so a white-box view on the boolean expression is absolutely mandatory.
So, now to your expression "(a+b)c". A brute-force analysis with boolean short-circuit evaluation gives the following test pairs for your 3 conditions a, b, c:
Influencing Condition: 'a' Pair: 0, 5 Unique Cause
Influencing Condition: 'a' Pair: 0, 7 Unique Cause
Influencing Condition: 'a' Pair: 1, 5 Unique Cause
Influencing Condition: 'a' Pair: 1, 7 Unique Cause
Influencing Condition: 'b' Pair: 0, 3 Unique Cause
Influencing Condition: 'b' Pair: 1, 3 Unique Cause
Influencing Condition: 'c' Pair: 2, 3 Unique Cause
Influencing Condition: 'c' Pair: 2, 5 Masking
Influencing Condition: 'c' Pair: 2, 7 Masking
Influencing Condition: 'c' Pair: 3, 4 Masking
Influencing Condition: 'c' Pair: 3, 6 Masking
Influencing Condition: 'c' Pair: 4, 5 Unique Cause
Influencing Condition: 'c' Pair: 4, 7 Unique Cause
Influencing Condition: 'c' Pair: 5, 6 Unique Cause
Influencing Condition: 'c' Pair: 6, 7 Unique Cause
Without a white-box view, you will never find "Unique Cause" MCDC test pairs like 0,7 for condition "a". Also, you will not find any of the valid "Masking" MCDC test pairs.
The next step is to find one minimum test set. This can be done by applying the "set cover" or "unate covering" algorithm. From the test pairs above we can create a table that shows which test value covers which condition. In your case this looks like the following:
     0   1   2   3   4   5   6   7
a    X   X   -   -   -   X   -   X
b    X   X   -   X   -   -   -   -
c    -   -   X   X   X   X   X   X
Simple elimination of duplicate columns already reduces the table to:
     0   1   2   3   4   5   6   7              0   3   5
a    X   X   -   -   -   X   -   X          a   X   -   X
b    X   X   -   X   -   -   -   -   ==>    b   X   X   -
c    -   -   X   X   X   X   X   X          c   -   X   X
Please note: we employed some heuristics to select which of the completely identical columns should be eliminated, so effectively there are more solutions to the set cover problem. But with the geometric growth of computing time and memory consumption in the number of conditions, this is the only meaningful approach.
Now we will apply Petrick’s method and find out all coverage sets:
1. 0, 3
2. 0, 5
3. 3, 5
That means we need to select, out of the above test pair list, test pairs for a, b, c that contain: for solution 1, 0 and 3; for solution 2, 0 and 5; and for solution 3, 3 and 5.
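These coverage sets can be double-checked mechanically. Here is a brute-force sketch in Prolog (covers/2 transcribes the reduced table above; the predicate names are made up for illustration):

% covers(Condition, Column): the condition is covered by this
% column of the reduced table.
covers(a, 0).  covers(a, 5).
covers(b, 0).  covers(b, 3).
covers(c, 3).  covers(c, 5).

% A pair of columns X < Y is a coverage set if every condition is
% covered by at least one of the two columns.
coverage_set(X, Y) :-
    member(X, [0, 3, 5]),
    member(Y, [0, 3, 5]),
    X < Y,
    forall(member(Cond, [a, b, c]),
           ( covers(Cond, X) ; covers(Cond, Y) )).

Querying coverage_set(X, Y) enumerates exactly the three pairs listed above.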
Here, too, we apply some heuristic functions to come up with the minimum solutions:
-------- For Coverage set 1
Test Pair for Condition 'a': 0 5 (Unique Cause)
Test Pair for Condition 'b': 0 3 (Unique Cause)
Test Pair for Condition 'c': 2 3 (Unique Cause)
Resulting Test Vector: 0 2 3 5
-------- For Coverage set 2
Test Pair for Condition 'a': 0 5 (Unique Cause)
Test Pair for Condition 'b': 0 3 (Unique Cause)
Test Pair for Condition 'c': 5 6 (Unique Cause)
Resulting Test Vector: 0 3 5 6
-------- For Coverage set 3
Test Pair for Condition 'a': 1 5 (Unique Cause)
Test Pair for Condition 'b': 1 3 (Unique Cause)
Test Pair for Condition 'c': 5 6 (Unique Cause)
Resulting Test Vector: 1 3 5 6
---------------------------------------
Test vector: Recommended Result: 0 2 3 5
0: a=0 b=0 c=0 (0)
2: a=0 b=1 c=0 (0)
3: a=0 b=1 c=1 (1)
5: a=1 b=0 c=1 (1)
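The row numbers used throughout are plain truth-table indices, i.e. N = 4a + 2b + c. A quick Prolog sketch to reproduce any row and its outcome (mcdc_row/5 is a made-up name for illustration):

% mcdc_row(?N, -A, -B, -C, -Out): decode row N (0..7) as
% N = 4*A + 2*B + C and compute the outcome of (A || B) && C.
mcdc_row(N, A, B, C, Out) :-
    between(0, 7, N),
    A is N // 4,
    B is (N // 2) mod 2,
    C is N mod 2,
    (   (A =:= 1 ; B =:= 1), C =:= 1
    ->  Out = 1
    ;   Out = 0
    ).

For example, mcdc_row(5, A, B, C, Out) gives A=1, B=0, C=1, Out=1, matching row 5 above.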
You see that for all 3 possible solutions you have 4 test cases that cover all conditions and fulfill the MCDC criteria. If you check publications on the internet, you will see that the minimum number of tests is n+1 (n = number of conditions). This does not sound like a great achievement. But look at MCC (Multiple Condition Coverage): there you have 2^n test cases, so 256 test cases for 8 conditions. With MCDC there will be only 9. That was one reason to come up with MCDC.
I also created a piece of software to help better understand MCDC coverage. It takes a boolean expression as input, simplifies it (if you wish), and calculates all MCDC test pairs.
You may find it here:
MCDC
I hope I could shed some light on the topic. I am happy to answer further questions.
Recall that for MC/DC you need, for every condition P (A/B/C in your case), two test cases T and T' such that P is true in T and false in T', and such that the outcome of the predicate is true in one and false in the other test case.
An MC/DC cover for ((A || B) && C) is:
T1: (F, F, T) -> F (your first test case)
T2: (F, T, T) -> T (B flips outcome compared to T1, missing)
T3: (T, F, T) -> T (A flips outcome compared to T1, your third test case)
T4: (F, T, F) -> F (C flips outcome compared to T2, your second test case)
In concrete test cases you cannot have "-"/don't-care values: you must make a choice when you exercise the system.
So what you are missing in your answer is a pair of test cases (T1 and T2) in which flipping only the second condition B also flips the outcome.
For MCDC to be achieved, the output must change when only 1 input changes while the other inputs remain constant. For example, in your case, MCDC for (A || B) && C: when A changes from 1 to 0 while B and C remain constant, the output changes from 1 to 0. The same applies to both B and C.
Normally it is said that the minimum number of tests to achieve MCDC is number of inputs + number of outputs. So in your case MCDC can be achieved with a minimum of 4 tests.
Given that fib(n)=fib(n-1)+fib(n-2) for n>1 and given that fib(0)=a, fib(1)=b (some a, b >0), which of the following is true?
fib(n) is
Select one or more:
a. O(n^2)
b. O(2^n)
c. O(((1-sqrt 5)/2)^n)
d. O(n)
e. Answer depends on a and b.
f. O(((1+sqrt 5)/2)^n)
Solving the Fibonacci recurrence, I got:
fib(n)= 1/(sqrt 5) ((1+sqrt 5)/2)^n - 1/(sqrt 5) ((1-sqrt 5)/2)^n
But what would be the time complexity in this case? Would that mean the answers are c and f?
From your closed form of the formula, the term 1/(sqrt 5) ((1 - sqrt 5)/2)^n has limit 0 as n grows to infinity, since |(1 - sqrt 5)/2| < 1. Therefore we can ignore this term. Also, since in time complexity theory we don't care about multiplicative constants, the following is true:
fib(n) = Θ(φ^n)
where φ = (1 + sqrt 5) / 2 a.k.a. the golden ratio constant.
So it's an exponential function, and we can exclude a, d, and e. We can exclude c since, as was said, that term has limit 0. But answer b is also correct, because φ < 2 and O expresses only an upper bound.
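Spelled out as a worked equation (a sketch; A and B are the constants determined by the seeds a and b, with A > 0 whenever a, b > 0):

\[
\mathrm{fib}(n) = A\varphi^n + B\psi^n,
\qquad \varphi = \frac{1+\sqrt{5}}{2},\quad
\psi = \frac{1-\sqrt{5}}{2},\quad |\psi| < 1,
\]

so fib(n) = Θ(φ^n), and since φ ≈ 1.618 < 2, also fib(n) = O(2^n).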
Finally, the correct answers are:
b, f
Θ(φ^n) is correct when a=1 and b=1, or a=1 and b=2. The value of φ depends on a and b.
If we compute fib(n-1) and fib(n-2) recursively, the complexity is exponential; but if we save the last two values and reuse them, the complexity is O(n) and does not depend on a and b.
I have the following code, which works fine without a meta_predicate declaration. I have defined a predicate rec/3 as follows:
:- use_module(library(lambda)).
rec(F,1,F).
rec(F,N,\A^B^(call(F,A,H),call(G,H,B))) :-
    N>1, M is N-1, rec(F,M,G).
The predicate rec/3 basically implements the following higher-order recursion equation:
F^1 = F
F^N = F*F^(N-1) for N>1
where * is the composition of two relations. It can, for example, be used to define addition in terms of the successor. The successor would be the following relation:
?- F = \A^B^(B is A+1), call(F, 2, R).
R = 3 /* 3 = 2+1 */
Addition can then be done as follows (SWI-Prolog):
?- F = \A^B^(B is A+1), rec(F, 8, G), call(G, 3, R).
R = 11 /* 11 = 3+8 */
Now if I add a meta_predicate declaration as follows, before the clauses of rec/3:
:- meta_predicate rec(2,?,2).
rec(F,1,F).
rec(F,N,\A^B^(call(F,A,H),call(G,H,B))) :-
    N>1, M is N-1, rec(F,M,G).
Things don't work anymore (SWI-Prolog):
?- F = \A^B^(B is A+1), rec(F, 8, G), call(G, 3, R).
false
How can I fix the clauses of rec/3 and the query so that they work in the presence of the meta_predicate declaration?
Bye
The following straightforward solution (only tested on SWI-Prolog, but in any case far from the wide portability of the Logtalk-based solution):
:- module(m, [rec/3]).
:- use_module(library(lambda)).
:- meta_predicate(rec(:,?,-)).
rec(F, 1, F).
rec(F, N, \A^B^(call(F,A,H),call(G,H,B))) :-
    N > 1, M is N - 1,
    rec(F, M, G).
gives:
?- [mrec].
true.
?- use_module(library(lambda)).
true.
?- F = \A^B^(B is A+1), rec(F,10,G), call(G,0,R).
F = \A^B^ (B is A+1),
G = \_G56^_G59^ (call(user: \A^B^ (...is...), _G56, _G67), call(\_G75^_G78^ (call(..., ..., ...), call(..., ..., ...)), _G67, _G59)),
R = 10 .
without requiring low-level hacks (one of the motivations of the meta_predicate/1 directive is to avoid the need for explicit qualification) or a misleading meta_predicate/1 directive. After re-reading the post and the comments, I still wonder why you insist on writing:
:- meta_predicate(rec(2,?,2)).
The first argument of rec/3 is not going to be used as a closure to which the meta-predicate appends two arguments in order to construct a goal and call it. The third argument is an output argument. In the first argument, "2" means input, but for the third argument it would instead mean output! In neither case is the meta-predicate making any meta-calls! The end result of this breakage of the meaning of long-established meta-argument indicators in meta-predicate directives is that a user can no longer interpret a meta-predicate template without looking at the actual code of the meta-predicate.
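For contrast, here is a textbook use of the "2" meta-argument indicator, where the first argument really is a closure that gets two arguments appended and is then meta-called (map2/3 is a made-up example, essentially maplist/3):

:- meta_predicate map2(2, ?, ?).

% map2(G_2, Xs, Ys): relate corresponding elements of Xs and Ys
% by meta-calling the closure G_2 with two extra arguments.
map2(_, [], []).
map2(G_2, [X|Xs], [Y|Ys]) :-
    call(G_2, X, Y),
    map2(G_2, Xs, Ys).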
No problem with a Logtalk version of your code:
:- object(rec).

    :- public(rec/3).
    :- meta_predicate(rec(2,*,*)).

    rec(F, 1, F).
    rec(F, N, [A,B]>>(call(F,A,H),call(G,H,B))) :-
        N > 1, M is N - 1,
        rec(F, M, G).

    :- public(local/2).

    local(A, B) :-
        B is A + 1.

:- end_object.
I get:
$ swilgt
...
?- {rec}.
% [ /Users/pmoura/Desktop/lgtemp/stackoverflow/rec.lgt loaded ]
% (0 warnings)
true.
?- F = [A,B]>>(B is A+1), rec::rec(F, 8, G), logtalk<<call(G, 3, R).
F = [A, B]>> (B is A+1),
G = [_G88, _G91]>> (call([A, B]>> (B is A+1), _G88, _G99), call([_G108, _G111]>> (call([A, B]>> (B is A+1), _G108, _G119), call([_G128, _G131]>> (call(... >> ..., _G128, _G139), call(... >> ..., _G139, _G131)), _G119, _G111)), _G99, _G91)),
R = 11 ;
false.
?- F = [A,B]>>(rec::local(A,B)), rec::rec(F, 8, G), logtalk<<call(G, 3, R).
F = [A, B]>> (rec<<local(A, B)),
G = [_G2655, _G2658]>> (call([A, B]>> (rec<<local(A, B)), _G2655, _G2666), call([_G2675, _G2678]>> (call([A, B]>> (rec<<local(A, B)), _G2675, _G2686), call([_G2695, _G2698]>> (call(... >> ..., _G2695, _G2706), call(... >> ..., _G2706, _G2698)), _G2686, _G2678)), _G2666, _G2658)),
R = 11 ;
false.
Note the "fix" for the meta_predicate/1 directive. The code for the rec/3 predicate is the same except for the conversion of the lambda expression syntax to the Logtalk syntax. However, in the case of Logtalk, the meta_predicate/1 directive is not required for this example (as all that the rec/3 predicate does is converting a term to a new term) and only serves documentation purposes. You can comment it out and still use the rec::rec/3 predicate, calling it from either user (i.e. from the top-level interpreter) or from a client object.
The call/3 call is made in the context of the logtalk built-in object just to get the Logtalk lambda expression interpreted (Logtalk doesn't make, on purpose, its native lambda expression support available at the Prolog top-level interpreter).
The SWI meta-predicate declarations and modules are similar to those
in Quintus, SICStus, and YAP. The fundamental assumption in those
systems is that all information is passed through the declared
meta-argument using (:)/2. There is no hidden state
or context. For the common cases (simple instantiated arguments), the
meta-predicate declarations are sufficient to relieve the burden of
explicit qualification from the programmer.
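A minimal illustration of that assumption (the module name m is made up): with a meta-argument declared as 0, the argument arrives at the predicate qualified with the calling context:

:- module(m, [p/1]).

:- meta_predicate p(0).

% The goal argument arrives as Context:Goal.
p(G) :- writeln(G).

% ?- p(foo).     % called from the user module
% user:foo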
However, in more complex situations such as the present one, you have to ensure that explicit qualification is added. Further, you need to "dereference" the (:)/2 prefixes accordingly. In SWI, there is strip_module/3:
?- strip_module(a:b:c:X,M,G).
X = G,
M = c.
Assume the definition:
rec(_, -1, local).
rec(_, 0, =).
rec(F, 1, F).

local(S0, S) :-
    S is S0+1.
Which now has to be written like so:
:- meta_predicate goal_qualified(:,-).

goal_qualified(G,G).

:- meta_predicate rec(2,+,2).

rec(_, -1, G) :-
    strip_module(G,_,VG),
    goal_qualified(local,VG).
rec(_, 0, G) :-
    strip_module(G,_,VG),
    goal_qualified(=,VG).
rec(F, 1, G) :-
    strip_module(G,_,F).
Many prefer to add module prefixes manually:
:- meta_predicate rec(2,+,2).

rec(_, -1, G) :-
    strip_module(G,_,mymodule:local).
...
And if we restrict ourselves to SWI only, thereby sacrificing compatibility with SICStus or YAP:
:- meta_predicate rec(2,+,2).
rec(_, -1, _:mymodule:local).
rec(_, 0, _:(=)).
rec(F, 1, _:F).
The rule in your question
rec(F,N,\A^B^(call(F,A,H),call(G,H,B))) :-
    N>1, M is N-1, rec(F,M,G).
is thus translated as:
rec(F, N, MG) :-
    N > 1, M is N - 1,
    strip_module(MG,_,VG),
    goal_qualified(\A^B^(call(F,A,H),call(G,H,B)),VG),
    rec(F, M, G).
Assuming that library(lambda) is imported everywhere, this can again be simplified in SWI to:
rec(F, N, _:(\A^B^(call(F,A,H),call(G,H,B)))) :-
    N > 1, M is N - 1,
    rec(F, M, G).
My conclusion
1mo: Systems should produce a warning for clauses that can never succeed, as in the following, where the goal p(X) gets qualified to p(user:X), which can never unify with the head p(1):
| ?- [user].
% compiling user...
| :- meta_predicate p(0).
| p(1).
% compiled user in module user, 0 msec 2080 bytes
yes
| ?- p(X).
no
2do: Maybe it would be best to use the following auxiliary predicate:
:- meta_predicate cont_to(:,:).

cont_to(MGoal, MVar) :-
    strip_module(MVar, _, Var),
    (   nonvar(Var)
    ->  throw(error(uninstantiation_error(Var),_))
    ;   true
    ),
    (   strip_module(MGoal,_,Goal),
        var(Goal)
    ->  throw(error(instantiation_error,_))
    ;   true
    ),
    Var = MGoal.
Usage:
rec(_, -1, MV) :-
    cont_to(local, MV).
Or rather: one version for each number of auxiliary arguments, thus
:- meta_predicate cont0_to(0,0).
:- meta_predicate cont1_to(1,1).
:- meta_predicate cont2_to(2,2).
...
The name could be better; an operator would not do, though.
I'm trying to determine the complexity of these two functions, where D is an integer and list is a list of integers:
def solve(D, list):
    for element in list:
        doFunc(element, D, list)

def doFunc(element, D, list):
    quantityx = 0
    if D > 0:
        for otherElement in list:
            if otherElement == element:
                quantityx += 1
        return quantityx + doFunc(element + 1, D - 1, list)
    return 0
Intuitively, I think it is O(n²), where n is the number of elements in list, but I'd like to prove it formally.
First observation: solve calls doFunc, but not the other way around. Therefore, the complexity of solve will depend on the complexity of doFunc, but not the other way around. We need to figure out the complexity of doFunc first.
Let T(E, D, N) be the time complexity of doFunc as a function of E, D and the number of elements N in the list. Every time doFunc is called, we do N iterations of the loop and then invoke doFunc with E+1, D-1, and the list unchanged. Based on this, we know that the time complexity of doFunc is given by the following recursive formula:
T(E, D, N) = aN + b + T(E+1, D-1, N)
Here, a and b are some constants to be determined.
Now we need a base case for this recursive formula. Our base case, the only time we don't recurse, is when D <= 0. Assuming that D is non-negative, this means D = 0 is the base case. We get the following additional requirement:
T(E, 0, N) = c
Here, c is some constant to be determined.
Putting this all together, we can list out a few values for different values of D and see if we can identify a pattern:
D T(E, D, N)
0 c
1 c + b + aN
2 c + 2b + 2aN
3 c + 3b + 3aN
...
k c + kb + kaN
Based on this, we can guess that T(E, D, N) = c + Db + aDN for some constants a, b, c. We can see that this formula satisfies the base case and we can check that it also satisfies the recursive part (try this). Therefore, this is our function.
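The check of the recursive part is one line of algebra:

\[
aN + b + T(E+1,\, D-1,\, N) \;=\; aN + b + c + (D-1)b + a(D-1)N \;=\; c + Db + aDN \;=\; T(E, D, N).
\]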
Assuming E, D and N are all independent and vary freely, the time complexity of doFunc is best rendered as O(c + Db + aDN) = O(DN).
Since solve calls doFunc once for each element in the list, its complexity is simply N times that of doFunc, i.e., O(DN^2).