Hello here is my code in Prolog:
arc(a,h).
arc(b,c).
related_to(X, Ys) :-
setof(Y, arc(X, Y), Ys).
cut([H|T],Y) :-
check(H,Y),
T = [] -> cut(T,Y).
check(X,Y) :-
related_to(X,Xs),
member(Y,Xs) -> write('There is a road');
cut(Xs,Y).
When I try to run check(a,b), it doesn't run. I get the message
Singleton variable in branch: Xs
When I am not using the cut predicate, I don't get any error. I would be grateful if you could point out where I made a mistake and show me how to repair it.
TL;DR: Prolog is right, and you are doing the right thing by taking its messages seriously.
You are using if-then-else in an unconventional manner. For this reason it is not that simple to figure out what is happening. When I say listing(check) I get the following:
check(A, B) :-
    (   related_to(A, C),
        member(B, C)
    ->  write('There is a road')
    ;   cut(C, B)
    ).
So Prolog was not very impressed by your indentation style; instead, it just looked for operators. In fact, the C (which is your original Xs) is bound in the if-part, which is unrelated to the else-part: if the condition fails, C is still unbound when cut(C, B) is called. What you probably wanted is:
check(X,Y) :-
    related_to(X,Xs),
    (   member(Y,Xs)
    ->  write('There is a road')
    ;   cut(Xs,Y)
    ).
Regardless of the concrete problem at hand, I very much doubt that your code makes sense: Xs is a list of connected nodes, but do you really need this in this context? I do not think so.
Why not use closure0/3 to determine connectedness:
?- closure0(arc, A, B).
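(closure0/3 is not a built-in; in case your system does not provide it, here is one possible definition in the usual style, with non_member/2 built on dif/2 to keep the visited-list check sound:)

closure0(R_2, X0, X) :-
    closure0(R_2, X0, X, [X0]).

closure0(_R_2, X, X, _).
closure0(R_2, X0, X, Xs) :-
    call(R_2, X0, X1),
    non_member(X1, Xs),
    closure0(R_2, X1, X, [X1|Xs]).

non_member(_E, []).
non_member(E, [X|Xs]) :-
    dif(E, X),
    non_member(E, Xs).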
BTW, it is not clear whether you consider a directed graph or an undirected one. The above works only for directed graphs; for undirected graphs, rather use:
comm(P_2, A,B) :-
    (   call(P_2, A,B)
    ;   call(P_2, B,A)
    ).
?- closure0(comm(arc), A, B).
If you are interested in the path as well, use path/4:
?- path(comm(arc), Path, A, B).
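(Again, if path/4 is not available in your system, a matching definition in the same style, reusing non_member/2 from above, could be:)

path(R_2, [X0|Ys], X0, X) :-
    path(R_2, Ys, X0, X, [X0]).

path(_R_2, [], X, X, _).
path(R_2, [X1|Ys], X0, X, Xs) :-
    call(R_2, X0, X1),
    non_member(X1, Xs),
    path(R_2, Ys, X1, X, [X1|Xs]).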
I would like to define the set of nodes in a directed graph reachable from all nodes in a given set of start nodes in Clingo. To my understanding, this can be done via conditions in a rule body: in a rule
p(X) :- q(X) : r(X).
a conjunction of atoms q(a) is dynamically generated in the body of p/1 for ground terms a for which the fact r(a) also holds. Now for some reason, the following set of rules results in an "unsafe" variable X being discovered on the last line:
% Test case
arc(1,4). arc(2,4). arc(3,5). arc(4,1). arc(4,2). arc(4,3).
start(1). start(4). start(5).
% Define a path inductively, with the base case being path of length 1:
path(A, B) :- arc(A, B).
path(A, B) :- arc(A, X), arc(X, B).
path(A, B) :- arc(A, X), path(X, Y), arc(Y, B).
% A node X is simply reachable/1, if there is a possibly empty path to it
% from a start node or reachable/2 from A, if there is a path to it from A:
reachable(X) :- start(X).
reachable(X) :- start(A), path(A, X).
reachable(X, A) :- path(A, X).
% Predicate all_reach defined by the reachable relation:
all_reach(X) :- reachable(X, A) : start(A).
I wanted to ask what is meant by an "unsafe" variable, and how I might amend this situation. One source claims that an unsafe variable is a variable that appears in the head of a rule but not in the body, which makes sense, as the symbol :- denotes a reverse implication. However, this does not seem to be the case here, so I'm confused.
Could it be that there might not be a ground fact a for which start(a) holds, so that the body of the rule becomes empty, causing the unsafety? Is that it? Is there a standard way of avoiding this?
The issue was that there wasn't a positive literal in the body of all_reach/1, outside the conditional, that binds X for at least some ground instance. Adding the lines
% Project nodes from arcs
node(X) :- arc(X,Y).
node(Y) :- arc(X,Y).
and reformulating the all_reach/1 rule as
all_reach(X) :- reachable(X, A) : start(A); node(X).
solved the issue. The desired conjunction

    reachable(d, s1) ∧ reachable(d, s2) ∧ ... ∧ reachable(d, sn)

over all start nodes s1, ..., sn and a destination node d is then generated as the body of all_reach/1.
In other words, when using conditional literals in the body b of a rule r, there must still be a predicate p in b that unconditionally grounds every variable present in the head of the rule. Otherwise we might end up with an ambiguity, or with an ever-expanding rule during grounding, which is unsafe.
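For illustration, here is roughly what the grounder makes of the repaired rule for the test case above: the conditional literal expands, per head instance, into a plain conjunction over all start nodes. A sketch of the ground rule for X = 1, before any simplification:

    all_reach(1) :- reachable(1,1), reachable(1,4), reachable(1,5), node(1).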
In the SWI Prolog manual, I found the following remark:
For example, assume an application that can reason about multiple worlds. It is attractive to store the data of a particular world in a module, so we extract information from a world simply by invoking goals in this world.
This is actually a very good description of what I'm trying to achieve. However I ran into a problem. While I do want to model many different worlds, there are also things that I want to share across all of them. So my idea is to have an allworlds module for things that are true in every world, and one module for every world that I want to reason about, and the latter imports from the former. So I'd do something like this in the REPL:
allworlds:asserta((grandparent(X, Z) :- parent(X, Y), parent(Y, Z))).
allworlds:dynamic(parent/2).
add_import_module(greece, allworlds, start).
greece:asserta(parent(kronos, zeus)).
greece:asserta(parent(zeus, ares)).
Now I'd like to query greece:grandparent(kronos, X) and get X = ares, but all I get is false. When allworlds:grandparent calls parent, it doesn't call greece:parent like I want it to, but allworlds:parent. My research seems to indicate that I need to make the grandparent predicate module-transparent. But calling allworlds:module_transparent(grandparent/2). didn't fix the issue, and it's also deprecated. This is where I'm stuck. How can I get this working? Is meta_predicate/1 part of the solution? Unfortunately I can't make heads or tails of its documentation.
Prolog modules don't provide a good solution for the "many worlds" design pattern. Notably, making the predicates meta-predicates (or module-transparent, or multifile) would be a problematic hack. But this pattern is trivial with Logtalk, which is a language that extends Prolog and can use most Prolog systems as a backend compiler. A minimal (but not unique) solution for your problem is:
:- object(allworlds).

    :- public(grandparent/2).
    grandparent(X, Z) :-
        ::parent(X, Y),
        ::parent(Y, Z).

    :- public(parent/2).

:- end_object.

:- object(greece,
    extends(allworlds)).

    parent(kronos, zeus).
    parent(zeus, ares).

:- end_object.
Here, we use inheritance (the individual worlds inherit the common knowledge) and messages to self (the ::/1 control construct) when common predicates need to access world-specific predicate definitions (self is the object/world that received the original message - greece in the example).
Assuming the code is saved in a worlds.lgt file and that you're using SWI-Prolog as the backend:
$ swilgt
...
?- {worlds}.
% [ /Users/pmoura/worlds.lgt loaded ]
% (0 warnings)
true.
?- greece::grandparent(kronos, X).
X = ares.
P.S. If running on Windows, use the "Logtalk - SWI-Prolog" shortcut from the Start Menu after installing Logtalk.
I ultimately solved this by passing the module around explicitly and invoking predicates in it with the : operator. It reminds me a bit of doing OOP in C, where you do things like obj->vtable->method(obj, params) (note how obj is mentioned twice, just like the M in my code below).
Similar to the Logtalk solution, I need to explicitly call into the imported module when I want to consider its clauses. As an example, I've added the rule that a father is also a parent to the allworlds module.
allworlds:assertz((grandparent(M, X, Z) :- M:parent(M, X, Y), M:parent(M, Y, Z))).
allworlds:assertz((parent(M, X, Y) :- M:father(M, X, Y))).
add_import_module(greece, allworlds, start).
greece:assertz(parent(_, kronos, zeus)).
% need to call into allworlds explicitly
greece:assertz((parent(M, X, Y) :- allworlds:parent(M, X, Y))).
greece:assertz(father(_, zeus, ares)).
After making these assertions, I can call greece:grandparent(greece, kronos, X). and get the expected result X = ares.
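For instance, the interaction should then look roughly like this (the second query shows the father/3 clause being picked up through allworlds):

?- greece:grandparent(greece, kronos, X).
X = ares.

?- greece:parent(greece, zeus, X).
X = ares.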
How can you get different behaviour depending on whether a variable is defined or not in the Racket language?
There are several ways to do this. But I suspect that none of these is what you want, so I'll only provide pointers to the functions (and explain the problems with each one):
namespace-variable-value is a function that retrieves the value of a toplevel variable from some namespace. This is useful only with REPL interaction and REPL code though, since code that is defined in a module is not going to use these things anyway. In other words, you can use this function (and the corresponding namespace-set-variable-value!) to get values (if any) and set them, but the only use of these values is in code that is not itself in a module. To put this differently, using this facility is as good as keeping a hash table that maps symbols to values, only it's slightly more convenient at the REPL since you just type names...
More likely, these kinds of things are done in macros. The first way to do this is to use the special #%top macro. This macro gets inserted automatically for all names in a module that are not known to be bound. The usual thing this macro does is throw an error, but you can redefine it in your code (or make up your own language that redefines it) to do something else with these unknown names.
A slightly more sophisticated way to do this is to use the identifier-binding function -- again, in a macro, not at runtime -- and use it to get information about some name that is given to the macro and decide what to expand to based on that name.
The last two options are the more useful ones, but they're not the newbie-level kind of macros, which is why I suspect that you're asking the wrong question. To clarify, you can use them to write a kind of a defined? special form that checks whether some name is defined, but that question is one that would be answered by a macro, based on the rest of the code, so it's not really useful to ask it. If you want something like that that can enable the kind of code in other dynamic languages where you use such a predicate, then the best way to go about this is to redefine #%top to do some kind of a lookup (hashtable or global namespace) instead of throwing a compilation error -- but again, the difference between that and using a hash table explicitly is mostly cosmetic (and again, this is not a newbie thing).
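To make the #%top idea above concrete, here is a minimal runnable sketch. The module names (dynamic-top, use) and the globals table are made up for the illustration; the point is only that rebinding #%top turns unknown names into run-time lookups instead of compile-time errors:

#lang racket

(module dynamic-top racket
  ;; Export all of racket, but with our own #%top in place of the
  ;; default one (which raises an "unbound identifier" error).
  (provide (except-out (all-from-out racket) #%top)
           (rename-out [lookup-top #%top])
           globals)
  (define globals (make-hash))
  ;; The expander wraps every unbound name id as (#%top . id),
  ;; so this macro decides what unknown names mean.
  (define-syntax-rule (lookup-top . id)
    (hash-ref globals 'id (lambda () (error 'id "not defined")))))

(module use (submod ".." dynamic-top)
  (hash-set! globals 'x 42)
  (displayln x))   ; x is unbound here, so it becomes a lookup -> prints 42

(require (submod "." use))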
First, read Eli's answer. Then, based on Eli's answer, you can implement the defined? macro this way:
#lang racket
; The macro
(define-syntax (defined? stx)
  (syntax-case stx ()
    [(_ id)
     (with-syntax ([v (identifier-binding #'id)])
       #''v)]))
; Tests
(define x 3)
(if (defined? x) 'defined 'not-defined) ; -> defined
(let ([y 4])
  (if (defined? y) 'defined 'not-defined)) ; -> defined
(if (defined? z) 'defined 'not-defined) ; -> not-defined
It works for this basic case, but it has a problem: if z is undefined, the branch of the if that considers that it is defined and uses its value will raise a compile-time error, because the normal if checks its condition value at run-time (dynamically):
; This doesn't work because z in `(list z)' is undefined:
(if (defined? z) (list z) 'not-defined)
So what you probably want is an if-defined macro that decides at compile time (instead of at run time) which branch of the if to take:
#lang racket
; The macro
(define-syntax (if-defined stx)
  (syntax-case stx ()
    [(_ id iftrue iffalse)
     (let ([where (identifier-binding #'id)])
       (if where #'iftrue #'iffalse))]))
; Tests
(if-defined z (list z) 'not-defined) ; -> not-defined
(if-defined t (void) (define t 5))
t ; -> 5
(define x 3)
(if-defined x (void) (define x 6))
x ; -> 3
When writing a function like factorial:
fac(Val) when is_integer(Val) ->
    Visit = fun (X, _F) when X < 2 ->
                    1;
                (X, F) ->
                    X * F(X - 1, F)
            end,
    Visit(Val, Visit).
one cannot help but notice that tail-call optimization is not straightforward. Writing it in continuation-passing style, however, makes it straightforward:
fac_cps(Val) when is_integer(Val) ->
    Visit = fun (X, _F, K) when X < 2 ->
                    K(1);
                (X, F, K) ->
                    F(X - 1, F, fun (Y) -> K(X * Y) end)
            end,
    Visit(Val, Visit, fun (X) -> X end).
Or perhaps even defunctionalized:
fac_cps_def_lambdas({lam0}, X) ->
    X;
fac_cps_def_lambdas({lam1, X, K}, Y) ->
    fac_cps_def_lambdas(K, X * Y).

fac_cps_def(X) when is_integer(X) ->
    fac_cps_def(X, {lam0}).

fac_cps_def(X, K) when X < 2 ->
    fac_cps_def_lambdas(K, 1);
fac_cps_def(X, K) ->
    fac_cps_def(X - 1, {lam1, X, K}).
Timing these three implementations, I found that execution time is, as expected, the same.
My question is: is there a way to get more detailed knowledge than this?
How do I, for instance, get the memory usage of executing the function - am I avoiding any stack memory at all?
What are the standard tools for inspecting these sorts of things?
The questions are, again: how do I measure the stack heights of the functions, how do I determine the memory usage of a function call for each of them, and, finally, which one is best?
My solution is to just inspect the code with my eyes. Over time, you learn to spot if the code is in tail-call style. Usually, I don't care too much about it, unless I know the size of the structure passing through that code to be huge.
It is just by intuition for me. You can inspect the stack size of a process with erlang:process_info/2. You can inspect the runtime with fprof. But I only do it as a last resort fix.
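For example, one way to put erlang:process_info/2 to work is to spawn the computation and poll its stack size from the outside. A rough sketch (the module and function names are made up, and polling can of course miss the true peak):

-module(stackcheck).
-export([peak/1]).

%% Spawn Goal and repeatedly sample its stack size (in words) until
%% the process exits, keeping the largest value seen.
peak(Goal) ->
    Pid = spawn(Goal),
    poll(Pid, 0).

poll(Pid, Max) ->
    case erlang:process_info(Pid, stack_size) of
        {stack_size, Words} when Words > Max ->
            poll(Pid, Words);
        {stack_size, _} ->
            poll(Pid, Max);
        undefined ->          % the process has terminated
            Max
    end.

Calling something like stackcheck:peak(fun () -> fac(100000) end) for each of the three versions should show the plain recursive fac growing its stack with the input, while the CPS and defunctionalized versions keep the stack flat and instead allocate their continuations on the heap.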
This doesn't answer your question, but why have you written the code like that? It is not very Erlangy. You generally don't use an explicit CPS unless there is a specific reason for it, it is normally not needed.
As #IGIVECRAPANSWERS says you soon learn to see tail-calls, and there are very few cases where you actually MUST use it.
EDIT: A comment to the comment. No, there is no direct way of checking whether the compiler has applied LCO or not. It does exactly what you tell it to and assumes you know what you are doing, and why. :-) However, you can be certain that it applies the optimisation when it can, but that is about it. The only way to check is to look at the stack size of a process and see whether it is growing or not. Unfortunately, if you get it wrong in the right place, the stack can grow very slowly and be hard to detect except over a long period of time.
But again there are very few places where you really need to get the LCO right.
P.S. You use the term LCO (Last Call Optimisation) which is what I learnt way back when. Now, however, "they" seem to use TCO (Tail Call Optimisation) instead. That's progress. :-)
I'm building a compiler/assembler/linker in Java for the x86-32 (IA32) processor targeting Windows.
High-level concepts (I do not have any "source code": there is no syntax nor lexical translation, and all languages are regular) are translated into opcodes, which then are wrapped and outputted to a file. The translation process has several phases, one is the translation between regular languages: the highest-level code is translated into the medium-level code which is then translated into the lowest-level code (probably more than 3 levels).
My problem is the following: if I have higher-level code (X and Y) translated to lower-level code (x, y, U and V), then an example of such a translation is, in pseudo-code:
x + U(f) // generated by X
+
V(f) + y // generated by Y
(An easy example) where V is the opposite of U (compare with a stack push as U and a pop as V). This needs to be 'optimized' into:
x + y
(essentially removing the "useless" code)
My idea was to use regular expressions. For the above case, it would be a regular expression looking like this: x:(U(x)+V(x)):null, meaning: for all x, find U(x) followed by V(x) and replace the pair with nothing. Imagine more complex regular expressions for more complex optimizations. This should work on all levels.
What do you suggest? What would be a good approach to optimize and produce fast x86 assembly?
What you should actually do is build an Abstract Syntax Tree (AST).
It is a representation of the source code in the form of a tree that is much easier to work with, especially for making transformations and optimizations.
That code, represented as a tree, would be something like:
(+
  (+
    x
    (U f))
  (+
    (V f)
    y))
You could then try to make some transformations: a sum of sums is a sum of all the terms:
(+
  x
  (U f)
  (V f)
  y)
Then you could scan the tree and you could have the following rules:
(+ (U x) (V x)) = 0, for all x
(+ 0 x1 x2 ...) = (+ x1 x2 ...), for all x1, x2, ...
Then you would obtain what you are looking for:
(+ x y)
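For instance, here is a minimal sketch of those two rewrites over a toy AST, in Java since that is what you are building in. All class names are made up for the illustration, and it assumes a recent Java (records and pattern matching):

import java.util.*;

interface Expr {}
record Var(String name) implements Expr {}
record Op(String op, String arg) implements Expr {}   // e.g. U(f), V(f)
record Sum(List<Expr> terms) implements Expr {}

class Rewriter {
    // Rule 1: a sum of sums is a sum of all the terms (flattening).
    static Sum flatten(Sum s) {
        List<Expr> out = new ArrayList<>();
        for (Expr t : s.terms()) {
            if (t instanceof Sum inner) out.addAll(flatten(inner).terms());
            else out.add(t);
        }
        return new Sum(out);
    }

    // Rule 2: U(x) immediately followed by V(x) sums to 0, and the 0
    // is dropped, so the pair is simply removed from the term list.
    static Sum cancel(Sum s) {
        List<Expr> out = new ArrayList<>();
        for (Expr t : s.terms()) {
            Expr last = out.isEmpty() ? null : out.get(out.size() - 1);
            if (t instanceof Op v && v.op().equals("V")
                    && last instanceof Op u && u.op().equals("U")
                    && u.arg().equals(v.arg())) {
                out.remove(out.size() - 1);   // U(x) then V(x) -> nothing
            } else {
                out.add(t);
            }
        }
        return new Sum(out);
    }

    public static void main(String[] args) {
        // (+ (+ x (U f)) (+ (V f) y))
        Sum ast = new Sum(List.of(
                new Sum(List.of(new Var("x"), new Op("U", "f"))),
                new Sum(List.of(new Op("V", "f"), new Var("y")))));
        // -> Sum[terms=[Var[name=x], Var[name=y]]], i.e. (+ x y)
        System.out.println(cancel(flatten(ast)));
    }
}

A production rewriter would of course do more (reorder commutative terms, iterate the rules to a fixed point), but the tree-plus-rules structure stays the same.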
Any good book on compiler writing will discuss ASTs at length. Functional programming languages are especially suited for this task, since in general it is easy to represent trees and to use pattern matching to parse and transform them.
Usually, for this task, you should avoid using regular expressions. Regular expressions define what mathematicians call regular languages. Any regular language can be parsed by a set of regular expressions. However, I think your language is not regular, so it cannot be properly parsed by regexps.
People try, and try, and try to parse languages such as HTML using regular expressions. This has been extensively discussed here in SO, and you cannot parse HTML with regular expressions. There will always be an exceptional case in which your regular expressions would fail, and you would have to adapt it.
It might be the same with your language: if it is not regular, you should avoid lots of headaches and not try to parse it (and especially "transform" it) using regular expressions.
I'm having a lot of trouble understanding this question, but I think you will find it useful to learn something about term-rewriting systems, which seems to be what you are proposing. Whether the mechanism is tree rewriting (always works) or regular expressions (will work for some languages some of the time and other languages all of the time) is of secondary importance.
It is definitely possible to optimize object code by term rewriting. You probably also will benefit from learning something about peephole optimization; a good place to start, because it is very strong on the fundamentals, is a paper by Davidson and Fraser on a retargetable peephole optimizer. There's also excellent later work by Benitez and Davidson.