In the SWI Prolog manual, I found the following remark:
For example, assume an application that can reason about multiple worlds. It is attractive to store the data of a particular world in a module, so we extract information from a world simply by invoking goals in this world.
This is actually a very good description of what I'm trying to achieve. However, I ran into a problem. While I do want to model many different worlds, there are also things that I want to share across all of them. So my idea is to have an allworlds module for things that are true in every world, plus one module for every world that I want to reason about, with the latter importing from the former. So I'd do something like this in the REPL:
allworlds:asserta((grandparent(X, Z) :- parent(X, Y), parent(Y, Z))).
allworlds:dynamic(parent/2).
add_import_module(greece, allworlds, start).
greece:asserta(parent(kronos, zeus)).
greece:asserta(parent(zeus, ares)).
Now I'd like to query greece:grandparent(kronos, X) and get X = ares, but all I get is false. When allworlds:grandparent calls parent, it doesn't call greece:parent like I want it to, but allworlds:parent. My research seems to indicate that I need to make the grandparent predicate module-transparent. But calling allworlds:module_transparent(grandparent/2). didn't fix the issue, and it's also deprecated. This is where I'm stuck. How can I get this working? Is meta_predicate/1 part of the solution? Unfortunately I can't make heads or tails of its documentation.
Prolog modules don't provide a good solution for the "many worlds" design pattern. Notably, making the predicates meta-predicates (or module-transparent, or multifile) would be a problematic hack. But this pattern is trivial with Logtalk, a language that extends Prolog and can use most Prolog systems as a backend compiler. A minimal (but not unique) solution to your problem is:
:- object(allworlds).

    :- public(grandparent/2).
    grandparent(X, Z) :-
        ::parent(X, Y),
        ::parent(Y, Z).

    :- public(parent/2).

:- end_object.

:- object(greece,
    extends(allworlds)).

    parent(kronos, zeus).
    parent(zeus, ares).

:- end_object.
Here, we use inheritance (the individual worlds inherit the common knowledge) and messages to self (the ::/1 control construct) when common predicates need to access world-specific predicate definitions. Self is the object/world that received the original message; in the example, when greece receives the grandparent/2 message, the ::parent/2 calls look up parent/2 starting in greece itself.
Assuming the code is saved in a worlds.lgt file and that you're using SWI-Prolog as the backend:
$ swilgt
...
?- {worlds}.
% [ /Users/pmoura/worlds.lgt loaded ]
% (0 warnings)
true.
?- greece::grandparent(kronos, X).
X = ares.
P.S. If running on Windows, use the "Logtalk - SWI-Prolog" shortcut from the Start Menu after installing Logtalk.
I ultimately solved this by passing the module around explicitly and invoking predicates in it with the : operator. It reminds me a bit of doing OOP in C, where you do things like obj->vtable->method(obj, params) (note how obj is mentioned twice, just like the M in my code below).
Similar to the Logtalk solution, I need to explicitly call into the imported module when I want to consider its clauses. As an example, I've added the fact that a father is also a parent to the allworlds module.
allworlds:assertz((grandparent(M, X, Z) :- M:parent(M, X, Y), M:parent(M, Y, Z))).
allworlds:assertz((parent(M, X, Y) :- M:father(M, X, Y))).
add_import_module(greece, allworlds, start).
greece:assertz(parent(_, kronos, zeus)).
% need to call into allworlds explicitly
greece:assertz((parent(M, X, Y) :- allworlds:parent(M, X, Y))).
greece:assertz(father(_, zeus, ares)).
After making these assertions, I can call greece:grandparent(greece, kronos, X). and get the expected result X = ares.
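The resulting session then looks like this (a sketch):

?- greece:grandparent(greece, kronos, X).
X = ares.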
I am searching for an SWI-Prolog predicate that can compute a set union with variables as parameters. My aim is to state the union first and instantiate the parameters later on in the source code.
That means, e.g., I have some predicate union such that the call union(A, B, A_UNION_B) makes sense. More specifically, the call:
union(A, [1,2], C), A=[3].
would give me as result
C = [3, 1, 2].
(What you call union/3 is most probably just concatenation, so I will use append/3 to keep this answer short.)
What you expect is impossible without delayed goals or constraints. To see this, consider the following failure-slice
?- append(A, [1,2], C), false, A=[3].
loops, unexpected. % observed, but for us unexpected
false. % expected, but not the case
This query must terminate, in order to make the entire question useful. But there are infinitely many lists of different length for A. So in order to describe all possible solutions, we would need infinitely many answer substitutions, like
?- append(A, [1,2], C).
A = [], C = [1,2]
; A = [_A], C = [_A,1,2]
; A = [_A,_B], C = [_A,_B,1,2]
; A = [_A,_B,_C], C = [_A,_B,_C,1,2]
; ... .
The only way around is to describe that set of solutions with finitely many answers. One possibility could be:
?- when((ground(A);ground(C)), append(A,B,C)).
when((ground(A);ground(C)),append(A,B,C)).
Essentially it reads: Yes, the query is true, provided the query is true.
While this solves your exact problem, it will now delay many otherwise succeeding goals; think of A = [X], B = [], which has the single solution C = [X] but stays delayed because neither A nor C is ground.
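To illustrate (a sketch of an SWI-Prolog session; the exact way residual goals are printed varies between systems):

?- when((ground(A);ground(C)), append(A,B,C)), A = [X], B = [].
A = [X],
B = [],
when((ground([X]);ground(C)),append([X],[],C)).   % still delayed

?- when((ground(A);ground(C)), append(A,B,C)), A = [1,2].
A = [1, 2],
C = [1, 2|B].   % A became ground, so the goal woke up and ran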
A more elaborate version could provide more complex tests, but it would require a somewhat different definition than append/3 has. Some systems like SICStus Prolog provide block declarations to express this more smoothly (SWI-Prolog has a coarse emulation of them).
So it is possible to make this even better, but the question remains whether or not this makes much sense. After all, debugging delayed goals becomes more and more difficult with larger programs.
In many situations it is preferable to prevent this and produce an instantiation error instead, as iwhen/2 does:
?- iwhen((ground(A);ground(C)),append(A,B,C)).
error(instantiation_error,iwhen/2).
That error is not the nicest answer possible, but at least it is not incorrect. It says: You need to provide more instantiations.
If you really want to solve this problem in the general case, you have to delve into E-unification. That is an area with deceptively simple problem statements and extremely involved answers. Often, just decidability is non-trivial, let alone an effective algorithm. For your particular question, either ACI (for sets) or ANlr (for concatenation) is of interest. ACI unification requires solving Diophantine equations, and associative unification alone is even more complex than that. I am unaware of any implementation for a Prolog system that solves the general problem.
Prolog IV offered an associative infix operator for concatenation but simply delayed more complex cases. So debugging these remains non-trivial.
I'm particularly interested in understanding the K prelude (how it is structured, why its content is what it is, how "kompile" calculates dependencies, etc.).
The main question is: what is the criterion for a hooked symbol from the K prelude to be copied into the generated Kore file?
Here are some examples of potential problems:
The symbol andBool is copied with its associated rewrite rules; this does not seem to be the case for the symbol in_keys, which is copied without its rewrite rules.
Other symbols seem to be useless (for the IMP semantics) but exist, with or without their rewrite rules, in the generated Kore file, such as countAllOccurrences, findChar, signExtendBitRangeInt or Float2String.
It seems that SortId is generated by the line syntax Id [token]. However, the lines syntax Bool ::= "true" [token] and syntax Bool ::= "false" [token] do not generate true and false symbols.
(Moreover, is it a choice that true and false are values and not constructors?)
The sort named SortId is not generated for the following example, whereas some generated hooked symbols depend on this sort. This problem does not exist with the IMP semantics.
module MAX-OW-SYNTAX
  imports INT
  imports BOOL

  syntax Exp ::= Int | "(" Exp ")" [bracket]
               | "max" Exp Exp
endmodule

module MAX-OW
  imports MAX-OW-SYNTAX

  syntax KResult ::= Int

  rule max X Y => Y requires X <Int Y
  rule max X _ => X [owise]
endmodule
Is it correct that the K prelude is implemented in each language of each backend, and that an implementation in the Kore language is available in the K prelude?
Is there a defined interface that a new backend must implement? (For instance, Bag is obsolete, but Set, List and Map are not, and I don't know the list of set operators, map operators, etc. that a new backend must provide.)
Is there a reason why andThenBool and andBool have the same semantics once implemented in the Kore syntax (Booleans module)?
Where are the rewrite rules defined for ==Bool, used in the definition of =/=Bool (Booleans module)?
The best reference point for the K internals is the User Manual, along with the K source for the prelude. To respond to your specific questions as best as I can:
in_keys only has simplification rules that apply on symbolic backends. These will not apply on concrete backends, and so those backends use the hooked implementation MAP.in_keys. Some functions (such as andBool) can be implemented both in K and as an efficient backend hook. For example, on the K LLVM backend, andBool is implemented by code generation. If a backend didn't support that hook, the (relatively) inefficient K rewriting implementation would be used.
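For a sense of what this dual implementation looks like, the andBool declaration and rules in the prelude have roughly the following shape (a sketch; see domains.md for the exact attributes):

syntax Bool ::= Bool "andBool" Bool [function, hook(BOOL.and)]
rule true andBool B:Bool => B
rule false andBool _:Bool => false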
The Id sort is built in for convenience. It represents program identifiers.
You haven't imported DOMAINS in this example. Doing so will pull in the Id sort and related rewrites.
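Concretely, it should be enough to add the import to the syntax module (a sketch; DOMAINS also pulls in INT and BOOL, making those imports redundant):

module MAX-OW-SYNTAX
  imports DOMAINS

  syntax Exp ::= Int | "(" Exp ")" [bracket]
               | "max" Exp Exp
endmodule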
Very roughly, and largely for internal purposes. Do you have a hypothetical K backend in mind, or is there a way in which the LLVM / Haskell backends provided by K are inadequate for your specific use case?
andThenBool is required to short-circuit its arguments; andBool is permitted to short-circuit, but may evaluate both arguments strictly. An implementation that makes both perform short-circuiting is valid.
==Bool is implemented only in terms of a hook. In domains.md, you can see the hook(BOOL.eq) attribute that indicates how ==Bool is implemented.
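The relevant declaration in domains.md looks roughly like this (attributes abbreviated):

syntax Bool ::= Bool "==Bool" Bool [function, hook(BOOL.eq)]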
Do let us know if you have further questions, or would like help implementing a specific semantics in K.
I am currently developing my own programming language. The codebase (in Lua) is composed of several modules, as follows:
The first, error.lua, has no dependencies;
lexer.lua depends only on error.lua;
prototypes.lua also has no dependencies;
parser.lua, instead, depends on all the modules above;
interpreter.lua is the fulcrum of the whole codebase. It depends on error.lua, parser.lua, and memory.lua;
memory.lua depends on functions.lua;
finally, functions.lua depends on memory.lua and interpreter.lua. It is required from inside memory.lua, so we can say that memory.lua also depends on interpreter.lua.
With "A depends on B" I mean that the functions declared in A need those declared in B.
The real problem, though, is when A depends on B which depends on A, which, as you can understand from the list above, happens quite frequently in my code.
To give a concrete example of my problem, here's what interpreter.lua looks like:
--first, I require the modules that DON'T depend on interpreter.lua
local parser, Error = table.unpack(require("parser"))
--(since error.lua is needed by the lexer, parser and interpreter modules,
--I only actually require it once in lexer.lua and then pass its result around)

--Then, I should require memory.lua. But since memory.lua and
--functions.lua need some functions from interpreter.lua to work, I just
--forward declare the variables those functions need, and then the functions themselves:

--forward declaration
local globals, new_memory, my_nil, interpret_statement

--functions I need to declare before requiring memory.lua
local function interpret_block()
    --uses interpret_statement and new_memory
end

local function interpret_expression()
    --uses new_memory, Error and my_nil
end

--Now I can safely require memory.lua:
globals, new_memory, my_nil = require("memory")(interpret_block, interpret_expression)
--(I'll explain why it returns a function to call later)

--Then I have to fulfill the forward declaration of interpret_statement:
function interpret_statement()
    --uses interpret_expression, new_memory and Error
end

--finally, the result is a function
return function()
    --uses parser, new_function and globals
end
The memory.lua module returns a function so that it can receive interpret_block and interpret_expression as arguments, like this:
--memory.lua
return function(interpret_block, interpret_expression)
    --declaration of globals, new_memory, my_nil
    return globals, new_memory, my_nil
end
Now, I got the idea of the forward declarations here and that of the functions-as-modules (like in memory.lua, to pass some functions from the requiring module to the required module) here. They're both great ideas, and I must say that they work well. But you pay in readability.
In fact, breaking the code into smaller pieces this time made my work harder than it would have been had I coded everything in a single file, which is impossible for me because it's over 1000 lines of code and I'm coding from a smartphone.
The feeling I have is that of working with spaghetti code, only on a larger scale.
So how could I solve the problem of my code being hard to understand because some modules need each other to work (without making all the variables global, of course)? How would programmers in other languages solve this problem? How should I reorganize my modules? Are there any standard conventions for using Lua modules that could help me with this problem?
If we look at your Lua files as a directed graph, where an edge points from a dependency to its user, the goal is to modify the graph to be a tree or forest, since you intend to get rid of the cycles.
A cycle is a set of nodes which, traversed in the direction of the edges, leads back to the starting node.
Now, the question is how to get rid of cycles?
The answer looks like this:
Let's consider a node N, and let {D1, D2, ..., Dm} be its direct dependencies. If no Di in that set depends on N, either directly or indirectly, then you can leave N as it is. In that case, the set of problematic dependencies is empty: {}
However, what if you have a non-empty set, like {PD1, ..., PDk}?
You then need to analyze each PDi (for i between 1 and k) along with N, and determine which subset of each PDi does not depend on N, and which subset of N does not depend on any PDi. This way you can split N into N_base and N, and each PDi into PDi_base and PDi. N depends on N_base, as do all the PDi; each PDi depends on its PDi_base along with N_base.
This approach minimizes cycles in the dependency graph. However, it is quite possible that a set of functions {f1, ..., fl} remains which cannot be migrated into a _base module as discussed, due to its dependencies, so some cycles survive. In that case you need to give the group in question a name, create a module for it, and migrate all those functions into that module, as sketched below.
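For instance, here is a minimal Lua sketch of the _base extraction applied to the interpreter/memory cycle. The file name interpreter_base.lua and all function bodies are hypothetical placeholders:

-- interpreter_base.lua (hypothetical): the subset of the interpreter
-- that memory.lua needs; it must not require memory.lua itself
local M = {}

function M.interpret_block(block, mem)
    -- placeholder: evaluate `block` against the memory `mem`
end

return M

-- memory.lua: now depends only on interpreter_base, so the cycle is gone
local base = require("interpreter_base")

local M = {}

function M.new_memory()
    return { globals = {} }  -- placeholder representation
end

return M

-- interpreter.lua: free to require both, with no forward declarations
local base   = require("interpreter_base")
local memory = require("memory")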
Hello here is my code in Prolog:
arc(a,h).
arc(b,c).
related_to(X, Ys) :-
setof(Y, arc(X, Y), Ys).
cut([H|T],Y) :-
check(H,Y),
T = [] -> cut(T,Y).
check(X,Y) :-
related_to(X,Xs),
member(Y,Xs) -> write('There is a road');
cut(Xs,Y).
When I try to run check(a,b), it doesn't run. I get the message
Singleton variable in branch: Xs
When I don't use the cut predicate, I don't get any error. I would be grateful if you could point out where I made a mistake and show a way to repair it.
TL;DR: Prolog is right. And you really are doing the best thing by taking its messages seriously.
You are using if-then-else in an unconventional manner. For this reason it is not that simple to figure out what is happening. When I say listing(check) I get the following:
check(A, B) :-
( related_to(A, C),
member(B, C)
-> write('There is a road')
; cut(C, B)
).
So Prolog was not very impressed by your indentation style; instead, it just looked for operators. In fact, the C (which is your original Xs) occurs in the if-part, which is unrelated to the else-part. What you probably wanted is:
check(X,Y) :-
related_to(X,Xs),
( member(Y,Xs)
-> write('There is a road')
; cut(Xs,Y)
).
Regardless of the concrete problem at hand, I very much doubt that your code makes sense: Xs is a list of connected nodes, but do you really need this in this context? I do not think so.
Why not use closure0/3 to determine connectedness:
?- closure0(arc, A, B).
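Note that closure0/3 is not a built-in; one common definition (a reflexive-transitive closure that avoids revisiting nodes) is:

closure0(R_2, X0,X) :-
   closure0(R_2, X0,X, [X0]).

closure0(_R_2, X,X, _).
closure0(R_2, X0,X, Xs) :-
   call(R_2, X0,X1),
   non_member(X1, Xs),
   closure0(R_2, X1,X, [X1|Xs]).

non_member(_E, []).
non_member(E, [X|Xs]) :-
   dif(E, X),
   non_member(E, Xs).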
BTW, it is not clear whether you consider a directed graph or an undirected one. The above works only for directed graphs; for undirected graphs, rather use:
comm(P_2, A,B) :-
( call(P_2, A,B)
; call(P_2, B,A)
).
?- closure0(comm(arc), A, B).
If you are interested in the path as well, use path/4:
?- path(comm(arc), Path, A, B).
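Again, path/4 is not a built-in; a common definition in the same style as closure0/3 (reusing non_member/2 from above) is:

path(R_2, [X0|Ys], X0,X) :-
   path(R_2, Ys, X0,X, [X0]).

path(_R_2, [], X,X, _).
path(R_2, [X1|Ys], X0,X, Xs) :-
   call(R_2, X0,X1),
   non_member(X1, Xs),
   path(R_2, Ys, X1,X, [X1|Xs]).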
I'm building a compiler/assembler/linker in Java for the x86-32 (IA32) processor targeting Windows.
High-level concepts (I do not have any "source code": there is no syntax nor lexical translation, and all languages are regular) are translated into opcodes, which are then wrapped and output to a file. The translation process has several phases, one of which is translation between regular languages: the highest-level code is translated into medium-level code, which is then translated into the lowest-level code (there are probably more than 3 levels).
My problem is the following: if I have higher-level code (X and Y) translated to lower-level code (x, y, U and V), then an example of such a translation is, in pseudo-code:
x + U(f) // generated by X
+
V(f) + y // generated by Y
(An easy example) where V is the opposite of U (compare with a stack push as U and a pop as V). This needs to be 'optimized' into:
x + y
(essentially removing the "useless" code)
My idea was to use regular expressions. For the above case, it would be a regular expression looking like this: x:(U(x)+V(x)):null, meaning: for all x, find U(x) followed by V(x) and replace the pair with null. Imagine more complex regular expressions for more complex optimizations. This should work on all levels.
What do you suggest? What would be a good approach to optimize and produce fast x86 assembly?
What you should actually do is build an Abstract Syntax Tree (AST).
It is a representation of the source code in the form of a tree that is much easier to work with, especially for transformations and optimizations.
That code, represented as a tree, would be something like:
(+
    (+
        x
        (U f))
    (+
        (V f)
        y))
You could then try to make some transformations: a sum of sums is a sum of all the terms:
(+
    x
    (U f)
    (V f)
    y)
Then you could scan the tree and you could have the following rules:
(+ (U x) (V x)) = 0, for all x
(+ 0 x1 x2 ...) = (+ x1 x2 ...), for all x1, x2, ...
Then you would obtain what you are looking for:
(+ x y)
Any good book on compiler writing will discuss ASTs at length. Functional programming languages are especially suited to this task, since in general it is easy to represent trees and to use pattern matching to parse and transform them.
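To make the rewrite idea concrete, here is a small sketch in Prolog (chosen here just for its pattern matching; the constructors u/1 and v/1 and the list representation of the flattened sum are illustrative assumptions, not part of the question):

% cancel(+Terms0, -Terms): given a flattened sum represented as a list
% of terms, drop each adjacent u(X), v(X) pair (think push/pop).
cancel([], []).
cancel([u(X), v(X) | Ts0], Ts) :-
   !,
   cancel(Ts0, Ts).
cancel([T | Ts0], [T | Ts]) :-
   cancel(Ts0, Ts).

The query cancel([x, u(f), v(f), y], Ts) then yields Ts = [x, y], corresponding to (+ x y).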
Usually, for this task, you should avoid using regular expressions. Regular expressions define what mathematicians call regular languages. Any regular language can be parsed by a set of regular expressions. However, I suspect your language is not regular, so it cannot be properly parsed by regexps.
People try, and try, and try to parse languages such as HTML using regular expressions. This has been extensively discussed here on SO, and you cannot parse HTML with regular expressions. There will always be an exceptional case in which your regular expression fails, and you will have to adapt it.
It might be the same with your language: if it is not regular, you can spare yourself a lot of headaches by not trying to parse it (and especially not trying to "transform" it) using regular expressions.
I'm having a lot of trouble understanding this question, but I think you will find it useful to learn something about term-rewriting systems, which seems to be what you are proposing. Whether the mechanism is tree rewriting (always works) or regular expressions (will work for some languages some of the time and other languages all of the time) is of secondary importance.
It is definitely possible to optimize object code by term rewriting. You probably also will benefit from learning something about peephole optimization; a good place to start, because it is very strong on the fundamentals, is a paper by Davidson and Fraser on a retargetable peephole optimizer. There's also excellent later work by Benitez and Davidson.