Is it possible to have a production with one or more terminal(s) in the LHS of a recursively enumerable language? - grammar

As a recursively enumerable (unrestricted) grammar places no restrictions on its productions, is it possible to have a production whose LHS consists of one or more terminals (i.e. no non-terminals at all)?

Related

Traversing an evaluation context in K?

Contexts are often of the form #freezer_;__BILS-SYNTAX1_ ( ! _1 ).
I'd like to write a function that traverses evaluation contexts.
How does one pattern match evaluation contexts and traverse them?
EDIT: To give an example: if I want to log the evaluation contexts I have seen so far into a file, I would need to write a function that converts an evaluation context into a string, and then print it via system calls. Is pattern matching on evaluation contexts possible? Alternatively, is there a built-in "to-string" function in K?

What kind of meta argument is the first argument of predicate_property/2?

In other words, should it be 0 or : or something else? The Prolog systems SICStus, YAP, and SWI all indicate this as :. Is this appropriate? Shouldn't it rather be 0, which means a term that can be called by call/1?
To check your system, type:
| ?- predicate_property(predicate_property(_,_),P).
P = (meta_predicate predicate_property(:,?)) ? ;
P = built_in ? ;
P = jitted ? ;
no
I should add that meta-arguments, at least in the form used here, cannot guarantee the same algebraic properties we expect from pure relations:
?- S=user:false, predicate_property(S,built_in).
S = user:false.
?- predicate_property(S,built_in), S=user:false.
false.
Here is the relevant part from ISO/IEC 13211-2:
7.2.2 predicate_property/2
7.2.2.1 Description
predicate_property(Prototype, Property) is true in the calling context of a module M iff the procedure
associated with the argument Prototype has predicate property
Property.
...
7.2.2.2 Template and modes
predicate_property(+prototype, ?predicate_property)
7.2.2.3 Errors
a) Prototype is a variable — instantiation_error.
...
c) Prototype is neither a variable nor a callable term —
type_error(callable, Prototype).
...
7.2.2.4 Examples
Goals attempted in the context of the module bar.
predicate_property(q(X), exported).
succeeds, X is not instantiated.
...
That's an interesting question. First, I think there are two
kinds of predicate_property/2 predicates. The first kind
takes a callable and is intended to work smoothly with,
for example, vanilla interpreters and built-ins such as
write/1, nl/0, etc., i.e.:
solve((A,B)) :- !, solve(A), solve(B).               % conjunction: solve both conjuncts
solve(A) :- predicate_property(A, built_in), !, A.   % built-in: call it directly
solve(A) :- clause(A,B), solve(B).                   % user predicate: solve its clause body
For the first kind, I guess the 0 meta argument specifier
would work fine. The second kind of predicate_property/2
predicates works with predicate indicators. Callable terms
and predicate indicators are both notions already defined
in the ISO core standard.
A predicate indicator has the form F/N, where F is an atom
and N is an integer. Matters get a little bit more complicated
if modules are present, especially because of the operator
precedence of (:)/2 versus (/)/2. If predicate_property/2 works
with predicate indicators, we can still code the vanilla
interpreter:
solve((A,B)) :- !, solve(A), solve(B).
solve(A) :- functor(A,F,N), predicate_property(F/N, built_in), !, A.
solve(A) :- clause(A,B), solve(B).
Here we lose the connection between a possible meta argument
0 (for example of solve/1) and predicate_property/2, because
functor/3 usually has no meta-predicate declaration. It is
also impossible to transfer module information via functor/3 to
predicate_property/2, since functor/3 is agnostic to
modules: it usually has no realization that can deal with
arguments that carry a module qualification.
There are now two issues:
1) Can we, and should we, give typing to predicates
such as functor/3?
2) Can we extend functor/3 so that it can convey module
qualification?
Here are my thoughts:
1) This would need a more elaborate type system, one that
would allow overloading of predicates with multiple types. For
example, functor/3 could have two types:
:- meta_predicate functor(?,?,?).
:- meta_predicate functor(0,?,?).
The real power of overloading with multiple types would only
shine in predicates such as (=)/2. Here we would have:
:- meta_predicate =(?,?).
:- meta_predicate =(0,0).
This would allow for more type inference: if one side of
(=)/2 is a goal, we could deduce that the other side
is also a goal.
But matters are not so simple: it would possibly also make
sense to have a form of type cast, or some other
mechanism to restrict the overloading, something that
is not covered by just introducing a meta-predicate
directive. This would require further constructs inside
the terms and goals.
Learning from lambda Prolog or from some dependent type
system could be advantageous. For example, (=)/2 can
be viewed as parametrized by a type A, i.e.:
:- meta_predicate =(A,A).
2) For Jekejeke Prolog I have provided an alternative
functor/3 realization: the predicate sys_modfunc_site/2.
It works bidirectionally like functor/3, but returns
and accepts the predicate indicator as one whole thing.
Here are some example runs:
?- sys_modfunc_site(a:b(x,y), X).
X = a:b/2
?- sys_modfunc_site(X, a:b/2).
X = a:b(_A,_B)
The result of the predicate could be called a generalized
predicate indicator. It is what SWI-Prolog already understands,
for example in listing/1, so it could have the same meta argument
specification as listing/1 has, which is currently : in SWI-Prolog.
So we would have the following, and subsequently predicate_property/2
would take : in its first argument:
:- meta_predicate sys_modfunc_site(?,?).
:- meta_predicate sys_modfunc_site(0,:).
The vanilla interpreter that can also deal with modules then
reads as follows. Unfortunately a further predicate is needed,
sys_indicator_colon/2, which compresses a qualified predicate
indicator into an ordinary predicate indicator, since our
predicate_property/2 does not understand generalized predicate
indicators, for efficiency reasons:
solve((A,B)) :- !, solve(A), solve(B).
solve(A) :-
   sys_modfunc_site(A, I),
   sys_indicator_colon(J, I),
   predicate_property(J, built_in), !, A.
solve(A) :- clause(A,B), solve(B).
The above implements a local semantics of the colon (:)/2,
compared to the rather far-reaching semantics of the colon
(:)/2 as described in the ISO module standard. The far-reaching
semantics imputes a module name on all the literals
of a query; the local semantics only expects a qualified
literal and applies the module name just to that literal.
Jekejeke only implements the local semantics, with the further
provision that the call-site is not changed. So under the hood,
sys_modfunc_site/2 and sys_indicator_colon/2 also have to
transfer the call-site so that predicate_property/2 makes
the right decision for unqualified predicates, i.e. resolving
the predicate name by respecting imports etc.
Finally, a little epilogue:
The call-site transfer of Jekejeke Prolog is a pure runtime
thing and doesn't need any compile-time manipulation, in particular
no ad hoc adding of module qualifiers at compile time. As a result,
certain algebraic properties are preserved. For example, assume
we have the following clause:
?- [user].
foo:bar.
^D
Then the following works fine, since not only sys_modfunc_site/2
is bidirectional, but also sys_indicator_colon/2:
?- S = foo:bar/0, sys_indicator_colon(R,S), predicate_property(R,static).
S = foo:bar/0,
R = 'foo%bar'/0
?- predicate_property(R,static), sys_indicator_colon(R,S), S = foo:bar/0.
R = 'foo%bar'/0,
S = foo:bar/0
And of course predicate_property/2 works with different input and
output modes. But I guess the SWI-Prolog phenomenon first has the
issue that a bare variable is prefixed with the current module. And
since false is not in user but in system, it will not show false.
In output mode it will not show predicates which are equal by resolution.
Check out in SWI-Prolog:
?- predicate_property(X, built_in), write(X), nl, fail; true.
portray(_G2778)
ignore(_G2778)
...
?- predicate_property(user:X, built_in), write(X), nl, fail; true.
prolog_load_file(_G71,_G72)
portray(_G71)
...
?- predicate_property(system:X, built_in), write(X), nl, fail; true.
...
false
...
But even if the SWI-Prolog predicate_property/2 predicate would
allow bare variables, i.e. output goals, we would see less
commutativity in the far-reaching semantics than in the local
semantics. In the far-reaching semantics, M:G means interpreting G
inside the module M, i.e. respecting the imports of the module M,
which might transpose the functor considerably.
The far-reaching semantics is the cause that user:false means
system:false. On the other hand, in the local semantics, where M:G
means M:G and nothing else, we have the algebraic property more often.
In the local semantics user:false would never mean system:false.
Bye

Setting AST nodes as transient (effectively removing them from the AST)?

For many cases, a complete AST - as specified in a grammar spec - is great, since other code can obtain any syntactic detail.
Look at this AST forest:
My ANTLR-generated parser is meant to statically analyze a programming language. Therefore, the chain variable -> base_variable_with_function_calls -> base_variable ... wouldn't be of interest.
The mere fact that $d is a compound_variable would be enough detail.
Therefore: can I somehow mark an ANTLR production rule as transient, so that ANTLR silently parses the grammar rule but doesn't create intermediate AST nodes?
Obviously, such a tag could only be applied to productions that have a single child node.
No, ANTLR 4 does not support this. The generated parse tree will contain every token matched by the grammar, and will contain a RuleNode for every rule invoked by the grammar.

Is currying the same as overloading?

Is currying in functional programming the same as overloading in OO programming? If not, why not? (Examples would be appreciated, if possible.)
Thanks
Currying is not specific to functional programming, and overloading is not specific to object-oriented programming.
"Currying" is the use of functions to which you can pass fewer arguments than required to obtain a function of the remaining arguments. i.e. if we have a function plus which takes two integer arguments and returns their sum, then we can pass the single argument 1 to plus and the result is a function for adding 1 to things.
In Haskellish syntax (with function application by adjacency):
plusOne = plusCurried 1
three = plusOne 2
four = plusCurried 2 2
five = plusUncurried 2 3
In vaguely Cish syntax (with function application by parentheses):
plusOne = plusCurried(1)
three = plusOne(2)
four = plusCurried(2)(2)
five = plusUncurried(2, 3)
You can see in both of these examples that plusCurried is invoked on only 1 argument, and the result is something that can be bound to a variable and then invoked on another argument.
The reason that you're thinking of currying as a functional-programming concept is that it sees the most use in functional languages whose syntax has application by adjacency, because in that syntax currying becomes very natural. The applications of plusCurried and plusUncurried to define four and five in the Haskellish syntax merge to become completely indistinguishable, so you can just have all functions be fully curried always (i.e. have every function be a function of exactly one argument, only some of them will return other functions that can then be applied to more arguments). Whereas in the Cish syntax with application by parenthesised argument lists, the definitions of four and five look completely different, so you need to distinguish between plusCurried and plusUncurried.
Also, the imperative languages that led to today's object-oriented languages never had the ability to bind functions to variables or pass them to other functions (this is known as having first-class functions), and without that facility there's nothing you can actually do with a curried function other than invoke it on all arguments, so there is no point in having them. Some of today's OO languages still don't have first-class functions, or only gained them recently.
The term currying also refers to the process of turning a function of multiple arguments into one that takes a single argument and returns another function (which takes a single argument, and may return another function which ...), and "uncurrying" can refer to the process of doing the reverse conversion.
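In Haskell, both forms can be written out directly. A minimal sketch (plusPair is a made-up name for the uncurried version; curry itself comes from the Prelude):
plusPair :: (Int, Int) -> Int      -- uncurried: takes one pair argument
plusPair (x, y) = x + y
plusCurried :: Int -> Int -> Int   -- curried: takes its arguments one at a time
plusCurried = curry plusPair
five :: Int
five = plusCurried 2 3             -- same result as plusPair (2, 3)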
Overloading is an entirely unrelated concept. Overloading a name means giving multiple definitions with different characteristics (argument types, number of arguments, return type, etc.), and having the compiler resolve which definition is meant by a given appearance of the name from the context in which it appears.
A fairly obvious example of this is that we could define plus to add integers, but also use the same name plus for adding floating point numbers, and we could potentially use it for concatenating strings, arrays, lists, etc, or to add vectors or matrices. All of these have very different implementations that have nothing to do with each other as far as the language implementation is concerned, but we just happened to give them the same name. The compiler is then responsible for figuring out that plus stringA stringB should call the string plus (and return a string), while plus intX intY should call the integer plus (and return an integer).
Again, there is no inherent reason why this concept is an "OO concept" rather than a functional programming concept. It simply happened that it fit quite naturally in statically typed object-oriented languages that were developed; if you're already resolving which method to call by the object that the method is invoked on, then it's a small stretch to allow more general overloading. Completely ad-hoc overloading (where you do nothing more than define the same name multiple times and trust the compiler to figure it out) doesn't fit as nicely in languages with first-class functions, because when you pass the overloaded name as a function itself you don't have the calling context to help you figure out which definition is intended (and programmers may get confused if what they really wanted was to pass all the overloaded definitions). Haskell developed type classes as a more principled way of using overloading; these effectively do allow you to pass all the overloaded definitions at once, and also allow the type system to express types a bit like "any type for which the functions f and g are defined".
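As a small sketch of that last point (the class name Plus is made up for illustration), a type class lets the one name plus be overloaded in a principled way:
class Plus a where
  plus :: a -> a -> a
instance Plus Int where
  plus = (+)     -- integer addition
instance Plus [a] where
  plus = (++)    -- concatenation, so strings (lists of characters) work too
example1 :: Int
example1 = plus 1 2            -- 3, resolved to the Int instance
example2 :: String
example2 = plus "foo" "bar"    -- "foobar", resolved to the list instance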
In summary:
currying and overloading are completely unrelated
currying is about applying functions to fewer arguments than they require in order to get a function of the remaining arguments
overloading is about providing multiple definitions for the same name and having the compiler select which definition is used each time the name is used
neither currying nor overloading are specific to either functional programming or object-oriented programming; they each simply happen to be more widespread in historical languages of one kind or another because of the way the languages developed, causing them to be more useful or more obvious in one kind of language
No, they are entirely unrelated and dissimilar.
Overloading is a technique for allowing the same code to be used at different types -- often known in functional programming as polymorphism (of various forms).
A polymorphic function:
map :: (a -> b) -> [a] -> [b]
map f [] = []
map f (x:xs) = f x : map f xs
Here, map is a function that operates on any list. It is polymorphic -- it works just as well with a list of Int as a list of trees of hashtables. It also is higher-order, in that it is a function that takes a function as an argument.
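For instance, the same map works at two entirely different element types (the names here are just for illustration):
squares :: [Int]
squares = map (\x -> x * x) [1, 2, 3]    -- [1,4,9]
shouts :: [String]
shouts = map (++ "!") ["hi", "ho"]       -- ["hi!","ho!"]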
Currying is the transformation of a function that takes a structure of n arguments, into a chain of functions each taking one argument.
In curried languages, you can apply any function to some of its arguments, yielding a function that takes the rest of the arguments. The partially-applied function is a closure.
And you can transform a curried function into an uncurried one (and vice versa) by applying the transformation invented by Curry and Schönfinkel.
curry :: ((a, b) -> c) -> a -> b -> c
-- curry converts an uncurried function to a curried function.
uncurry :: (a -> b -> c) -> (a, b) -> c
-- uncurry converts a curried function to a function on pairs.
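For example, using the Prelude's curry and uncurry:
five :: Int
five = uncurry (+) (2, 3)          -- (+) is curried; uncurry makes it accept the pair (2, 3)
alsoFive :: Int
alsoFive = curry fst 5 "ignored"   -- fst takes a pair; curry lets it take two separate arguments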
Overloading is having multiple functions with the same name but different parameters.
Currying is where you can take a function of multiple parameters and fix some of them, so that you may be left with a function of just one variable, for example.
So, if you have a graphing function in 3 dimensions, you may have:
justgraphit(double[] x, double[] y, double[] z), and you want to graph it.
By currying you could have:
var fx = justgraphit(xlist), where you have now fixed the x data, so fx is a function of the remaining two variables.
Then, later on, the user picks another axis (date) and you fix the y data, so now you have:
var fy = fx(ylist)
Then, later, you graph the information by just looping over some data, where the only variability is the z parameter.
This makes complicated functions simpler, as you don't have to keep passing what are largely fixed variables, so readability increases.
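The same idea rendered in Haskell, as a sketch (justgraphit and the Graph type are hypothetical stand-ins for the example above):
type Graph = String                       -- placeholder result type
justgraphit :: [Double] -> [Double] -> [Double] -> Graph
justgraphit xs ys zs = "graph of " ++ show (xs, ys, zs)
fx :: [Double] -> [Double] -> Graph
fx = justgraphit [1, 2, 3]                -- x data fixed
fy :: [Double] -> Graph
fy = fx [4, 5, 6]                         -- y data fixed as well
graphs :: [Graph]
graphs = map fy [[7], [8], [9]]           -- in the loop, only z varies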

Does implementation of ++i vs. i++ vary from language to language?

I recently read:
"The expressions (++i) and (i++) have values and side effects.
The side effect is that the value in i is increased by 1.
The value of (i++) is the value before the increment and
the value of (++i) is the value after the increment,
but whether the increment or the evaluation takes place first,
is not part of C."
I know the evaluative step takes place first in Java... is it the same for all other languages?
At least in C++, operators can be overloaded, so the semantics of ++i and i++ are not guaranteed: they can in fact be overloaded to do very different things, and can even be made to do something that has nothing to do with incrementing. So the answer to your question is no: in at least one language, the postfix and prefix ++ operators for classes can do whatever the programmer wishes.
But just because someone can do that doesn't mean they should. Since the pre- and post-increment operators have very well-known semantics, (decent) C++ programmers try not to violate them, lest the code that uses them be most surprised.
A good example of operator overloading in C++ is STL iterators. Iterators for containers like linked lists are classes that overload the pre-increment and post-increment operators in such a way that they mimic a pointer (iterators in C++ are in fact a generalization of pointers).