Formal grammar: removing unit productions

Assume that I have two nonterminals X and Y with the productions
X → AL | BL | X,
Y → CK | DK | X.
I guess it is not possible to eliminate the unit production in X, but is this the case for Y?

You can remove the unit production X → X, since it has no effect: any application of X → X in a derivation can simply be dropped and replaced by one of the other X productions.
Having done that, you can then remove all unit productions for Y by replacing Y → X with Y → ω for each remaining production X → ω.
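Concretely, for the grammar above this gives:
X → AL | BL
Y → CK | DK | AL | BL
with no unit productions left and the same generated language.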

Are variables bound to free variables still free variables?

I am looking at some questions in my textbook about whether variables are free or bound. I am not sure about these two in particular.
First off, I want to make sure I understand the concept of free vs. bound. I am fairly sure that x is a free variable in the following:
variable x is free in expression "x"
I believe this is true but I just want to make sure.
However, I am not really sure about these two expressions:
(/ (+ 1 x) (let x 2 (+ x x))),
(let x y (/ (+ 1 x) (let x 2 (+ x x))))
For the top expression, the x in the first subexpression is unbound (right?) but the x in the second subexpression is bound to 2, so would that mean that x, with respect to the expression as a whole, is unbound?
For the bottom expression, x is bound to y, but y is a free variable(?). So would x be free because y is free, or is it bound because x is still bound to y?
For (/ (+ 1 x) (let x 2 (+ x x))), the x in the first subexpression is unbound but the x in the second subexpression is bound to 2, so would that mean that x, with respect to the expression as a whole, is unbound?
Yes. Although I would use the terminology "is bound" or "is free" only with respect to a concrete variable expression, not to a name. As you can see, it's ambiguous what "x" refers to.
I'd say "the whole expression has a free variable x", which is what you usually care about when trying to evaluate the expression.
For (let x y (/ (+ 1 x) (let x 2 (+ x x)))), x is bound to y, but y is a free variable. So would x be free because y is free, or is it bound because x is still bound to y?
The x is bound (and can be substituted). y is free.
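One way to make this precise is to compute the set of free variables recursively. Here is a minimal Haskell sketch, assuming a small illustrative expression type (Expr, freeVars, and the constructor names are mine, not from the question):

import Data.Set (Set)
import qualified Data.Set as Set

-- A tiny expression type mirroring the examples above.
data Expr
  = Var String            -- x
  | Num Integer           -- 2
  | Op String Expr Expr   -- (+ e1 e2), (/ e1 e2)
  | Let String Expr Expr  -- (let x e body)

-- A Let binds its name in the body only; the bound expression e
-- is still evaluated in the enclosing scope.
freeVars :: Expr -> Set String
freeVars (Var x)        = Set.singleton x
freeVars (Num _)        = Set.empty
freeVars (Op _ a b)     = freeVars a `Set.union` freeVars b
freeVars (Let x e body) = freeVars e `Set.union` Set.delete x (freeVars body)

For the bottom expression this yields exactly {y}: every occurrence of x is deleted by an enclosing Let, while y escapes.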

Does the predicate `contracting/1` restore deleted inconsistent values?

This question follows up on another one I posted earlier about custom labeling in Prolog.
Does the contracting/1 predicate, when used after a value assignment to a variable in a custom labeling predicate, delete the "inconsistent" values from the domain permanently? Or are these values restored on backtracking?
These values are of course restored on backtracking.
It is the nature of pure Prolog predicates, such as CLP(FD) constraints, that everything they state is completely undone on backtracking. Without this, many important declarative properties would not hold. See logical-purity for more information.
You can easily see that this also holds for clpfd:contracting/1, using for example the following sample session:
?- X in 0..5, X mod Y #= 2, Y in 0..2.
X in 0..5,
X mod Y#=2,
Y in 1..2.
?- X in 0..5, X mod Y #= 2, Y in 0..2, clpfd:contracting([X,Y]).
false.
?- X in 0..5, X mod Y #= 2, Y in 0..2, ( clpfd:contracting([X,Y]) ; true ).
X in 0..5,
X mod Y#=2,
Y in 1..2.
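In the last query, clpfd:contracting/1 detects the inconsistency and fails; Prolog then backtracks into the second disjunct, and the original domains are reported unchanged, showing that nothing was deleted permanently.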

Proving linear equations/inequalities automatically

I'm looking for a tool for determining whether a given set of linear equations/inequalities (A) entails another given set of linear equations/inequalities (B). The return value should be either 'true' or 'false'.
To illustrate, let's look at possible instances of A and B and the expected return value of the algorithm:
A: {Z = 3, Y = Z + 2, X < Y}
B: {X < 5}
Expected result: true
A: {Z = 3, Y = Z + 2, X < Y}
B: {X < 10}
Expected result: true
A: {Z = 3, Y = Z + 2, X < Y}
B: {X = 3}
Expected result: false
A: {X <= Y, X >= Y}
B: {X = Y}
Expected result: true
A: {X <= Y, X >= Y}
B: {X = Y, X > Z + 2}
Expected result: false
Typically A contains up to 10 equations/inequalities, and B contains 1 or 2. All of them are linear and relatively simple. We may even assume that all variables are integers.
This task of determining whether A entails B is part of a bigger system, so I'm looking for existing tools/source code/packages that implement something like this and that I can use.
Things I started to look at:
Theorem provers with algebra - Otter, EQP and Z3 (Vampire is currently unavailable for download).
Coq formal proof management system.
Linear Programming.
However, my experience with these tools is very limited, and so far I haven't found a promising direction. Any guidelines and ideas from people more experienced than I am will be highly appreciated.
Thank you for your time!
I think I found a working solution. The problem can be rephrased as a satisfiability question about variable assignments, and it can then be solved by theorem provers such as Z3, and with some work probably also by linear programming solvers.
To illustrate, let's look at the first example I gave above:
A: {Z = 3, Y = Z + 2, X < Y}
B: {X < 5}
Determining whether A entails B is equivalent to determining whether it is impossible for A to be true while B is false; this is a simple logical equivalence. In our case, it means that rather than checking whether the condition in B follows from the ones in A, we can check whether there is no assignment of X, Y and Z that satisfies the conditions in A but violates the one in B.
Phrased this way, a theorem prover such as Z3 can be called for the task. The following code checks whether the conditions in A are satisfiable while the one in B is not:
(declare-fun x () Int)
(declare-fun y () Int)
(declare-fun z () Int)
(assert (= z 3))
(assert (= y (+ z 2)))
(assert (< x y))
(assert (not (< x 5)))
(check-sat)
(get-model)
(exit)
Z3 reports that there is no model satisfying these assertions (unsat), so no assignment makes A true and B false, and therefore we can conclude that A entails B.
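For contrast, here is a sketch of the same encoding for the third example above (B: {X = 3}), where entailment should fail; only the negated assertion changes:

(declare-fun x () Int)
(declare-fun y () Int)
(declare-fun z () Int)
(assert (= z 3))
(assert (= y (+ z 2)))
(assert (< x y))
(assert (not (= x 3)))
(check-sat)
(get-model)
(exit)

Here Z3 answers sat and (get-model) produces a concrete counterexample (for instance x = 4), i.e. an assignment satisfying A but not B, so A does not entail B.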

Theoretical Computer Science - context-sensitive grammar?

How can it be that the rule Aa → aA is context-sensitive? According to the definition, context-sensitive rules have to be of the form
αAβ → αγβ
where
A ∈ N, α,β ∈ (N∪Σ)* and γ ∈ (N∪Σ)+
Thanks.
It depends on what you mean. If you scroll down the Wikipedia entry, you can see that, formally,
cB → Bc
does not fit the scheme, but it can be simulated by 4 rules that do fit it:
c B → W B
W B → W X
W X → B X
B X → B c
So Aa → aA is not a CSG rule in itself, but the language it generates is context-sensitive. Perhaps whoever told you it is was using it as a shorthand (you could extend the definition of CSG rules to include these kinds of rules as "macros").
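To see that the four rules really simulate cB → Bc, apply them in order:
c B ⇒ W B ⇒ W X ⇒ B X ⇒ B c
Each step rewrites a single nonterminal in a one-symbol context, so each rule fits the αAβ → αγβ scheme.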

Finite difference in Haskell, or how to disable potential optimizations

I'd like to implement the following naive (first order) finite differencing function:
finite_difference :: Fractional a => a -> (a -> a) -> a -> a
finite_difference h f x = ((f $ x + h) - (f x)) / h
As you may know, there is a subtle problem: one has to make sure that (x + h) and x differ by an exactly representable number. Otherwise, the result has a huge error, amplified by the fact that (f $ x + h) - (f x) involves catastrophic cancellation (one also has to carefully choose h, but that is not my problem here).
In C or C++, the problem can be solved like this:
volatile double temp = x + h;
h = temp - x;
and the volatile modifier disables any optimization pertaining to the variable temp, so we are assured that a "clever" compiler will not optimize away those two lines.
I don't know enough Haskell yet to know how to solve this. I'm afraid that
let temp = x + h
    hh   = temp - x
in ((f $ x + hh) - (f x)) / h
will get optimized away by Haskell (or the backend it uses). How do I get the equivalent of volatile here (if possible without sacrificing laziness)? I don't mind GHC specific answers.
I have two solutions and a suggestion:
First solution: You can guarantee that this won't be optimized out with two helper functions and the NOINLINE pragma:
-- NOINLINE keeps GHC from seeing (and simplifying away) the
-- x + h - x round trip across these two calls.
norm1 x h = x + h
{-# NOINLINE norm1 #-}
norm2 x tmp = tmp - x
{-# NOINLINE norm2 #-}
normh x h = norm2 x (norm1 x h)
This will work, but will introduce a small cost.
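To connect this back to the question, here is a sketch of the differencing function on top of normh (monomorphised to Double for concreteness; I divide by the adjusted step hh rather than h, since x + hh and x now differ by exactly hh):

finite_difference :: Double -> (Double -> Double) -> Double -> Double
finite_difference h f x = (f (x + hh) - f x) / hh
  where hh = normh x h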
Second solution: Write the normalization function in C using volatile and call it through the FFI. The performance penalty will be minimal.
Now for the suggestion: currently the math isn't optimized out, so it will work properly at present. You're afraid it will break under a future compiler. I think this is unlikely, but not so unlikely that I wouldn't want to guard against it. So write some unit tests that cover the cases in question; then if it does break in the future (for any reason), you'll know exactly why.
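For instance, a minimal sanity check along those lines (plain asserts rather than a test framework; the step 1e-8 and tolerance 1e-5 are illustrative choices):

main :: IO ()
main = do
  -- The derivative of x^2 at 2 is 4; with the normalized step the
  -- estimate should land well within the tolerance.
  let d = finite_difference 1e-8 (\x -> x * x) 2.0
  if abs (d - 4.0) < 1e-5
    then putStrLn "ok"
    else error ("derivative estimate off: " ++ show d)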
One way is to look at the Core.
Specializing to Doubles (which will be the case most likely to trigger some optimization):
finite_difference :: Double -> (Double -> Double) -> Double -> Double
finite_difference h f x = ((f $ x + hh) - (f x)) / h
  where
    temp = x + h
    hh   = temp - x
Is compiled to:
A.$wfinite_difference h f x =
  case f (case x of
            D# x' -> D# (+## x' (-## (+## x' h) x'))
         ) of
    D# x'' -> case f x of D# y -> /## (-## x'' y) h
And similarly (with even less rewriting) for the polymorphic version.
So while the variables are inlined, the math isn't optimized away.
Beyond looking at the Core, I can't think of a way to guarantee the property you want.
I don't think that
temp = unsafePerformIO $ return $ x + h
would get optimised out. Just a guess.