Question about defining grammars

I'm studying grammars and am a bit confused about how to design grammars where one value depends on another.
For example, say I want to define a grammar that produces exactly the following three sentences:
i + i = ii : BASE CASE
iiii + ii = iiiiii (that's 4 i's + 2 i's equals 6 i's)
iii + i = iiii (3 i's + 1 i equals 4 i's)
How would I go about this? The part that confuses me is that if the first 'value' is iiii, then the second can only be ii, not i or iii.
Thanks in advance!

Grammars are trivial if your language is finite:
S → "i + i = ii"
S → "iiii + ii = iiiiii"
S → "iii + i = iiii"
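As a sanity check, a few lines of Python (an illustrative sketch, not part of the grammar formalism; the function name is mine) can confirm that each right-hand side really encodes a correct unary addition, i.e. that the count of i's on the left of '=' matches the count on the right:

```python
# Each production's right-hand side, as a plain string.
productions = [
    "i + i = ii",
    "iiii + ii = iiiiii",
    "iii + i = iiii",
]

def is_valid_unary_sum(sentence):
    """Check that #i's(first operand) + #i's(second operand) == #i's(result)."""
    lhs, result = sentence.split(" = ")
    a, b = lhs.split(" + ")
    return len(a) + len(b) == len(result)

for s in productions:
    assert is_valid_unary_sum(s), s
print("all productions encode valid sums")
```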

Related

Axiomatic semantics: what are the weakest preconditions?

I was studying axiomatic semantics, which has been a real pain. Everything was great so far until I met these questions. I got stuck on question 2, which has an 'and' in the postcondition.
What are the weakest preconditions?
1)
if (x > y)
c = x * 2 + 4
else
a = x + 4;
{a > 4 and c < 6}
2)
if (x > y)
e = x * 2 + 4
else
f = x + 5;
{f > 4 and e > 6}
I've never seen a postcondition with an 'and'; it was pretty confusing.
When I tried to figure out the first one:
(precondition for if)
a>4 and 2x+4<6
a>4 and 2x<2
a>4 and x<1
(precondition for else)
x+4 >4 and c<6
x>0 and c<6
I couldn't apply the rule of consequence, because there are three variables and the comparisons on x point in different directions, which makes it hard to figure out which condition is stronger and which is weaker.
can anyone help this poor computer noob :( ?

Constructing a linear grammar for the language

I have difficulty constructing a grammar for a language, especially a linear grammar.
Can anyone give me some basic tips or a methodology for constructing the grammar for a given language? Thanks in advance.
I also have a doubt whether the following answer to the question "Construct a linear grammar for the language:" is right:
L ={a^n b c^n | n belongs to Natural numbers}
Solution:
Right-Linear Grammar :
S--> aS | bA
A--> cA | ^
Left-Linear Grammar:
S--> Sc | Ab
A--> Aa | ^
As pointed out in the comments, these grammars are wrong since they generate strings not in the language. Here's a derivation of abcc in both grammars:
S -> aS -> abA -> abcA -> abccA -> abcc
S -> Sc -> Scc -> Abcc -> Aabcc -> abcc
Also as pointed out in the comments, there is a simple linear grammar for this language, where a linear grammar is defined as having at most one nonterminal symbol in the RHS of any production:
S -> aSc | b
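The single linear production S -> aSc | b translates directly into a recursive recognizer. An illustrative Python sketch (the function name is mine) that accepts exactly the strings a^n b c^n:

```python
def in_language(s):
    """Recognize L = { a^n b c^n | n >= 0 } via the grammar S -> aSc | b."""
    if s == "b":                                  # production S -> b
        return True
    if len(s) >= 3 and s[0] == "a" and s[-1] == "c":
        return in_language(s[1:-1])               # production S -> aSc
    return False

assert in_language("b")
assert in_language("aabcc")
assert not in_language("abcc")   # the bad string derived by the wrong grammars above
```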
There are some general rules for constructing grammars for languages. These are either obvious simple rules or rules derived from closure properties and the way grammars work. For instance:
if L = {a} for an alphabet symbol a, then S -> a is a grammar for L.
if L = {e} for the empty string e, then S -> e is a grammar for L.
if L = R U T for languages R and T, then S -> S' | S'' along with the grammars for R and T are a grammar for L if S' is the start symbol of the grammar for R and S'' is the start symbol of the grammar for T.
if L = RT for languages R and T, then S -> S'S'' is a grammar for L if S' is the start symbol of the grammar for R and S'' is the start symbol of the grammar for T.
if L = R* for language R, then S -> S'S | e is a grammar for L if S' is the start symbol of the grammar for R.
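These constructions are mechanical enough to write down in code. Here is an illustrative Python sketch of rules 3 and 4 (union and concatenation); the dict-of-productions representation and the function names are my own choices, and both helpers assume the two input grammars use disjoint nonterminals, neither of which is "S":

```python
def union(g1, s1, g2, s2):
    """Rule 3: grammar for R U T. g1/g2 map nonterminals to lists of RHSs."""
    g = {**g1, **g2}
    g["S"] = [[s1], [s2]]          # S -> S' | S''
    return g, "S"

def concat(g1, s1, g2, s2):
    """Rule 4: grammar for RT."""
    g = {**g1, **g2}
    g["S"] = [[s1, s2]]            # S -> S'S''
    return g, "S"

# Grammars for R = {a} and T = {b}, with distinct start symbols:
gR, gT = {"R": [["a"]]}, {"T": [["b"]]}
g, start = concat(gR, "R", gT, "T")
# g now derives exactly the string "ab": S -> RT -> aT -> ab
```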
Rules 4 and 5, as written, do not preserve linearity. Linearity can be preserved for left-linear and right-linear grammars (since those grammars describe regular languages, and regular languages are closed under these kinds of operations); but linearity cannot be preserved in general. To prove this, an example suffices:
R -> aRb | ab
T -> cTd | cd
L = RT = { a^n b^n c^m d^m | n, m >= 1 }
L' = R* = (a^n b^n)*, n >= 1 in each factor
Suppose there were a linear grammar for L. We must have a production for the start symbol S that produces something. To produce something, we require a string of terminal and nonterminal symbols. To be linear, we must have at most one nonterminal symbol. That is, our production must be of the form
S := xYz
where x is a string of terminals, Y is a single nonterminal, and z is a string of terminals. If x is non-empty, reflection shows the only useful choice is a; anything else fails to derive known strings in the language. Similarly, if z is non-empty, the only useful choice is d. This gives four cases:
x empty, z empty. This is useless, since we now have the same problem to solve for nonterminal Y as we had for S.
x = a, z empty. Y must now generate exactly a^n' b^n' b c^m d^m where n' = n - 1. But then the exact same argument applies to the grammar whose start symbol is Y.
x empty, z = d. Y must now generate exactly a^n b^n c c^m' d^m' where m' = m - 1. But then the exact same argument applies to the grammar whose start symbol is Y.
x = a, z = d. Y must now generate exactly a^n' b^n' bc c^m' d^m' where n' and m' are as in 2 and 3. But then the exact same argument applies to the grammar whose start symbol is Y.
None of the possible choices for a useful production for S is actually useful in getting us closer to a string in the language. Therefore, no strings are derived, a contradiction, meaning that the grammar for L cannot be linear.
Suppose there were a linear grammar for L'. Then that grammar has to generate all the strings in (a^n b^n)R(a^m b^m), plus those in {e} and R. But it can't generate the former, by the argument used above: any production useful for that purpose would get us no closer to a string in the language.

Let Σ={a,b,c}. How many languages over Σ are there such that each string in the language has length 2 or less?

First of all I see the number of strings as the following:
1 (epsilon, the length-0 string) + 3 (pick one letter) + 9 (3 options for the first letter times 3 for the second)
For a total of 13 strings. Now, as far as I know, a language can be any combination of these, for example l1 = {ab, a, ac}, l2 = {c}.
I'm not sure how to calculate the total number of languages there could be here. Any Help?
So you have a set with 13 elements. A particular language could be any subset of this set. How many subsets does this set have?
This is called the power set of that set, and it has 2^13 = 8192 elements.
Cardinality of character set, say d = 3.
Total words possible of length (<= k), say w = (d^(k+1) - 1)/(d-1) = 13.
Total languages possible = Power Set {Each word can be included or not} = 2^w = 8192.
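The counting above can also be confirmed by brute force. An illustrative Python sketch (variable names are mine) that builds the 13 words and counts the possible languages:

```python
from itertools import product

sigma = "abc"
# All strings over sigma of length <= 2: 1 + 3 + 9 = 13.
words = [""] + ["".join(p) for k in (1, 2) for p in product(sigma, repeat=k)]
assert len(words) == 13

# Each word is independently in or out of a language, so 2^13 subsets.
num_languages = 2 ** len(words)
print(num_languages)  # 8192
```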

Parametrization of primitive Pythagorean triples

I have already written an algorithm to find integer Pythagorean triples, but unfortunately the algorithm runs at O(n^3). Does anyone know how to use parametrization to find Pythagorean triples? If so, can you explain this process to me?
There is Euclid's formula for generating primitive Pythagorean triples:
for all integers m > n > 0 where m = n + 1 + 2p for some integer p >= 0 (i.e. m - n is odd) and m and n are coprime:
a = m^2 - n^2
b = 2 * m * n
c = m^2 + n^2
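A sketch of this in Python (the function name and bound are mine): iterate over coprime pairs m > n with m - n odd, which yields each primitive triple exactly once, and enumerating all triples with hypotenuse up to N costs only about O(N) pairs rather than O(N^3):

```python
from math import gcd

def primitive_triples(limit):
    """Yield primitive Pythagorean triples (a, b, c) with c <= limit,
    using Euclid's formula: a = m^2 - n^2, b = 2mn, c = m^2 + n^2."""
    m = 2
    while m * m + 1 <= limit:          # smallest possible c for this m
        for n in range(1, m):
            if (m - n) % 2 == 1 and gcd(m, n) == 1:
                a, b, c = m*m - n*n, 2*m*n, m*m + n*n
                if c <= limit:
                    yield a, b, c
        m += 1

print(sorted(primitive_triples(30)))
# [(3, 4, 5), (5, 12, 13), (7, 24, 25), (15, 8, 17), (21, 20, 29)]
```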
Sorry to perform necromancy, but take a look at this article published in Mathematics Teacher several years ago: http://www.scribd.com/doc/191694547/Calculating-Pythagorean-Triples
It might be relevant.

Hoare Logic, while loop with '<='

I'm working on some Hoare logic and I am wondering whether my approach is the right one.
I have the following program P:
s = 0
i = 1
while (i <= n) {
s = s + i
i = i + 1
}
It should satisfy the Hoare triple {n >= 0} P {s = n*(n+1)/2} (so it just computes the sum 1 + ... + n). Now, initially I had |s = i*(i-1)/2| as my invariant, which works fine. However, I had a problem going from the end of my loop to my desired postcondition, because for the implication
|s = i*(i-1)/2 & i > n|
=>
| s = n * (n+1) / 2 |
to hold, I need to prove that i is n+1, and not just any i bigger than n. So what I thought of was to add (i <= n + 1) to the invariant, so that it becomes:
|s = i * (i-1)/2 & i <= n+1|
Then I can prove the program, so I think it should be correct.
Nonetheless, I find the invariant a bit less "invariant-like" :), and not like anything I've seen in the course or in the exercises so far, so I was wondering if there was a more elegant solution here.
So what I thought of was to add (i <= n + 1) to the invariant, so that it becomes:
|s = i * (i-1)/2 & i <= n+1|
Nonetheless, I find the invariant a bit less "invariant-like" :), and not like anything I've seen in the course or in the exercises so far, so I was wondering if there was a more elegant solution here.
Nope, given the way the code is written, that's exactly the way to go. (I can tell from experience since I've been teaching Hoare logic during several semesters in two different courses and since it's part of my graduate studies.)
Using i <= n is common practice when programming. In your particular program, you could just as well have written i != n+1 instead, in which case your first invariant (which indeed looks cleaner) would have sufficed, since you get
| s=i*(i-1)/2 & i=n+1 |
=>
| s=n*(n+1)/2 |
which evidently holds.
There is another way to reason, given a more appropriate invariant (and different code): make n the final value of i.
I : s = i*(i+1)/2 and 0 <= i <= n
B : i < n
Now, evidently, you have for the postcondition:
I and i >= n => s = i*(i+1)/2 and i = n => s = n*(n+1)/2
The code now becomes
s = 0
i = 0
while (i < n) {
s = s + (i+1)
i = i + 1
}
The invariant holds at initialization and is preserved by each iteration: rewriting I as 2s = i*(i+1), we have to prove
I and i < n => 2(s + (i+1)) = (i+1)*(i+2)
and indeed
2(s + (i+1)) =
2s + 2(i+1) =
i*(i+1) + 2(i+1) = (since I holds)
(i+1)*(i+2)
Qed.
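The whole argument can also be checked mechanically. Here is an illustrative Python sketch (the function name is mine) of the rewritten program, with the invariant I asserted at initialization and after every iteration, and the postcondition asserted at exit:

```python
def gauss_sum(n):
    """Sum 1 + ... + n, checking the invariant I at each step."""
    s, i = 0, 0
    assert 2 * s == i * (i + 1) and 0 <= i <= n      # I holds at init
    while i < n:                                      # guard B: i < n
        s = s + (i + 1)
        i = i + 1
        assert 2 * s == i * (i + 1) and 0 <= i <= n  # I preserved
    assert i == n and s == n * (n + 1) // 2          # postcondition
    return s

assert all(gauss_sum(n) == n * (n + 1) // 2 for n in range(50))
```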