Bounded PDA and context-free languages

How can we prove that:
Class of linear bounded PDA languages ∼ Class of CFLs
I know that for a linear bounded PDA there is a constant k such that, on input w, the stack size is at most k|w|. I have also read about PDAs and CFLs and know that a pushdown automaton is essentially a finite automaton with a stack as storage, and that a grammar G = (V, T, S, P) is said to be context-free if all productions in P have the form
A → x
where A ∈ V is a single variable and x ∈ (V ∪ T)* is a string of variables and terminals.
How can this be used to show that they are equivalent?

What exactly does small-oh notation mean?

What is the difference between Big-O notation O(n) and Little-O notation o(n)?
f ∈ O(g) says, essentially
For at least one choice of a constant k > 0, you can find a constant a such that the inequality 0 <= f(x) <= k g(x) holds for all x > a.
Note that O(g) is the set of all functions for which this condition holds.
f ∈ o(g) says, essentially
For every choice of a constant k > 0, you can find a constant a such that the inequality 0 <= f(x) < k g(x) holds for all x > a.
Once again, note that o(g) is a set.
In Big-O, it is only necessary that you find a particular multiplier k for which the inequality holds beyond some minimum x.
In Little-o, it must be that there is a minimum x after which the inequality holds no matter how small you make k, as long as it is not negative or zero.
These both describe upper bounds, although somewhat counter-intuitively, Little-o is the stronger statement. There is a much larger gap between the growth rates of f and g if f ∈ o(g) than if f ∈ O(g).
One illustration of the disparity is this: f ∈ O(f) is true, but f ∈ o(f) is false. Therefore, Big-O can be read as "f ∈ O(g) means that f's asymptotic growth is no faster than g's", whereas "f ∈ o(g) means that f's asymptotic growth is strictly slower than g's". It's like <= versus <.
More specifically, if the value of g(x) is a constant multiple of the value of f(x), then f ∈ O(g) is true. This is why you can drop constants when working with big-O notation.
However, for f ∈ o(g) to be true, g must grow strictly faster than f, so the relative separation between f(x) and g(x) must actually get larger as x gets larger; equivalently, the ratio f(x)/g(x) must shrink toward 0.
To use purely math examples (rather than referring to algorithms):
The following are true for Big-O, but would not be true if you used little-o:
x² ∈ O(x²)
x² ∈ O(x² + x)
x² ∈ O(200 * x²)
The following are true for little-o:
x² ∈ o(x³)
x² ∈ o(x!)
ln(x) ∈ o(x)
Note that if f ∈ o(g), this implies f ∈ O(g). e.g. x² ∈ o(x³) so it is also true that x² ∈ O(x³), (again, think of O as <= and o as <)
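If it helps to see this numerically, here is a small Python sketch (the sample sizes are arbitrary and the pairs are just the examples above): it prints the ratio f(n)/g(n) for growing n. When f ∈ O(g) but not o(g) the ratio merely stays bounded; when f ∈ o(g) it shrinks toward 0.

import math

# Ratio f(n)/g(n) for growing n: bounded for O, vanishing for o.
examples = [
    ("x^2 vs 200*x^2 (O, not o)", lambda n: n**2 / (200 * n**2)),
    ("x^2 vs x^3 (o)",            lambda n: n**2 / n**3),
    ("ln(x) vs x (o)",            lambda n: math.log(n) / n),
]

for name, ratio in examples:
    print(name, [round(ratio(n), 6) for n in (10, 1_000, 1_000_000)])

The first ratio stays at 1/200 no matter how large n gets, while the other two head to 0, which is exactly the <= versus < distinction.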
Big-O is to little-o as ≤ is to <. Big-O is an inclusive upper bound, while little-o is a strict upper bound.
For example, the function f(n) = 3n is:
in O(n²), o(n²), and O(n)
not in O(lg n), o(lg n), or o(n)
Analogously, the number 1 is:
≤ 2, < 2, and ≤ 1
not ≤ 0, < 0, or < 1
Here's a table showing the general idea: it pairs each notation with the comparison operator it behaves like (o ↔ <, O ↔ ≤, Θ ↔ =, Ω ↔ ≥, ω ↔ >) and with a limit condition on f(n)/g(n).
(Note: the table is a good guide but its limit definition should be in terms of the superior limit instead of the normal limit. For example, 3 + (n mod 2) oscillates between 3 and 4 forever. It's in O(1) despite not having a normal limit, because it still has a lim sup: 4.)
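For reference, the limit conditions such a table encodes are the standard ones below (stated with lim sup / lim inf, and assuming g(n) > 0 for all large n):
f ∈ o(g) ⇔ lim sup f(n)/g(n) = 0
f ∈ O(g) ⇔ lim sup f(n)/g(n) < ∞
f ∈ Θ(g) ⇔ lim inf f(n)/g(n) > 0 and lim sup f(n)/g(n) < ∞
f ∈ Ω(g) ⇔ lim inf f(n)/g(n) > 0
f ∈ ω(g) ⇔ lim inf f(n)/g(n) = ∞
(all limits taken as n → ∞). In the oscillating example, 3 + (n mod 2) over g(n) = 1 has lim sup 4 < ∞, so it is in O(1) even though the ordinary limit doesn't exist.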
I recommend memorizing how the Big-O notation converts to asymptotic comparisons. The comparisons are easier to remember, but less flexible because you can't say things like n^O(1) = P.
I find that when I can't conceptually grasp something, thinking about why one would use X is helpful to understand X. (Not to say you haven't tried that, I'm just setting the stage.)
Stuff you know: A common way to classify algorithms is by runtime, and by citing the big-Oh complexity of an algorithm, you can get a pretty good estimation of which one is "better" -- whichever has the "smallest" function in the O! Even in the real world, O(N) is "better" than O(N²), barring silly things like super-massive constants and the like.
Let's say there's some algorithm that runs in O(N). Pretty good, huh? But let's say you (you brilliant person, you) come up with an algorithm that runs in O(N / log log log log N). YAY! It's faster! But you'd feel silly writing that over and over again when you're writing your thesis. So you write it once, and you can say "In this paper, I have proven that algorithm X, previously computable in time O(N), is in fact computable in o(N)."
Thus, everyone knows that your algorithm is faster --- by how much is unclear, but they know it's faster. Theoretically. :)
In general
Asymptotic notation is something you can understand as: how do functions compare when zooming out? (A good way to test this is simply to use a tool like Desmos and play with your mouse wheel). In particular:
f(n) ∈ o(n) means: at some point, the more you zoom out, the more f(n) will be dominated by n (it will progressively diverge from it).
g(n) ∈ Θ(n) means: at some point, zooming out will not change how g(n) compares to n (if we removed the ticks from the axes, you couldn't tell the zoom level).
Finally, h(n) ∈ O(n) means that the function h can be in either of these two categories: it can either look a lot like n, or it can become smaller and smaller relative to n as n increases. Basically, both f(n) and g(n) are also in O(n).
I think this Venn diagram (adapted from this course) could help: o(n) and Θ(n) are disjoint regions sitting inside O(n).
It's exactly the same as what we use for comparing numbers: < and = are both contained in ≤.
In computer science
In computer science, people will usually prove that a given algorithm admits both an upper bound O and a lower bound 𝛺. When the two bounds meet, it means that we have found an asymptotically optimal algorithm for that particular problem, with complexity Θ.
For example, if we prove that the complexity of an algorithm is both in O(n) and in 𝛺(n), it implies that its complexity is in Θ(n). (That's the definition of Θ, and it more or less translates to "asymptotically equal".) When the 𝛺(n) bound holds for the problem itself, it also means that no algorithm can solve the given problem in o(n), roughly: "this problem can't be solved in (strictly) fewer than n steps".
Usually, o is used within lower-bound proofs to derive a contradiction. For example:
Suppose algorithm A can find the minimum value in an array of size n in o(n) steps. Since A runs in o(n) steps, for large enough n it cannot read all items of the input. In other words, there is at least one item x which A never reads. Then A can't tell the difference between two similar input instances that differ only in x's value. If x is the minimum in one of these instances and not in the other, then A will fail to find the minimum on (at least) one of them. In other words, finding the minimum in an array is in 𝛺(n) (no algorithm running in o(n) steps can solve the problem).
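To make that argument tangible, here is a toy Python sketch (sloppy_min is a made-up stand-in for a hypothetical algorithm that, because it runs in o(n) steps, cannot read every item):

def sloppy_min(a):
    # Stand-in for an algorithm that does not read every item:
    # it simply ignores the last element of the array.
    return min(a[:-1])

x = [5, 2, 7, 9]   # true minimum is 2
y = [5, 2, 7, 1]   # identical to x except in the unread position; true minimum is 1

print(sloppy_min(x), min(x))   # 2 2  -> happens to be correct
print(sloppy_min(y), min(y))   # 2 1  -> wrong: the skipped item was the minimum

Whatever item such an algorithm skips, an adversary can plant the minimum exactly there, which gives the contradiction.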
Details about lower/upper bound meanings
An upper bound of O(n) simply means that, even in the worst case, the algorithm terminates in at most n steps (ignoring all constant factors, both multiplicative and additive). A lower bound of 𝛺(n) is a statement about the problem itself: it says that we exhibited some example(s) on which the problem cannot be solved by any algorithm in fewer than n steps (again ignoring multiplicative and additive constants). The number of steps is both at most n and at least n, so the problem's complexity is "exactly n". Instead of saying "ignoring constant multiplicative/additive factors" every time, we just write Θ(n) for short.
The big-O notation has a companion called small-o notation. The big-O notation says that one function is asymptotically no more than another. To say that one function is asymptotically less than another, we use small-o notation. The difference between the big-O and small-o notations is analogous to the difference between <= (less than or equal) and < (less than).

Definition of the "DFA for a language"

I just started learning Theory of Computation this semester and am a bit confused by the phrase "DFA for a language". If we are asked to construct a DFA for some collection of binary strings L, does that mean finding a DFA M with L(M) = L, or just L(M) ⊃ L?
Most compiler/theory courses tend to have different styles surrounding teaching definitions of deterministic finite automata and formal languages, but I'll try to make this description as agnostic as possible.
The phrase "DFA for a language" loosely means: a DFA which accepts every word in the language and rejects every word not in the language.
The way I was taught DFAs is to have final/accepting states and regular states which removes the necessity for an implicit error state.
This means that a DFA accepts a word if the state it is in at the end of input is accepting and it rejects the word if the state is not accepting.
Ex:
Let's define L as the language which contains an even number of 1s. These will be binary strings so the symbols are just 0 and 1.
00, 110, 1111, etc. are examples of words in this language. Notice that the empty string is in this language.
We can have two states in our DFA. The starting state, let's call it even 1s, is also an accepting state because 0 ones is even. The other state is odd 1s, this is not accepting.
As for transitions, when even 1s receives a 1, it transitions to odd 1s. And when odd 1s receives a 1, it transitions to even 1s.
Now, the number of 0s doesn't matter, so on a 0 either state transitions to itself.
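For what it's worth, here is that machine written out as a small Python sketch (the state names "even" and "odd" mirror the description above; the code itself is just an illustration):

# Transition table of the "even number of 1s" DFA described above.
delta = {("even", "0"): "even", ("even", "1"): "odd",
         ("odd",  "0"): "odd",  ("odd",  "1"): "even"}

def accepts(word):
    state = "even"                      # start state: zero 1s is an even count
    for symbol in word:
        state = delta[(state, symbol)]
    return state == "even"              # accept iff the run ends in the accepting state

for w in ["", "00", "110", "1111", "111"]:
    print(repr(w), accepts(w))          # the first four are accepted, "111" is not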
Deterministic Finite Automaton (DFA)
In a DFA, for each input symbol one can determine the state to which the machine will move; hence it is called a Deterministic Automaton. As it has a finite number of states, the machine is called a Deterministic Finite Machine or Deterministic Finite Automaton.
Formal Definition of a DFA
A DFA can be represented by a 5-tuple (Q, Σ, δ, q0, F) where:
-> Q is a finite set of states.
-> Σ is a finite set of symbols called the alphabet.
-> δ is the transition function, δ: Q × Σ → Q.
-> q0 is the initial state from which any input is processed (q0 ∈ Q).
-> F is the set of final (accepting) states (F ⊆ Q).
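In code, that 5-tuple is nothing more than data plus one loop (a Python sketch; the toy machine here, accepting binary strings that end in 1, is my own example rather than one from the answers above):

# The 5-tuple (Q, Σ, δ, q0, F) written out as plain data.
Q     = {"s0", "s1"}                                  # states
Sigma = {"0", "1"}                                    # alphabet
delta = {("s0", "0"): "s0", ("s0", "1"): "s1",
         ("s1", "0"): "s0", ("s1", "1"): "s1"}        # δ: Q × Σ → Q
q0    = "s0"                                          # initial state
F     = {"s1"}                                        # final states

def accepts(word):
    state = q0
    for symbol in word:
        state = delta[(state, symbol)]                # exactly one move per symbol: deterministic
    return state in F

print(accepts("0101"), accepts("10"))                 # True False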
Write your question in a precise way. Here, a DFA for a language means that you need to construct a machine for that particular language only, not for a subset or superset of it: construct a DFA M for which L(M) = L.

Understanding fitness function

I am working on using a genetic algorithm to break a transposition cipher. In this work I have come across a paper named Breaking Transposition Cipher with Genetic Algorithm by R. Toemeh & S. Arumugam.
In this paper they use a fitness function, but I cannot understand it completely. In particular, I cannot understand the role of β and γ in the equation.
Can anyone please explain the fitness function? Here is the picture of the fitness function:
The weights β and γ can be varied to allow more or less
emphasis on particular statistics (they're determined "experimentally").
Kb(i, j) and Kt(i, j, k) are the known language bigram and trigram statistics; e.g. for English the most frequent bigrams are "th", "he", "in", ...
(further details in The frequency of bigrams in an English corpus)
Db(i, j) and Dt(i, j, k) are the bigram and trigram statistics of the message decrypted with the candidate key.
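I don't have the picture at hand, but n-gram cost functions of this kind generally have the following shape; take it as a reconstruction from the description above, not as a quote from the paper:
cost ≈ β · Σ_{i,j} |Kb(i, j) − Db(i, j)| + γ · Σ_{i,j,k} |Kt(i, j, k) − Dt(i, j, k)|
where the sums run over all bigrams (i, j) and trigrams (i, j, k) of the alphabet. The smaller this cost, the closer the decryption's statistics are to the language's, and the fitter the candidate key is considered.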
In A Generic Genetic Algorithm to Automate an Attack on Classical Ciphers by Anukriti Dureha and Arashdeep Kaur there are some reference values of β and γ (and α since they use an extended form of the above equation) and three types of ciphers.
Some further details about β and γ.
They're weights that remain constant during the evolution. They should be tuned experimentally ("optimal" values depend on the target language and the cipher algorithm).
Offline parameter tuning is the way to go, i.e.:
simple parameter sweep (try everything)
meta-GA
racing strategy
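Putting the pieces together, here is a minimal Python sketch of how such a cost could be computed (the helper names ngram_freqs and cost are mine, the Kb/Kt tables below are tiny made-up stand-ins for real corpus statistics, and the exact normalization is a design choice):

from collections import Counter

def ngram_freqs(text, n):
    # Relative frequencies of the n-grams occurring in `text`.
    counts = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    total = sum(counts.values()) or 1
    return {g: c / total for g, c in counts.items()}

def cost(decryption, K_bi, K_tri, beta=1.0, gamma=1.0):
    # Weighted distance between the decryption's n-gram statistics (Db, Dt)
    # and the known language statistics (Kb, Kt); smaller = more language-like.
    D_bi, D_tri = ngram_freqs(decryption, 2), ngram_freqs(decryption, 3)
    bi  = sum(abs(K_bi.get(g, 0.0)  - D_bi.get(g, 0.0))  for g in set(K_bi)  | set(D_bi))
    tri = sum(abs(K_tri.get(g, 0.0) - D_tri.get(g, 0.0)) for g in set(K_tri) | set(D_tri))
    return beta * bi + gamma * tri

# Toy usage with made-up numbers (not real English statistics).
K_bi  = {"TH": 0.03, "HE": 0.025, "IN": 0.02}
K_tri = {"THE": 0.018, "ING": 0.011}
print(cost("THEREISATIDEINTHEAFFAIRSOFMEN", K_bi, K_tri))

The GA would then prefer keys whose decryptions yield a lower cost (or, equivalently, a higher fitness derived from it).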

How are one-of types achieved using subclassing in object-oriented languages?

From the notes of a course on programming languages (page 2, second paragraph),
In object-oriented languages with classes like Java, one-of types are
achieved with subclassing
What's meant by one-of types here is what programming language researchers often refer to as sum types (e.g., types like option types in ML and enumerations in C and Java).
There was a suggestion by someone that, in an object-oriented language, if W is a super-type of X and Y, then W is a one-of type in the sense that a sub-type of W is either X or Y. But then I wondered, what if there's a sub-type Z of a sub-type of W, let's say X. In this case, a sub-type of W could be an X and a Z.
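To make the suggestion concrete, here is roughly how I picture it (a sketch in Python, since that's what I can write down quickly; W, X, Y and Z are the names from above):

class W:                      # the "one-of" type: a W is either an X or a Y
    pass

class X(W):                   # first alternative
    def __init__(self, a, b): # an X also bundles data together, like an each-of/product type
        self.a, self.b = a, b

class Y(W):                   # second alternative
    pass

class Z(X):                   # Z is a sub-type of X, hence also a sub-type of W
    pass

w: W = Z(1, 2)
print(isinstance(w, W), isinstance(w, X), isinstance(w, Z))   # True True True

So a single value can be a W, an X and a Z at the same time, which is exactly what I find confusing.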
This is a little bit confusing, but is the idea of having Z as a sub-type of X, which in turn is a sub-type of W, the same as nesting an each-of type (aka product type) inside of a one-of type (i.e., W)?
If I'm getting this whole idea in the wrong way, then how are one-of types achieved using subclassing in object-oriented languages?

How to determine which class these languages belong to?

{WW} - Decidable but not Context free
{WW^R} - Context-Free, but not Regular
Σ* - Regular language
How can you determine which class they belong to?
Maybe my answer will be helpful to you:
L1 = {ww | w ∈ {a, b}* }
is not a Context-Free Language, because no PDA (not even a Non-Deterministic PDA) is possible for it. Why? Suppose you push the first w onto the stack. To match the second w against the first w, you would need the first w to be on the stack in reverse order (or, equivalently, you would need to match the second w in reverse order against the stack contents), and neither is possible with a stack (and we can't read the input in reverse order). It is decidable, though, because we can build a Turing Machine for L1 that always halts after a finite number of steps.
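For the decidability part, the membership test itself is trivial to write down; here is a Python sketch of the check an always-halting Turing Machine would perform (in_L1 is just an illustrative helper):

def in_L1(s):
    # L1 = { ww : w in {a,b}* }: accept iff the string splits into two equal halves.
    if len(s) % 2 != 0:
        return False
    half = len(s) // 2
    return s[:half] == s[half:]

print(in_L1("abab"), in_L1("abba"), in_L1(""))   # True False True

The test halts on every input, which is what makes L1 decidable, even though no PDA can perform it.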
L3 = {ww^R | w ∈ {a, b}* }
Language L3 is a Non-Deterministic Context-Free Language, because an NPDA for it is possible, but no Finite Automaton is possible for L3. You can also prove this using the Pumping Lemma for Regular Languages.
Σ* - Regular Language (RL)
Σ* is described by a Regular Expression (RE), e.g. if Σ = {a, b} then the RE is (a + b)*. An RE is possible only for RLs.
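For instance, in Python's re module (which writes union as | instead of +), a sketch looks like this:

import re

sigma_star = re.compile(r"(a|b)*")            # the regular expression (a + b)*

for s in ["", "ab", "abba", "abc"]:
    print(s, bool(sigma_star.fullmatch(s)))   # every string over {a, b} matches; "abc" does not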
The examples in my question may be more helpful to you.