Algorithm which decides whether L(M) = {a} or L(M) ≠ {a} - finite-automata

I started learning about NFAs and DFAs and stumbled across this question online in one of the Berkeley PDFs on DFAs, but the question did not have a solution attached.
How would I show that there is an algorithm which receives as input a DFA M over the alphabet {a, b} and decides whether L(M) = {a} or L(M) ≠ {a}?
Any guidance would be highly appreciated.

Given two DFAs D1 and D2, it's possible to decide whether L(D1) = L(D2) by minimizing each DFA and checking whether the resulting DFAs are identical (this works because each regular language has a unique minimum-state DFA, up to isomorphism).
Now, you're trying to check whether L(D1) = {a}. As a hint, can you construct a DFA whose language is exactly {a}? If so, could you then use the above algorithm to solve this problem?
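As an illustration of that equivalence check, here is a sketch in Python. Instead of building the minimal DFAs, it decides the same question by a reachable-pair search over the product automaton (the languages differ iff some reachable pair of states disagrees on acceptance); DFAs are represented as a (transition-dict, start-state, accepting-set) triple, a representation chosen just for this sketch:

```python
from collections import deque

def equivalent(dfa1, dfa2, alphabet):
    """Decide L(D1) = L(D2): BFS over the product automaton,
    reporting inequivalence as soon as a reachable pair of states
    disagrees on acceptance."""
    (t1, s1, f1), (t2, s2, f2) = dfa1, dfa2
    seen, queue = {(s1, s2)}, deque([(s1, s2)])
    while queue:
        p, q = queue.popleft()
        if (p in f1) != (q in f2):
            return False          # a distinguishing string exists
        for a in alphabet:
            pair = (t1[(p, a)], t2[(q, a)])
            if pair not in seen:
                seen.add(pair)
                queue.append(pair)
    return True

# A DFA whose language is exactly {a}: 0 = start, 1 = just read "a", 2 = dead
exactly_a = ({(0, 'a'): 1, (0, 'b'): 2,
              (1, 'a'): 2, (1, 'b'): 2,
              (2, 'a'): 2, (2, 'b'): 2}, 0, {1})
```

Calling equivalent(M, exactly_a, 'ab') then decides whether L(M) = {a}.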
Hope this helps!

Related

Understanding fitness function

I am working on using a genetic algorithm to break a transposition cipher, and in this work I have come across a paper named Breaking Transposition Cipher with Genetic Algorithm by R. Toemeh & S. Arumugam.
In this paper they use a fitness function, but I cannot understand it completely; in particular, I cannot understand the role of β and γ in the equation.
Can anyone please explain the fitness function? Here is the picture of the fitness function:
The weights β and γ can be varied to allow more or less emphasis on particular statistics (they're determined "experimentally").
Kb(i, j) and Kt(i, j, k) are the known language bigram and trigram statistics, e.g. for English you have (bigrams):
(further details in The frequency of bigrams in an English corpus)
Db(i, j) and Dt(i, j, k) are the bigram and trigram statistics of the message decrypted with key k.
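The picture isn't reproduced here, but fitness functions of this family typically have the form β·Σ|Kb − Db| + γ·Σ|Kt − Dt|, i.e. a weighted sum of absolute differences between the language's n-gram statistics and those of the candidate decryption. A sketch under that assumption, with n-gram tables as plain dicts and purely illustrative default weights:

```python
def fitness(known_bi, known_tri, dec_bi, dec_tri, beta=1.0, gamma=2.0):
    """Weighted n-gram cost: sum of absolute differences between the
    known language frequencies (Kb, Kt) and the frequencies observed
    in the decrypted text (Db, Dt). Lower means a better fit; beta
    and gamma set the relative emphasis on bigrams vs trigrams."""
    bi_err = sum(abs(known_bi.get(g, 0.0) - dec_bi.get(g, 0.0))
                 for g in set(known_bi) | set(dec_bi))
    tri_err = sum(abs(known_tri.get(g, 0.0) - dec_tri.get(g, 0.0))
                  for g in set(known_tri) | set(dec_tri))
    return beta * bi_err + gamma * tri_err
```

A perfect match scores 0; increasing γ makes trigram agreement dominate, which is the "emphasis on particular statistics" the weights control.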
In A Generic Genetic Algorithm to Automate an Attack on Classical Ciphers by Anukriti Dureha and Arashdeep Kaur there are some reference values of β and γ (and α since they use an extended form of the above equation) and three types of ciphers.
Some further details about β and γ.
They're weights that remain constant during the evolution. They should be tuned experimentally ("optimal" values depend on the target language and the cipher algorithm).
Offline parameter tuning is the way to go, i.e.:
simple parameter sweep (try everything)
meta-GA
racing strategy
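The first option can be sketched as follows (run_ga is a hypothetical stand-in for one full GA run that returns a quality score for a given weight setting):

```python
from itertools import product

def sweep(run_ga, betas, gammas):
    """Offline parameter sweep: run the whole experiment once per
    (beta, gamma) pair and keep the best-scoring setting."""
    return max(product(betas, gammas), key=lambda bg: run_ga(*bg))
```

In practice each run_ga call is expensive (a full GA run, ideally averaged over several seeds), which is why meta-GA and racing strategies exist as cheaper alternatives to the exhaustive sweep.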

How to determine which class these languages belong to?

{ww} - decidable but not context-free
{ww^R} - context-free but not regular
Σ* - regular
How can you determine which class they belong to?
Maybe my answer will be helpful to you:
L1 = {ww | w ∈ {a, b}* }
is not a context-free language, because no PDA (Push-Down Automaton) for it is possible (not even a non-deterministic PDA). Why? Suppose you push the first w onto the stack. To match the second w against the stack contents, you would need the first w to come back off the stack in its original order, but a stack only returns it reversed (and we cannot read the input in reverse order), so a single stack cannot do the job. It is decidable, though, because we can build a Turing machine for L1 that always halts after a finite number of steps.
L3 = {wwR | w ∈ {a, b}* }
Language L3 is a (non-deterministic) context-free language: an NPDA for it is possible, but no finite automaton exists for L3. You can also prove this using the pumping lemma for regular languages.
Σ* - regular language (RL)
Σ* is described by a regular expression (RE), e.g. if Σ = {a, b} then the RE is (a + b)*. A regular expression exists only for regular languages.
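To make the decidability claims concrete, here are direct membership tests in Python (a program standing in for the deciding Turing machine; note that L3 = {ww^R} is exactly the set of even-length palindromes, which is what the NPDA checks by pushing the first half and matching the second half against the stack):

```python
def in_L1(s):
    """L1 = {ww}: decidable -- split the string in half and compare.
    A stack can't do this, but a Turing machine (or any program) can."""
    n = len(s)
    return n % 2 == 0 and s[:n // 2] == s[n // 2:]

def in_L3(s):
    """L3 = {ww^R}: the even-length palindromes, i.e. the second half
    read backwards equals the first half."""
    n = len(s)
    return n % 2 == 0 and s == s[::-1]
```

For example "abab" is in L1 (w = "ab") but not in L3, while "abba" is in L3 (w = "ab") but not in L1.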
The examples in my question may be more helpful to you.

Fast Exponentiation for galois fields

I want to be able to compute
g^x = g * g * g * ... * g (x times)
where g is in a finite field GF(2^m). Here m is rather large, m = 256, 384, 512, etc. so lookup tables are not the solution. I know that there are really fast algorithms for a similar idea, modpow for Z/nZ (see page 619-620 of HAC).
What is a fast, non-table-based way to compute such powers g^x?
This is definitely a wishful question, but here it comes: can the idea of Montgomery multiplication/exponentiation be 'recycled' for Galois fields? I would like to think so because of the isomorphic properties, but I really don't know.
Remark: this is from my post on math.stackexchange.com; I suppose this is the best community to ask this question.
From the math.stackexchange community, two people suggested binary exponentiation. Wikipedia states it as a recursive algorithm, which can be changed into an iterative one as shown in the article's pseudocode.
I frowned at the idea at first but I looked into it more and I found two papers (1, 2) that can help implement binary exponentiation in Galois Fields that uses Montgomery Multiplication.
Furthermore, Jyrki Lahtonen suggested using normal bases (or, when m ≠ 256, 384, 512, etc., optimal normal bases) to speed up the multiplication. Algorithms for this method of multiplication can be found in this paper.
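For reference, a minimal polynomial-basis sketch of binary (square-and-multiply) exponentiation in GF(2^m), with field elements as Python ints used as bit-vectors. The small AES field GF(2^8) with irreducible polynomial 0x11B appears only as a test case; for m = 256+ you would plug in your field's reduction polynomial:

```python
def gf_mul(a, b, mod_poly, m):
    """Shift-and-add (carry-less) multiplication in GF(2^m),
    reducing with the field's irreducible polynomial, given with
    its x^m term included (e.g. 0x11B for GF(2^8))."""
    r = 0
    while b:
        if b & 1:
            r ^= a          # carry-less "add" is XOR
        b >>= 1
        a <<= 1
        if (a >> m) & 1:    # degree reached m: reduce
            a ^= mod_poly
    return r

def gf_pow(g, x, mod_poly, m):
    """Right-to-left square-and-multiply: O(log x) field
    multiplications, no lookup tables."""
    result = 1
    while x:
        if x & 1:
            result = gf_mul(result, g, mod_poly, m)
        g = gf_mul(g, g, mod_poly, m)
        x >>= 1
    return result
```

At cryptographic sizes the schoolbook loop in gf_mul is the part to replace (e.g. with carry-less multiply instructions such as PCLMULQDQ plus a fast reduction, or with the normal-basis multipliers mentioned above); the square-and-multiply structure is unchanged.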
Thanks to sarnold for his/her input.

Construct a DFA for the following language: all strings that have at least three 0s and at most two 1s

I am to construct a DFA from the intersection of two simpler DFAs. The first recognizes the language of all strings that have at least three 0s, and the second recognizes the language of strings with at most two 1s. The alphabet is {0, 1}. I'm not sure how to construct a larger DFA combining the two. Thanks!
Here's a general idea:
The most straightforward way to do this is to have different paths for counting your 0s that are based on the number of 1s you've seen, such that they are "parallel" to each other. Move from one layer of the path to the next any time you see a 1, and then move from the last layer to a trap state if you see a third 1. Depending on the exact nature of the assignment you might be able to condense this, but once you have a basic layout you can determine that. Typically you can combine states from the first DFA with states in the second DFA to produce a smaller end result.
Here's a more mathematical explanation:
Constructing automata for the intersection operation. Assume we are given two DFAs M1 = (S1, q0(1), T1, F1) and M2 = (S2, q0(2), T2, F2). These two DFAs recognize the languages L1 = L(M1) and L2 = L(M2). We want to design a DFA M = (S, q0, T, F) that recognizes the intersection L1 ∩ L2. We use the idea from the construction of the DFA for the union of languages: given an input w, we run M1 and M2 on w simultaneously, as explained for the union operation. Once the runs of M1 and M2 on w finish, we look at the end states of the two runs. If both end states are accepting, we accept w; otherwise we reject w.
When constructing the new transition function, the easy way to think of it is by using pairs of states. For example, consider the following DFAs:
Now, we can start combining these by traversing both DFAs at the same time. For example, both start at state 1. Now what happens if we see an a as input? Well, DFA1 will go from 1->2, and DFA2 will go from 1->3. When combining, then, we can say that the intersection will go from state "1,1" (both DFAs are in state 1) to state "2,3". State 2 is an accept state in DFA1 and state 3 is an accept state in DFA2, so state "2,3" is an accept state in our new DFA3. We can repeat this for all states/transitions and end up with:
Does that make sense?
Reference: Images found in this assignment from Cornell University.
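The pairing idea above can be written out as a short Python sketch for this specific problem. Here DFA1 counts 0s, saturating at three (only state 3 accepts), and DFA2 counts 1s with state 3 as the trap after a third 1 (states 0-2 accept); the product automaton moves componentwise and accepts exactly when both components do:

```python
from itertools import product

def d1(q, c):
    # DFA1: "at least three 0s" -- count 0s, saturating at 3
    return min(q + (c == '0'), 3)

def d2(q, c):
    # DFA2: "at most two 1s" -- count 1s; 3 is the trap state
    return min(q + (c == '1'), 3)

def build_product():
    """Product construction: states are pairs, transitions are
    componentwise, and a pair accepts iff both components accept."""
    states = list(product(range(4), repeat=2))
    delta = {((p, q), c): (d1(p, c), d2(q, c))
             for p, q in states for c in '01'}
    accepting = {(p, q) for p, q in states if p == 3 and q <= 2}
    return delta, (0, 0), accepting

def accepts(s):
    delta, state, accepting = build_product()
    for c in s:
        state = delta[(state, c)]
    return state in accepting
```

The raw product has 4 × 4 = 16 states; unreachable and mergeable pairs can then be condensed as the answer suggests.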
The simplest way would be to use the 2DFA model: from the accepting state of the first DFA (the one testing for at least three 0s), jump to the start state of the second one and move back to the beginning of the input. Then let the second DFA test the string.

If a language (L) is recognized by an n-state NFA, can it also be recognized by a DFA with no more than 2^n states?

I'm thinking so, because 2^n would be the upper bound: the DFA's states correspond to subsets of the NFA's n states, and an n-element set has only 2^n subsets.
Am I wrong here?
You're right. 2^n is an upper limit, so the generated DFA can't have more states than that. But it's the worst-case scenario; in most common cases the resulting DFA has fewer states, sometimes even fewer than the original NFA.
But as far as I know, there is no algorithm that predicts, ahead of the conversion, how many states the resulting DFA will actually have. So if you find one, please let me know ;)
That is correct. As you probably already know, both DFAs and NFAs accept exactly the regular languages, so they are equal in the languages they can accept. Also, the most primitive way of transforming an NFA into a DFA is subset construction (also called powerset construction), where you simply create a DFA state for every combination of NFA states. This is the powerset of the states, which has size at most 2^n.
But, as mentioned by SasQ, that is the worst-case scenario. Typically you will not end up with that many states, and you can shrink the result further with Hopcroft's or Brzozowski's minimization algorithm.
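For completeness, here is a small subset-construction sketch. Only reachable subsets are materialized, which is one reason real conversions usually stay well below the 2^n bound; the NFA is given as a dict mapping (state, symbol) to a set of states:

```python
from collections import deque

def nfa_to_dfa(delta, start, accepting, alphabet):
    """Subset (powerset) construction, generating only the subsets
    reachable from {start}. Returns the DFA's transition table,
    start state, accepting states, and full state set."""
    start_set = frozenset([start])
    dfa_delta, dfa_accepting = {}, set()
    seen, queue = {start_set}, deque([start_set])
    while queue:
        S = queue.popleft()
        if S & accepting:          # a subset accepts iff it contains
            dfa_accepting.add(S)   # an accepting NFA state
        for a in alphabet:
            T = frozenset(t for s in S for t in delta.get((s, a), ()))
            dfa_delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                queue.append(T)
    return dfa_delta, start_set, dfa_accepting, seen

# 2-state NFA for "strings over {0,1} that end in 1"
nfa = {(0, '0'): {0}, (0, '1'): {0, 1}}
delta, s0, acc, states = nfa_to_dfa(nfa, 0, {1}, '01')
```

For this NFA the construction reaches only 2 of the 2^2 = 4 possible subsets, illustrating the "often far fewer states" point above.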