Is the dead state included in the minimized DFA or not? - finite-automata

I searched Google, and many pages say that the dead state (trap state) is removed in a minimized DFA. My question is: how can it still be a DFA if some transitions are undefined? So what do you say, people?

Even minimal DFAs must include dead states; otherwise, they're either (a) not DFAs or (b) not accepting the same language as their non-minimal counterparts. For instance, a minimal DFA for the language {a} over the alphabet {a, b} must have 3 states: a start state, from which reading an a leads to acceptance; an accepting state, from which any further symbol must lead to rejection; and a dead state, which you enter if you read a b in the start state or anything in the accepting state.
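To make this concrete, here is a rough Python sketch (the state names are my own) of that 3-state minimal DFA; notice that the dead state is exactly what keeps the transition function total:

# Minimal DFA for the language {a} over the alphabet {a, b}.
# The dead state keeps the transition function total, so it stays a DFA.
DELTA = {
    ("start",  "a"): "accept",
    ("start",  "b"): "dead",
    ("accept", "a"): "dead",
    ("accept", "b"): "dead",
    ("dead",   "a"): "dead",
    ("dead",   "b"): "dead",
}

def accepts(word: str) -> bool:
    state = "start"
    for symbol in word:
        state = DELTA[(state, symbol)]  # defined for every (state, symbol)
    return state == "accept"

assert accepts("a")
assert not accepts("") and not accepts("b") and not accepts("ab")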
Never heard of omitting dead states from minimal DFAs. Blasphemy!

Dead states are not removed in the 'minimal' versions, but yes, they are lost during 'reversals' of DFAs (you probably got the terms mixed up).

@PrashantBhardwaj: I also think that it (the dead state and the corresponding dead moves) should be included, because including it completes the DFA, i.e., no state in the minimized DFA is left with undefined moves.
Still, the question remains unanswered: should we include it or not? Could anyone confirm it?

Related

Non Deterministic Finite Automata acceptance and Rejection

Can an NFA ever accept a string which is not in the language?
I know that for an NFA to accept a string there has to be at least one way by which it gets accepted, and then we can safely say that the NFA accepts it.
But in the case of rejection: can it ever happen that a string which doesn't belong to the language gets accepted by the NFA?
The definition of the language accepted by an NFA says that it is the set of all strings that are accepted by the NFA. So clearly, every string that is accepted belongs to the language, and thus the answer to your question is: No.
Rejection means: all possible computations for the given string either end in a non-accepting state or do not even read the entire string (if the automaton is not complete). Both of these possibilities exclude acceptance.
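To illustrate with a rough Python sketch (the NFA here is my own toy example, accepting strings over {a, b} that end in "ab"): a word is accepted iff AT LEAST ONE computation path ends in an accepting state, which the subset simulation below checks directly:

# Track the set of all states reachable so far; accept iff some path accepts.
DELTA = {
    ("q0", "a"): {"q0", "q1"},
    ("q0", "b"): {"q0"},
    ("q1", "b"): {"q2"},
}
ACCEPTING = {"q2"}

def nfa_accepts(word: str) -> bool:
    current = {"q0"}
    for symbol in word:
        current = {t for s in current for t in DELTA.get((s, symbol), set())}
    return bool(current & ACCEPTING)  # some computation accepts => accepted

assert nfa_accepts("ab") and nfa_accepts("aab")
assert not nfa_accepts("ba") and not nfa_accepts("a")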
For non-deterministic Turing Machines there exist notions of acceptance like: "more than half of the computations accept," or "an odd number of computations accept" (Parity) etc. There you can have accepting computations despite global rejection. But these notions are not widely used and I have never seen them applied to finite automata.

Can I say that a state space is a formal specification of some system's behaviour?

Given a system, and its complete state space, can I say that that state space is a formal specification of that system's behaviour?
Not unless you have formally defined all possible transitions to and from each state and your state space is inclusive of all possible states the system can be in.
A formal definition of a computer system should also include unexpected transitions, such as computer crashes. A fault-tree analysis may help with ensuring that all possible states are defined.
See Wikipedia.
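As a toy sketch (my own example, not any real system): a state space only specifies the system's behaviour if every (state, event) pair has a defined outcome, including unexpected ones such as a crash:

STATES = {"idle", "running", "crashed"}
EVENTS = {"start", "stop", "fault"}
TRANSITIONS = {
    ("idle",    "start"): "running",
    ("idle",    "stop"):  "idle",
    ("idle",    "fault"): "crashed",
    ("running", "start"): "running",
    ("running", "stop"):  "idle",
    ("running", "fault"): "crashed",
    ("crashed", "start"): "crashed",
    ("crashed", "stop"):  "crashed",
    ("crashed", "fault"): "crashed",
}

# The specification is total: no (state, event) pair is left undefined.
assert set(TRANSITIONS) == {(s, e) for s in STATES for e in EVENTS}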

Can someone give a simple but non-toy example of a context-sensitive grammar? [closed]

I'm trying to understand context-sensitive grammars, and I understand why languages like
{ww | w is a string}
{a^n b^n c^n | a, b, c are symbols}
are not context-free, but what I'd like to know is whether a language similar to the untyped lambda calculus is context-sensitive. I'd like to see a simple but non-toy example (I consider the above toy examples) of a context-sensitive grammar that can, for some production rule, tell whether or not some string of symbols is currently in scope (e.g., when producing the body of a function). Are context-sensitive grammars powerful enough to make undefined/undeclared/unbound variables a syntactic (rather than semantic) error?
Yes, context-sensitive grammars (CSGs) are powerful enough to express a check for undefined/undeclared/unbound variables, but unfortunately we don't know any efficient algorithm for parsing the strings of an arbitrary CSG.
A real example of a context-sensitive language is the C programming language. A feature like "declare variables first, then use them later" makes the set of valid C programs a context-sensitive language (CSL). (I don't know about the untyped lambda calculus.)
And because we don't know any efficient (e.g., linear-time) parsing algorithm for CSLs (or CSGs), compiler design uses CFGs (and their parsing algorithms) for syntax checking, since we know efficient algorithms for parsing CFGs (in restricted forms). Compilers first parse the context-free structure, and only later handle the context-sensitive features programmatically (for example, checking each used variable against the symbol table and generating an error if it is not defined).
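A rough sketch of that layering, using a made-up mini-language (not real C): the context-free "shape" is assumed to be parsed already, and a separate pass enforces the context-sensitive declare-before-use rule via a symbol table:

# Each statement is either "declare <name>" or "use <name>".
def check_declared_before_use(statements: list[str]) -> list[str]:
    symbol_table: set[str] = set()
    errors = []
    for line in statements:
        words = line.split()
        if words[0] == "declare":          # e.g. "declare x"
            symbol_table.add(words[1])
        elif words[0] == "use":            # e.g. "use x"
            if words[1] not in symbol_table:
                errors.append(f"undeclared variable: {words[1]}")
    return errors

assert check_declared_before_use(["declare x", "use x"]) == []
assert check_declared_before_use(["use y"]) == ["undeclared variable: y"]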
Context-sensitive grammars are also used in natural-language processing (NLP), and most natural languages are examples of context-sensitive languages. (I am not sure about Sanskrit.)
I will try to explain it with a silly but simple example (it's just an idea, you can refine it):
NOUN --> { BlueBomber, Grijesh, I, We}
TENSE --> { am, was, is, were}
VERB --> { going, eating, working}
SENTENCE --> <NOUN> <TENSE> <VERB>
Now, using this grammar, we can generate some correct statements, but some are wrong too. For example,
SENTENCE --> <NOUN> <TENSE> <VERB>
Grijesh is working [Correct statement]
But
Grijesh am working [wrong statement]
Reason: the correct value of <TENSE> depends on the value of <NOUN> (for example, I <TENSE> --> I am), and hence this grammar does not generate only correct statements in the English language.
Actually we can't write a context-free grammar for complete English!
You might have noticed that no natural-language translator or grammar checker works perfectly (try long sentences); that is because the problem requires context-sensitive parsing.
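To make the dependence concrete, here is a rough sketch (extending the toy grammar above with an agreement table of my own):

# Which TENSE is grammatical depends on which NOUN was chosen:
# that dependence is exactly what a context-free rule cannot express.
AGREEMENT = {
    "I":          {"am", "was"},
    "We":         {"were"},
    "Grijesh":    {"is", "was"},
    "BlueBomber": {"is", "was"},
}
VERBS = {"going", "eating", "working"}

def is_valid_sentence(noun: str, tense: str, verb: str) -> bool:
    return tense in AGREEMENT.get(noun, set()) and verb in VERBS

assert is_valid_sentence("Grijesh", "is", "working")      # correct statement
assert not is_valid_sentence("Grijesh", "am", "working")  # wrong statement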
REFERENCE: You can watch Dr. Arun Kumar's lectures. In some lectures he explains exactly what you are interested in.

When you are proving a language is decidable, what are you effectively doing?

When you are proving a language is decidable, what are you effectively doing?
If you're asking HOW it is done, I'm unsure, but I can check.
Basically, a decidable language is one for which you can construct an algorithm (i.e., a Turing machine) that halts on ANY finite input, either accepting or rejecting it.
An undecidable language is one that is not decidable.
http://en.wikipedia.org/wiki/Recursive_language ... but more on the subject can easily be found; this link only mentions the term briefly.
P.S. So, by constructing the above-mentioned algorithm, you are basically proving that the language is decidable.
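For instance, here is a rough sketch of such an algorithm (my own toy example): a procedure that halts on every finite input and accepts exactly {a^n b^n | n >= 0}, which proves that language decidable:

# A decider: halts on EVERY input with a yes/no answer.
def decides_anbn(word: str) -> bool:
    n = len(word) // 2
    return len(word) % 2 == 0 and word == "a" * n + "b" * n

assert decides_anbn("") and decides_anbn("aabb")
assert not decides_anbn("aab") and not decides_anbn("ba")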

What are Finite State Automata and why should a programmer know about them?

Erm - what the question said. It's something I keep hearing about, but I've not got round to looking into it yet.
(updated) I could look up the definition... but why not (as pointed out by @erikson) get insight from your real experiences and anecdotes. Community Wiki'd in case that helps folks vote up the most insightful answer. Interesting reading so far, thanks!
Short answer: it is a technique that you can use to express systems with concrete states (as opposed to quantum states / probability distributions).
Quoting the Wikipedia article:
A finite state machine (FSM) or finite state automaton (plural: automata), or simply a state machine, is a model of behavior composed of a finite number of states, transitions between those states, and actions. A finite state machine is an abstract model of a machine with a primitive internal memory.
So, what does that mean to you? Put simply, it is an effective way to represent the path(s) from a starting state to the end state(s) of a system that you care about. Using regular expressions as a fairly easy-to-understand example, let's look at the pattern AB+C (imagine that the plus is a superscript). I would expect this pattern to accept strings such as "ABC", "ABBC", "ABBBC", etc.: A at the start, C at the end, and some number of B's in the middle (greater than or equal to one).
If you think about it, it's almost easier to think about this in terms of a picture. Faking it with text (the parentheses below are a loopback arc), you can see that A (on the left) is the starting state and C (on the right) is the end state:
      _
     ( )
A --> B --> C
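Here is the same machine as a rough Python sketch (the state names are mine); note how a missing transition acts as an implicit dead state:

DELTA = {
    ("start", "A"): "sawA",
    ("sawA",  "B"): "sawB",
    ("sawB",  "B"): "sawB",   # the loopback arc over B
    ("sawB",  "C"): "done",
}

def matches_ab_plus_c(word: str) -> bool:
    state = "start"
    for ch in word:
        state = DELTA.get((state, ch))
        if state is None:      # fell off the machine: no way back
            return False
    return state == "done"

assert matches_ab_plus_c("ABC") and matches_ab_plus_c("ABBBC")
assert not matches_ab_plus_c("AC") and not matches_ab_plus_c("ABCB")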
From FSAs, you can continue your journey into computational complexity by heading over to the land of Turing Machines.
However, you can also use state machines to represent real behaviors and systems. In my world, we use them to model certain workflow of actual people working with components that are extremely intolerant of mistakes in state order. As in, "A had better happen before C or there will be a very serious problem. Make that be not possible right now."
You could look it up, but what the hell. Intuitively, a finite state automaton is an abstraction of something that has some finite number of states, and rules by which you can go from state to state. A state is something for which a true or false statement can be made, and a rule is a way that you change from one state to another. So, you could have, say, two states: "I'm at home" and "I'm at work" and two rules, "go to work" and "go home."
It turns out that you can study machines like this mathematically, and find that there are things they can and cannot do. Regular expressions are basically a way of describing a finite state machine in which the states are sets of different strings, and the rules move you from state to state based on the next character read. You can prove that. But you can also prove that no finite state machine can tell whether or not the parentheses in an expression are matched (via the pumping lemma for regular languages).
The reason you should learn about FSAs is that they can be used to solve many problems: string matching, control of systems, business process descriptions, digital circuit design. They're also inherently pretty.
Formally, an FSA is an algebraic structure A = ⟨Σ, S, s0, F, δ⟩, where Σ is the input alphabet, S is a set of states, s0 ∈ S is the start state, F ⊆ S is the set of accepting states, and δ : S × Σ → S is the state transition function.
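That five-tuple maps directly onto code; a minimal sketch (field names are mine):

from dataclasses import dataclass

@dataclass
class FSA:
    sigma: set[str]                     # input alphabet, Σ
    states: set[str]                    # S
    start: str                          # s0 ∈ S
    accepting: set[str]                 # F ⊆ S
    delta: dict[tuple[str, str], str]   # δ : S × Σ → S

    def accepts(self, word: str) -> bool:
        state = self.start
        for symbol in word:
            state = self.delta[(state, symbol)]
        return state in self.accepting

# Toy machine: accepts words over {a} with an even number of a's.
toy = FSA(sigma={"a"}, states={"even", "odd"}, start="even",
          accepting={"even"},
          delta={("even", "a"): "odd", ("odd", "a"): "even"})
assert toy.accepts("aa") and not toy.accepts("a")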
In OOP terms: if you have an object with methods that you call on certain events, and some (other) methods that behave differently depending on the previous calls... surprise! You have a state machine!
Now, if you know the theory, you don't have to rethink it all. You simply say: "piece of cake, it's just a state machine" and go on to implement it.
If you don't know the theory, you'll think about it for a while, write some clever hacks, and get something that's difficult to explain and document... because you don't have the words to describe it.
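A toy example of that OOP situation (my own): an object whose methods behave differently depending on the previous calls, i.e., an implicit state machine:

class Connection:
    def __init__(self):
        self.state = "closed"

    def open(self):
        if self.state == "open":
            raise RuntimeError("already open")
        self.state = "open"

    def send(self, data: str):
        if self.state != "open":   # behaviour depends on call history
            raise RuntimeError("cannot send on a closed connection")
        print(f"sent {data!r}")

conn = Connection()
conn.open()
conn.send("hello")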
Good answers above. I would only add that FSAs are primarily a thinking tool, not a programming technique. What makes them useful is that they have nice properties, and anything that acts like one has those properties. If you can think of something as an FSA, there are many ways you can build it:
as a regular expression
as a state-transition table
as a while-switch-on-state loop
as a goto-net (horrors!)
as simple structured program code
etc. etc.
If somebody says something is an FSA, you can immediately know what they are talking about, no matter how it is built.
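For example, here is a rough sketch combining two of the forms above: a state-transition table driving a simple loop that dispatches on state (a toy turnstile, my own example):

TABLE = {
    ("locked",   "coin"): "unlocked",
    ("locked",   "push"): "locked",
    ("unlocked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
}

def run(events):
    state = "locked"
    for event in events:           # each step "switches" on (state, event)
        state = TABLE[(state, event)]
    return state

assert run(["coin"]) == "unlocked"
assert run(["coin", "push"]) == "locked"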
You need state machines whenever you have to release your thread before you have completed your operation.
Since web services are often not stateful, you don't usually see this in web services: you re-arrange your URL so that each URL corresponds to a single path through the code.
I guess another way to think about it could be that every web server is an FSM where the state information is kept in the URL.
You often see it when processing input. You have to release your thread before the input has all arrived, so you set a flag saying "input in progress" or something like that. When done, you set the flag to "awaiting input". That flag is your state monitor.
More often than not, an FSM is implemented as a switch statement that switches on a variable. Each case is a different state. At the end of a case, you may set the state to a new value. You've almost certainly seen this somewhere.
The nice thing about an FSM is that you can make the state a part of your data rather than your code. Imagine that you need to fill out 1000 items in the database. The incoming data will address one of the 1000 items, but you generally don't have enough data to complete the operation.
Without an FSM you might have hundreds of threads waiting around for the rest of the data so they can complete processing and write the results to the DB. With an FSM, you write the state to the DB, then exit your thread. Next time, you can check the incoming data, read the state from the DB, and that should give you enough info to determine what code to run.
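A rough sketch of that idea (a dict stands in for the database, and the state names echo the flags mentioned above):

db: dict[int, str] = {}   # item id -> current state, persisted between calls

def on_incoming(item_id: int, data: str) -> None:
    state = db.get(item_id, "awaiting input")
    if state == "awaiting input":
        db[item_id] = "input in progress"   # partial data: save state, return
    elif state == "input in progress":
        db[item_id] = "done"                # enough data now: finish the work
    # the thread exits here either way; nothing blocks waiting for more data

on_incoming(42, "first half")
on_incoming(42, "second half")
assert db[42] == "done"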
Nearly every FSM operation COULD be done by dedicating a thread to it, but probably not as well (with threads, the complexity multiplies with the number of states, whereas with a state machine the rise in complexity is more linear). Also, there are some conceptual design issues: examining your code at the state level is in some cases much easier than examining it at the line-of-code level.
Every programmer should know about them because they are an excellent tool for certain kinds of problems, where the usual 'iterative-thinking' approach would yield nasty, complex code.
A typical example is game AI, where NPCs have different states that change according to where the player is, something like:
NPC_STATE_IDLE
NPC_STATE_ALERT (player at less than 100 meters)
NPC_STATE_ENGAGE (player attacked NPC)
NPC_STATE_FLEE (low on health)
where an FSM can easily describe the transitions and help perform complex reasoning about the system it is describing.
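As a rough sketch (the transition conditions are my guesses, not from any real game):

NPC_TRANSITIONS = {
    ("IDLE",   "player_within_100m"): "ALERT",
    ("ALERT",  "player_attacked"):    "ENGAGE",
    ("ALERT",  "player_left"):        "IDLE",
    ("ENGAGE", "low_health"):         "FLEE",
    ("FLEE",   "player_left"):        "IDLE",
}

def npc_step(state: str, event: str) -> str:
    # unknown (state, event) pairs leave the NPC in its current state
    return NPC_TRANSITIONS.get((state, event), state)

state = "IDLE"
for event in ["player_within_100m", "player_attacked", "low_health"]:
    state = npc_step(state, event)
assert state == "FLEE"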
Important: if you are a "visual" learner, stop everything you are doing and check out the link below right now. It gives an excellent, very accessible introduction.
Reanimator by Oliver Steele
It looks like you've already accepted an answer, but if you appreciate a "visual" introduction to new concepts, as is common, you really should check out the link. It is simply outstanding.
(Note: the link points to a discussion of DFA and NDFA in the context of regular expressions -- with animated interactive diagrams)
Yes! You could look it up!
http://en.wikipedia.org/wiki/Finite_state_machine
What it is is better answered on other sites (such as Wikipedia), because there are pretty extensive answers out there already.
Why you should know them: Because you probably implemented them already.
Any time your code has a limited number of possible states (that's the "finite state" part) and switches to another one once some input/event happens (that's the "machine" part), you've written a finite state machine.
It is a very common tool, and knowing the theoretical basics, being able to reason about it, and knowing how to combine two FSMs into a single one that does the same work can be a great help.
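For instance, combining two FSMs into one is the standard product construction; a rough sketch (my own toy DFAs over the one-letter alphabet {a}, combined so that one run checks both conditions, i.e., the intersection):

def product(delta1, start1, acc1, delta2, start2, acc2):
    delta = {}
    for (s1, a), t1 in delta1.items():
        for (s2, b), t2 in delta2.items():
            if a == b:
                delta[((s1, s2), a)] = (t1, t2)
    start = (start1, start2)
    accepting = {(s1, s2) for s1 in acc1 for s2 in acc2}
    return delta, start, accepting

# DFA 1: even number of a's.  DFA 2: word ends with a.
d1 = {("even", "a"): "odd", ("odd", "a"): "even"}
d2 = {("no", "a"): "yes", ("yes", "a"): "yes"}
delta, start, acc = product(d1, "even", {"even"}, d2, "no", {"yes"})

def run(word):
    state = start
    for ch in word:
        state = delta[(state, ch)]
    return state in acc

assert run("aa") and not run("a") and not run("")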
FSAs are great data structures to understand because any chance you have to implement them, you're working at the lowest level of computational complexity in the Chomsky hierarchy. A great example is in word morphology (how parts of words come together). A lot of work has been done to show that even the most severe cases can be analyzed in this extremely fast analytical framework. Take a look at Karttunen and Beesley's work out of PARC.
FSAs are also a great place to start learning about machine-learning concepts like hidden Markov models, because in many ways the problem can be broken down using the same ideas and vocabulary.
One item that hasn't been mentioned so far is the semantic equivalence of finite state automata and regular expressions. A regular expression can be compiled to a finite state automaton (this is how regex libraries work) and vice versa.
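A tiny cross-check of that equivalence (the hand-built DFA is my own): a three-state DFA for the regular expression "ab*c" agrees with Python's re module on a handful of test strings:

import re

DELTA = {
    ("q0", "a"): "q1",
    ("q1", "b"): "q1",
    ("q1", "c"): "q2",
}

def dfa_match(word: str) -> bool:
    state = "q0"
    for ch in word:
        state = DELTA.get((state, ch))
        if state is None:
            return False
    return state == "q2"

for w in ["ac", "abc", "abbbc", "ab", "abcc", "bac", ""]:
    assert dfa_match(w) == bool(re.fullmatch("ab*c", w))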
FSAs (including DFAs and NFAs) are very important in computer science, and they are used in many fields: for instance, hidden Markov models for speech recognition; regular expressions, which are converted to FSAs before the software interprets them; NLP (natural language processing); AI (game programming); robot programming; etc.
One disadvantage of FSAs is that they are usually slow, hard to implement, and hard to understand or visualize while reading the code; but they are good because they usually provide generic solutions to problems, and they are well known, with a lot of studies on them.