I tried to replace some expressions with a function, but some terms were not changed.
Input:
b + 2 *a /. (b*m_ + a*n_) :> chi[m, n]
Out
2 a + b
but if I put 3*b instead of 1*b:
3*b + 2 *a /. (b*m_ + a*n_) :> chi[m, n]
The output is correct
chi[3, 2]
Weird
If I write 1.*b it works better
(1.*b + 2 *a) /. (b*m_ + a*n_) :> chi[m, n]
Out
chi[1., 2]
What do I have to assume to avoid putting in this decimal point? Yes, the simplest way is to eliminate the 1 from the pattern, but I have oversimplified things here just to isolate the problem. My actual case is how to rewrite a sum of about fifty terms of the form
Sqrt[Pi] V (m*b^2 + n*a^2)^(-3/2),
where m and n are integers and a, b, V are constants.
The subexpression b is not matched by the pattern b*m_. To see this, examine the FullForm of b and of, for example, 2*b: FullForm[b] is just b, while FullForm[2*b] is Times[2, b], so b contains no Times structure for b*m_ to match. Mathematica patterns match on syntactic form, not on semantics. You and I may know that b is the same as b*1, but Mathematica does not, or at least does not apply this when matching expressions against patterns.
EDIT
No, 1*b won't work either: Mathematica evaluates it to b before pattern matching. The rules for 1.0*b are different, because the inexact coefficient survives evaluation (1.*b stays Times[1., b]), so there is something for b*m_ to match. There are a number of ways to cover the bare b; the simplest is to add a separate rule such as b + a*n_ -> chi[1, n].
Related
Is there a better way to write this condition:
if ( x > 3 || y > 3 || z > 3 ) {
...
}
I was thinking of some bitwise operation but couldn't find anything.
I searched on Google, but it is hard to find anything about this kind of basic question.
Thanks!
Edit
I was thinking of programming in general. Would it differ from language to language, e.g. C/C++, Java...?
What you have is good. Assuming C/C++ or Java:
Intent is clear.
Short-circuit evaluation means the whole expression is known to be true as soon as any one part is true, so evaluation stops there.
Looking at that 2nd point: if any of x, y or z is more likely to be > 3, put it to the left of the expression so it is evaluated first, which means the others may not need to be evaluated at all (see the sketch below).
For argument's sake, if you must have a bitwise check, (x|y|z) > 3 works for non-negative values (note the parentheses: | binds more loosely than > in C-family languages), but it normally won't be reduced, so it's (probably*) always two bitwise ORs and a compare, where the other way could be as fast as one compare.
(* This is where the language lawyers arrive and add comments on why this is wrong and how the bitwise version can be optimised ;-)
There was a comment here (now deleted) along the lines of "a new programmer shouldn't worry about this level of optimisation", and it was 100% correct. Write easy-to-follow, working code, and THEN try to squeeze performance out of it AFTER you know it is "too slow".
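A minimal sketch of both variants, in Python for brevity (the same short-circuit and bitwise behaviour applies in C/C++ and Java; the variable names and the threshold 3 are just the ones from the question):

    # Short-circuit version: evaluation stops at the first operand that
    # exceeds 3, so put the most likely candidate first.
    def any_over_short_circuit(x, y, z):
        return x > 3 or y > 3 or z > 3

    # Bitwise version: for non-negative integers, (x | y | z) > 3 is
    # equivalent, because a value exceeds 3 exactly when it has a bit
    # set at position 2 or higher. All three operands are always evaluated.
    def any_over_bitwise(x, y, z):
        return (x | y | z) > 3

    assert any_over_short_circuit(1, 5, 0) and any_over_bitwise(1, 5, 0)
    assert not any_over_short_circuit(3, 2, 1) and not any_over_bitwise(3, 2, 1)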
There's a certain over-verbosity that I have to engage in when writing certain Boolean expressions, at least with all the languages I've used, and I was wondering if there were any languages that let you write more concisely?
The way it goes is like this:
I want to find out if I have a Thing that can be either A, B, C, or D.
And I'd like to see if Thing is an A or a B.
The logical way for me to express this is
//1: true if Thing is an A or a B
Thing == (A || B)
Yet all the languages I know expect it to be written as
//2: true if Thing is an A or a B
Thing == A || Thing == B
Are there any languages that support 1? It doesn't seem problematic to me, unless Thing is a Boolean.
Yes. Icon does.
As a simple example, here is how to get the sum of all numbers less than 1000 that are divisible by three or five (the first problem of Project Euler).
procedure main ()
    local result
    local n
    result := 0
    every n := 1 to 999 do
        if n % (3 | 5) == 0 then
            result +:= n
    write (result)
end
Note the n % (3 | 5) == 0 expression. I'm a bit fuzzy on the precise semantics, but in Icon the concept of booleans is not like in other languages. Every expression is a generator: it may succeed (producing a value) or fail. When used in an if expression, a generator will keep iterating until it succeeds or is exhausted. In this case, n % (3 | 5) == 0 is a generator which uses another generator, (3 | 5), to test whether n is divisible by 3 or 5. (To be entirely technical, this isn't even syntactic sugar.)
Likewise, in Python (which was influenced by Icon) you can use the in operator to test for equality against multiple elements. It's a little weaker than Icon, though: you could not translate the modulo comparison above directly. In your case, you would write Thing in (A, B), which expresses exactly what you want.
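A minimal sketch of the Python version (Thing, A, B are placeholders from the question; the loop is an approximate translation of the Icon example, with any() playing the role of the (3 | 5) generator):

    # Membership test: one comparison against several candidates.
    A, B, C, D = "A", "B", "C", "D"
    thing = B
    print(thing in (A, B))   # True: thing equals A or B

    # Approximate translation of the Icon example: sum of the numbers
    # below 1000 that are divisible by 3 or 5.
    total = sum(n for n in range(1, 1000) if any(n % d == 0 for d in (3, 5)))
    print(total)             # 233168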
There are other ways to express that condition without trying to add any magic to the conditional operators.
In Ruby, for example:
$> thing = "A"
=> "A"
$> ["A","B"].include? thing
=> true
I know you are looking for answers where the functionality is built into the language, but here are two other approaches that I find work better, since they solve more problems and have been in use for decades.
Have you considered using a preprocessor?
Also, languages like Lisp have macros, which are part of the language.
I'm trying to understand this algorithm the DFA minimization algorithm at http://www.cs.umd.edu/class/fall2009/cmsc330/lectures/discussion2.pdf where it says:
repeat until there is no change in the table contents:
    for each pair of states (p, q) and each character a in the alphabet:
        if Distinct(p, q) is empty and Distinct(δ(p,a), δ(q,a)) is not empty:
            set Distinct(p, q) to be x
The bit I don't understand is Distinct(δ(p,a), δ(q,a)). I think I understand the transition function, where δ(p,a) is whatever state is reached from p on input a, but with the following DFA:
http://i.stack.imgur.com/arZ8O.png
resulting in this table:
imgur.com/Vg38ZDN.png
shouldn't (c,b) also be marked with an x, since Distinct(δ(b,0), δ(c,0)) is not empty (d)?
Distinct(δ(p,a), δ(q,a)) will only be non-empty if δ(p,a) and δ(q,a) are distinct. In your example, δ(b,0) and δ(c,0) are both d, and Distinct(d, d) is empty, since it makes no sense for d to be distinct from itself. Because Distinct(d, d) is empty, we don't mark Distinct(c, b).
In general, Distinct(p, p), where p is a state, will always be empty. Better yet, we never even consider it, because it makes no sense.
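A minimal sketch of the table-filling loop in Python. The transition table below is hypothetical: only the fact that δ(b,0) = δ(c,0) = d is taken from the discussion above, and the rest stands in for the DFA in the linked image, so treat this as an illustration of the algorithm rather than of that exact automaton.

    # Hypothetical DFA over {0, 1}; only delta['b']['0'] == delta['c']['0'] == 'd'
    # is taken from the question.
    states = ['a', 'b', 'c', 'd']
    alphabet = ['0', '1']
    accepting = {'d'}
    delta = {
        'a': {'0': 'b', '1': 'c'},
        'b': {'0': 'd', '1': 'a'},
        'c': {'0': 'd', '1': 'a'},
        'd': {'0': 'd', '1': 'd'},
    }

    # Distinct is modelled as the set of unordered pairs marked with x.
    # Base case: an accepting and a non-accepting state are distinct.
    pairs = [(p, q) for i, p in enumerate(states) for q in states[i + 1:]]
    distinct = {frozenset((p, q)) for p, q in pairs
                if (p in accepting) != (q in accepting)}

    # Repeat until there is no change in the table contents.
    changed = True
    while changed:
        changed = False
        for p, q in pairs:
            if frozenset((p, q)) in distinct:
                continue
            for a in alphabet:
                succ = frozenset((delta[p][a], delta[q][a]))
                # A singleton here means both successors are the same
                # state, and Distinct(d, d) is empty: no mark.
                if len(succ) == 2 and succ in distinct:
                    distinct.add(frozenset((p, q)))
                    changed = True
                    break

    # (b, c) stays unmarked: on '0' both go to d, on '1' both go to a.
    print(frozenset(('b', 'c')) not in distinct)   # True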
In the dragon book, LL grammar is defined as follows:
A grammar is LL if and only if, whenever A -> a | b are two distinct productions, the following two conditions hold:
FIRST(a) and FIRST(b) are disjoint. This implies that they cannot both derive the empty string.
If b can derive the empty string, then a does not derive any string beginning with a terminal in FOLLOW(A); that is, FIRST(a) and FOLLOW(A) must be disjoint.
And I know that an LL grammar can't be left-recursive, but what is the formal reason? I guess a left-recursive grammar will contradict rule 2, right? E.g., I've written the following grammar:
S->SA|empty
A->a
Because FIRST(SA) = {a, empty} and FOLLOW(S) = {$, a}, FIRST(SA) and FOLLOW(S) are not disjoint, so this grammar is not LL. But I don't know whether it is the left recursion that makes FIRST(SA) and FOLLOW(S) intersect, or whether there is some other reason. To put it another way: is it true that every left-recursive grammar has a production that violates condition 2 of the LL definition?
OK, I figured it out. If a grammar contains a left-recursive production, like:
S->SA
then it must also contain another production to "finish" the recursion, say:
S->B
And since FIRST(B) is a subset of FIRST(SA), the two sets intersect, which violates condition 1: there must be a conflict when filling the parse-table entries for terminals that appear in both FIRST(B) and FIRST(SA) (see the sketch below). To summarize: left recursion causes the FIRST sets of two or more productions to share common terminals, violating condition 1.
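A small Python sketch of that conflict for the concrete grammar S -> SA | empty, A -> a: it computes FIRST and FOLLOW by fixed-point iteration and then shows both S-productions landing in the same LL(1) parse-table cell. (The encoding and helper names are mine, not from the dragon book.)

    # Grammar: S -> S A | empty ; A -> a.  [] encodes the empty RHS.
    grammar = {'S': [['S', 'A'], []], 'A': [['a']]}
    nonterminals = set(grammar)
    EPS, END = 'eps', '$'

    def first_of(seq, first):
        # FIRST of a symbol sequence, given the current FIRST sets.
        out = set()
        for sym in seq:
            syms = first[sym] if sym in nonterminals else {sym}
            out |= syms - {EPS}
            if EPS not in syms:
                return out
        out.add(EPS)                      # the whole sequence can vanish
        return out

    # Fixed-point iteration for FIRST.
    first = {nt: set() for nt in nonterminals}
    changed = True
    while changed:
        changed = False
        for nt, prods in grammar.items():
            for rhs in prods:
                new = first_of(rhs, first) - first[nt]
                if new:
                    first[nt] |= new
                    changed = True

    # Fixed-point iteration for FOLLOW (S is the start symbol).
    follow = {nt: set() for nt in nonterminals}
    follow['S'].add(END)
    changed = True
    while changed:
        changed = False
        for nt, prods in grammar.items():
            for rhs in prods:
                for i, sym in enumerate(rhs):
                    if sym not in nonterminals:
                        continue
                    tail = first_of(rhs[i + 1:], first)
                    new = (tail - {EPS}) | (follow[nt] if EPS in tail else set())
                    if new - follow[sym]:
                        follow[sym] |= new
                        changed = True

    # Fill the LL(1) table; a cell holding two productions is a conflict.
    table = {}
    for nt, prods in grammar.items():
        for rhs in prods:
            f = first_of(rhs, first)
            lookaheads = (f - {EPS}) | (follow[nt] if EPS in f else set())
            for t in lookaheads:
                table.setdefault((nt, t), []).append(rhs)

    print(first['S'], follow['S'])        # {'a', 'eps'} and {'a', '$'}
    for cell, prods in table.items():
        if len(prods) > 1:                # (S, 'a') gets S->SA and S->empty
            print('conflict at', cell, ':', prods)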
Consider your grammar:
S->SA|empty
A->a
This is a shorthand for the three rules:
S -> SA
S -> empty
A -> a
Now consider the string aaa. How was it produced? You can only read one character at a time if you have no lookahead, so you would start off like this (with S as the start symbol):
S -> SA
S -> empty
A -> a
Fine, you have produced the first a. But now you cannot apply any more rules, because there are no non-terminals left. You are stuck!
What you should have done was this:
S -> SA
S -> SA
S -> SA
S -> empty
A -> a
A -> a
A -> a
But you don't know this without reading the entire string. You would need an unbounded amount of lookahead.
In a general sense, yes: without unbounded lookahead, every left-recursive grammar has strings the parser cannot disambiguate. Look at the example again: there are two different rules for S. Which one should we use?
An LL(k) grammar is one that allows the construction of a deterministic, top-down (descent) parser with only k symbols of lookahead. The problem with left recursion is that it makes it impossible to determine which rule to apply until the complete input string has been examined, which makes the required k potentially infinite.
Using your example, choose a k, and give the parser an input sequence of length n >= k:
aaaaaaa...
A parser cannot decide whether to apply S->SA or S->empty by looking at the next k symbols, because the decision depends on how many times S->SA has been chosen before, and that is information the parser does not have.
The parser would have to choose S->SA exactly n times and S->empty once, and it is impossible to decide which choice is right by looking at only the first k symbols of the input stream.
To know, a parser would have to both examine the complete input sequence and keep count of how many times S->SA has been chosen, but such a parser falls outside the definition of LL(k).
Note that unlimited lookahead is not a solution, because a parser runs on limited resources, so there will always be a finite input sequence long enough to make the parser crash before producing any output.
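To make the counting argument concrete, a tiny Python check (a plain illustration, nothing language-specific): every input a^n with n >= k presents the identical k-symbol lookahead window at the first decision point, yet requires S->SA to be applied exactly n times.

    k = 3
    windows = {('a' * n)[:k] for n in (3, 5, 8)}
    print(windows)   # {'aaa'}: one window, three different required parses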
In the book "The Theory of Parsing", Volume 2, by Aho and Ullman, page 681, you can find Lemma 8.3, which states: "No LL(k) grammar is left-recursive".
The proof goes roughly as follows:
Suppose that G = (N, T, P, S) has a left-recursive nonterminal A. Then there is a derivation A =>+ Aw. If w =>* e, then it is easy to show that G is ambiguous and hence cannot be LL. Thus, assume w =>* v for some v in T+ (a non-empty string of terminals). We can further assume that A =>* u, with u some string of terminals, so that for every n >= 0 there exists a derivation A =>+ A w^n =>* u v^n.
Hence, there is another derivation A =>+ A w^(n+1) =>* u v^(n+1). Since u v^n and u v^(n+1) agree on arbitrarily long prefixes as n grows, no fixed number k of lookahead symbols can tell the two derivations apart, contradicting the assumption that G is LL(k).
Given this DFA (image):
I have no idea what the accepted language is.
From looking at it you can get several end results:
1.) bb
2.) ab(a,b)
3.) bbab(a, b)
4.) bbaaa
How to write a regular expression for a DFA
In any automaton, a state works as a memory element: it stores a piece of information, like the ON/OFF state of a fan switch.
A Deterministic Finite Automaton (DFA) is called a finite automaton because it has only a finite amount of memory, in the form of its states. For any regular language (RL), a DFA is always possible.
Let's see what information is stored in this DFA (refer to my colorful figure).
(Note: in my explanation, "any number" means zero or more times, and Λ is the null string.)
State-1 is the START state, and the information stored in it is: an even number of a's has come so far, and zero b's.
The regular expression (RE) for this state is (aa)*.
State-4: an odd number of a's has come so far, and zero b's.
The regular expression for this state is (aa)*a.
Figure: BLUE states = an EVEN number of a's, RED states = an ODD number of a's has come.
NOTICE: once the first b has come, the automaton can never move back to state-1 or state-4.
State-5 comes after the yellow b. The yellow b means a b that follows an odd number of a's.
Once you get a b after an odd number of a's (at state-5), everything is acceptable, because state-5 has a self-loop on both a and b.
You can write for state-5: the yellow b followed by any string of a's and b's, that is, Yellow-b (a + b)*.
State-6 exists just to keep track of whether the number of a's is odd or even.
State-2 comes after an even number of a's, then a b, then any number of b's: (aa)* bb*.
State-3 comes after state-2, then a first a, and then there is a loop through state-6.
So we can write for state-3: state-2 a (aa)* = (aa)* bb* a (aa)*.
Because our DFA has three final states, the language accepted by the DFA is the union (+ in RE) of three regular languages (three REs).
So the language accepted by the DFA corresponds to the three accepting states 2, 3, 5, and we can write it as:
State-2 + state-3 + state-5
(aa)*bb* + (aa)*bb* a (aa)* + Yellow-b (a + b)*
I forgot to explain where the yellow b comes from.
ANSWER: the yellow b is a b that follows state-4 or state-3, so we can write:
Yellow-b = ( state-4 + state-3 ) b = ( (aa)*a + (aa)*bb* a (aa)* ) b
[ANSWER]
(aa)*bb* + (aa)*bb* a (aa)* + ( (aa)*a + (aa)*bb* a (aa)* ) b (a + b)*
English description of the language: the DFA accepts the union of three languages:
an EVEN number of a's, followed by ONE OR MORE b's;
an EVEN number of a's, followed by ONE OR MORE b's, followed by an ODD number of a's;
a PREFIX string of a's and b's with an ODD number of a's, followed by a b, followed by ANY string over a and b, including Λ.
The English description is complex, but it is the only way to describe the language exactly. You can improve it by first converting the given DFA into a minimized DFA and then writing the RE and its description.
There is also a derivative method for finding an RE from a given transition graph, using Arden's theorem. I have explained here how to write a regular expression for a DFA using Arden's theorem; the transition graph must first be converted into a standard form without null moves and with a single start state. But I prefer to learn the theory of computation by analysis rather than through the mathematical derivation approach.
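As an illustration of that derivation style, here is a minimal state-elimination sketch in Python (the generic GNFA construction rather than Arden's equations themselves). The two-state example DFA at the bottom, accepting the strings over {a, b} with an even number of a's, is mine, chosen to keep the output readable; it is not the automaton from the question's image.

    # State elimination: remove states one at a time, rerouting every
    # path p -> s -> q through the removed state s as R(p,s) R(s,s)* R(s,q).
    def dfa_to_regex(states, start, accepting, edges):
        START, END = '_start', '_end'
        R = dict(edges)                      # (p, q) -> regex on the arc p -> q
        R[(START, start)] = ''               # fresh start state, ε-arc
        for f in accepting:
            R[(f, END)] = ''                 # fresh accept state, ε-arcs

        def seq(*parts):                     # concatenation, dropping ε
            return ''.join(p for p in parts if p)

        for s in states:                     # eliminate in the given order
            loop = R.pop((s, s), None)
            star = '(' + loop + ')*' if loop else ''
            ins = [(p, r) for (p, q), r in R.items() if q == s]
            outs = [(q, r) for (p, q), r in R.items() if p == s]
            for (p, q) in list(R):
                if p == s or q == s:
                    del R[(p, q)]
            for p, rin in ins:
                for q, rout in outs:
                    new = seq(rin, star, rout)
                    if (p, q) in R:          # union with an existing arc
                        R[(p, q)] = '(' + R[(p, q)] + '+' + new + ')'
                    else:
                        R[(p, q)] = new
        return R.get((START, END))           # None would mean the empty language

    # Hypothetical two-state DFA: an even number of a's over {a, b}.
    print(dfa_to_regex(
        states=['odd', 'even'],
        start='even',
        accepting={'even'},
        edges={('even', 'even'): 'b', ('even', 'odd'): 'a',
               ('odd', 'odd'): 'b', ('odd', 'even'): 'a'},
    ))                                       # prints ((b+a(b)*a))*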
I guess this question isn't relevant anymore :) and it's probably better to guide you through it than just to state the answer, but I think I have a basic expression that covers it (it can probably be minimized), so I'll just write it down for future searchers:
(aa)*b(b)* // for stopping at 2
U
(aa)*b(b)*a(aa)* // for stopping at 3
U
(aa)*b(b)*a(aa)*b((a)*(b)*)* // for stopping at 5 via 3
U
a(aa)*b((a)*(b)*)* // for stopping at 5 via 4
The examples (1-4) that you give there are not the language accepted by the DFA; they are merely strings that belong to the language the DFA accepts. Therefore, they all fall in the same language.
If you want to figure out the regular expression that defines this DFA, you will need to do something called k-path induction, which you can read up on here.