What we can call the main theorem states that binary search can be used if and only if for all x in S, p(x) implies p(y) for all y > x. This property is what we use when we discard the second half of the search space. It is equivalent to saying that ¬p(x) implies ¬p(y) for all y < x (the symbol ¬ denotes the logical not operator), which is what we use when we discard the first half of the search space.
Please explain this paragraph in simpler and more detailed terms.
Consider that p(x) is some property of x. When using binary search, this property is usually x being greater than, less than, or equal to some other value k that you are looking for.
What we can call the main theorem states that binary search can be used if and only if for all x in S, p(x) implies p(y) for all y > x.
Let's say that x is some value in the middle of the list and you are looking for where k is. Let's also say that p(x) means that x is greater than k. If the list is sorted in ascending order, then all values y to the right of x (y > x) must also be greater than k (because the ordering is transitive), and as such p(y) also holds for every such y. This is the basis of binary search. If you are looking for k and some value x is known to be greater than k, then all elements to its right are also greater than k. Notice that this is only true if the list is sorted. Consider the list [a,b,c] and a value k that you are looking for. If it's known that a < b and b < c, then if k < b is true, k < c must also be true.
This property is what we use when we discard the second half of the search space.
This is what the previous conclusion allows you to do. Since the property that holds for x also holds for every y to its right (that is, those elements cannot be the one you are looking for, because they are all greater than k), it is safe to discard them, and so you keep looking for k only in the lower half.
The rest of the paragraph says pretty much the same thing for discarding the lower half.
In short, p(x) is a property that, once it holds for some value x, must also hold for all values to the right of x (again, because the ordering is transitive). ¬p(x), on the other hand, must hold for all values to the left of x. By being able to conclude that those are not the elements you are looking for, you can conclude that it is safe to discard one half of the list or the other.
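None of the code below is from the original answer; it is a minimal Python sketch of the same idea, with the names first_true and p made up for the illustration. It finds the first index at which the predicate holds, which is exactly the boundary between the ¬p region on the left and the p region on the right described above.

def first_true(lo, hi, p):
    """Return the smallest index i in [lo, hi) for which p(i) is true.

    Assumes the "main theorem" property: if p(i) holds, then p(j) holds
    for every j > i. Returns hi if p is false on the whole range.
    """
    while lo < hi:
        mid = (lo + hi) // 2
        if p(mid):
            hi = mid        # p holds at mid, so the boundary is at mid or to its left
        else:
            lo = mid + 1    # p fails at mid, so discard mid and everything to its left
    return lo

# Example: one choice of p is "the value at this index is >= k" in a sorted list.
values = [1, 3, 3, 5, 8, 13]
k = 5
idx = first_true(0, len(values), lambda i: values[i] >= k)
print(idx)  # 3, because values[3] == 5 is the first element >= k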
How can you prove that the language L given below is not context-free? I would like to know whether my proof below makes any sense; if not, what would be the correct method to prove it?
L = {a^n b^n c^i|n ≤ i ≤ 2n}
I am trying to prove this by contradiction. Suppose L is regular, with pumping length p, and let S = a^p b^p c^p. Observe that S ∉ L. Since there must be a pumping segment xy with length less than p, duplicating y, which consists of some number of b's, can cause x(y^2)z to enter the language, because the number of b's exceeds the number of c's and is no longer bound by the given condition on i, which is n ≤ i ≤ 2n. Therefore we have a contradiction, and hence the language L is not context-free.
The proof is by contradiction. Assume the language is context-free. Then, by the pumping lemma for context-free languages, any string in L of length at least p can be written as uvxyz where |vxy| ≤ p, |vy| > 0, and for all natural numbers k, u(v^k)x(y^k)z is in the language as well. Choose a^p b^p c^(p+1). Then we must be able to write this string as uvxyz so that |vxy| ≤ p and |vy| > 0. In particular, since |vxy| ≤ p and the block of b's has length p, v and y cannot together contain both a's and c's. There are several possibilities to consider:
v and y consist only of a's. In this case, pumping in either direction causes the numbers of a's and b's to differ, producing a string not in the language; so this cannot be the case.
v and y consist only of a's and b's. If v or y mixes the two letters, pumping up immediately breaks the a…b…c order; otherwise pumping might keep the numbers of a's and b's equal to each other, but pumping up will eventually make them exceed the number of c's, violating n ≤ i; so this cannot be the case.
v and y consist only of b's. This case is similar to (1) above and so cannot be a valid choice.
v and y consist of b's and c's, with at least one b (the case of c's alone is (5) below). Pumping then changes the number of b's while the number of a's stays fixed, so, as in (1) and (3), the counts of a's and b's differ.
v and y consist only of c's. Pumping up will eventually cause there to be more c's than twice the number of a's (the arithmetic is written out below); so this cannot be the case either.
No matter how we choose v and y, pumping will produce strings not in the language. This is a contradiction; this means our assumption that the language is context-free must have been incorrect.
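For concreteness, here is the arithmetic behind case (5) written out; the letter m below is just a name for |vy|, introduced for this illustration. If v and y consist only of c's with vy = c^m and m ≥ 1, then

u(v^k)x(y^k)z = a^p b^p c^(p+1+(k-1)m)

and for k > p,

p+1+(k-1)m ≥ p+1+(k-1) = p+k > 2p,

so the number of c's exceeds twice the number of a's and the pumped string is not in L.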
I am new to Z notation.
Let's say I have a function f defined as X |--> Y,
where X is a string and Y is a number.
How can I get the highest Y value in this function? Does a 'loop' exist in formal methods so that I can solve it using a loop?
I know there is recursion in Z notation, but based on the material provided, I only found it applied to multisets (bags); can it be applied to functions?
Any extra references on applications of 'loops' or recursion would be appreciated. Sorry for my English.
You can just use the predefined function max, which takes a set of integers as input and returns the maximum. The input here is the range (the set of all values) of the function:
max(ran(f))
Please note that the maximum is not defined for empty sets.
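Purely as an illustration outside of Z (the dictionary below is made up for the example), the same idea in an executable language could look like this, with f represented as a Python dict from strings to numbers:

# Made-up stand-in for the function f from the question.
f = {"alpha": 3, "beta": 7, "gamma": 5}

# max over the range of f, i.e. max(ran f): the set of all values of f.
highest = max(f.values())
print(highest)  # 7

# As noted above, the maximum is undefined for an empty f (Python raises ValueError).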
Regarding your question about recursion and loops: you can actually define a function recursively, but I think your question aims more at a way to compute something. That is not easily expressed in Z, and in my opinion this is a good thing, because Z is meant for specifications and is not a programming language. Even if there were no max or ran function, you could still specify the number m you are looking for by:
\exists s:String # (s,m):f /\
\forall s2:String; i2:Z # (s2,i2):f ==> i2 <= m
("m is a value of f, belonging to an s and all other values i2 of f are smaller or equal")
After getting used to the style, it is usually far easier to understand than any programming language (except when you are trying to describe an algorithm itself and not its expected outcome).
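To connect the specification style with something one could actually run (Python again, with the same made-up dictionary standing in for f), the existential/universal predicate above can be read as a check:

f = {"alpha": 3, "beta": 7, "gamma": 5}  # made-up stand-in for f

def satisfies_spec(m, f):
    # Mirrors the predicate above: m is one of f's values,
    # and every value of f is less than or equal to m.
    values = f.values()
    return any(v == m for v in values) and all(v <= m for v in values)

print(satisfies_spec(7, f))  # True: 7 is the maximum
print(satisfies_spec(5, f))  # False: 7 > 5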
Just for reference: An example of a recursive definition (let's call it rmax) for the maximum would consist of a base case:
\forall e:Z # rmax({e}) = e
and a recursive case:
\forall e:Z; S:\pow(Z) #
S \neq {} \implies
rmax({e} \cup S) = \IF e > rmax(S) \THEN e \ELSE rmax(S)
But note that this is still not a "computation rule" for rmax, because e in the second rule can be an arbitrary element of the set being split. In more complex scenarios it might not even be obvious that the defined relation is a function at all, because depending on the chosen elements, different results could be computed.
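As a purely illustrative contrast (Python, not Z), a computation in the spirit of rmax has to commit to one concrete choice of e at every step, which is exactly the freedom the specification above leaves open:

def rmax(s):
    # Maximum of a non-empty set of integers, written recursively
    # in the spirit of the rmax definition above.
    s = set(s)
    e = s.pop()          # the computation must pick some concrete element e
    if not s:            # base case: rmax({e}) = e
        return e
    rest = rmax(s)       # recursive case on the remaining elements
    return e if e > rest else rest

print(rmax({3, 7, 5}))  # 7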
In an OWL-DL ontology, consider a property p with domain D and range R where D has a restriction over p to have cardinality of exactly one:
D SubClassOf p exactly 1 Thing
(D ⊑ =1 p.Thing)
Can we then infer that p is a functional property, since each d of type D will have exactly one value for p? If this is correct, can a reasoner infer this knowledge?
In OWL, a property is functional when each individual has at most one value for the property. That "at most" is important; it is permitted for something to have no value for the property. (That means that a functional property in OWL is actually more like a possibly partial function in mathematics.) That said, if every individual has exactly one value for a property, then it clearly has at most one value for the property, so the property would, as you suspect, be functional. We can walk through a specific case, though, to be sure that this is general, and because we need to make sure that the property p here actually has at most one value for every individual.
Proof: Suppose the property p has domain D, and D is a subclass of =1 p.Thing, so that every D has exactly one p value. Is it the case that every individual x has at most one value for p? There are two cases to consider:
x is a D. Then, by the subclass axiom with the restriction, x must have exactly one value for p, and one is less than or equal to one.
x is not a D. Then x has no values for p. If it did, then it would be in the domain of p, which is D, and that is a contradiction. So x has zero values for p, and zero is less than or equal to one.
Then any individual x has at most one value for the property p, which is the definition of p being functional. Thus, p is functional.
QED
An OWL DL reasoner should be able to confirm this, and it shouldn't be hard to check.
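Restated as a single entailment, in the description-logic notation the question already uses (with the domain axiom written out explicitly, and functionality expressed as a global at-most-one restriction):

{ ∃p.Thing ⊑ D,  D ⊑ =1 p.Thing }  ⊨  ⊤ ⊑ ≤1 p.Thing

The two axioms on the left say that the domain of p is D and that every D has exactly one p-value; the entailed axiom on the right is precisely the statement that p is functional.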
I'm trying to understand the DFA minimization algorithm at http://www.cs.umd.edu/class/fall2009/cmsc330/lectures/discussion2.pdf, where it says:
until there is no change in the table contents:
    for each pair of states (p,q) and each character a in the alphabet:
        if Distinct(p,q) is empty and Distinct(δ(p,a), δ(q,a)) is not empty:
            set Distinct(p,q) to be x
The bit I don't understand is "Distinct(δ(p,a), δ(q,a))". I think I understand the transition function: δ(p,a) is whatever state is reached from p on input a. But with the following DFA:
http://i.stack.imgur.com/arZ8O.png
resulting in this table:
imgur.com/Vg38ZDN.png
shouldn't (c,b) also be marked with an x, since Distinct(δ(b,0), δ(c,0)) is not empty (d)?
Distinct(δ(p,a), δ(q,a)) can only be non-empty if δ(p,a) and δ(q,a) are two different states that have been marked distinct. In your example, δ(b,0) and δ(c,0) are both d. Distinct(d, d) is empty, since it doesn't make sense for d to be distinct from itself. Since Distinct(d, d) is empty, we don't mark Distinct(c, b).
In general, Distinct(p, p) where p is a state will always be empty. Better yet, we don't consider it because it doesn't make sense.
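For reference, here is one way the quoted table-filling loop could be written in Python; the function name and the small DFA at the end are made up for the illustration (the original DFA was only given as an image), chosen so that two non-accepting states share all their successors the way b and c do above:

from itertools import combinations

def distinguishable_pairs(states, alphabet, delta, accepting):
    # Table-filling algorithm: return the set of distinguishable state pairs.
    # Pairs are stored as frozensets so (p,q) and (q,p) are the same entry,
    # and Distinct(p,p) is never considered, matching the remark above.

    # Base case: mark a pair if exactly one of the two states is accepting.
    distinct = {frozenset((p, q)) for p, q in combinations(states, 2)
                if (p in accepting) != (q in accepting)}

    changed = True
    while changed:                      # until there is no change in the table
        changed = False
        for p, q in combinations(states, 2):
            if frozenset((p, q)) in distinct:
                continue
            for a in alphabet:
                # Distinct(delta(p,a), delta(q,a)) is non-empty only if the two
                # successors are different states already marked as distinct.
                succ = frozenset((delta[p][a], delta[q][a]))
                if len(succ) == 2 and succ in distinct:
                    distinct.add(frozenset((p, q)))
                    changed = True
                    break
    return distinct

# Made-up example: b and c both go to d on every input, so, just as in the
# question, Distinct(d, d) stays empty and (b, c) never gets marked.
states = {"a", "b", "c", "d"}
alphabet = {"0", "1"}
delta = {
    "a": {"0": "b", "1": "c"},
    "b": {"0": "d", "1": "d"},
    "c": {"0": "d", "1": "d"},
    "d": {"0": "d", "1": "d"},
}
accepting = {"d"}
print(frozenset(("b", "c")) in distinguishable_pairs(states, alphabet, delta, accepting))  # False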
I can't distinguish these symbols:
= and =:=
\= and =\=
[X,Y] and [X|Y]
What’s the difference?
For the comparison operators (=, =:=, \=, =\=):
= succeeds if the terms unify (that is, if they can be made identical by binding variables)
=:= succeeds if the arithmetic values of the terms are equal (both sides are evaluated as arithmetic expressions, so 1+2 =:= 3 succeeds even though 1+2 = 3 does not)
\= is the negation of =
=\= is the negation of =:=
For more info about these operators and more, see this page.
For the list operators, [X|Y] is a pattern for a list whose first element is X and whose remaining elements form the list Y. [X, Y], on the other hand, matches only a list of exactly two elements, X and Y; here Y is a single element, not the rest of the list. For more info, see this section of the same page.
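To make the distinctions concrete, here are a few example queries with the answers a typical Prolog system (for example SWI-Prolog) gives:

?- X = 1+2.          % = unifies: X is bound to the term 1+2, not to 3
X = 1+2.

?- 1+2 = 3.          % the term 1+2 does not unify with the term 3
false.

?- 1+2 =:= 3.        % =:= evaluates both sides arithmetically
true.

?- 1+2 \= 3.         % \= is the negation of =
true.

?- 1+2 =\= 3.        % =\= is the negation of =:=
false.

?- [X|Y] = [a,b,c].  % X is the head, Y is the list of remaining elements
X = a,
Y = [b, c].

?- [X,Y] = [a,b,c].  % [X,Y] matches only a list of exactly two elements
false.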