If I have a set F of functional dependencies on the relation schema r(A, B, C, D, E, F):
A --> BCD
BC --> DE
B --> D
D --> A
What would B+ be?
I think B+ denotes the closure of B
"I think B+ denotes the closure of B"
That is usually the intended meaning of appending a plus sign to something; however, that "something", in the context of functional dependencies and normalization theory, must refer to the set of functional dependencies.
B+, where B is one of the attributes, is still meaningless by any convention I know of.
So, to answer the question that the OP presumably intended to ask: if we call S the given set of FDs {A->BCD, D->A, ...}, then S+ is another set of FDs, which includes ALL FDs that can possibly be derived from the given set, augmented with all trivial dependencies such as A->A.
For example, from A->BCD and A->A, we can infer A->ABCD. From D->A and A->BCD we can infer D->BCD. Those inferred FDs are members of S+, but not of S.
(PS: this set is usually not particularly useful, except internally in systems that do computations on sets of FDs, such as automated algorithms for key determination.)
B+ denotes the closure of B.
B --> D, so B+ = {B, D}
D --> A, so B+ = {A, B, D}
A --> BCD, so B+ = {A, B, C, D}
BC --> DE, so B+ = {A, B, C, D, E}
All the attributes of the relation can be derived from B.
So, B is a candidate key of the relation (and could be chosen as the primary key).
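To make the procedure explicit, here is a minimal sketch of the attribute-closure algorithm in Haskell. The representation and the names (closure, exampleFDs) are illustrative, not from any existing library.

```haskell
-- A sketch of the attribute-closure computation shown above.
-- FDs are represented as (determinant, dependents) pairs of attribute strings.
import Data.List (nub, sort)

type FD = (String, String)

-- Repeatedly add the right-hand side of every FD whose left-hand side is
-- already contained in the closure, until nothing changes.
closure :: [FD] -> String -> String
closure fds attrs
  | next == cur = cur
  | otherwise   = closure fds next
  where
    cur  = sort (nub attrs)
    next = sort . nub $ cur ++ concat [ rhs | (lhs, rhs) <- fds, all (`elem` cur) lhs ]

exampleFDs :: [FD]
exampleFDs = [("A", "BCD"), ("BC", "DE"), ("B", "D"), ("D", "A")]

main :: IO ()
main = putStrLn (closure exampleFDs "B")   -- prints "ABCDE"
```

Running it on B with the four given FDs yields ABCDE, matching the step-by-step computation above.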
I have the following relation: R = (ABCDE) with the functional dependencies F = {A → B, B → CDE, E → AC}. The two decompositions I have are R1 = (BCDE) and R2 = (AE). How do I check whether or not these decompositions are in BCNF? I know how to check if they're lossless and dependency preserving (in this case I think both answers are yes), but not how to check whether they're in BCNF.
Assuming that F is a cover of the functional dependencies of R, the relation is already in BCNF.
In fact, to check that a relation is in BCNF, we can check whether the determinant of every dependency in a cover is a superkey. In your case this is true (since the candidate keys of the relation are A, B, and E), so there is no need to decompose it.
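To see why A, B, and E are candidate keys, work out the attribute closures: A → B and B → CDE give A+ = {A, B, C, D, E}; B → CDE and E → AC give B+ = {A, B, C, D, E}; E → AC, A → B, and B → CDE give E+ = {A, B, C, D, E}. So every determinant appearing in F is a superkey, which is exactly the BCNF condition.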
A question I saw on a site that explains the issue of mutual exclusion (http://www.faculty.idc.ac.il/gadi/PPTs/Chapter2-Mutex-BasicTopics.pptx, page 8). Unfortunately there is no answer there. Also, the original question is only about C, but I didn't understand how changing the order affects the result, so I added D.
Let A and B be two algorithms designed to solve the mutual-exclusion problem. In other words, their structure consists of an entry section, critical section and exit section (but you cannot assume they satisfy mutual exclusion or deadlock-freedom unless written otherwise). Assume that algorithms A and B do not use the same variables. We construct a new mutual-exclusion algorithm, C, as follows:
Algorithm C
entry code of A
entry code of B
Critical Section
exit code of B
exit code of A
For each of the following assertions, please prove or disprove its correctness.
If both A and B are deadlock-free, C is deadlock-free.
If both A and B are starvation-free, C is starvation-free.
If A or B satisfy mutual-exclusion, C satisfies mutual-exclusion.
If A is deadlock-free and B is starvation-free, C is starvation-free.
If A is starvation-free and B is deadlock-free, C is starvation-free.
Also the same questions, but this time for D instead of C, where D is:
Algorithm D
entry code of A
entry code of B
Critical Section
exit code of A
exit code of B
Thanks!
Is the following TBox cyclic or acyclic? If it is a cyclic TBox, how could it be converted to an acyclic one?
A ⊑ ¬E
E ⊑ ¬A
This TBox doesn't really say anything except that the classes A and E are disjoint. The subclass relations could be read as implications:
If something is an A, then it is not an E.
If something is an E, then it is not an A.
To express disjointness in description logics, you'd typically say that the intersection of the disjoint classes is equivalent to, or a subclass of, the bottom concept, ⊥, which by definition has no instances. ⊥ is also the complement of the top concept, ⊤, which contains everything. Thus you could say any of the following:
A ⊓ E ⊑ ⊥
A ⊓ E ≡ ⊥
A ⊓ E ⊑ ¬⊤
A ⊓ E ≡ ¬⊤
To add to what Joshua said, how disjointness is represented depends on the language you use. For example, EL doesn't support bottom or negation.
The axioms you have written are not cyclic.
A cycle exists when the antecedent and consequent of an axiom share at least one predicate (concept or role).
If an axiom contains a cycle, you have to adopt fixpoint semantics to make it unambiguous.
To the best of my knowledge, axioms are meant to derive induced knowledge. When converting a cyclic axiom to an acyclic one, it is difficult to produce similar semantics.
How can the following TBox axioms be converted into an acyclic TBox?
A ⊑ ¬E
∃R.A ⊓ ¬B ⊑ C
C ⊑ B ⊔ A
C ≡ A ⊔ D
A ⊓ ∃R.E ⊑ D
Consider the DFA :
What will δ(A, 01) be equal to?
options:
A) {D}
B) {C,D}
C) {B,C,D}
D) {A,B,C,D}
The correct answer is option B), but I don't get how. Can someone please explain the steps to solve it, and also, in general, how do we solve it for any DFA and any transition?
Thanks.
Option B) is not the correct answer for this transition graph!
In a transition graph (TG), the symbol ε means a NULL-move (ε-move). There are two NULL-moves in this TG.
One: (A) --- `ε` ---->(B)
Second: (A) --- `ε` ---->(C)
An ε-move means you can change state without consuming any symbol. In your diagram, from A to B, or from A to C.
What will δ(A, 01) be equal to?
The question asks "what is the path from state A if the input is 01" (as I understand it, because there is only one final state).
01 can be processed in either of two ways.
(A) -- ε --->(B) -- 0 --> (B) -- 1 --> (D)
(A) -- ε --->(C) -- 0 --> (B) -- 1 --> (D)
Also, there is no other way to process the string 01, even if you don't want to reach the final state.
[ANSWER]
So there is a misprint in the question (or perhaps you made one).
You can learn how to remove NULL-moves from a transition graph, and also how to write a regular expression for a DFA.
If you remove the NULL-moves from the TG, you will get three ways to accept 01.
EQUIVALENT TRANSITION GRAPH WITHOUT NULL-MOVES
Note there are three start states in the graph.
Three ways:
{ABD}
{CBD}
{BBD}
In every one of them, state (B) has to appear.
Also, what you have written, "Consider the DFA :", is wrong. The TG is not deterministic, because there is a non-deterministic move δ(A, ε) whose next state can be either B or C.
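As a sanity check, here is a minimal Haskell sketch of δ extended with ε-closure. The transition list is reconstructed from the description of the (missing) diagram above, so treat it as an assumption rather than the original automaton; the names are illustrative.

```haskell
-- A sketch of computing δ(A, 01) on an NFA with ε-moves.
import Data.List (nub, sort)

type State = Char

-- (from, symbol, to); 'e' stands for the ε-move.
-- Reconstructed from the answer's description of the diagram (an assumption).
trans :: [(State, Char, State)]
trans =
  [ ('A', 'e', 'B'), ('A', 'e', 'C')   -- the two NULL-moves
  , ('B', '0', 'B'), ('C', '0', 'B')
  , ('B', '1', 'D')
  ]

-- All states reachable from a set of states via ε-moves alone.
epsClosure :: [State] -> [State]
epsClosure ss
  | next == cur = cur
  | otherwise   = epsClosure next
  where
    cur  = sort (nub ss)
    next = sort . nub $ cur ++ [ t | (f, 'e', t) <- trans, f `elem` cur ]

-- One step on an input symbol, followed by taking the ε-closure.
step :: [State] -> Char -> [State]
step ss c = epsClosure [ t | (f, x, t) <- trans, x == c, f `elem` ss ]

-- Extended transition function on a whole input string.
deltaStar :: State -> String -> [State]
deltaStar s = foldl step (epsClosure [s])

main :: IO ()
main = print (deltaStar 'A' "01")   -- prints "D"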
Lots of commonly useful properties of functions have concise names. For example, associativity, commutativity, transitivity, etc.
I am making a library for use with QuickCheck that provides shorthand definitions of these properties and others.
The one I have a question about is idempotence of unary functions. A function f is idempotent iff ∀x . f x == f (f x).
There is an interesting generalization of this property for which I am struggling to find a similarly concise name. To avoid biasing people's name choices by suggesting one, I'll name it P and provide the following definition:
A function f has the P property with respect to g iff ∀x . f x == f (g x). We can see this as a generalization of idempotence by redefining idempotence in terms of P: a function f is idempotent iff it has the P property with respect to itself.
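For concreteness, here is a minimal QuickCheck sketch of P; the name propP is made up for illustration and is not part of any existing library.

```haskell
-- A sketch of property P as a QuickCheck property.
import Test.QuickCheck

-- f has the P property with respect to g when f x == f (g x) for every x.
propP :: Eq b => (a -> b) -> (a -> a) -> a -> Bool
propP f g x = f x == f (g x)

main :: IO ()
main = do
  -- length is P with respect to map f (here f = (+ 1))
  quickCheck (propP (length :: [Int] -> Int) (map (+ 1)))
  -- idempotence is the special case where g is f itself
  quickCheck (propP (map abs :: [Int] -> [Int]) (map abs))
```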
To see that this is a useful property, observe that it justifies a rewrite rule that can be used to implement a number of common optimizations. This often, but not always, arises when g is some sort of canonicalization function. Some examples:
length is P with respect to map f (for all choices of f)
Converting to CNF is P with respect to converting to DNF (and vice versa)
Unicode normalization to form NFC is P with respect to normalization to form NFD (and vice versa)
minimum is P with respect to nub
What would you name this property?
One can say that map f is length-preserving, or that length is invariant under applying map f. So how about:
g is f-preserving.
f is invariant under (applying) g.