SWRL syntax in Protégé - semantic-web

I am using Protégé 5.0, and I want to implement the SWRL rule
User(?u), isInActivity(?u, ?cm), ContextMeeting(?cm) -> FamilyContact(?f), hasStatus(?f, "Reject")
which means "if the user is in a meeting, then FamilyContact has status Reject".
This syntax should work, and Protégé doesn't show any error. However, it's not working.
And when I write
User(?u), isInActivity(?u, ?cm), ContextMeeting(?cm), FamilyContact(?f) -> hasStatus(?f, "Reject")
This syntax works perfectly, but it's unwieldy when I write complex rules in this format.
Can anyone explain the difference between the two formats and suggest a proper solution?
More explanation:
I have a main class People, whose subclasses are Contact and User. The subclasses of Contact are FamilyContact, EmployeeContact, etc. User and Contact are related by an object property isContactOf(People, Contact). In my ontology there should be only one individual of class User. Now I want to implement SWRL rules, i.e., if the user is in a meeting, then FamilyContact hasStatus "Reject". This "Reject" simply means that family members cannot call the user. The other rule is: if the user is in a meeting, then EmployeeContact hasStatus "Pass". hasStatus(Contact, String) is a functional property.
The second rule syntax works perfectly; however, when I want to write a rule for instances that are both EmployeeContact and FamilyContact, I run into a problem, e.g., if I write a rule like
User(?u), isInActivity(?u, ?cm), ContextMeeting(?cm), FamilyContact(?f), EmployeeContact(?e), DifferentFrom(?f, ?e) -> hasStatus(?f, "Reject")
It works somehow, but there is a problem: it makes the other instances of EmployeeContact also instances of FamilyContact, and vice versa.

The rule
User(?u) ∧ isInActivity(?u, ?cm) ∧ ContextMeeting(?cm) → FamilyContact(?f) ∧ hasStatus(?f, "Reject")
uses ?f in the right-hand side (the consequent) of the rule, but not on the left (the antecedent). That's not allowed in the language (emphasis added):
2.1. Rules
Atoms may refer to individuals, data literals, individual variables or
data variables. Variables are treated as universally quantified, with
their scope limited to a given rule. As usual, only variables that
occur in the antecedent of a rule may occur in the consequent (a
condition usually referred to as "safety"). This safety condition does
not, in fact, restrict the expressive power of the language (because
existentials can already be captured using OWL someValuesFrom
restrictions).
If it were legal, then your rule would mean:
For every u, cm, and f,
if u is a User and cm is a ContextMeeting and u is in cm,
then f is a family contact and has status "reject".
But since there are no constraints on ?f, this says that if any user is in any context meeting, then everything is a family contact with status "reject", and that's probably not what you want. Shouldn't ?f be related to ?u somehow? The proposed alternative:
User(?u) ∧ isInActivity(?u, ?cm) ∧ ContextMeeting(?cm) ∧ FamilyContact(?f) → hasStatus(?f, "Reject")
has a similar problem. It would mean:
For every u, cm, and f,
if u is a User and cm is a ContextMeeting and u is in cm and f is a family contact,
then f has status "reject".
There's still no connection between u and f, so this says that if any user is in any context meeting, then every family contact has status "reject". That doesn't seem like what you'd want either.
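Given the isContactOf property described in the question, the fix is to add an atom that ties ?f to ?u. A sketch, assuming isContactOf(?u, ?f) follows the declared signature isContactOf(People, Contact); swap the arguments if your ontology reads the property the other way around:
User(?u) ∧ isInActivity(?u, ?cm) ∧ ContextMeeting(?cm) ∧ FamilyContact(?f) ∧ isContactOf(?u, ?f) → hasStatus(?f, "Reject")
Now the rule only fires for family contacts that are related to the user who is in the meeting.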

Difference between VQ and V2Q

To output a verb phrase that takes a question as its object, it seems the RGL only offers two functions:
VQ -> QS -> VP
V2Q -> NP -> QS -> VP
In these two functions, the verb type is divided into two different categories. But the type V2Q has a parameter that requires adding a preposition to the sentence. In order to generate the sentence "Tell me who I am", I used the following code:
MySentence = {s = (mkPhr
  (mkImp
    (mkVP
      (mkV2Q
        (mkV "tell")
        (mkPrep ""))           -- empty preposition: direct object
      (i_NP)                   -- "me"
      (mkQS
        (mkQCl
          (mkIComp (who_IP))
          (i_NP)))))).s };     -- "who I am"
The code above generates the output I desire without a problem. So my question is: is there any reason the preposition was added to the verb type V2Q? Or was this output generated in the wrong way?
First: yes, you constructed the sentence correctly.
Why is there a slot for a preposition in V2Q
In general, all V2* (and V3*) may take their NP object as a direct object, like eat ___, see ___, or with a preposition, like believe in ___.
This is more flexible than forcing all transitive verbs to take only direct objects and all prepositional phrases to be analysed as optional adverbials. Take a VP like "believe in yourself": it's not that you're believing (something) and your location also happens to be yourself. It's nice to be able to encode that believe_V2 takes an obligatory argument, and that that argument is introduced by the preposition in.
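In GF code, that encoding would look like this (a sketch, assuming the English Paradigms module):
believe_V2 = mkV2 (mkV "believe") (mkPrep "in") ;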
(Side note: for a VP like "sleep in a soft bed", "in a soft bed" is not an obligatory argument of sleep. So then we just make sleep into an intransitive verb, sleep_V, and make in a soft bed into an Adv.)
So, this generalises to all verbs that take some NP argument (V2V, V2S, V2Q, V2A). Take a VP like "lie [to children] [that the moon is made of cheese]": the verb lie is a V2S that introduces its NP object with the preposition to.
In fact, many RGL languages offer a noPrep in their Paradigms module; you can Ctrl+F on the RGL synopsis page to see examples.
The constructors of V2Q
So why are you forced to make your V2Q with mkV2Q (mkV "tell") (mkPrep ""), even when there is no preposition?
More common verb types, like V2, have several overload instances of mkV2. The simplest is just mkV2 : Str -> V2. Since it's such a common thing for transitive verbs to have a direct object (i.e. not introduce their object with a preposition), and there are so many simple V2s, it would be pretty annoying to have to always specify a noPrep for most of them.
V2Q is rarer than V2, so nobody had bothered creating an instance that doesn't take a preposition. The constructor that takes a preposition is more general than the one that doesn't, since you can always choose the preposition to be noPrep. Well, I just pushed a few new additions, see here, so if you get the latest RGL, you can now just write mkV2Q "tell".
This kind of thing is completely customisable: if you want more overload instances of some mkX oper, you can just make them.
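For instance, if you are stuck on an older RGL, you can define the no-preposition shorthand yourself (a sketch; myV2Q is a made-up name for this example):
oper myV2Q : Str -> V2Q = \s -> mkV2Q (mkV s) noPrep ;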

What exactly does context do in K?

The use of context is briefly mentioned in the K tutorial as a way to customize the order of evaluation. But I'm also seeing other context statements that contain rewrite arrows in them, like this one in the untyped SIMPLE language.
context ++(HOLE => lvalue(HOLE))
rule <k> ++loc(L) => I +Int 1 ...</k>
     <store>... L |-> (I => I +Int 1) ...</store> [increment]
Could someone explain how exactly context works in K? In particular, I'm interested in:
Is there a more general usage of context in K than just stating the order of evaluation?
How does the order in which context statements are declared affect the semantics?
Thank you!
More detailed information about context declarations in K can be found in K's documentation here. In particular, contexts with rewrite arrows mean that heating and cooling will wrap the term to be heated or cooled in a particular symbol. In your example, that symbol is lvalue.
To answer your questions specifically:
Context declarations, like strictness attributes, are primarily used to specify the evaluation strategy. While in theory they can be used for other things, in practice this rarely happens. That said, evaluation strategies can be complex, which is part of why K has so many different features relating to evaluation strategy. In the example you mentioned, we use rewrites in a context declaration in order to provide a separate set of rules for evaluating lvalues (i.e., to avoid evaluating all the way to a value, and instead evaluate only to a location).
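For contrast, here is what plain context declarations (no rewrite arrow) look like when used purely to fix evaluation order. This is a sketch, not taken from the SIMPLE semantics; the sort and variable names are made up for illustration:
syntax Exp ::= Exp "+" Exp
context HOLE + _E2        // heat the left argument first
context _E1:Val + HOLE    // then the right one, once the left is a value
Together, these two declarations behave like a seqstrict attribute on the production.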
K's sentences are unordered. Within a single module, you can reorder any of its sentences (except import statements, which must appear first) and there will be no effect on the intended semantics (although backends may exhibit slightly different behavior for concrete execution if your semantics is nondeterministic). This includes context declarations.

How to make an indirect relation between two or more instances in Protégé

First of all, my English is poor, so sorry if my writing is confusing.
I'm trying to create the following relationship between instances: if A propertyX B, and C propertyY A, then C propertyX B. In my case, I want to specify that if ManagerA "manages" an employee, and ManagerB has the same job as ManagerA, then ManagerB also manages that employee.
I tried to use property chains to do that, but the reasoner (FaCT++ 1.6.5) fails when I activate it (the log says a non-simple property is being used as a simple one). I think the problem is that the property "manages" is asymmetric and irreflexive while the property "sameJob" is transitive and symmetric, but I'm not sure if that's the case. I applied the property chain on the "manages" property, stating: sameJob o manages SubPropertyOf: manages.
I'm just starting with Protégé and will appreciate any help a lot.
The reason for the error is that manages is not a simple role: if you have r1 o ... o rn SubPropertyOf r with n > 1, then r is a non-simple role. Non-simple roles cannot be used in IrreflexiveObjectProperty and AsymmetricObjectProperty axioms. See section 11 of the OWL 2 syntax specification. The constraint on roles exists to maintain decidability.
However, you can achieve the desired result by adding a SWRL rule:
manages(?x, ?y) ^ sameJob(?x, ?z) -> manages(?z, ?y)
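In the general form from the question (if A propertyX B and C propertyY A, then C propertyX B), the same pattern reads:
propertyX(?a, ?b) ^ propertyY(?c, ?a) -> propertyX(?c, ?b)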

Reason for the equality definition in Coq and HoTT

In HoTT and also in Coq one cannot prove UIP (uniqueness of identity proofs), i.e.
Π (p : a = a), p = refl a
But one can prove:
Π (p : a = a), (a, p) = (a, refl a)
Why is this defined as it is?
Is it because one wants to have a nice homotopy interpretation?
Or is there some natural, deeper reason for this definition?
Today we know of a good reason for rejecting UIP: it is incompatible with the principle of univalence from homotopy type theory, which roughly says that isomorphic types can be identified. However, as far as I am aware, the reason that Coq's equality does not validate UIP is mostly a historical accident inherited from one of its ancestors: Martin-Löf's intensional type theory, which predates HoTT by many years.
The behavior of equality in ITT was originally motivated by the desire to keep type checking decidable. This is possible in ITT because it requires us to explicitly mark every rewriting step in a proof. (Formally, these rewriting steps correspond to the use of the equality eliminator eq_rect in Coq.) By contrast, Martin-Löf designed another system called extensional type theory where rewriting is implicit: whenever two terms a and b are equal, in the sense that we can prove that a = b, they can be used interchangeably. This relies on an equality reflection rule which says that propositionally equal elements are also definitionally equal. Unfortunately, there is a price to pay for this convenience: type checking becomes undecidable. Roughly speaking, the type-checking algorithm relies crucially on the explicit rewriting steps of ITT to guide its computation, whereas these hints are absent in ETT.
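Schematically, the equality reflection rule of ETT reads (a sketch of the rule, writing ≡ for definitional equality):

Γ ⊢ p : a = b
-------------
Γ ⊢ a ≡ b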
We can prove UIP easily in ETT because of the equality reflection rule; however, it was unknown for a long time whether UIP was provable in ITT. We had to wait until the 90's for the work of Hofmann and Streicher, which showed that UIP cannot be proved in ITT by constructing a model where UIP is not valid. (Check also these slides by Hofmann, which explain the issue from a historic perspective.)
Edit
This doesn't mean that UIP is incompatible with decidable type checking: it was later shown that UIP can be derived in other decidable variants of Martin-Löf type theory (such as Agda), and it can be safely added as an axiom in a system like Coq.
Intuitively, I tend to think of a = a as π₁(A, a), i.e. the class of paths from a to itself modulo homotopy; whereas I think of { x : A | a = x } as the universal covering space of A, i.e. paths from a to some other point of A modulo homotopy. So, while π₁(A, a) is often non-trivial, we do have that the universal covering space of A is contractible.
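For concreteness, the provable statement is an instance of this contractibility, and in Coq the proof is just path induction (a minimal sketch; singleton_contr is a name made up for this example):

(* Every (x; p) in { x : A & a = x } equals (a; eq_refl). *)
Lemma singleton_contr (A : Type) (a x : A) (p : a = x) :
  existT (fun y => a = y) x p = existT (fun y => a = y) a eq_refl.
Proof.
  destruct p. reflexivity.
Qed.

Specializing x to a gives the statement from the question. The analogous attempt for UIP fails: path induction needs one endpoint of p to be free, and in p : a = a both endpoints are fixed to a.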

Give a grammar for the following language

Give a grammar for the following language: {0^n w 1^n | n >= 0, w is in {0,1}*, and |w| = n}
Attempt at solution:
S--> 0S1|R
R--> 0R|1R|empty
I'm not sure how to guarantee that the length of w is the same as the number of 0's or 1's.
Here goes nothing.
Every word should look like this: 0^n w 1^n. So if we have the rule S -> 0S1, we reach a state where every sentential form we can generate from it looks like 0^n S 1^n.
Well... this is not quite what we want, is it?
We go a step further: we want some variable V, involved in the rules V -> 0|1 (edit: this would have been our "goal", but things could go wrong, so we don't use these two rules yet), which gives us the possibility to use S -> 0SV1 instead of S -> 0S1. What's the change?
Now we get sentential forms like these: 0^n S (V1)^n. So, for example, 000SV1V1V1 would be such a sentential form. One minor addition we definitely need now is the rule S -> empty.
Still not quite there yet, though; after all, we want it to look more like 0^n V^n 1^n in the end. So we add a rule which swaps V and 1: 1V -> V1. What are the possibilities now? Given a sentential form like 000SV1V1V1, we can now move all the V's to the left and all the 1's to the right.
And now we get really strict, because we don't want anything to go wrong. We swap every occurrence of S we had so far with a T, so S -> 0SV1 becomes T -> 0TV1, et cetera. Also, we add the rule S -> empty | T and we remove the rule T -> empty. What do we gain from this? Well... nothing at first sight. BUT now we can build a mechanism which ensures that nothing can go wrong when we turn the V's into 1's and 0's.
We simply add the rules TV -> C and CV -> CC. Oh Jesus Christ, all these rules.
Now, given a sentential form 0^n T V^n 1^n, we can slowly transform it into 0^n C^n 1^n. What's the use?
Nothing can go wrong even if we have not pushed all the V's to the left.
So: a sentential form like 0000CCC1V111 can do no harm to our cause. We cannot turn the V into a C unless it is directly to the right of a C (and we can still move it there with 1V -> V1), and we have no possibility of pushing the C's around, since there is no such rule. Also, since we're going to add the rules C -> 0|1, if we prematurely change the C's into 1's and 0's, we can never finish our word while there is still a V floating around.
This might not be necessary at all, I'm not sure about that, BUT it is part of our proof that all the words we can create are in the set of words we want to specify with this grammar.
The rules are:
S -> empty | T
T -> 0TV1
1V -> V1
TV -> C
CV -> CC
C -> 0|1
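To see the machinery in action, here is a derivation of 000111 (n = 2, w = 01) with these rules:
S => T => 0TV1 => 00TV1V1 (T -> 0TV1 twice)
=> 00C1V1 (TV -> C)
=> 00CV11 (1V -> V1)
=> 00CC11 (CV -> CC)
=> 000C11 => 000111 (C -> 0, then C -> 1)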
Edit:
This is a Type-0 grammar, though.
With some changes, it can become an equivalent CSG:
S -> empty | T
T -> 0TV1 | 0C1
1V -> V1
CV -> CC
C -> 0 | 1
The main difference is that at some point we can decide to stop adding 0TV1 to the sentential form and instead finish up with 0C1, getting a form like 0^n C1 (V1)^(n-1). And again, if we prematurely transform all C's into 0's and 1's, we lose the possibility to remove the remaining V's. So this should also generate the set we're looking for.
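For example, a derivation of 001011 (n = 2, w = 10) with this second grammar:
S => T => 0TV1 => 00C1V1 (T -> 0TV1, then T -> 0C1)
=> 00CV11 (1V -> V1)
=> 00CC11 (CV -> CC)
=> 001C11 => 001011 (C -> 1, then C -> 0)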
This is also my first answer to anything on Stack Overflow, and since I kinda do like computer science theory, I hope my explanations are not wrong. If so, tell me.