Formalize English sentences into a knowledge base using ALC formulas - description-logic

I have the following four sentences:
Mary is a person.
Bulldog is a species of dog. French bulldog is a species of bulldog.
The only kind of dog that Mary owns is French bulldog.
A French is a person who owns only bulldog dogs.
I would like to formalize those into a Knowledge Base KB.
I will write my approach below and will also post some questions.
Concepts = Person, Dog, Bulldog
Individuals = MARY, FRENCHBULLDOG, BULLDOG
Roles = ownsDog
ABox = { Person(MARY),
Dog(BULLDOG),
Bulldog(FRENCHBULLDOG),
(Person ⊓ ∀ownsDog.Bulldog{FRENCHBULLDOG})(MARY), (1)
ownsDog(MARY, FRENCHBULLDOG) (2)
}
TBox = { French ≡ Person ⊓ ∀ownsDog.Bulldog, Bulldog ⊑ Dog }
First, I would like to know if the knowledge base is correct, and also whether I should keep axiom (1), axiom (2), or both.


Translate English sentences to ALC description logic

I have the following two English sentences:
Mary is a Person.
Bulldog is a species of dog. French bulldog is a species of bulldog.
The only kind of dog that Mary owns is French bulldog.
I would like to know which of the following is the correct way to translate the third sentence, based on the knowledge given.
1st approach
Bulldog ⊑ Dog
FrenchBulldog ⊑ Bulldog
FrenchBulldog ⊑ Dog
(∀owns.FrenchBulldog ⊓ Person)(MARY)
2nd approach
∀owns.Bulldog ⊑ ∀owns.Dog
∀owns.FrenchBulldog ⊑ ∀owns.Bulldog
(¬(∀owns.Dog⊔Bulldog) ⊓ ∀owns.FrenchBulldog ⊓ Person)(MARY) (*)
3rd approach
Bulldog ⊑ Dog
FrenchBulldog ⊑ Bulldog
(Person ⊓ (∀owns.FrenchBulldog ⊓ (∀owns.¬Dog ⊔ ∀owns.¬Bulldog)))(MARY) (**)
I know that the first approach is correct, but I would like to know whether the third English sentence can be rewritten as in approaches 2 (*) and 3 (**).
Thanks in advance for any advice.
Your approach 1 is correct and approaches 2 and 3 are incorrect.
I assume with
(¬(∀owns.(Dog⊔Bulldog)) ⊓ ∀owns.FrenchBulldog ⊓ Person)(MARY)
that by adding ¬(∀owns.(Dog⊔Bulldog)) you are trying to ensure that Mary owns only French bulldogs, but it achieves the opposite:
¬(∀owns.(Dog⊔Bulldog)) ≡ ∃owns.¬(Dog⊔Bulldog) ≡ ∃owns.(¬Dog ⊓ ¬Bulldog)
Thus, in essence, you are saying that Mary owns only French bulldogs (∀owns.FrenchBulldog) AND that she owns at least one thing that is neither a dog nor a bulldog (∃owns.(¬Dog ⊓ ¬Bulldog)).
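You can check this equivalence concretely by evaluating both concept expressions over a small finite interpretation. This is only an illustrative sketch; the domain and the individuals FIDO and REX are made up for the example:

```python
# Evaluate ALC concept expressions over a tiny hand-built interpretation.
# All names here (FIDO, REX, the sets Dog/Bulldog) are illustrative.
domain = {"MARY", "FIDO", "REX"}
owns = {("MARY", "FIDO"), ("MARY", "REX")}   # extension of the role owns
Dog = {"FIDO", "REX"}
Bulldog = {"FIDO"}

def forall(role, concept):
    """∀role.concept: individuals all of whose role-successors are in concept."""
    return {x for x in domain
            if all(y in concept for (a, y) in role if a == x)}

def exists(role, concept):
    """∃role.concept: individuals with at least one role-successor in concept."""
    return {x for x in domain
            if any(y in concept for (a, y) in role if a == x)}

# ¬∀owns.(Dog ⊔ Bulldog)  ≡  ∃owns.¬(Dog ⊔ Bulldog)
lhs = domain - forall(owns, Dog | Bulldog)
rhs = exists(owns, domain - (Dog | Bulldog))
print(lhs == rhs)  # True (and this holds on every interpretation)
```

Note that `forall` is vacuously satisfied by individuals with no `owns`-successors at all, which is exactly why ∀owns.FrenchBulldog alone does not imply that Mary owns anything.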

ALC: define an ALC Knowledge Base

Hello, I am new to description logics and ALC, and I find defining a KB confusing.
More specifically I am trying to create an ALC KB for the first 3 sentences below and
an ALC formula φ that formalizes the last one.
Sentences:
• Anna is a person.
• The only kind of coffee that Anna drinks is latte.
• A French is a person who drinks only latte coffee.
• Anna is French.
My KB so far:
TBox T:
French ≡ Person ⊓ ∀drinks.Latte
ABox A:
Person(ANNA), drinks(ANNA, LATTE)
φ: French(ANNA)
My questions are:
Is it wrong that I considered Latte a concept, or should I have written ∀drinks.Coffee instead, since Coffee could also be considered a concept?
Is the assertion drinks(ANNA, LATTE) redundant, since ∀drinks.Latte already exists in the TBox?
Any suggestions would be appreciated. Cheers!
I think you can model Person, French, Coffee and Latte as concepts with the following axioms:
French ⊑ Person
Latte ⊑ Coffee
The axiom French ≡ Person ⊓ ∀drinks.Latte may be problematic. The reason is that the reasoner will infer, whenever an individual x is a Person and x drinks only lattes, that x is French. But it is entirely possible that there are people who drink only lattes and are not French. For that reason it is better to express it as follows:
French ⊑ Person ⊓ ∀drinks.Latte
If you now have ANNA as an individual and you assert French(ANNA), this is sufficient. I.e., the reasoner will "know" that ANNA drinks only lattes. However, if you do this in Protege (for example), the reasoner will not display the inference that ANNA only drinks lattes. The reason for this is that ANNA is in essence an instance of the complex concept expression Person ⊓ ∀drinks.Latte, because we said she is French. Reasoners report inferences in terms of named concepts only, because in general there can be an infinite number of inferences in terms of complex concept expressions.
To see that the reasoner "knows" this, create another subclass of Coffee, say Espresso, that is disjoint with Latte. Create an instance of Espresso, say ESPRESSO, and assert drinks(ANNA, ESPRESSO). Running the reasoner will now report an inconsistency.
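The inconsistency can be mimicked with plain set semantics. This sketch uses made-up filler names (`latte1`, `espresso1`) and simply checks whether ANNA's drink fillers can all be lattes:

```python
# Illustrative consistency check for French(ANNA), i.e. ∀drinks.Latte,
# combined with a class Espresso that is disjoint with Latte.
# The individuals latte1 and espresso1 are invented for this sketch.
Latte = {"latte1"}
Espresso = {"espresso1"}          # disjoint with Latte by construction
drinks = {("ANNA", "latte1"), ("ANNA", "espresso1")}

anna_drinks = {y for (x, y) in drinks if x == "ANNA"}
# French(ANNA) requires every filler of drinks for ANNA to be a Latte:
consistent = anna_drinks <= Latte
print(consistent)  # False: espresso1 is not a Latte, so the KB is inconsistent
```

The subset test plays the role of the reasoner's model check: no interpretation can make espresso1 both a Latte (as ∀drinks.Latte demands) and an Espresso (disjoint with Latte).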
As for your question regarding modeling Latte as a concept or an individual: usually it is better to model it as a class. I explain this for OWL in this SO question; it holds true for ALC as well.
If you want to understand more about when to use equivalence versus subsumption, I have written about this on my blog here.

Problems in using Existential restrictions in Protege

I want to find out if an individual belonging to Class A has at least one relation with ALL the individuals of Class B.
I have a problem finding a suitable expression that gives me the DL query results I desire. For the below example:
Class: Course {CourseA, CourseB, CourseC, CourseD}
Class: Program {UG_CE, G_CE}
Class: Student {John}
ObjectProperty: is-PartOf (Course,Program)
ObjectProperty: hasEnrolledIn (Student, Course)
for Individuals: CourseA and CourseB, I asserted the property:
is-PartOf UG_CE
For Individual John, the following 3 properties were asserted:
hasEnrolledIn CourseA
hasEnrolledIn CourseB
hasEnrolledIn CourseC
I also added to the individual John the type
hasEnrolledIn only ({CourseA, CourseB, CourseC})
to address open-world assumption (OWA) problems.
I want to know if John has enrolled in all the courses that are required for UG_CE, note that John has enrolled in all courses and an additional course.
After invoking the reasoner, the following query will not give me the desired result:
Student that hasEnrolledIn only (is-PartOf value UG_CE)
since "only" is limited to defining the exact number of relationships, it does not serve the intended purpose. Also, I can't use Max or Min since the number of courses are inferred and not known in advance.
Can another approach address my problem?
While it's good to "close" the world with regard to what classes John is taking, it's just as important to close it with regard to what classes are required for UG_CE. I think you need an approach like this:
M requires A.
M requires B.
M : requires only {A, B}.
J enrolledIn A.
J enrolledIn B.
J enrolledIn C.
J : enrolledIn only {A, B, C}.
For an individual student J, you can find out whether they are enrolled in all the classes required for M by asking whether the set of classes required by M is a subset of the set of classes enrolled in by the student:
(inverse(requires) value M) SubClassOf (inverse(enrolledIn) value J)
or, in DL notation, with enumerated classes (lots of possible ways to express this):
∃requires⁻¹.{M} ⊑ ∃enrolledIn⁻¹.{J}
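The subsumption above is just a subset test between two filler sets, which can be sketched directly in Python using the example data from this answer (M requires A and B; J is enrolled in A, B, and C):

```python
# Sketch of the subset test behind the DL query:
# "is every course required by M also a course J is enrolled in?"
requires = {("M", "A"), ("M", "B")}
enrolledIn = {("J", "A"), ("J", "B"), ("J", "C")}

required_by_M = {c for (p, c) in requires if p == "M"}     # inverse(requires) value M
enrolled_by_J = {c for (s, c) in enrolledIn if s == "J"}   # inverse(enrolledIn) value J
print(required_by_M <= enrolled_by_J)  # True: J takes every course M requires
```

This is exactly why both sides must be "closed": without the `only` assertions, the open-world assumption leaves both sets potentially larger than what is asserted, and the subset relation cannot be inferred.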
Now, if OWL had property negation, you could get the set of students who are only not enrolled in classes not required by an expression like this:
not(enrolledIn) only not(inverse(requires) value M)
That asks for things such that the only courses they're not enrolled in are courses not required by M. However, OWL doesn't have property negation expressions, so I'm not sure where that leaves us. The simplest thing to do would be to add a "not enrolled in" property, though that doesn't seem as elegant.

DL QUERY : Pizza Ontology : Is there a way to get the toppings ON the pizza? [duplicate]

I'm using Protege v4.3 for making ontologies.
I have a question about OWL ontology and DL query.
For instance, in the Pizza ontology,
http://owl.cs.manchester.ac.uk/co-ode-files/ontologies/pizza.owl
I can execute the DL query
hasTopping some CheeseTopping
The result is
American, AmericanHot, Cajun,.. etc. That's OK.
Now, I tried the DL query
isToppingOf some American
but the result is nothing.
Because the property isToppingOf is the inverse of hasTopping,
I expected the query to return FourCheesesTopping, CheeseyVegetableTopping, etc. (by inference). But it didn't.
Is there any way to get automatic reasoning like that?
The class expression
hasTopping some CheeseTopping
is the set of individuals each of which is related to some CheeseTopping by the hasTopping property. In the Pizza ontology, where there are no individuals, you can still get class subclass results for this query because the definition of certain types of Pizzas (e.g., American) are such that any Pizza that is an American must have such a topping.
Now, the similarly-structured query
isToppingOf some American
is the set of individuals each of which is related to some American pizza by the isToppingOf property. However, the Pizza ontology defines no particular individuals, so there aren't any individuals as candidates. But what about classes that might be subclasses of this expression? For instance, you mentioned the FourCheeseTopping. Now, some particular instance of FourCheeseTopping, e.g., fourCheeseTopping23 could be a topping of some American pizza, e.g.:
fourCheeseTopping23 isToppingOf americanPizza72
However, a given instance of FourCheeseTopping might not have been placed on any particular pizza yet. When we choose an arbitrary individual of type FourCheeseTopping, we can't infer that it is a topping of some American pizza, so we cannot infer that the class FourCheeseTopping is a subclass of
isToppingOf some American
because it's not the case that every instance of FourCheeseTopping must be the topping of some American pizza. For a similar case that might make the logical structure a bit clearer, consider the classes Employer and Person, and the object property employs and its inverse employedBy. We might say that every Employer must have some Person as an Employee (since otherwise they wouldn't be an employer):
Employer ⊑ ∃employs.Person
However, since a person can be unemployed, it is not true that
Person ⊑ ∃employedBy.Employer
even though employs and employedBy are inverses.
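A small counter-model makes this asymmetry concrete. The individuals below (alice, bob, acme) are invented for the sketch; the point is that declaring the roles as inverses does not force every Person to be employed:

```python
# A tiny counter-model with made-up individuals: role inverses alone do
# not make Person ⊑ ∃employedBy.Employer hold.
Person = {"alice", "bob"}
Employer = {"acme"}
employs = {("acme", "alice")}
employedBy = {(y, x) for (x, y) in employs}   # the inverse role

# Employer ⊑ ∃employs.Person holds in this model:
assert all(any(s == e and o in Person for (s, o) in employs)
           for e in Employer)

# ...but Person ⊑ ∃employedBy.Employer fails: bob has no employer.
unemployed = {p for p in Person
              if not any(s == p and o in Employer for (s, o) in employedBy)}
print(unemployed)  # {'bob'}
```

Since this single model satisfies the first axiom while violating the second, the second cannot be a logical consequence of the first plus the inverse declaration.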
What you can do, though, if you want to know whether toppings of a particular type could be placed on a pizza of a particular type, is to ask whether
PizzaType ⊓ ∃hasTopping.ToppingType
is equivalent to, or a subclass of, owl:Nothing. For instance, since an American pizza has only toppings of type TomatoTopping, MozzarellaTopping, and PeperoniTopping [sic], the class
American ⊓ ∃hasTopping.MixedSeafoodTopping
is equivalent to owl:Nothing.
On the other hand, since an American pizza must have a MozzarellaTopping, the class
American ⊓ ∃hasTopping.MozzarellaTopping
is equivalent to American.
When you ask what are the subclasses of:
isToppingOf some American
you are asking what classes contain toppings that are necessarily used on top of American pizzas. But in the pizza ontology, no such class exists. Consider cheese toppings: Are all cheese toppings on top of some American pizzas? No, some cheese toppings are on top of Italian pizzas. The same holds for all topping classes.

OWL: union of object properties

Suppose I have the following instance data and property axiom:
Mary hasChild John
Ben hasChild Tom
Mary hasHusband Ben
hasHusbandChild: hasHusband • hasChild
How can I create the property hasChilds such that:
hasChilds: hasChild ⊔ hasHusbandChild
is true?
OWL doesn't support union properties, where you can say things like
p ≡ q ⊔ r (1)
but you can get the effect of
q ⊔ r ⊑ p (2)
by writing two axioms:
q ⊑ p (3)
r ⊑ p (4)
Now, (2) is not the same as (1), because with (1) you know that if p(x,y), then either q(x,y) or r(x,y), whereas with (2), p(x,y) can be true without either q(x,y) or r(x,y) being true.
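Treating each property as a set of pairs shows the gap between the two readings; the pairs below are arbitrary placeholders:

```python
# Made-up relation extensions: q ⊑ p and r ⊑ p force q ∪ r ⊆ p,
# but p may still contain pairs that are in neither q nor r.
q = {("a", "b")}
r = {("c", "d")}
p = q | r | {("e", "f")}       # this p satisfies both subproperty axioms

assert q <= p and r <= p       # q ⊑ p and r ⊑ p hold
print(("e", "f") in (q | r))   # False: p(e,f) holds without q(e,f) or r(e,f)
```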
Similarly, you can't define property chains in OWL like
q • r ≡ p (5)
but you can use property chains on the left-hand side of subproperty axioms:
q • r ⊑ p (6)
The difference between the two, of course, is that with (6) you can have p(x,y) without x and y being connected by a q • r chain.
It's not quite clear what you're asking, but I think what you're trying to ask is whether there's a way to say that the child of x's spouse is also a child of x. You can do that in OWL2 using property chains, specifically that
hasSpouse • hasChild ⊑ hasChild
This is equivalent to the first-order axiom:
∀ x,y,z : (hasSpouse(x,y) ∧ hasChild(y,z)) → hasChild(x,z)
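The effect of this axiom can be sketched as a forward-chaining rule over pairs. The data mirrors the example in the question (with hasHusband read as hasSpouse), so this is only an illustration of the inference, not how an OWL reasoner is implemented:

```python
# Sketch of hasSpouse • hasChild ⊑ hasChild as a forward-chaining rule,
# using the question's example data (hasHusband read as hasSpouse).
hasSpouse = {("Mary", "Ben")}
hasChild = {("Mary", "John"), ("Ben", "Tom")}

# Apply the chain axiom until a fixpoint: if hasSpouse(x,y) and
# hasChild(y,z), then infer hasChild(x,z).
changed = True
while changed:
    inferred = {(x, z) for (x, y) in hasSpouse
                       for (y2, z) in hasChild if y == y2}
    new = inferred - hasChild
    changed = bool(new)
    hasChild |= new

print(("Mary", "Tom") in hasChild)  # True: inferred via the chain
```

After the fixpoint, hasChild contains the asserted pairs plus (Mary, Tom), which is exactly what the property chain axiom entails.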
A number of other questions on Stack Overflow are relevant here and will provide more guidance about how to add this kind of axiom to your OWL ontology:
OWL2 modelling a subclass with one different axiom
Adding statements of knowledge to an OWL Ontology in Protege
owl:ObjectProperty and reasoning
Using Property Chains to get inferred Knowledge in an OWL Ontology(Protege)
As an alternative, you could also encode the first-order axiom as a SWRL rule.