Are schema:domainIncludes and rdfs:domain (as well as schema:rangeIncludes and rdfs:range) the same?

Schema.org both defines and uses the predicates named domainIncludes and rangeIncludes to relate types to properties (i.e. <schema:name> <schema:domainIncludes> <schema:Person> and <schema:name> <schema:rangeIncludes> <schema:Text>).
However, RDF Schema 1.1's specification already defines the predicates domain and range for this purpose (which would give <schema:name> <rdfs:domain> <schema:Person> and <schema:name> <rdfs:range> <schema:Text>).
My question boils down to: are schema.org's domainIncludes and rangeIncludes predicates equivalent to the RDFS domain and range predicates?
And if so:
Why does schema.org define them in the first place and not just use the predicates provided by the RDF standard? It already makes use of other RDFS defined predicates such as rdfs:label and rdfs:comment. Was this a stylistic choice? (Did they not like the names "domain" and "range"?)
Why is this relationship between the predicates not defined using owl:equivalentProperty or an equivalent? Schema.org should be explicit when creating predicates that are already defined by accepted standards such as RDFS 1.1, especially given its mission is structuring and standardising the web.
Otherwise, I remain a big fan of schema.org :)

Why does schema.org define them in the first place and not just use the predicates provided by the RDF standard?
Schema.org doesn't want you to do inferencing using certain properties. If I knew that
<schema:name> <rdfs:domain> <schema:Person>
then whenever I saw a <schema:name> defined for an object, I could infer that the object was of type <schema:Person>. Schema.org uses <schema:name> for lots of things, so it uses <schema:domainIncludes> to indicate how you could or should use the property without locking it down.
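To make the inference concrete, here is a minimal sketch of the RDFS domain entailment rule (rule rdfs2 in the RDF Semantics document), using plain Python tuples as triples; all the `ex:` names are made up for illustration:

```python
# Sketch of the RDFS "domain" entailment rule (rdfs2):
#   (P rdfs:domain C) and (S P O)  =>  (S rdf:type C)
# Triples are plain (subject, predicate, object) string tuples.

RDFS_DOMAIN = "rdfs:domain"
RDF_TYPE = "rdf:type"

def infer_domain_types(triples):
    """Return the rdf:type triples entailed by rdfs:domain statements."""
    domains = {}  # predicate -> set of stated domain classes
    for s, p, o in triples:
        if p == RDFS_DOMAIN:
            domains.setdefault(s, set()).add(o)
    inferred = set()
    for s, p, o in triples:
        for cls in domains.get(p, ()):
            inferred.add((s, RDF_TYPE, cls))
    return inferred

data = {
    ("schema:name", RDFS_DOMAIN, "schema:Person"),
    ("ex:rover", "schema:name", '"Rover"'),  # ex:rover is actually a dog
}

# With rdfs:domain semantics, ex:rover is *inferred* to be a Person --
# exactly the unwanted conclusion that schema:domainIncludes avoids.
print(infer_domain_types(data))
# {('ex:rover', 'rdf:type', 'schema:Person')}
```

Because schema:domainIncludes carries no such entailment rule, a consumer seeing the same data draws no type conclusion at all; the statement is only documentation of expected usage.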
Why is this relationship between the predicates not defined using owl:equivalentProperty or an equivalent?
This is a policy issue for schema.org! My guess is that, like many general-purpose ontologies (e.g. the Semantic Sensor Network vocabulary), they prefer weak semantics, which allow flexibility of application, over the kind of strictness you're talking about that you need for inference.

If you look at RDF/OWL as a distributed DB and an ontology as the DB schema, then this explanation might make sense to you:
If you look at the definition of rdfs:domain, you find:
Where a property P has more than one rdfs:domain property, then the resources denoted by subjects of triples with predicate P are instances of all the classes stated by the rdfs:domain properties.
So if you have more than one rdfs:domain, all of them must hold! -> AND
If you have multiple schema:domainIncludes statements, at least one must hold. -> OR
I find that in (DB) practice, one almost always wants the latter approach (OR).
All this is the same for rdfs:range and schema:rangeIncludes.
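The AND/OR contrast above can be sketched in a few lines of plain Python; the class names are illustrative:

```python
# Two semantics for a property with several stated domains.
# rdfs:domain is conjunctive: the subject is entailed to be an instance
# of ALL listed classes. schema:domainIncludes is disjunctive: the
# subject is merely expected to be an instance of AT LEAST ONE of them.

def rdfs_domain_entailed_types(stated_domains):
    """Every stated rdfs:domain class is entailed for the subject (AND)."""
    return set(stated_domains)

def domain_includes_ok(subject_types, stated_domains):
    """A subject conforms if it has at least one of the listed types (OR)."""
    return bool(set(subject_types) & set(stated_domains))

domains = ["schema:Person", "schema:Organization"]

# Under rdfs:domain, anything using the property is BOTH a Person and
# an Organization -- usually not what the vocabulary author intends.
print(rdfs_domain_entailed_types(domains))

# Under schema:domainIncludes, having just one of the two types is fine.
print(domain_includes_ok(["schema:Person"], domains))   # True
print(domain_includes_ok(["schema:Place"], domains))    # False
```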

Related

Limit the cardinality of a relationship in OWL

I could not find this question already on SO, but I might have overlooked. My apologies if that is the case.
I'm trying to understand how to limit the cardinality of an object property using OWL.
I'm not sure this fully/correctly describes what I'm trying to accomplish, so here's an example:
Consider the hasWife and hasParent properties described in OWL 2 Web Ontology Language
Primer (Second Edition).
How do I state that any individual can only have 0 or 1 hasWife relationship, and must have exactly 2 hasParent relationships?
Extra points if you can tell me how to do this using WebProtégé.
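A sketch of axioms that would answer this, in the same Manchester syntax used later in this thread (names follow the OWL 2 Primer): making hasWife functional gives the "0 or 1" behaviour, and an exact cardinality restriction covers hasParent.

```
ObjectProperty: hasWife
    Characteristics: Functional

Class: Person
    SubClassOf: hasParent exactly 2 Person
```

In Protégé/WebProtégé these go in the property's "Characteristics" section and the class's "SubClass Of" section, respectively. Note that, because of OWL's open-world assumption, "exactly 2" is not enforced as an input check: an individual with fewer stated parents is simply assumed to have unstated ones, and only more than two provably distinct parents makes the ontology inconsistent.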

the relation "is composed of" with protégé

I want to create an ontology describing a process with Protégé. I have a concept "process" connected with 5 other concepts (process tasks) by the relationship "is_composed_of". How to express this in Protégé. Do I create an ObjectProperties and I specify the domain and range in this case I will have 5 relationship "is_composed_of".
If each instance of Process must be related to exactly 5 instances of ProcessTask by the object property isComposedOf, then you'd use an axiom like:
Process ⊑ =5 isComposedOf.ProcessTask
In the Manchester syntax, which is what you'd use in Protégé, you'd go to the Process class and add the following as a superclass:
isComposedOf exactly 5 ProcessTask
For more about qualified cardinality restrictions, see Meaning of OWL exact cardinality restrictions; Defining cardinality of data property in OWL includes another example.

Does Jena support enforcing OWL constraints during a SPARQL Update query?

I'm trying to figure out if Jena (or any other SPARQL Update server) will enforce ontological constraints. For example, I want to enforce that only entities which have type x are allowed to have property y and the value of the property must have type z. I think this is what OWL can provide, but I'm not sure. Also, specifically, will Jena ensure that if I try to write a SPARQL Update query which does not follow these rules, that update will fail to insert and an error will be returned?
For example, I want to enforce that only entities which have type x are allowed to have property y and the value of the property must have type z. I think this is what OWL can provide, but I'm not sure.
What you're asking for is not what OWL provides. In OWL, you can say that:
propertyY rdfs:domain typeX
propertyY rdfs:range typeZ
but this does not mean (at least, in the way that you're expecting), that only things of type X can have values for propertyY, and that the values must be of type Z. What it means is that whenever you see an assertion that uses propertyY, like
a propertyY b
an OWL reasoner can infer that
a rdf:type typeX
b rdf:type typeZ
The only time those inferences will be any kind of "constraint violation" is if you have some other way of inferring that a cannot be of type X, or that b cannot be of type Z. Then an OWL reasoner would recognize the inconsistency.
I'm trying to figure out if Jena (or any other SPARQL Update server) will enforce ontological constraints. … Also, specifically, will Jena ensure that if I try to write a SPARQL Update query which does not follow these rules, that update will fail to insert and an error will be returned?
I don't know whether Jena supports something like this out of the box, but you could probably either:
Use an OntModel with a reasoner attached, and run your SPARQL updates against that graph. Then you can query the graph and see whether any inconsistency is found; how this would be done depends on how the reasoner signals inconsistencies. This might not be all that hard, but remember that Jena is really RDF-based: for full OWL reasoning, you'll need another reasoner that integrates with Jena (e.g., Pellet, but there are others too).
Alternatively, you might use a store that has reasoning built in, and probably has this kind of functionality already. I think that Stardog has some of these features.
What you need to understand about OWL is property restriction:
A property restriction is a special kind of class description. It describes an anonymous class, namely a class of all individuals that satisfy the restriction. OWL distinguishes two kinds of property restrictions: value constraints and cardinality constraints.
A value constraint puts constraints on the range of the property when applied to this particular class description.
A cardinality constraint puts constraints on the number of values a property can take, in the context of this particular class description.
Based on the description of your problem, you need to use a value constraint. These value constraints exist: some (someValuesFrom), only (allValuesFrom), and value (hasValue); there are also cardinality keywords such as min, max, and exactly.
For example:
Class: Woman subClassOf: hasGender only Female
Class: Mother subClassOf: hasChild some Child
Class: Employee subClassOf: hasEmployeeID exactly 1 ID
So, depending on the restrictions on the individuals you have defined, these individuals can be classified by the reasoner under the right class, which will be their type. The systems generally won't prevent you from entering false information; instead, a concept will be declared unsatisfiable, or the ontology will become inconsistent. If you enter an individual that is incompatible with the constraints in the ontology, the ontology becomes inconsistent (everything goes wrong), and you can then retract the last fact, I suppose. I am not sure about Jena, but the OWL API lets you temporarily add an axiom to the ontology manager and then check the consistency of the ontology. If this check fails, you can remove the last unsaved change from the ontology manager (via the change listener); if everything is correct, you save the changes in the ontology manager.
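The "stage the update, check, then commit or roll back" workflow described above can be sketched generically; the following is plain Python with a toy consistency rule standing in for a real OWL reasoner (it is not Jena or OWL API code):

```python
# Toy sketch of validate-then-commit updates against a triple store.
# The consistency check is a stand-in for a reasoner: it enforces a
# single hand-written rule instead of full OWL semantics.

class TinyStore:
    def __init__(self):
        self.triples = set()

    def consistent(self, triples):
        # Stand-in rule: no resource may have two different rdf:type values.
        types = {}
        for s, p, o in triples:
            if p == "rdf:type":
                if types.setdefault(s, o) != o:
                    return False
        return True

    def update(self, new_triples):
        staged = self.triples | set(new_triples)  # stage, don't commit yet
        if not self.consistent(staged):
            raise ValueError("update rejected: store would become inconsistent")
        self.triples = staged  # commit only after the check passes

store = TinyStore()
store.update([("ex:a", "rdf:type", "ex:TypeX")])
try:
    store.update([("ex:a", "rdf:type", "ex:TypeZ")])
except ValueError as e:
    print(e)
print(len(store.triples))  # 1 -- the failed update was not committed
```

With a real reasoner, the `consistent` step would be replaced by an actual consistency check over the staged graph, but the commit/rollback shape stays the same.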

Subcategories from skos:broader, include parent category

Why in the following query, the Category:American_architecture_by_state is listed in the results?
select distinct ?s where { ?s skos:broader category:Architecture_in_Alabama}
Isn't category:American_architecture_by_state, supposed to be category:Architecture_in_Alabama's parent category not subcategory?
This is really messing up my results when I use skos:broader* as I expect it to start from Parent node and traverse to child nodes.
This is really messing up my results when I use skos:broader as I
expect it to start from Parent node and traverse to child nodes.
I'm not sure why you expect that. The definition of skos:broader is given in §8 Semantic Relations of the SKOS standard:
8. Semantic Relations
The properties skos:broader and skos:narrower are used to assert a
direct hierarchical link between two SKOS concepts. A triple A
skos:broader B asserts that B, the object of the triple, is a
broader concept than A, the subject of the triple. Similarly, a
triple C skos:narrower D asserts that D, the object of the
triple, is a narrower concept than C, the subject of the triple.
That means that the query
select distinct ?s where {
?s skos:broader category:Architecture_in_Alabama
}
should select categories that are narrower than category:Architecture_in_Alabama. Now, I'd agree that it seems like American architecture by state should be a supercategory of Architecture in Alabama, but it's important to note that categories aren't the same kind of thing as classes. If A is a subclass of B, then everything that is an A is also a B. That's not how categories work, though.
8.6.6. skos:broader and Transitivity
Note that skos:broader is not a transitive property. Similarly,
skos:narrower is not a transitive property.
8.6.8. Cycles in the Hierarchical Relation (skos:broaderTransitive and Reflexivity)
In the graph below, a cycle has been stated in the hierarchical
relation. Note that this graph is consistent with the SKOS data model,
i.e., there is no condition requiring that skos:broaderTransitive be
irreflexive.
Example 37 (consistent)
<A> skos:broader <B> .
<B> skos:broader <A> .
What might actually be the case is that the American architecture by state category is broader than most of the "Architecture in XXX" categories, and vice versa. E.g., look at the DBpedia page for American architecture by state: note that it has (just as one example) Architecture_in_New_York as a value of broader, but it is also a value of broader for (i.e., is skos:broader of) Architecture_in_New_York. I agree that it's kind of weird, possibly undesirable, but it's not disallowed:
[For] many applications where knowledge organization systems are used, a
cycle in the hierarchical relation represents a potential problem. For
these applications, computing the transitive closure of
skos:broaderTransitive then looking for statements of the form X
skos:broaderTransitive X is a convenient strategy for finding cycles
in the hierarchical relation. How an application should handle such
statements is not defined in this specification and may vary between
applications.
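The cycle-finding strategy the specification suggests can be sketched in plain Python: compute the transitive closure of the broader relation, then look for concepts reachable from themselves. The `cat:` names mirror the categories discussed above and the edges are illustrative:

```python
# Sketch of the quoted strategy: compute the transitive closure of
# skos:broader, then look for statements of the form X broaderTransitive X.
# Edges are (narrower, broader) pairs.

def broader_closure(edges):
    """Naive transitive closure of skos:broader, as (from, to) pairs."""
    closure = set(edges)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

edges = {
    ("cat:Architecture_in_Alabama", "cat:American_architecture_by_state"),
    ("cat:American_architecture_by_state", "cat:Architecture_in_Alabama"),
}

# Concepts of the form X broaderTransitive X indicate a cycle.
cycles = {a for a, b in broader_closure(edges) if a == b}
print(sorted(cycles))
```

Against a SPARQL endpoint the same check is a one-liner with a property path, e.g. `?x skos:broader+ ?x`.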
Category:American_architecture_by_state shows up because it is related to category:Architecture_in_Alabama both by skos:broader and by its inverse, "is skos:broader of". The query is working perfectly well; if you don't want Category:American_architecture_by_state in your result set, you need to rethink your query.

Language features to implement relational algebra

I've been trying to encode a relational algebra in Scala (which to my knowledge has one of the most advanced type systems) and just don't seem to find a way to get where I want.
As I'm not that experienced with the academic field of programming language design I don't really know what feature to look for.
So what language features would be needed, and what language has those features, to implement a statically verified relational algebra?
Some of the requirements:
A Tuple is a function mapping names from a statically defined set of valid names for the tuple in question to values of the type specified by the name. Let's call this name-type set the domain.
A Relation is a Set of Tuples with the same domain, such that the range of any tuple is unique in the Set.
So far the model can easily be modeled in Scala simply by
trait Tuple
trait Relation[T <: Tuple] extends Set[T]
The vals, vars and defs in Tuple form the name-type set defined above. But there shouldn't be two defs in Tuple with the same name. Also, vars and impure defs should probably be restricted too.
Now for the tricky part:
A join of two relations is a relation where the domain of the tuples is the union of the domains of the operands' tuples, such that only tuples having the same ranges for the intersection of their domains are kept.
def join(r1:Relation[T1],r2:Relation[T2]):Relation[T1 with T2]
should do the trick.
A projection of a Relation is a Relation where the domain of the tuples is a subset of the domain of the operand's tuples.
def project[T2](r:Relation[T],?1):Relation[T2>:T]
This is where I'm not sure if it's even possible to find a solution. What do you think? What language features are needed to define project?
Implied above, of course, is that the API has to be usable. Layers and layers of boilerplate are not acceptable.
What you're asking for is the ability to structurally define a type as the difference of two other types (the original relation and the projection definition). I honestly can't think of any language which would allow you to do that. Types can be structurally cumulative (A with B), since A with B is a structural sub-type of both A and B. However, if you think about it, a type operation A less B would actually be a supertype of A, rather than a sub-type. You're asking for an arbitrary, contravariant typing relation on naturally covariant types. It hasn't even been proven that that sort of thing is sound with nominal existential types, much less structural declaration-point types.
I've worked on this sort of modeling before, and the route I took was to constrain projections to one of three domains: P == T; P == {F} where F in T; P == {$_1} where $_1 is anonymous. The first is where the projection is equivalent to the input type, meaning it is a no-op (SELECT *). The second says that the projection is a single field contained within the input type. The third is the tricky one: it says that you are allowing the declaration of some anonymous type $_1 which has no static relationship to the input type. Presumably it will consist of fields which delegate to the input type, but we can't enforce that. This is roughly the strategy that LINQ takes.
Sorry I couldn't be more helpful. I wish it were possible to do what you're asking, it would open up a lot of very neat possibilities.
I think I have settled on just using the normal facilities for mapping collections for the project part. The client just specifies a function [T <: Tuple](t: T) => P
With some Java trickery to get at the class of P, I should be able to use reflection to implement the query logic.
For the join I'll probably use DynamicProxy to implement the mapping function.
As a bonus I might be able to get the API to be usable with Scala's special for-syntax.