According to the Protégé 4.x documentation, property chains exist for object properties; however, in my case I need to include a data property, as follows:
if builds(B, A) o has_name(A, "Holly wood") -> has_name(B, "Holly wood")
To explain a bit, imagine we have a street named "Holly wood". This street is built of several segments (a segment is the part of a street between two junctions), whose names should be the same as the street name, "Holly wood".
Note that the street concept is different from the segment concept, so they are not subclasses of one another, but they are related as above (builds).
One solution is to make has_name an object property; then each name would be an object (instance):
if is_name_of(name, A) o is_built_of(A, B) -> is_name_of(name, B)
This does not seem quite right to me, as I think it is better to use a datatype.
The other solution is to use SWRL, as below:
Thing(?p), Thing(?q), builds(?q, ?p), has_name(?p, ?name) -> has_name(?q, ?name)
This does not work!
Can you help me figure out why, or find a proper solution?
I think that the SWRL rule is the proper solution here. As you've noted, you can't use a data property in a subproperty chain axiom, but you would need to in order to get the behavior you're looking for. The structural specifications for object subproperty axioms and for data subproperty axioms are:
9.2.1 Object Subproperties
SubObjectPropertyOf := 'SubObjectPropertyOf' '(' axiomAnnotations subObjectPropertyExpression superObjectPropertyExpression ')'
subObjectPropertyExpression := ObjectPropertyExpression | propertyExpressionChain
propertyExpressionChain := 'ObjectPropertyChain' '(' ObjectPropertyExpression ObjectPropertyExpression { ObjectPropertyExpression } ')'
9.3.1 Data Subproperties
SubDataPropertyOf := 'SubDataPropertyOf' '(' axiomAnnotations subDataPropertyExpression superDataPropertyExpression ')'
subDataPropertyExpression := DataPropertyExpression
superDataPropertyExpression := DataPropertyExpression
OWL 2 simply doesn't have a property chain expression that mixes object and datatype properties. Thus, you'd need to use a SWRL rule. You can use a rule like this (there's no need to use Thing(?p) ∧ Thing(?q), since every individual is automatically an owl:Thing):
builds(?q, ?p) ∧ has_name(?p, ?name) → has_name(?q, ?name)
Sorry if this is a simple, newbie question, but answering it will help me resolve a conceptual confusion of mine! I have some guesses, but I want to make sure.
I got the location of a part of the brain via the NeuroFMA ontology and the query below:
PREFIX fma: <http://sig.uw.edu/fma#>
select ?loc {
  fma:Superior_temporal_gyrus fma:location ?loc
}
The result was: fma:live_incus_fm_14056
I thought I might be able to get some more information on this item.
Question 1: Would there be a difference if the result were a literal?
So, I used optional {?loc ?p ?o} and got some results.
However, since this ontology also imports RDF and OWL, I thought the following queries should work too, but that was not the case (hopefully these queries are correct)!
optional {?value rdfs:range ?loc}
optional {?loc rdfs:domain ?value}
optional {?loc rdfs:type ?value}
Question 2: If the above queries are correct, are RDFS and OWL just suggestions? Or do ontologies that import/follow them have to use all of their resources, or at least expand on them?
Thanks!
An import declaration in OWL is, for the most part, just informative. It is typically used to signal that this ontology re-uses some of the concepts defined in the target (for example, it could define additional subclasses of classes defined in the imported ontology).
Whether the import results in any additional data being loaded into your dataset depends on what database/API/reasoner you use to process the ontology. Most tools don't automatically load the targets of import declarations, by default, so the presence or absence of the import-declaration will have no influence on what your queries return.
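For example, an import declaration is just one triple in the ontology header. A minimal Turtle sketch (the importing ontology IRI is hypothetical, and whether the FMA data actually gets loaded still depends on your tooling):

@prefix owl: <http://www.w3.org/2002/07/owl#> .

<http://example.org/my-ontology>
    a owl:Ontology ;
    owl:imports <http://sig.uw.edu/fma> .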
However, since this ontology also imports RDF and OWL, I thought the following queries should work too, but that was not the case (hopefully these queries are correct)!
optional {?value rdfs:range ?loc}
optional {?loc rdfs:domain ?value}
optional {?loc rdfs:type ?value}
It's rdf:type, not rdfs:type. Apart from that, each of these individually looks fine. However, judging from your broader query, ?loc is usually not a property but a property value, and property values don't have domains and ranges. You could instead query for something like this:
optional { fma:location rdfs:domain ?value}
This asks "if the property fma:location has a domain declaration, return that declaration and bind it to the ?value variable".
More generally, whether these queries return any results has little or nothing to do with which import declarations are present in your ontology. If your ontology contains a range declaration for a property, the first pattern will return a result. If it contains a domain declaration, the second one will return a result.
And finally, if your ontology contains an instance of some class, the third pattern (corrected) will return a result. It's as simple as that.
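Putting the corrections together, a combined query might look like the following sketch (assuming your dataset actually contains the relevant type and domain triples):

PREFIX fma: <http://sig.uw.edu/fma#>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

select ?loc ?type ?domain {
  fma:Superior_temporal_gyrus fma:location ?loc .
  optional { ?loc rdf:type ?type }
  optional { fma:location rdfs:domain ?domain }
}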
There is no magic here: the query only returns what is present in your dataset. What is present in your dataset is determined by how you have loaded the data into your database, and (optionally) what form of reasoner you have enabled on top of your database.
Extensible records were one of Elm's most amazing features, but since v0.16, adding and removing fields is no longer supported, and this puts me in an awkward position.
Consider an example. I want to give a name to a random thing t, and extensible records provide me a perfect tool for this:
type alias Named t = { t | name: String }
"Okay," says the compiler. Now I need a constructor, i.e. a function that equips a thing with a specified name:
equip : String -> t -> Named t
equip name thing = { thing | name = name } -- Oops! Type mismatch
Compilation fails because the { thing | name = ... } syntax assumes thing is a record with a name field, but the type system can't guarantee this. In fact, with Named t I've tried to express the opposite: t should be a record type without its own name field, and the function adds this field to the record. Either way, field addition is necessary to implement the equip function.
So it seems impossible to write equip in a polymorphic manner, but that's probably not such a big deal. After all, any time I want to give a name to some concrete thing, I can do it by hand. Much worse, the inverse function extract : Named t -> t (which erases the name of a named thing) requires a field-removal mechanism, and thus is not implementable either:
extract : Named t -> t
extract thing = thing -- Error: No implicit upcast
extract would be an extremely important function, because I have tons of routines that accept old-fashioned unnamed things, and I need a way to use them on named things. Of course, massively refactoring all those functions is not a viable solution.
At last, after this long introduction, let me state my questions:
Does modern Elm provide some substitute for the old, deprecated field addition/removal syntax?
If not, are there built-in functions like equip and extract above? For every custom extensible record type, I would like to have a polymorphic analyzer (a function that extracts its base part) and a polymorphic constructor (a function that combines the base part with the additions and produces the record).
Negative answers for both (1) and (2) would force me to implement Named t in a more traditional way:
type Named t = Named String t
In that case, I can't see the purpose of extensible records. Is there a positive use case, a scenario in which extensible records play a critical role?
Type { t | name : String } means a record that has a name field. It does not extend the t type but, rather, extends the compiler’s knowledge about t itself.
So in fact the type of equip is String -> { t | name : String } -> { t | name : String }.
What's more, as you noticed, Elm no longer supports adding fields to records, so even if the type system allowed what you want, you still could not do it. The { thing | name = name } syntax only supports updating records of type { t | name : String }.
Similarly, there is no support for deleting fields from a record.
If you really need types from which you can add or remove fields, you can use Dict. The other options are writing the transformers manually, or creating and using a code generator (this was the recommended solution for JSON-decoding boilerplate for a while).
And regarding extensible records: Elm does not really support the "extensible" part much any more – the only remaining part is the { t | name : u } -> u projection, so perhaps they should be called just scoped records. The Elm docs themselves acknowledge that the extensibility is not very useful at the moment.
You could just wrap the t type together with a name, but it wouldn't make a big difference compared to the approach with a custom type:
type alias Named t = { val: t, name: String }
equip : String -> t -> Named t
equip name thing = { val = thing, name = name }
extract : Named t -> t
extract thing = thing.val
Is there a positive use case, a scenario in which extensible records play critical role?
Yes, they are useful when your application Model grows too large and you face the question of how to scale out your application. Extensible records let you slice up the model in arbitrary ways, without committing to particular slices long term. If you sliced it up by splitting it into several smaller nested records, you would be committed to that particular arrangement - which might tend to lead to nested TEA and the 'out message' pattern; usually a bad design choice.
Instead, use extensible records to describe slices of the model, and group functions that operate over particular slices into their own modules. If you later need to work across different areas of the model, you can create a new extensible record for that, as in the sketch below.
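For instance, here is a minimal sketch (with made-up fields) in which two modules each demand only their own slice of the same Model:

-- The application-wide model.
type alias Model =
    { name : String, count : Int, errors : List String }

-- A counter module only needs the count field, whatever else the record has.
increment : { m | count : Int } -> { m | count : Int }
increment model =
    { model | count = model.count + 1 }

-- Another module needs a different slice of the same Model.
rename : String -> { m | name : String } -> { m | name : String }
rename newName model =
    { model | name = newName }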
It's described by Richard Feldman in his Scaling Elm Apps talk:
https://www.youtube.com/watch?v=DoA4Txr4GUs&ab_channel=ElmEurope
I agree that extensible records can seem a bit useless in Elm, but it is a very good thing that they are there: they solve this scaling issue in the best way.
I am trying to create an OWL ontology using Protégé. I want to use inverse functional properties as an analogue of primary keys from relational databases. For example, I have a property that has a unique id as its object, thus identifying the entity; no other entity should be allowed to use this value with that property.
As the object value is a string, it has to be a data property. But in Protégé, you cannot assign the Inverse functional characteristic to a data property.
Why can't I declare a data property to be an inverse functional property, and how else should I create the "unique key" logic if not like this?
Thanks in advance,
Frank
The restriction on datatype properties is purely due to computational complexity. Without the restriction, the logic of OWL 2 DL would not be decidable. However, it is possible to express a notion of unique key in OWL 2:
ex:key a owl:DatatypeProperty .
owl:Thing owl:hasKey ( ex:key ) .
However, there is a subtle difference between this and an inverse functional property. Consider the following:
ex:this a [
    a owl:Restriction ;
    owl:onProperty ex:prop ;
    owl:minQualifiedCardinality "2"^^xsd:nonNegativeInteger ;
    owl:onClass [
        a owl:Restriction ;
        owl:onProperty ex:key ;
        owl:hasValue 1
    ]
] .
If ex:key is a key for owl:Thing, then this ontology is consistent. However, if ex:key could be an inverse functional property, then this ontology would be inconsistent. The reason lies in the way keys work in OWL 2: for a key to identify something, the thing has to be named explicitly. There can be several unnamed things having the same key (here, the key value is the number 1), and yet they are not considered equal as long as they are not declared explicitly in the ontology. With an inverse functional property, this is not the case: we would be able to infer that everything having value 1 for the property ex:key is the same thing, and therefore that ex:this cannot have 2 values for the property ex:prop.
To me, the dcterms:identifier property seems like a legitimate inverse functional property. When two things share the same identifier, I think it is safe to conclude that they are the same thing.
Is there any compelling reason not to define it as such (owl:InverseFunctionalProperty) in my ontology?
If you need to stay in OWL 2 DL, then it's not a good idea to declare data properties to be inverse functional - only object properties can be declared as such without violating the constraints and ending up in OWL 2 Full.
dcterms:identifier is defined with a range of rdfs:Literal.
You could use a HasKey axiom to achieve similar results: keys were introduced in OWL 2 for the purpose of identifying one or more properties whose values are identifiers for the referring individuals, and both object and data properties can be used.
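For instance, a sketch in Turtle (ex:Document is a hypothetical class of your own):

@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix ex: <http://example.org/> .

ex:Document a owl:Class ;
    owl:hasKey ( dcterms:identifier ) .

Any two named individuals of ex:Document with the same dcterms:identifier value will then be inferred to be the same individual.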
I want to prove some facts about an imperative object-oriented program. How can I represent a heterogeneous object graph in Coq? My main problem is that the edges are implicit: each node consists of an integer label modelling an object address and a data structure that models the object's state. The implicit edges are formed by fields inside the data structure that model object pointers and contain the address label of another node in the graph. To ensure that the graph is valid, adding a new node must require a proof that all fields in the data structure being added refer to nodes that already exist in the graph. But how can I express 'all pointer fields in a data structure' in Coq?
It depends on how you represent a data structure, and what kinds of features the language you want to model has. Here's one possibility. Let's say that your language has two kinds of values: numbers and object references. We can write this type in Coq as:
Inductive value : Type :=
| VNum (n : nat)
| VRef (ref : nat).
A reference (or pointer) is just a natural number that can be used to uniquely identify objects on the heap. We can use functions to represent both objects and the heap as follows:
Require Import Coq.Strings.String.

Definition object : Type := string -> option value.
Definition heap : Type := nat -> option object.
Paraphrasing in English, an object is a partial function from strings (which we use to model fields in the object) to values, and a heap is a partial function from nats (that is, object references) to objects. We can then express your property as:
Definition object_ok (o : object) (h : heap) : Prop :=
forall (s : string) (ref : nat),
o s = Some (VRef ref) ->
exists obj, h ref = Some obj.
Again, in English: if the field s of the object o is defined, and equal to a reference ref, then there exists some object obj stored at that address on the heap h.
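As a quick sanity check, here is a tiny concrete instance (the field name "next" and the addresses are made up): a heap with a single object at address 0 whose only field points back at address 0.

Open Scope string_scope.

(* An object with a single field "next" holding a reference to address 0. *)
Definition o0 : object :=
  fun s => if string_dec s "next" then Some (VRef 0) else None.

(* A heap containing exactly one object, o0, at address 0. *)
Definition h0 : heap :=
  fun n => match n with
           | 0 => Some o0
           | _ => None
           end.

(* Every reference stored in o0 points at an object that exists in h0. *)
Lemma o0_ok : object_ok o0 h0.
Proof.
  intros s ref H.
  unfold o0 in H. revert H.
  destruct (string_dec s "next"); intros H; simpl in H.
  - (* s = "next": the reference is 0, and h0 0 = Some o0 *)
    inversion H; subst. exists o0. reflexivity.
  - (* any other field is undefined, so the hypothesis is absurd *)
    discriminate.
Qed.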
The one problem with this representation is that Coq functions make it possible for heaps to have infinitely many objects, and for objects to have infinitely many fields. You can circumvent this problem with an alternative representation that only allows functions defined on finitely many inputs, such as lists of pairs or (even better) a proper type of finite maps.
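As a rough sketch of the finite alternative, using plain association lists (lookup functions and well-formedness conditions omitted):

(* Finite by construction: an object is a list of field/value pairs,
   and a heap is a list of address/object pairs. *)
Definition object' : Type := list (string * value).
Definition heap' : Type := list (nat * object').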