I am working with an instance of Ontotext GraphDB and often want to clear a named graph with a large number of triples.
Currently, my technique involves issuing a SPARQL update to the graph server, which matches and deletes every triple in the named graph:
DELETE { GRAPH example:exampleGraph { ?s ?p ?o } } WHERE { GRAPH example:exampleGraph { ?s ?p ?o } }
When there are a lot of triples, this approach often takes quite some time to clear the named graph.
I am wondering whether there is a more efficient way to do this. Even a triplestore-specific solution would be acceptable to me.
I should also note that I am using the RDF4J library to communicate with the graph. I understand that certain solutions may work in the Ontotext web interface, but I am only interested in a solution that I can implement programmatically.
You can use the SPARQL CLEAR command for this:
CLEAR GRAPH example:exampleGraph
Or alternatively, DROP:
DROP GRAPH example:exampleGraph
The difference between the two is that CLEAR allows a triplestore to keep the (now empty) named graph around, while DROP removes the named graph entirely. In the case of GraphDB there is no practical difference, as GraphDB never keeps a reference to an empty named graph.
If you don't want to use SPARQL, you can use the RDF4J API to programmatically call the clear() operation:
IRI graph = graphdb.getValueFactory().createIRI("http://example.org/exampleGraph");
try (RepositoryConnection conn = graphdb.getConnection()) {
    conn.clear(graph);
}
or more succinctly:
IRI graph = graphdb.getValueFactory().createIRI("http://example.org/exampleGraph");
Repositories.consume(graphdb, conn -> conn.clear(graph));
Related
Sorry if this is a simple noob question, but it will help me resolve a conceptual confusion of mine! I have some guesses, but want to make sure.
I got the location of a part of brain via NeuroFMA ontology and the query below:
PREFIX fma: <http://sig.uw.edu/fma#>
select ?loc {
  fma:Superior_temporal_gyrus fma:location ?loc
}
The result was: fma:live_incus_fm_14056
I thought I might be able to get some more information on this item.
Question 1: Would it have made a difference if the result had been a literal?
So, I used optional {?loc ?p ?o} and got some results.
However, I thought that since this ontology also imports RDF and OWL, the following queries should work too, but that was not the case (hopefully these queries are correct)!
optional {?value rdfs:range ?loc}
optional {?loc rdfs:domain ?value}
optional {?loc rdfs:type ?value}
Question 2: If the above queries are correct, are RDFS and OWL just suggestions? Or do ontologies that import/follow them have to use all of their resources, or at least expand on them?
Thanks!
An import declaration in OWL is, for the most part, just informative. It is typically used to signal that this ontology re-uses some of the concepts defined in the target (for example, it could define some additional subclasses of classes defined in the target data).
Whether the import results in any additional data being loaded into your dataset depends on what database/API/reasoner you use to process the ontology. Most tools don't automatically load the targets of import declarations, by default, so the presence or absence of the import-declaration will have no influence on what your queries return.
I thought since this ontology also imported RDF and OWL, the following queries should work too, but it was not the case (hopefully these codes are correct)!
optional {?value rdfs:range ?loc}
optional {?loc rdfs:domain ?value}
optional {?loc rdfs:type ?value}
It's rdf:type, not rdfs:type. Apart from that, each of these individually looks fine. However, judging from your broader query, ?loc is usually not a property but a property value. Property values don't have domains and ranges. You could instead query for something like this:
optional { fma:location rdfs:domain ?value}
This asks "if the property fma:location has a domain declaration, return that declaration and bind it to the ?value variable".
More generally, whether these queries return any results has little or nothing to do with which import declarations are present in your ontology. If your ontology contains a range declaration for a property, the first pattern will return a result. If it contains a domain declaration, the second one will return a result.
And finally, if your ontology contains an instance of some class, the third pattern (corrected) will return a result. It's as simple as that.
There is no magic here: the query only returns what is present in your dataset. What is present in your dataset is determined by how you have loaded the data into your database, and (optionally) what form of reasoner you have enabled on top of your database.
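If you want to check what your dataset actually contains about the property itself, you can simply enumerate its statements; a minimal sketch, using the fma: prefix from the question:

```sparql
PREFIX fma: <http://sig.uw.edu/fma#>

# Every statement whose subject is the property itself: any rdfs:domain
# or rdfs:range declarations that were actually loaded will show up here.
SELECT ?p ?o
WHERE { fma:location ?p ?o }
```

If this returns no domain or range rows, that tells you those declarations simply were not loaded into your dataset, regardless of any import statements.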
I want to define multiple classes (with limited inferencing) as the range of an owl:ObjectProperty. Let me explain in detail by providing an example.
I have two classes, Furniture and Device, which are not disjoint, i.e., another subclass/instance can inherit from both classes; e.g., Lamp can be both a furniture and a device.
Now I would like to define an OWL object property hasComponent whose range can only be either :Furniture or :Device, NOT both.
:hasComponent rdf:type owl:ObjectProperty ;
rdf:type owl:TransitiveProperty ;
rdfs:range :Furniture ,
:Device .
When I create instances using the property:
:furniture1 rdf:type :Furniture .
:device1 rdf:type :Device .
:furniture1 :hasComponent :device1 .
The inferencing engine will infer that :device1 is a :Furniture, which I don't want, because I have already defined that :device1 is a :Device.
One solution is to remove the rdfs:range and explicitly define the instance types, but I did not want to remove the range because it usefully limits the scope of the search space.
You have to create a union class of all the classes involved and subtract their intersection (example: ((Furniture or Device) and not (Furniture and Device))) and set that class as the range. The same approach needs to be used for domains.
You can declare this as a named class, or insert it (with the necessary RDF/XML structure around it) directly into the range axiom. I would think you'll probably need the same class in multiple places, so a named class might be the best solution.
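In Turtle, such a named class might be sketched as follows (the name :FurnitureXorDevice is illustrative, not from the original ontology):

```turtle
# (Furniture or Device) and not (Furniture and Device)
:FurnitureXorDevice a owl:Class ;
    owl:intersectionOf (
        [ a owl:Class ; owl:unionOf ( :Furniture :Device ) ]
        [ a owl:Class ;
          owl:complementOf [ a owl:Class ;
                             owl:intersectionOf ( :Furniture :Device ) ] ]
    ) .

:hasComponent rdfs:range :FurnitureXorDevice .
```

With this range, a reasoner will report an inconsistency, rather than silently inferring an extra type, whenever a component ends up being both a Furniture and a Device.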
Is it possible to use arbitrary-length path matching in the Protégé SPARQL Query tab?
You are using the Snap SPARQL Query Plugin, not the SPARQL Query plugin.
Unlike the SPARQL Query plugin, the Snap SPARQL Query plugin supports querying over inferred knowledge, but does not support property paths.
From Snap-SPARQL: A Java Framework for Working with SPARQL and OWL (section 4):
SPARQL 1.1 contains property path expressions that allow regular-expression-like paths of properties to be matched. However, these are not supported by the Snap-SPARQL framework. While this would be a significant limitation under simple entailment, it is not clear how much of a limitation it actually is under the OWL entailment regime. This is because one of the motivations for property path expressions is that they enable queries to be written whose answers involve some kind of "transitivity", such as { ?x rdfs:subClassOf+ ?y } or { ?x :partOf+ ?y }. In these cases, under the OWL entailment regime, transitivity comes "for free" according to the semantics of the language; for example, if A is a subclass of B and B is a subclass of C, then A is also a subclass of C. For more complex cases that involve choices, e.g. queries such as { ?x rdfs:label | dce:title ?y }, the lack of property path expressions imposes some inconvenience, and such queries will need to be rewritten by the user, if possible.
Let us suppose that i ∈ sub ⊆ sup. Both plugins allow you to "infer" that i ∈ sup:
with the SPARQL Query Plugin, you need to use property paths;
with the Snap SPARQL Query Plugin, you don't need to use property paths, and in fact you can't.
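Concretely, supposing :A rdfs:subClassOf :B and :B rdfs:subClassOf :C are asserted, the two plugins would retrieve the subclasses of :C roughly like this (a sketch with illustrative class names):

```sparql
# SPARQL Query plugin (simple entailment): traverse the chain
# explicitly with a property path
SELECT ?x WHERE { ?x rdfs:subClassOf+ :C }

# Snap SPARQL Query plugin (OWL entailment): a plain pattern is enough,
# since subclass transitivity is part of the entailment regime
SELECT ?x WHERE { ?x rdfs:subClassOf :C }
```

Both should return :A and :B; only the first form is accepted by the SPARQL Query plugin, and only the second by Snap.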
Choose Window > Reset selected tab to default state, if you need the "SPARQL Query" view to be the only view on the "SPARQL Query" tab.
I have A, B and C as classes related by the transitive property isSubClassOf.
So A isSubClassOf B and B isSubClassOf C. So by inference we have A isSubClassOf C.
My question: how can I write a SPARQL query that returns, for each class, only its number of direct subclasses? For example:
A 0
B 1
C 1
Within the standard SPARQL language, you can do this by querying for those subclasses where no other subclass exists "in between", like so:
SELECT ?directSub ?super
WHERE {
  ?directSub rdfs:subClassOf ?super .
  FILTER NOT EXISTS {
    ?otherSub rdfs:subClassOf ?super .
    ?directSub rdfs:subClassOf ?otherSub .
    FILTER (?otherSub != ?directSub)
  }
}
If you want to count the number of subclasses, you will need to adapt the above query using the COUNT and GROUP BY operators.
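One possible adaptation, grouping the query above by superclass, might look like this (a sketch):

```sparql
SELECT ?super (COUNT(?directSub) AS ?directSubCount)
WHERE {
  ?directSub rdfs:subClassOf ?super .
  FILTER NOT EXISTS {
    ?otherSub rdfs:subClassOf ?super .
    ?directSub rdfs:subClassOf ?otherSub .
    FILTER (?otherSub != ?directSub)
  }
}
GROUP BY ?super
```

Note that classes without any subclasses (A in the example) will not appear in the result at all; to report them with a count of 0 you would additionally need to enumerate all classes and wrap the subclass pattern in an OPTIONAL.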
Many SPARQL engines offer some shortcuts for querying direct subclasses, however. For example in Sesame (nowadays RDF4J), when querying programmatically, you can disable inferencing for the duration of the query by calling setIncludeInferred(false) on the Query object. It also offers an additional reasoner that can be configured on top of a datastore and that lets you query using a "virtual" property, sesame:directSubClassOf (as well as sesame:directType and sesame:directSubPropertyOf).
Other SPARQL engines have similar mechanisms.
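With that Sesame reasoner enabled, the direct-subclass question could be asked roughly like this (the sesame: namespace shown is the one used by Sesame/RDF4J):

```sparql
PREFIX sesame: <http://www.openrdf.org/schema/sesame#>

# The "virtual" property matches only direct subclass links,
# so no FILTER NOT EXISTS workaround is needed.
SELECT ?sub ?super
WHERE { ?sub sesame:directSubClassOf ?super }
```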
I am building an ontology in TopBraidComposer which has a class hierarchy and a couple of rules that work great on their own. In my ontology, I'm working on a class level, so all the definitions I create only relate to classes, not individuals. Now I want to infer subclass definitions like this one:
I tried the following SPARQL query which seemed to do the job:
Then I added said query as a SPIN rule to the owl:Thing class like this:
After pressing enter, it is automatically converted to the following form:
It looks reasonable, but when I now start the inferencing process, it no longer terminates, whereas it did before I added the test rule. When I force-stop the reasoning, I can see that the desired triple has been added to the Test class numerous times.
How can I infer an anonymous superclass in SPIN?
Edit:
A workaround is to bind the restrictions to named classes. The logic then seems to work, but it doesn't show up the way anonymous superclasses do, in neither TBC nor Protégé.
After a long search, I found out that the solution is really simple: a check for an existing relationship will prevent the infinite loop:
FILTER NOT EXISTS {
?test rdfs:subClassOf _:b0 .
} .
which will be auto-corrected by TBC to
FILTER NOT EXISTS {
?test rdfs:subClassOf _:0 .
} .
That's it, the rule will work.