SPARQLRule not constructing - sparql

I have a NodeShape with a sh:SPARQLTarget and a sh:SPARQLRule. I ran both the target and the rule as plain queries and both deliver results, but when I execute the shapes with the Apache Jena SHACL processor, no triples are constructed. Did I do something wrong? I'm out of ideas.
Here is my NodeShape:
iep:hasKG331
    a rdf:Property, sh:NodeShape ;
    sh:Target [
        a sh:SPARQLTarget ;
        sh:select """
            PREFIX express: <https://w3id.org/express#>
            PREFIX ifcowl: <http://standards.buildingsmart.org/IFC/DEV/IFC4/ADD1/OWL#>
            PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
            PREFIX iep: <https://www.inf.bi.rub.de/semweb/ns/ifc-enrichment-procedure/iep#>
            SELECT ?this
            WHERE {
                ?this rdf:type ifcowl:IfcWallStandardCase .
                ?relDefinesByProperties ifcowl:relatedObjects_IfcRelDefines ?this .
                ?relDefinesByProperties ifcowl:relatingPropertyDefinition_IfcRelDefinesByProperties ?pset .
            }
        """ ;
    ] ;
    sh:rule [
        a sh:SPARQLRule ;
        sh:construct """
            PREFIX express: <https://w3id.org/express#>
            PREFIX ifcowl: <http://standards.buildingsmart.org/IFC/DEV/IFC4/ADD1/OWL#>
            PREFIX iep: <xxx/ifc-enrichment-procedure/iep#>
            CONSTRUCT {
                $this iep:hasKG iep:hasKG331 .
            }
            WHERE {
                ?relDBP ifcowl:relatedObjects_IfcRelDefines $this .
                ?relDBP ifcowl:relatingPropertyDefinition_IfcRelDefinesByProperties ?propSet .
                ?propSet ifcowl:hasProperties_IfcPropertySet ?psv1 .
                ?propSet ifcowl:hasProperties_IfcPropertySet ?psv2 .
                ?psv1 iep:isExternal true .
                ?psv2 iep:isLoadBearing true .
            }
        """ ;
    ] .
As I mentioned, when I execute the target or the rule as single queries I do get results, and the focus nodes from the target do come up as $this in the rule. The IRIs iep:isExternal and iep:isLoadBearing were inferred in an earlier step. Am I missing something?

Without looking at further details, sh:Target needs to be sh:target, with a lower-case t. SHACL property names are all lower-case; only class names (such as sh:SPARQLTarget) are capitalized.
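With that fix applied, the shape's outline would look as follows (the query strings, abbreviated here as """...""", stay exactly as in the question):

iep:hasKG331
    a rdf:Property, sh:NodeShape ;
    sh:target [                  # lower-case t
        a sh:SPARQLTarget ;
        sh:select """...""" ;    # SELECT query unchanged
    ] ;
    sh:rule [
        a sh:SPARQLRule ;
        sh:construct """...""" ; # CONSTRUCT query unchanged
    ] .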

Related

SHACL sh:rule with sh:condition in SPARQL in Topbraid Composer

The example ontology has two classes (:MyClass and :Value) and two properties (:MyObjProp and :MyDataProp).
:MyClass
    a owl:Class ;
    a sh:NodeShape ;
    rdfs:subClassOf owl:Thing ;
    .
:MyDataProp
    a owl:DatatypeProperty ;
    rdfs:domain :MyClass ;
    rdfs:range xsd:string ;
    .
:MyObjProp
    a owl:ObjectProperty ;
    rdfs:domain :MyClass ;
    rdfs:range :Value ;
    .
:Value
    a owl:Class ;
    rdfs:subClassOf owl:Thing ;
    .
Some instances were added.
:MyClass_1
    a :MyClass ;
    :MyDataProp :Value_1 ;
    :MyObjProp :Value_1 ;
    .
:MyClass_2
    a :MyClass ;
    :MyObjProp :Value_2 ;
    .
:Value_1
    a :Value ;
    .
:Value_2
    a :Value ;
    .
A NodeShape :NodeShapeRule with a sh:rule (:SPARQLRule_1) was created. This rule creates new triples. With sh:condition, the rule should be restricted to a subset of the targets.
:NodeShapeRule
    a sh:NodeShape ;
    sh:rule :SPARQLRule_1 ;
    sh:targetClass :MyClass ;
    .
:SPARQLRule_1
    a sh:SPARQLRule ;
    sh:condition :NodeShapeConditionSPARQL ;
    sh:construct """
        PREFIX : <http://example.org/ex#>
        CONSTRUCT {
            $this :MyDataProp \"New input\" .
        }
        WHERE {
            $this :MyObjProp ?p .
        }
    """ ;
    .
For the restriction, two equivalent NodeShapes were defined. The first constraint works with sh:property; the other uses sh:sparql.
:NodeShapeConditionProperty
    a sh:NodeShape ;
    sh:property [
        sh:path :MyObjProp ;
        sh:description "NodeShapeConditionProperty" ;
        sh:hasValue :Value_1 ;
    ] ;
    sh:targetClass :MyClass ;
    .
:NodeShapeConditionSPARQL
    a sh:NodeShape ;
    sh:sparql [
        sh:message "NodeShapeConditionSPARQL" ;
        sh:prefixes <http://example.org/ex> ;
        sh:select """
            PREFIX : <http://example.org/ex#>
            SELECT $this
            WHERE {
                $this :MyObjProp ?prop .
            }
        """ ;
    ] ;
    sh:targetClass :MyClass ;
    .
While running inferencing with TopBraid Composer I received different results for the two solutions. Only the solution with sh:property provides the expected response. Can someone please explain this behavior to me?
:MyClass_1 :MyDataProp "New input"
The right explanation is that the SPARQL query produces a constraint violation for each result (row) of the SELECT query. So if the SPARQL query returns no results (rows), then everything conforms and the rule will fire. The reason for this design is that it enables SPARQL queries to return more information about the violation, e.g. the focus node ($this) and the value node (?value).
Changing :NodeShapeConditionSPARQL so that it produces a violation when the expected triple does not exist (via FILTER NOT EXISTS) makes both solutions behave in the same manner.
:NodeShapeConditionSPARQL
    a sh:NodeShape ;
    sh:sparql [
        sh:message "NodeShapeConditionSPARQL" ;
        sh:prefixes <http://example.org/ex> ;
        sh:select """
            PREFIX : <http://example.org/ex#>
            SELECT $this
            WHERE {
                FILTER NOT EXISTS { $this :MyObjProp ?anyProp } .
            }
        """ ;
    ] ;
    sh:targetClass :MyClass ;
    .

How to stack SpinSail on top of GraphDB remote repository

I am using GraphDB to store my triples and need to stack a SpinSail component on top of GraphDB to support SPIN rules along with all the other features that GraphDB supports by default.
So far I have managed to create a SailRepository on the remote server supporting SPIN rules (more details below), but it seems that it only supports SPIN and none of the other GraphDB features (e.g. viewing the graph, adding triples through files, searching, etc.).
The configuration file, once the repository is created, looks like below:
@prefix ms: <http://www.openrdf.org/config/sail/memory#> .
@prefix rep: <http://www.openrdf.org/config/repository#> .
@prefix sail: <http://www.openrdf.org/config/sail#> .
@prefix sr: <http://www.openrdf.org/config/repository/sail#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

<#Test1> a rep:Repository;
    rep:repositoryID "Test1";
    rep:repositoryImpl [
        rep:repositoryType "openrdf:SailRepository";
        sr:sailImpl [
            sail:delegate [
                sail:sailType "openrdf:MemoryStore";
                ms:persist true
            ];
            sail:sailType "openrdf:SpinSail"
        ]
    ] .
A normal configuration file (i.e. the one produced when a repository is created through the Workbench), however, looks like the one below:
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix rep: <http://www.openrdf.org/config/repository#> .
@prefix sail: <http://www.openrdf.org/config/sail#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

<#Test> a rep:Repository;
    rep:repositoryID "Test";
    rep:repositoryImpl [
        rep:repositoryType "graphdb:FreeSailRepository";
        <http://www.openrdf.org/config/repository/sail#sailImpl> [
            <http://www.ontotext.com/trree/owlim#base-URL> "http://example.org/owlim#";
            <http://www.ontotext.com/trree/owlim#check-for-inconsistencies> "false";
            <http://www.ontotext.com/trree/owlim#defaultNS> "";
            <http://www.ontotext.com/trree/owlim#disable-sameAs> "false";
            <http://www.ontotext.com/trree/owlim#enable-context-index> "false";
            <http://www.ontotext.com/trree/owlim#enable-literal-index> "true";
            <http://www.ontotext.com/trree/owlim#enablePredicateList> "true";
            <http://www.ontotext.com/trree/owlim#entity-id-size> "32";
            <http://www.ontotext.com/trree/owlim#entity-index-size> "10000000";
            <http://www.ontotext.com/trree/owlim#imports> "";
            <http://www.ontotext.com/trree/owlim#in-memory-literal-properties> "true";
            <http://www.ontotext.com/trree/owlim#query-limit-results> "0";
            <http://www.ontotext.com/trree/owlim#query-timeout> "0";
            <http://www.ontotext.com/trree/owlim#read-only> "false";
            <http://www.ontotext.com/trree/owlim#repository-type> "file-repository";
            <http://www.ontotext.com/trree/owlim#ruleset> "owl2-rl-optimized";
            <http://www.ontotext.com/trree/owlim#storage-folder> "storage";
            <http://www.ontotext.com/trree/owlim#throw-QueryEvaluationException-on-timeout> "false";
            sail:sailType "graphdb:FreeSail"
        ]
    ];
    rdfs:label "Test" .
The following code was used to create the repository.
RemoteRepositoryManager manager = new RemoteRepositoryManager("http://localhost:7200");
manager.init();
String repositoryId = "Test1";
// create a configuration for the SAIL stack
boolean persist = true;
SailImplConfig spinSailConfig = new MemoryStoreConfig(persist);
spinSailConfig = new SpinSailConfig(spinSailConfig);
// create a configuration for the repository implementation
RepositoryImplConfig repositoryTypeSpec = new SailRepositoryConfig(spinSailConfig);
RepositoryConfig repConfig = new RepositoryConfig(repositoryId, repositoryTypeSpec);
manager.addRepositoryConfig(repConfig);
Once I created this repository I was able to insert the following rule into it through the SPARQL section (by using INSERT DATA):
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX sp: <http://spinrdf.org/sp#>
PREFIX spin: <http://spinrdf.org/spin#>
PREFIX ex: <http://example.org/>
INSERT DATA {
    ex:Person a rdfs:Class ;
        spin:rule [
            a sp:Construct ;
            sp:text """PREFIX ex: <http://example.org/>
                CONSTRUCT { ?this ex:childOf ?parent . }
                WHERE { ?parent ex:parentOf ?this . }"""
        ] .
}
and then similarly add the following statements:
PREFIX ex: <http://example.org/>
INSERT DATA {
    ex:John a ex:Father ;
        ex:parentOf ex:Lucy .
    ex:Lucy a ex:Person .
}
After that by running the following query:
PREFIX ex: <http://example.org/>
SELECT ?child WHERE { ?child ex:childOf ?parent }
I was able to confirm that the Spin rule was executed successfully.
So my question is:
Is there a way to create a remote repository supporting all the features of GraphDB and then stack upon it the SpinSail component?
As of the moment, GraphDB (version 8.10.0) does not support SpinSail. Such an option is under consideration for one of the next GraphDB releases.

Can I append a single branch of rdf:List rdf:rest's with one SPARQL Update query using VALUES keyword?

I am trying to append to an rdf:List in SPARQL like so:
DELETE {
    ?end rdf:rest rdf:nil .
}
INSERT {
    ?end rdf:rest _:b0 .
    _:b0 rdf:type rdf:List .
    _:b0 rdf:first _:b1 .
    _:b1 rdfs:label ?t .
    _:b0 rdf:rest rdf:nil .
}
WHERE {
    <http://example/~/blah> (rdf:rest)* ?end .
    ?end rdf:rest rdf:nil
    VALUES ?t { "txt1" "txt2" "txt3" "txt4" }
}
but the txtX values all get appended as rdf:rest branches at once, which makes four branches, when what I'd like is for the update to be executed sequentially for each value, producing a single chain. Is there a way to do this with a single query and a variable-length list of VALUES?
No, that's not possible with a single SPARQL query. The WHERE part allows for pattern matching, but not for recursion per value of the VALUES clause.
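As a workaround, the sequential behavior the question asks for can be had by putting several update operations into one request, separated by ";": each operation sees the result of the previous one, so repeating the append template once per value extends a single chain. A sketch (first two values shown; the remaining ones follow the same pattern):

PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

DELETE { ?end rdf:rest rdf:nil }
INSERT {
    ?end rdf:rest [ a rdf:List ;
                    rdf:first [ rdfs:label "txt1" ] ;
                    rdf:rest rdf:nil ]
}
WHERE {
    <http://example/~/blah> (rdf:rest)* ?end .
    ?end rdf:rest rdf:nil
} ;

DELETE { ?end rdf:rest rdf:nil }
INSERT {
    ?end rdf:rest [ a rdf:List ;
                    rdf:first [ rdfs:label "txt2" ] ;
                    rdf:rest rdf:nil ]
}
WHERE {
    <http://example/~/blah> (rdf:rest)* ?end .
    ?end rdf:rest rdf:nil
}
# ... repeat once per remaining value ("txt3", "txt4")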

command line tdbquery with text index

I am trying to run a text-search query with Jena via the command line.
tdbquery --desc textsearch.ttl --query search.rq
The query returns empty results, with these warnings:
17:23:46 WARN TextQueryPF :: Failed to find the text index : tried context and as a text-enabled dataset
17:23:46 WARN TextQueryPF :: No text index - no text search performed
My assembler file is:
@prefix : <http://localhost/jena_example/#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix tdb: <http://jena.hpl.hp.com/2008/tdb#> .
@prefix ja: <http://jena.hpl.hp.com/2005/11/Assembler#> .
@prefix text: <http://jena.apache.org/text#> .
## Example of a TDB dataset and text index
## Initialize TDB
[] ja:loadClass "org.apache.jena.tdb.TDB" .
tdb:DatasetTDB rdfs:subClassOf ja:RDFDataset .
tdb:GraphTDB rdfs:subClassOf ja:Model .
## Initialize text query
[] ja:loadClass "org.apache.jena.query.text.TextQuery" .
# A TextDataset is a regular dataset with a text index.
text:TextDataset rdfs:subClassOf ja:RDFDataset .
# Lucene index
text:TextIndexLucene rdfs:subClassOf text:TextIndex .
# Solr index
text:TextIndexSolr rdfs:subClassOf text:TextIndex .
## ---------------------------------------------------------------
## This URI must be fixed - it's used to assemble the text dataset.
:text_dataset rdf:type text:TextDataset ;
    text:dataset <#dataset> ;
    text:index <#indexLucene> ;
    .

# A TDB dataset used for RDF storage
<#dataset> rdf:type tdb:DatasetTDB ;
    tdb:location "DB2" ;
    tdb:unionDefaultGraph true ;   # Optional
    .

# Text index description
<#indexLucene> a text:TextIndexLucene ;
    text:directory <file:Lucene2> ;
    ##text:directory "mem" ;
    text:entityMap <#entMap> ;
    .

# Mapping in the index
# URI stored in field "uri"
# rdfs:label is mapped to field "text"
<#entMap> a text:EntityMap ;
    text:entityField "uri" ;
    text:defaultField "text" ;
    text:map (
        [ text:field "text" ; text:predicate rdfs:label ]
    ) .
My query is :
PREFIX text: <http://jena.apache.org/text#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?s
{ ?s text:query 'London' ;
rdfs:label ?label
}
I would like to know whether I am missing some configuration, or whether this query can only be run inside Fuseki.
First, you can do text search outside Fuseki. The example you took the code from shows how to do this using plain Jena Dataset in Java.
Second, Andy Seaborne on Jena mailing list suggests the following:
SELECT (count(*) AS ?C) { ?x text:query .... }
in order to "touch" the index before running the real queries.
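For completeness, a full warm-up query along those lines might look like this (a sketch reusing the 'London' term from the question; any term that hits the index should do):

PREFIX text: <http://jena.apache.org/text#>

SELECT (count(*) AS ?C)
{ ?x text:query 'London' }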

The DELETE/INSERT operation can be used to remove triples containing blank nodes: how?

I would like to use a SPARQL DELETE/INSERT to ensure that, after repeated updates, ?performance and certain connected blank nodes do not have multiple property values, but only zero (for optional cases) or one (for mandatory cases).
If I send the DELETE/INSERT (see below) to a Jena Fuseki 1.1.1 server I receive this error message: "Blank nodes not allowed in DELETE templates".
However, the specification contains this sentence: "The DELETE/INSERT operation can be used to remove triples containing blank nodes."
So what's a valid form of a DELETE/INSERT that does the job in this case? To ease maintenance it would be good if the DELETE and INSERT parts can remain structurally similar. (This is a follow-up question.)
DELETE {
    ?performance
        mo:performer ?_ ;
        mo:singer ?_ ;
        mo:performance_of [          ### error marked here ###
            dc:title ?_ ;
            mo:composed_in [
                a mo:Composition ;
                mo:composer ?_
            ]
        ]
}
INSERT {
    ?performance
        mo:performer ?performer ;    # optional
        mo:singer ?singer ;          # optional
        mo:performance_of [
            dc:title ?title ;        # mandatory
            mo:composed_in [
                a mo:Composition ;
                mo:composer ?composer    # optional
            ]
        ]
}
WHERE {}
You need something in the WHERE part. That will find the blank nodes, bind them to variables, and the DELETE will remove them. The DELETE {} is not itself a pattern to match; the graph pattern is the WHERE {} part.
Something like:
DELETE {
    ?performance mo:performance_of ?X .
    ?X dc:title ?title ;
       mo:composed_in ?Y .
    ?Y a mo:Composition .
    ?Y mo:composer ?composer .
    ?performance mo:performer ?performer ;
                 mo:singer ?singer
}
WHERE {
    ?performance mo:performance_of ?X .
    ?X dc:title ?title ;
       mo:composed_in ?Y .
    ?Y a mo:Composition .
    OPTIONAL { ?Y mo:composer ?composer }
    OPTIONAL {
        ?performance mo:performer ?performer ;
                     mo:singer ?singer
    }
}
There is no point making DELETE{} and INSERT{} the same: the combination is effectively a no-op.
If a variable is not bound in a particular row from the WHERE{} part, the deletion simply skips that triple, not the rest of the instantiated template (e.g. a row with ?composer unbound skips only the ?Y mo:composer ?composer triple).
It may be clearer, to humans, to write the SPARQL Update in several parts. One HTTP request can have several operations, separated by ";":
DELETE{} WHERE {} ;
DELETE{} WHERE {} ;
INSERT DATA{}
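For instance, clearing and then re-asserting one of the optional properties from the question could be written as two operations in a single request (a hedged sketch; the mo: prefix IRI and the example resource IRIs are assumptions):

PREFIX mo: <http://purl.org/ontology/mo/>

# 1. Remove any existing performer values.
DELETE { ?performance mo:performer ?old }
WHERE  { ?performance mo:performer ?old } ;

# 2. Insert the new value as ground triples.
INSERT DATA { <http://example/performance1> mo:performer <http://example/performer1> }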