SPARQL INSERT not working with PUT method. Why?

I am trying to create a new object with the PUT method and to add some of my own prefixes with a SPARQL query, but the object is created without the added prefixes. It works with POST and PATCH, though. Why, and is there an alternative way to use SPARQL with the PUT method and add triples using user-defined prefixes?
PREFIX dc: <http://purl.org/dc/elements/1.1/>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX indexing: <http://fedora.info/definitions/v4/indexing#>
DELETE { }
INSERT {
<> indexing:hasIndexingTransformation "default";
rdf:type indexing:Indexable;
dc:title "title3";
dc:identifier "test:10";
}
WHERE { }
To be clear, none of the values specified in the INSERT clause above are added at all.
EDIT1:
url = 'http://example.com/rest/object1'
payload = """
PREFIX dc: <http://purl.org/dc/elements/1.1/>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX indexing: <http://fedora.info/definitions/v4/indexing#>
PREFIX custom: <http://customnamespaces/custom#/>
DELETE { }
INSERT {
<> indexing:hasIndexingTransformation "default";
rdf:type indexing:Indexable;
dc:title "title1";
custom:objState "Active";
custom:ownerId "Owner1";
dc:identifier "object1";
}
WHERE { }
"""
headers = {
'content-type': "application/sparql-update",
'cache-control': "no-cache"
}
response = requests.request("PUT", url, data=payload, headers=headers, auth=('username','password'))

Prefixes are not triples and therefore cannot be added using a SPARQL query. You can still specify prefixes in the SPARQL query, and they will be expanded into the correct full URIs for storage in your triple store.
Also note that your custom namespace is errantly defined by ending with both a hash and a slash. It should be either PREFIX custom: <http://customnamespaces/custom#> or PREFIX custom: <http://customnamespaces/custom/>.
That is, with your query, indexing:hasIndexingTransformation will be stored in the triple store as <http://fedora.info/definitions/v4/indexing#hasIndexingTransformation>.
There is no reason to store the prefix in the triple store (actually, prefixes are an artifact of the text serialization, not the data itself), so you can subsequently query this data in one of two ways.
1) Using a prefix:
PREFIX indexing: <http://fedora.info/definitions/v4/indexing#>
SELECT ?o {
[] indexing:hasIndexingTransformation ?o .
}
2) Using the full URI:
SELECT ?o {
[] <http://fedora.info/definitions/v4/indexing#hasIndexingTransformation> ?o .
}
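For reference, the update from EDIT1 with the corrected custom prefix (using the hash form here; the slash form works just as well, as long as it is used consistently) would look like this:
PREFIX dc: <http://purl.org/dc/elements/1.1/>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX indexing: <http://fedora.info/definitions/v4/indexing#>
PREFIX custom: <http://customnamespaces/custom#>
INSERT {
  <> indexing:hasIndexingTransformation "default" ;
     rdf:type indexing:Indexable ;
     dc:title "title1" ;
     custom:objState "Active" ;
     custom:ownerId "Owner1" ;
     dc:identifier "object1" .
}
WHERE { }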

Related

Join has cross-product in SPARQL query

I am trying to come up with a trivial, minimal working example of a federated query that doesn't go through any web interface, but through a local query engine (in my case the AG free tier).
This at least doesn't throw any errors and returns results, but I'm getting a warning that the query contains a cross-product. I guess the problem is that there is nothing that actually relates the results from the two endpoints:
join has cross-product. Unique LHS variables: ?actor_1_1, ?birthDate, ?spouseURI_1_2, ?spouseName; Unique RHS variables ?actor_2_1, ?gender
But how do I relate them, then? Ideally in a generic manner, since I guess this information needs to be passed to the query resolver, and eventually I'll be querying other databases as well.
PREFIX dbpo: <http://dbpedia.org/ontology/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
SELECT ?birthDate ?spouseName ?gender {
{ SERVICE <https://dbpedia.org/sparql>
{ SELECT ?birthDate ?spouseName WHERE {
?actor rdfs:label "Arnold Schwarzenegger"#en ;
dbpo:birthDate ?birthDate ;
dbpo:spouse ?spouseURI .
?spouseURI rdfs:label ?spouseName .
FILTER ( lang(?spouseName) = "en" )
}
}
}
{ SERVICE <https://query.wikidata.org/sparql>
{ SELECT ?gender WHERE {
?actor wdt:P1559 "Arnold Alois Schwarzenegger"#de .
?actor wdt:P21 ?gender .
}
}
}
}
I'd appreciate any help here, I haven't had as hard a time getting into a subject since forever.
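One generic way to remove the cross-product is to make the two SERVICE blocks share at least one variable. A hedged sketch (assuming the DBpedia resource carries an owl:sameAs link to the corresponding Wikidata entity, which is not guaranteed for every resource) could fetch the Wikidata IRI on the DBpedia side and reuse it on the Wikidata side:
PREFIX dbpo: <http://dbpedia.org/ontology/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
PREFIX owl: <http://www.w3.org/2002/07/owl#>
SELECT ?birthDate ?spouseName ?gender {
  SERVICE <https://dbpedia.org/sparql> {
    ?actor rdfs:label "Arnold Schwarzenegger"@en ;
           dbpo:birthDate ?birthDate ;
           dbpo:spouse ?spouseURI ;
           owl:sameAs ?wdActor .
    ?spouseURI rdfs:label ?spouseName .
    FILTER ( lang(?spouseName) = "en" )
    FILTER ( STRSTARTS(STR(?wdActor), "http://www.wikidata.org/entity/") )
  }
  SERVICE <https://query.wikidata.org/sparql> {
    # ?wdActor is now shared between the two blocks, so the results join instead of multiplying
    ?wdActor wdt:P21 ?gender .
  }
}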

How to stack SpinSail on top of GraphDB remote repository

I am using GraphDB to store my triples and need to stack the SpinSail component on top of GraphDB to support SPIN rules along with all the other features that GraphDB supports by default.
So far I have managed to create a SailRepository on the remote server that supports SPIN rules (more details below), but it seems that it only supports SPIN and none of the other features that GraphDB provides (e.g. viewing the graph, adding triples through files, searching, etc.).
The configuration file, once the repository is created, looks like below:
@prefix ms: <http://www.openrdf.org/config/sail/memory#> .
@prefix rep: <http://www.openrdf.org/config/repository#> .
@prefix sail: <http://www.openrdf.org/config/sail#> .
@prefix sr: <http://www.openrdf.org/config/repository/sail#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
<#Test1> a rep:Repository;
rep:repositoryID "Test1";
rep:repositoryImpl [
rep:repositoryType "openrdf:SailRepository";
sr:sailImpl [
sail:delegate [
sail:sailType "openrdf:MemoryStore";
ms:persist true
];
sail:sailType "openrdf:SpinSail"
]
] .
A normal configuration file (i.e. one for a repository created through the Workbench), by contrast, would look like the one below:
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix rep: <http://www.openrdf.org/config/repository#> .
@prefix sail: <http://www.openrdf.org/config/sail#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
<#Test> a rep:Repository;
rep:repositoryID "Test";
rep:repositoryImpl [
rep:repositoryType "graphdb:FreeSailRepository";
<http://www.openrdf.org/config/repository/sail#sailImpl> [
<http://www.ontotext.com/trree/owlim#base-URL> "http://example.org/owlim#";
<http://www.ontotext.com/trree/owlim#check-for-inconsistencies> "false";
<http://www.ontotext.com/trree/owlim#defaultNS> "";
<http://www.ontotext.com/trree/owlim#disable-sameAs> "false";
<http://www.ontotext.com/trree/owlim#enable-context-index> "false";
<http://www.ontotext.com/trree/owlim#enable-literal-index> "true";
<http://www.ontotext.com/trree/owlim#enablePredicateList> "true";
<http://www.ontotext.com/trree/owlim#entity-id-size> "32";
<http://www.ontotext.com/trree/owlim#entity-index-size> "10000000";
<http://www.ontotext.com/trree/owlim#imports> "";
<http://www.ontotext.com/trree/owlim#in-memory-literal-properties> "true";
<http://www.ontotext.com/trree/owlim#query-limit-results> "0";
<http://www.ontotext.com/trree/owlim#query-timeout> "0";
<http://www.ontotext.com/trree/owlim#read-only> "false";
<http://www.ontotext.com/trree/owlim#repository-type> "file-repository";
<http://www.ontotext.com/trree/owlim#ruleset> "owl2-rl-optimized";
<http://www.ontotext.com/trree/owlim#storage-folder> "storage";
<http://www.ontotext.com/trree/owlim#throw-QueryEvaluationException-on-timeout> "false";
sail:sailType "graphdb:FreeSail"
]
];
rdfs:label "Test" .
The following code was used to create the repository.
RemoteRepositoryManager manager = new RemoteRepositoryManager("http://localhost:7200");
manager.init();
String repositoryId = "Test1";
// create a configuration for the SAIL stack
boolean persist = true;
SailImplConfig spinSailConfig = new MemoryStoreConfig(persist);
spinSailConfig = new SpinSailConfig(spinSailConfig);
RepositoryImplConfig repositoryTypeSpec = new SailRepositoryConfig(spinSailConfig);
// create a configuration for the repository implementation
// RepositoryImplConfig repositoryTypeSpec = new SailRepositoryConfig(backendConfig);
RepositoryConfig repConfig = new RepositoryConfig(repositoryId, repositoryTypeSpec);
manager.addRepositoryConfig(repConfig);
Once I created this repository I was able to insert the following rule into it through the SPARQL section (by using INSERT DATA):
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX sp: <http://spinrdf.org/sp#>
PREFIX spin: <http://spinrdf.org/spin#>
PREFIX ex: <http://example.org/>
INSERT DATA {
ex:Person a rdfs:Class ;
spin:rule [
a sp:Construct ;
sp:text """PREFIX ex: <http://example.org/>
CONSTRUCT { ?this ex:childOf ?parent . }
WHERE { ?parent ex:parentOf ?this . }"""
] . }
and then similarly add the following statements:
PREFIX ex: <http://example.org/>
INSERT DATA {
ex:John a ex:Father ;
ex:parentOf ex:Lucy .
ex:Lucy a ex:Person .
}
After that by running the following query:
PREFIX ex: <http://example.org/>
SELECT ?child WHERE { ?child ex:childOf ?parent }
I was able to confirm that the Spin rule was executed successfully.
So my question is:
Is there a way to create a remote repository supporting all the features of GraphDB and then stack upon it the SpinSail component?
At the moment GraphDB (version 8.10.0) does not support SpinSail. Such an option is under consideration for one of the next GraphDB releases.

How to select a property according to a condition in RDF

In the following RDF sample, I would like to get the value of one of the properties according to a condition.
The condition is:
If the value of the property ex:referProperty is "ex:sender", then I get the value of the property ex:sender.
If the value of the property ex:referProperty is "ex:receiver", then I get the value of the property ex:receiver.
@prefix ex: <http://www.example.org#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
ex:mail1
ex:sender "John"^^xsd:string ;
ex:receiver "Bill"^^xsd:string ;
ex:paymentType ex:paymentType1 .
ex:mail2
ex:sender "Jack"^^xsd:string ;
ex:receiver "Tom"^^xsd:string ;
ex:paymentType ex:paymentType2 .
ex:paymentType1
ex:paymentTypeDescription "PREPAID"^^xsd:string ;
ex:referProperty "ex:sender"^^xsd:string .
ex:paymentType2
ex:paymentTypeDescription "COLLECT"^^xsd:string ;
ex:referProperty "ex:receiver"^^xsd:string .
The SPARQL query to get the value is below:
PREFIX ex:<http://ww....ex#>
SELECT ?mail ?refProp ?payer
WHERE
{
{
SELECT ?mail ?refProp
WHERE
{
?mail ex:paymentType / ex:referProperty ?refProp.
}
}
?mail ?refProp ?payer.
}
It is possible to get the value with this SPARQL query. But I suspect that this use of the above RDF sample and SPARQL is irregular, because it seems to extend outside first-order predicate logic.
Using a value obtained from a literal node as a variable to look up a property is irregular. In theory, the property ex:referProperty should be defined in RDFS.
Do you agree? If so, please show me a regular way, with proper RDF and proper SPARQL.
Option 1
One can transform URIs to strings and vice versa using STR(), IRI() and REPLACE():
PREFIX ex: <http://www.example.org#>
SELECT ?mail ?str ?payer {
?mail ?p ?payer; ex:paymentType/ex:referProperty ?str.
FILTER (replace(str(?p), str(ex:), "ex:") = ?str)
}
or
PREFIX ex: <http://www.example.org#>
SELECT ?mail ?str ?payer {
?mail ?p ?payer; ex:paymentType/ex:referProperty ?str.
FILTER (URI(replace(?str, "ex:", str(ex:))) = ?p)
}
Option 2
You can implement your conditions using BIND and IF:
PREFIX ex: <http://www.example.org#>
SELECT ?mail ?str ?payer {
?mail ex:paymentType/ex:referProperty ?str.
BIND (IF(?str = "ex:sender", ex:sender, ex:receiver) AS ?p)
?mail ?p ?payer.
}
Option 3
Use VALUES:
PREFIX ex: <http://www.example.org#>
SELECT ?mail ?str ?payer {
VALUES (?str ?uri) {("ex:sender" ex:sender) ("ex:receiver" ex:receiver)}
?mail ?uri ?payer; ex:paymentType/ex:referProperty ?str.
}
Option 4
As suggested by Damyan Ognyanov, you could use URIs instead of strings in your reference data, i.e. ex:sender instead of "ex:sender", etc. Then your query is simply the following:
PREFIX ex: <http://www.example.org#>
SELECT ?mail ?uri ?payer {
?mail ?uri ?payer; ex:paymentType/ex:referProperty ?uri.
}
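For this to work, the reference data itself has to change so that ex:referProperty points at the property IRIs rather than at strings. A sketch of that migration as a SPARQL update (assuming, as in your sample, that every ex:referProperty value uses the "ex:" pseudo-prefix) could be:
PREFIX ex: <http://www.example.org#>
DELETE { ?type ex:referProperty ?str }
INSERT { ?type ex:referProperty ?iri }
WHERE {
  ?type ex:referProperty ?str .
  FILTER ( isLiteral(?str) )
  # turn "ex:sender" into <http://www.example.org#sender>, etc.
  BIND ( IRI(CONCAT(STR(ex:), STRAFTER(?str, "ex:"))) AS ?iri )
}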
Update
From your comment:
Isn't using ex:sender as both a property and an object in one RDF graph a contradiction? Using ex:sender as a URI in RDF makes the border between RDF and RDFS dubious.
From RDF 1.1 Concepts and Abstract Syntax:
The set of nodes of an RDF graph is the set of subjects and objects of triples in the graph.
It is possible for a predicate IRI to also occur as a node in the same graph.
In RDF, "properties" (or rather their "types") are first-class objects.
RDFS is a vocabulary (which introduces e.g. the notion of rdf:Property) rather than a schema.
There is no clear distinction between "ABox" and "TBox" in RDF.
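As a minimal illustration (written as a SPARQL INSERT DATA so it can be pasted into any endpoint), ex:sender can be the predicate of one triple and the subject of another in the same graph:
PREFIX ex: <http://www.example.org#>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
INSERT DATA {
  ex:mail1 ex:sender "John" .        # ex:sender used as a predicate
  ex:sender a rdf:Property ;         # ...and as a subject (node) in the same graph
            rdfs:label "sender" .
}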

Replace null values with '' in 'sem:sparql' output JSON in MarkLogic

I am using MarkLogic 8.0-6.3.
I am using the sem:sparql() function to run SPARQL queries.
If there is no triple for a particular variable (the variables are in an OPTIONAL block), I get a null value in the JSON output.
Is there any workaround in MarkLogic to replace the null values with ""?
Like:
Current output:
{
"ncFacetIri": "http://www.test.com/facet/UL",
"acronym": "UL",
"acronym1": null
}
Expected JSON output:
{
"ncFacetIri": "http://www.test.com/facet/UL",
"acronym": "UL",
"acronym1": ""
}
This is how I am converting the sem:sparql output to JSON objects:
<a>{sem:sparql($query)}</a>/json:object ! json:object(.)
Please help.
You can use COALESCE for that:
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT DISTINCT ?subject ?type (COALESCE(?l, "") as ?label)
WHERE {
?subject rdf:type ?type.
OPTIONAL {
?subject rdfs:label ?l.
}
}
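Adapted to the variable names in your output it would look something like the query below; the predicates and triple patterns here are placeholders, since the original query isn't shown:
PREFIX ex: <http://www.test.com/def#>   # placeholder namespace
SELECT ?ncFacetIri ?acronym (COALESCE(?a1, "") AS ?acronym1)
WHERE {
  ?ncFacetIri ex:acronym ?acronym .               # placeholder pattern
  OPTIONAL { ?ncFacetIri ex:altAcronym ?a1 . }    # placeholder pattern
}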
HTH!

SPARQL queries with round brackets throw exception

I am trying to extract labels from DBpedia for some persons. I am partially successful, but I got stuck on the following problem. The following code works.
public class DbPediaQueryExtractor {
public static void main(String [] args) {
String entity = "Aharon_Barak";
String queryString ="PREFIX dbres: <http://dbpedia.org/resource/> SELECT * WHERE {dbres:"+ entity+ "<http://www.w3.org/2000/01/rdf-schema#label> ?o FILTER (langMatches(lang(?o),\"en\"))}";
//String queryString="select * where { ?instance <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://xmlns.com/foaf/0.1/Person>; <http://www.w3.org/2000/01/rdf-schema#label> ?o FILTER (langMatches(lang(?o),\"en\")) } LIMIT 5000000";
QueryExecution qexec = getResult(queryString);
try {
ResultSet results = qexec.execSelect();
for ( ; results.hasNext(); )
{
QuerySolution soln = results.nextSolution();
System.out.print(soln.get("?o") + "\n");
}
}
finally {
qexec.close();
}
}
public static QueryExecution getResult(String queryString){
Query query = QueryFactory.create(queryString);
//VirtuosoQueryExecution vqe = VirtuosoQueryExecutionFactory.create (sparql, graph);
QueryExecution qexec = QueryExecutionFactory.sparqlService("http://dbpedia.org/sparql", query);
return qexec;
}
}
However, when the entity contains brackets, it does not work. For example,
String entity = "William_H._Miller_(writer)";
leads to this exception:
Exception in thread "main" com.hp.hpl.jena.query.QueryParseException: Encountered " "(" "( "" at line 1, column 86.
What is the problem?
It took some copying and pasting to see what exactly was going on. I'd suggest that you put newlines in your query for easier readability. The query you're using is:
PREFIX dbres: <http://dbpedia.org/resource/>
SELECT * WHERE
{
dbres:??? <http://www.w3.org/2000/01/rdf-schema#label> ?o
FILTER (langMatches(lang(?o),"en"))
}
where ??? is being replaced by the contents of the string entity. You're doing absolutely no input validation here to ensure that the value of entity will be legal to paste in. Based on your question, it sounds like entity contains William_H._Miller_(writer), so you're getting the query:
PREFIX dbres: <http://dbpedia.org/resource/>
SELECT * WHERE
{
dbres:William_H._Miller_(writer) <http://www.w3.org/2000/01/rdf-schema#label> ?o
FILTER (langMatches(lang(?o),"en"))
}
You can paste that into the public DBpedia endpoint, and you'll get a similar parse error message:
Virtuoso 37000 Error SP030: SPARQL compiler, line 6: syntax error at 'writer' before ')'
SPARQL query:
define sql:big-data-const 0
#output-format:text/html
define sql:signal-void-variables 1 define input:default-graph-uri <http://dbpedia.org> PREFIX dbres: <http://dbpedia.org/resource/>
SELECT * WHERE
{
dbres:William_H._Miller_(writer) <http://www.w3.org/2000/01/rdf-schema#label> ?o
FILTER (langMatches(lang(?o),"en"))
}
Better than hitting DBpedia's endpoint with bad queries, you can also use the SPARQL query validator, which reports for that query:
Syntax error: Lexical error at line 4, column 34. Encountered: ")" (41), after : "writer"
In Jena, you can use the ParameterizedSparqlString to avoid these sorts of issues. Here's your example, reworked to use a parameterized string:
import com.hp.hpl.jena.query.ParameterizedSparqlString;
public class PSSExample {
public static void main( String[] args ) {
// Create a parameterized SPARQL string for the particular query, and add the
// dbres prefix to it, for later use.
final ParameterizedSparqlString queryString = new ParameterizedSparqlString(
"PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>\n" +
"SELECT * WHERE\n" +
"{\n" +
" ?entity rdfs:label ?o\n" +
" FILTER (langMatches(lang(?o),\"en\"))\n" +
"}\n"
) {{
setNsPrefix( "dbres", "http://dbpedia.org/resource/" );
}};
// Entity is the same.
final String entity = "William_H._Miller_(writer)";
// Now retrieve the URI for dbres, concatenate it with entity, and use
// it as the value of ?entity in the query.
queryString.setIri( "?entity", queryString.getNsPrefixURI( "dbres" )+entity );
// Show the query.
System.out.println( queryString.toString() );
}
}
The output is:
PREFIX dbres: <http://dbpedia.org/resource/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT * WHERE
{
<http://dbpedia.org/resource/William_H._Miller_(writer)> rdfs:label ?o
FILTER (langMatches(lang(?o),"en"))
}
You can run this query at the public endpoint and get the expected results. Notice that if you use an entity that doesn't need special escaping, e.g.,
final String entity = "George_Washington";
then the query output will use the prefixed form:
PREFIX dbres: <http://dbpedia.org/resource/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT * WHERE
{
dbres:George_Washington rdfs:label ?o
FILTER (langMatches(lang(?o),"en"))
}
This is very convenient, because you don't have to do any checking about whether your suffix, i.e., entity, has any characters that need to be escaped; Jena takes care of that for you.