I need to dynamically generate the name of a graph depending on the current time.
I thought that something like
select ?g where {
bind(concat("<urn:myNewGraph_",str(now()),">") as ?g)
}
would have done the trick, but with Stardog I get a null result.
If instead I run this
select ?g where {
bind(concat("urn:myNewGraph_",str(now())) as ?g)
}
I get urn:myNewGraph_2015-05-28T09:37:11.823Z
Any ideas?
Moreover, I'm not sure that a string like <urn:myNewGraph_2015-05-28T09:37:11.823Z>, even if I could somehow get one, would work as a valid argument for a graph name, as can be seen from this non-working test:
INSERT {graph ?g {<urn:s> <urn:p> <urn:o>}
where {
?g="<rn:myNewGraph_2015-05-28T09:37:11.823Z>"
}
Is there a proper way to generate a URN/IRI/URI dynamically?
Your original query looks correct, and produces a valid result when I execute it using a different SPARQL engine (Sesame), so I guess that you might want to report this to the Stardog developers as a possible bug.
However, to be able to use the value thus obtained it needs to be an actual URI (or IRI) - whereas what you're producing is a literal string.
You need to change two things: first of all, get rid of the enclosing < and > (these brackets are not actually part of the IRI) - so actually your second query is better. Second, use the IRI function to convert your string value to an IRI:
INSERT {GRAPH ?g {<urn:s> <urn:p> <urn:o>} }
WHERE {
BIND( IRI(CONCAT("urn:myNewGraph_",STR(NOW()))) as ?g)
}
Not sure it's necessary in your case, but in general you may need to use the ENCODE_FOR_URI function in there somewhere, to make sure that any special characters in your string are properly encoded/escaped before turning it into an IRI.
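For example, if the graph name included an arbitrary label rather than just a timestamp, a sketch might look like this (the ?label value is purely illustrative):
INSERT {GRAPH ?g {<urn:s> <urn:p> <urn:o>} }
WHERE {
  BIND("my label/with spaces" AS ?label)
  BIND(IRI(CONCAT("urn:myNewGraph_", ENCODE_FOR_URI(?label))) AS ?g)
}
ENCODE_FOR_URI percent-encodes characters such as spaces and slashes, so the resulting string is safe to turn into an IRI.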
Related
I am currently trying to create pointers to datatype values, since they cannot be linked to directly. However, I would like to be able to evaluate the pointers from within the SPARQL environment, which raised some questions for me, specifically in the case where the desired value is part of an ordered rdf:List. My approach is to use property paths within a SPARQL query, in which I can use the defined individual, property and index of the ordered list that the pointer has attached to it.
Given the following example data, using Turtle's shorthand syntax for ordered lists:
ex:myObject ex:someProperty ("1" "2" "3") .
ex:myPointer ex:lookAtIndividual ex:myObject;
ex:lookAtProperty ex:someProperty ;
ex:lookAtIndex "3"^^xsd:integer .
Now I would like to create a SPARQL query that -- based on the pointer -- returns the value at the given index. To my understanding the query could/should look something like this:
SELECT ?value
WHERE {
ex:myPointer ex:lookAtIndividual ?individual ;
ex:lookAtProperty ?prop ;
ex:lookAtIndex ?index .
?individual ?prop/rdf:rest{?index-1}/rdf:first ?value .
}
But if I try to execute this query with TopBraid, it shows an error message saying that ?index was found where <INTEGER> was expected. I also tried binding the index in the SPARQL query via BIND(?index-1 AS ?i), again without success. If the pointed-to value is not stored in a list, the query without the property path works fine.
Is it possible, in general, to use a value that is connected via a datatype property as the path length of a property path within a SPARQL query?
This syntax: rdf:rest{<number>} is not standard SPARQL. So the short answer is, regrettably: no, you can't use variables as integers in SPARQL property paths, for the simple reason that you can't use integers in SPARQL property paths at all.
In an earlier draft of the SPARQL standard, there was a proposal to use this kind of syntax to allow specifying the min and max length of a property path, e.g. rdf:rest{1, 3} would match any paths using rdf:rest properties between length 1 and 3. But this was never fully standardized and most SPARQL engines don't implement it.
If you happen to use a SPARQL engine that does implement it, you will have to get in touch with the developers directly to ask if they can extend the mechanism to allow use of variables in this position (the error message suggests to me that it's currently just not possible).
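That said, if the index is known up front (as in your example data, where ex:lookAtIndex is 3), a workaround in standard SPARQL is simply to spell the fixed-length path out; a sketch using the prefixes from your question:
SELECT ?value
WHERE {
  ex:myPointer ex:lookAtIndividual ?individual ;
               ex:lookAtProperty ?prop .
  ?individual ?prop/rdf:rest/rdf:rest/rdf:first ?value .
}
The two rdf:rest steps correspond to index 3 (index minus one); of course this only works because the length is hard-coded rather than taken from ?index.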
As an aside: there's a SPARQL 1.2 community initiative going on. It only just got started but one of the proposals on the table is re-introducing this particular piece of functionality to the standard.
I am using rdflib in Python to build my first rdf graph. However, I do not understand the explicit purpose of defining Literal datatypes. I have scraped over the documentation and did my due diligence with google and the stackoverflow search, but I cannot seem to find an actual explanation for this. Why not just leave everything as a plain old Literal?
From what I have experimented with, is this so that you can search for explicit terms in your SPARQL query with BIND? Does this also help with FILTERing, i.e. FILTER (?var1 > ?var2), where var1 and var2 should represent integers/floats/etc.? Does it help with querying speed? Or am I just way off altogether?
Specifically, why add the following triple to mygraph
mygraph.add((amazingrdf, ns['hasValue'], Literal('42.0', datatype=XSD.float)))
instead of just this?
mygraph.add((amazingrdf, ns['hasValue'], Literal("42.0")))
I suspect that there must be some purpose I am overlooking. I appreciate your help and explanations - I want to learn this right the first time! Thanks!
Comparing two xsd:integer values in SPARQL:
ASK { FILTER (9 < 15) }
Result: true. Now with xsd:string:
ASK { FILTER ("9" < "15") }
Result: false, because when sorting strings, 9 comes after 1.
Some equality checks with xsd:decimal:
ASK { FILTER (+1.000 = 01.0) }
Result is true, it’s the same number. Now with xsd:string:
ASK { FILTER ("+1.000" = "01.0") }
False, because they are clearly different strings.
Doing some maths with xsd:integer:
SELECT (1+1 AS ?result) {}
It returns 2 (as an xsd:integer). Now for strings:
SELECT ("1"+"1" AS ?result) {}
It returns "11" as an xsd:string, because adding strings is interpreted as string concatenation (at least in Jena where I tried this; in other SPARQL engines, adding two strings might be an error, returning nothing).
As you can see, using the right datatype is important to communicate your intent to code that works with the data. The SPARQL examples make this very clear, but when working directly with an RDF API, the same kind of issues crop up around object identity, ordering, and so on.
As shown in the examples above, SPARQL offers convenient syntax for xsd:string, xsd:integer and xsd:decimal (and, not shown, for xsd:boolean and for language-tagged strings). That elevates those datatypes above the rest.
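For example, the datatype behind each of those shorthand forms can be checked directly (an illustrative query; any standard SPARQL 1.1 engine should behave the same way):
SELECT (DATATYPE(42) AS ?int) (DATATYPE(1.5) AS ?dec) (DATATYPE(true) AS ?bool) (DATATYPE("hi") AS ?str) {}
This returns xsd:integer, xsd:decimal, xsd:boolean and xsd:string respectively, so the unquoted forms are just syntactic sugar for typed literals.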
Part of my query looks something like this:
GRAPH g1: {VALUES (?ut) {$U1}
?IC_uri skos:related ?ut .
}
Normally, based on user input, $U1 gets a list of URIs. For test purposes, I would like to send values for $U1 such that the VALUES declaration is effectively ignored and all possible values are considered. In fact, it should produce the same results as:
GRAPH g1: {
# VALUES (?ut) {$U1}
?IC_uri skos:related ?ut .
}
I remember there was a way to do that, but I couldn't find it in the SPARQL specification.
I'd propose three options:
FILTER (?ut IN ($ut)), passing $ut instead of a list of URIs;
BIND ($ut as ?ut), passing $ut instead of a single URI;
VALUES (?ut) {(UNDEF)}, passing (UNDEF) instead of a space-separated list of (parentheses-enclosed) URIs.
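With the third option, for example, the pattern from your query becomes (keeping your g1: and ?ut names):
GRAPH g1: {VALUES (?ut) {(UNDEF)}
?IC_uri skos:related ?ut .
}
The single (UNDEF) row leaves ?ut unbound, so the VALUES clause imposes no restriction and the triple pattern matches all values of ?ut, just as if the clause were commented out.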
Such SPARQL injections cannot be considered safe.
The UNDEF keyword is first mentioned in section 10.2.2, VALUES Examples, of the SPARQL 1.1 spec:
If a variable has no value for a particular solution in the VALUES clause, the keyword UNDEF is used instead of an RDF term.
I find that these are two different query languages:
SPARQL-QUERY and SPARQL-UPDATE.
What other types could I see in SPARQL?
I am looking for a syntax where I can replace a particular property of an element with a new value.
But using an INSERT query, I can only see that the new value is added as an additional value of the property instead of replacing the existing values of the property.
So, is there any other language for this purpose, something like sparql-update?
Also, I can see that a DELETE option is there, but I don't want to specify a particular value; I want to delete the whole pattern. Of course, we could specify the pattern, I guess. But I just wonder whether there is a specific language for this purpose.
EDIT:
And in the following query, I don't see the purpose of the WHERE clause at all. It always inserts the specified values as new values but does not replace the old ones; we need to use the DELETE clause explicitly. So what's the purpose of the WHERE clause here?
PREFIX dc: <http://purl.org/dc/elements/1.1/>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX indexing: <http://fedora.info/definitions/v4/indexing#>
PREFIX custom: <http://namespaces.info/custom#>
DELETE {
}
INSERT {
<> indexing:hasIndexingTransformation "default";
rdf:type indexing:Indexable;
dc:title "title3";
custom:ownerId "owner6";
dc:identifier "test:10";
}
WHERE {
<>
custom:ownerId "owner1";
}
The SPARQL recommendation is separated into separate documents, see SPARQL 1.1 Overview from W3C.
The WHERE clause can be empty, but also look into INSERT DATA, which takes a set of triple specifications (not patterns - no variables) and inserts them. No WHERE clause is needed in that case. The same goes for deleting triple specifications with DELETE DATA.
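To get the "replace" behaviour you describe, the WHERE clause is what matters: it finds the existing value, the DELETE template removes it, and the INSERT template adds the new one. A minimal sketch, reusing the custom: prefix and values from your query:
PREFIX custom: <http://namespaces.info/custom#>
DELETE { <> custom:ownerId ?oldOwner }
INSERT { <> custom:ownerId "owner6" }
WHERE  { <> custom:ownerId ?oldOwner }
Whatever custom:ownerId value currently matches in the WHERE clause is deleted and "owner6" is inserted in its place.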
SPIN is a way to represent a wide range of business rules.
This is the official one-line description of SPIN (spinrdf).
SPIN enables users to represent their rules as SPARQL queries in ontologies.
I needed to include these descriptions since there is no spinrdf tag.
I have been using SPIN for about a week to write some rules. Now I'm writing some functions to simplify the SPARQL in my rules. I have written a simple date comparison function, compareDates. When I call the function with the following SPARQL, there are no errors and it gives the expected result.
SELECT ?result
WHERE {
BIND(:compareDates("2015-03-03"^^xsd:date, "2015-06-09"^^xsd:date) as ?result)
}
I would like to use the sp:now function that comes with SPIN. When I use the following SPARQL I get no output.
SELECT ?result
WHERE {
BIND(:compareDates("2015-03-03"^^xsd:date, sp:now()) as ?result)
}
Then I tried the following, but no luck:
SELECT ?result
WHERE {
BIND(sp:now() as ?now)
BIND(:compareDates("2015-03-03"^^xsd:date, ?now) as ?result)
}
And then I decided to see what sp:now returns, so I ran the following SPARQL; the result is null. This led me to the conclusion that I won't be able to run this function.
SELECT ?now
WHERE {
BIND(sp:now() as ?now)
}
I would like to use that function or a similar one, but I don't understand the problem. Any comment is appreciated.
UPDATE 1
As shown in the following screenshot, the function does not contain any body! This is probably the problem, but why is it included in the related ontology if it won't work?
After some research I have found two alternative methods for getting the current datetime. In fact, there is a SPARQL implementation of the now() function, documented here.
SELECT ?now
WHERE {
BIND(now() as ?now).
}
This SPARQL will return the following:
[now]
2015-03-24T22:12:29.183+02:00
There is an alternative function in the SPIN ontology, afn:now(), which is placed under the spl:MiscFunctions class. This function gives the same result.
By the way, I have been using xsd:date as my function's argument, but both now() alternatives return xsd:dateTime literals.
Converting these to xsd:date is another story.
There are some cast functions, but they only convert the type and do not trim the time part of the xsd:dateTime, which causes my comparison to fail.
Thus I have come up with the following SPARQL, which uses an indirect approach to convert xsd:dateTime to xsd:date:
SELECT ?nowDateTime ?nowDate
WHERE {
BIND(now() as ?nowDateTime).
BIND(spif:cast(spif:dateFormat(?nowDateTime, "yyyy-MM-dd"), xsd:date) as ?nowDate).
}
This converted the value successfully.
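An alternative that avoids the spif: namespace, sketched in plain SPARQL 1.1 (I have not tried this inside SPIN), is to cut the date part out of the lexical form with SUBSTR and re-type it with STRDT:
SELECT ?nowDateTime ?nowDate
WHERE {
BIND(now() as ?nowDateTime).
BIND(STRDT(SUBSTR(STR(?nowDateTime), 1, 10), xsd:date) as ?nowDate).
}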
This may be a crude way to convert between the two date literal types, but it is what I came up with to solve my problem.
Any advice is appreciated.