I want to implement a user-defined boolean aggregation function in SPARQL, and I am checking how easy/feasible this is in different SPARQL engines. Regarding Virtuoso, is it possible? If so, where could I find further information about it? By googling I only found how to do it for SQL, not for SPARQL: http://docs.openlinksw.com/virtuoso/aggregates/
Thanks for your attention and help,
Luis
You're more than halfway there.
Virtuoso lets you use SQL functions, both built-in (bif:) and user-defined (sql:), within SPARQL queries, as discussed in the documentation:
A SPARQL expression can contain calls to Virtuoso/PL functions and built-in SQL functions in both the WHERE clause and in the result set. Two namespace prefixes, bif and sql are reserved for these purposes. When a function name starts with the bif: namespace prefix, the rest of the name is treated as the name of a SQL BIF (Built-In Function). When a function name starts with the sql: namespace prefix, the rest of the name is treated as the name of a Virtuoso/PL function owned by DBA with database qualifier DB, e.g., sql:example(...) is converted into DB.DBA."example"(...).
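For example (a sketch only: my_bool_and is a hypothetical Virtuoso/PL aggregate you would first have to create as described in the SQL aggregates documentation you linked, and ex: is a placeholder vocabulary), the SPARQL side could look roughly like this:
# sql:my_bool_and resolves to DB.DBA."my_bool_and"(...); verify against
# your Virtuoso version that user-defined aggregates are accepted here.
PREFIX ex: <http://example.org/>
SELECT (sql:my_bool_and(?flag) AS ?allTrue)
WHERE {
  ?item ex:isActive ?flag .
}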
ObDisclaimer: OpenLink Software produces Virtuoso, and employs me.
I want to know if it's possible to pass in a list as a parameter in native queries.
When searching online, I found an article on Baeldung that does exactly what I want to do:
Collection-Valued Positional Parameters usage
I did the exact same thing, except that in the article, they used "createQuery" and I used "createNativeQuery". Not sure if this is the reason why mine is not working.
createQuery means JPQL was passed in, which is parsed and rewritten into SQL; that allows the provider to break the collection parameter into its components and pass each one into the SQL statement. createNativeQuery uses your SQL as-is, and JDBC doesn't understand collections, so it requires the parameters to be broken up into individual arguments in the SQL. You have to do that yourself and dynamically build the SQL based on the number of elements in the collection.
There are other questions with solutions that touch on other options, such as using SQL within criteria or JPQL queries, which can give you the best of both.
I am using Template Driven Extraction to generate an SQL view and RDF triples from the same set of documents. The SQL view is used for quick inspection of the raw data, while the triples are used downstream to feed information to a knowledge graph.
I now need to extract the RDF triples into an external file, and I'm struggling to separate out the triples that back the SQL view. The documentation suggests that I should use fixed subjects or predicates in my SPARQL query, which I can't do because I don't know either of them beforehand. I tried filtering out the SQL triples in XQuery, but I could not devise a way to detect whether a given value returned by sem:sparql or a triple returned by cts:triples was one of SQL's or one of mine.
Any help on how to get a dump of all non-SQL triples out of MarkLogic would be appreciated.
Thanks,
Hans
Subjects from SQL views are not real sem:iri's (they are sql:rowID's), so you can use the following to exclude them:
FILTER( ISIRI(?subject) )
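For example, a minimal CONSTRUCT sketch (run via sem:sparql or any other SPARQL entry point; the variable names are arbitrary) that dumps everything except the SQL-view triples:
# Keep only triples whose subject is a genuine IRI; the TDE/SQL-view rows
# have sql:rowID subjects and are filtered out.
CONSTRUCT { ?s ?p ?o }
WHERE {
  ?s ?p ?o .
  FILTER( ISIRI(?s) )
}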
HTH!
You could try to use the function tde:node-data-extract.
It basically lets you see the results of applying your TDE templates to a document.
While it may involve some work to do this for all documents and convert the output back into RDF, it should be possible.
Are variable names in SPARQL queries case-sensitive? E.g., will variables ?abc and ?ABC (within a given scope) always refer to the same variable?
If the answer can only be given in relation to a specific implementation, I'm most interested in the current version of Jena (ARQ).
Yes, variable names are case sensitive. ?abc and ?ABC are definitely different; they do not map to the same bindings in the query.
Yes, variable names are case sensitive. This is not explicitly stated in the SPARQL specification, but it is implied by the fact that everything that is not case sensitive (e.g. SPARQL keywords such as "SELECT") is explicitly stated to be so. As you remarked yourself, case sensitivity follows from the absence of a statement to the contrary.
All compliant SPARQL implementations that I know of, including Sesame, Jena, GraphDB, Stardog, Redland, dotNetRDF, etc. etc. implement variable names in this fashion.
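For illustration (the property IRI below is a made-up placeholder), the two variables in this sketch are distinct and are bound independently:
# ?abc and ?ABC are separate variables; each solution binds both of them.
SELECT ?abc ?ABC
WHERE {
  ?abc <http://example.org/relatedTo> ?ABC .
}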
I am using this client
http://yasgui.laurensrietveld.nl
and I hope to query bioportal http://bioportal.bioontology.org
Most of my prior queries had a PREFIX and no FROM part. Can I move any FROM URL into PREFIX?
Using YASGUI client, what is the difference between FROM and the Endpoint field?
Can I rewrite any query with a from statement into a query that does not have it?
For example, I am not able to list the details of the Human Phenotype Ontology concept HP:0000023, because I am not sure what to put into FROM, or whether to use it at all.
There are a number of terms and mechanisms here. Let's go over them one by one.
First of all, a PREFIX clause is simply a declaration of a syntax shortcut, for use within your query. So this line:
PREFIX ex: <http://example.org/>
says that the string ex: is a shortcut for the string http://example.org/. If you have this prefix declared at the start of your query, you can use ex:someUrl (instead of <http://example.org/someUrl>) in other places in your query. It's simply there to make queries easier to read and write, but apart from that it has no influence on the meaning of your query.
A SPARQL endpoint is another term for a web service that can answer SPARQL queries.
The FROM clause of a SPARQL query determines the dataset (or more precisely, the default graph, which is part of the dataset) over which the query is executed. Any SPARQL endpoint may contain several graphs, each identified by a URI (so-called named graphs). A collection of such graphs together forms a dataset. If you don't specify a FROM clause (and perhaps also one or more FROM NAMED clauses), the dataset queried is simply whatever default dataset the endpoint chooses.
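As a rough sketch (the graph and property IRIs below are made up), a query that assembles its own dataset explicitly could look like this:
# Graphs listed in FROM are merged into the default graph;
# graphs listed in FROM NAMED are only reachable via GRAPH.
PREFIX ex: <http://example.org/>
SELECT ?s ?label
FROM <http://example.org/graphs/core>
FROM NAMED <http://example.org/graphs/annotations>
WHERE {
  ?s ex:label ?label .
}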
So, what does this mean for your specific questions?
Most of my prior queries had a PREFIX and no FROM part. Can I move any FROM URL into PREFIX?
As you can see from the above explanation, that would make no sense. They are different mechanisms, for different purposes, that just both happen to use URIs.
Using YASGUI client, what is the difference between FROM and the Endpoint field?
The endpoint field defines which service YASGUI needs to send the query to. The FROM clause tells the endpoint what dataset you want to query.
Can I rewrite any query with a from statement into a query that does not have it?
Not generally, no. The absence of a FROM clause means that the endpoint executes the query over its default dataset. Depending on how that endpoint is configured, this may mean that you either get a lot more results (namely not just from the one dataset you want, but from a lot of others) or none at all (in case the dataset you wanted to query is not part of the endpoint's default dataset).
I want to get results from a SPARQL query where the results contain no namespace.
E.g., there is a result in triple format like:
"http://www.xyz.com#Raxit" "http://www.w3.org/1999/02/22-rdf-syntax-ns#type" "http://www.xyz.com#Name"
So I want to get only the following:
Raxit type Name
I want to get these results directly from the SPARQL query. I am using Virtuoso.
Is it possible to get this from SPARQL?
Please share your thoughts on this.
Thanks in Advance.
If your data is regular, and you know that the substring you want always occurs after a # character, then you can use the STRAFTER function from SPARQL 1.1. I do not know whether this is available in Virtuoso's implementation or not.
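If it is available, a sketch along these lines should work (it assumes every IRI in your data really does contain a #):
# STRAFTER(STR(?x), "#") keeps only the part after the first '#',
# e.g. "http://www.xyz.com#Raxit" becomes "Raxit".
SELECT (STRAFTER(STR(?s), "#") AS ?sName)
       (STRAFTER(STR(?p), "#") AS ?pName)
       (STRAFTER(STR(?o), "#") AS ?oName)
WHERE {
  ?s ?p ?o .
}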
However, this is in general a very risky strategy. Not all URIs are formatted with a local name part after a # character; in fact, a URI may not have a legal or useful local name at all. So you should ask yourself: why do you think you need this? Generally speaking, a Semantic Web application uses the whole URI as an indivisible identifier. If your need is actually for something human-friendly to display in a UI, have your query also look for rdfs:label or skos:prefLabel properties (a sketch follows below). Worst case, try to abbreviate the URI to qname form (i.e. prefix:name), using the prefixes from the model or a service like prefix.cc.
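For example, a label lookup could look roughly like this (OPTIONAL keeps resources that have no label, falling back to the full URI):
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
# Prefer a human-readable label where one exists; otherwise show the IRI.
SELECT DISTINCT ?s (COALESCE(?lbl, STR(?s)) AS ?display)
WHERE {
  ?s ?p ?o .
  OPTIONAL { ?s rdfs:label ?lbl }
}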
The simplest way to achieve this is not to bother with adapting your query, but to just post-process the result yourself. Depending on which client library you use to communicate with Virtuoso, you will typically find it has API support to parse the result, get back the values, and for each value get only the local name (I suggest you look for a URI.getLocalName() method or something similar).