Is there an error in the example given for inference of friends on the following docs page?
https://docs.cambridgesemantics.com/anzograph/v2.2/userdoc/inferences.htm?Highlight=inference#inference-example
I had to place an @ before the prefix (i.e., use Turtle's @prefix form) in order to get the example working.
EDIT::
So a good look at the Turtle page on w3.org shows that both PREFIX and @prefix are fine.
But with PREFIX there should not be a '.' at the end of the PREFIX line, so removing that works fine to load the ttl file as both a graph and an EXTERNAL source.
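For reference, the two prefix-declaration forms that Turtle 1.1 allows, shown here with a placeholder namespace (not the one from the docs page):

```turtle
# Turtle directive form: lowercase @prefix, trailing dot required
@prefix ex: <http://example.org/> .

# SPARQL-style form: uppercase PREFIX, no trailing dot
PREFIX ex2: <http://example.org/>

ex:alice ex2:knows ex:bob .
```

Mixing the two styles in one file is legal in Turtle 1.1; the error in the docs example was simply a trailing dot on a PREFIX line.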
Thanks for the report. The update will be made.
The following query works fine with Jena ARQ but in GraphDB it does not retrieve anything:
SELECT *
FROM <http://www.bobdc.com/miscfiles/BeatlesMusicians.ttl>
WHERE { ?s ?p ?o .}
Is there something I need to configure on GraphDB to get this to work?
After reviewing https://www.w3.org/TR/sparql11-query/ it looks to me like the spec is saying that treating the FROM URI as a URL and retrieving triples from that location is something that the query engine can do but is not required to do.
Section 13.2 says that FROM specifies an IRI, which sounds like it's talking about a named graph and not necessarily a remote dataset to retrieve (that is, treating the URI as a URL), which is what I was looking for.
Section 13.2.3 does have "FROM <http://example.org/dft.ttl>" in one of its examples, which looks to me like it's specifying a Turtle file on some remote server to read into the default graph, like I was trying to do.
Section 21 says "SPARQL queries using FROM, FROM NAMED, or GRAPH may cause the specified URI to be dereferenced," but as we see from the word "may", does not require it. (The rest of that paragraph has a little more about this.)
I have found that the Jena tools arq and fuseki do this, but GraphDB and Blazegraph do not.
I would like to perform a full-text search on a subset of DBpedia (which I have in a TDB store) with Lucene and Jena.
// Dataset and TDBFactory come from the Jena TDB packages (com.hp.hpl.jena.* in Jena 2.x)
String TDBDirectory = "path";
Dataset dataset = TDBFactory.createDataset(TDBDirectory);
But not over all resources, only over titles. I think by making indices only over the needed triples I can perform a faster search. E.g.
<http://de.dbpedia.org/resource/Gurke> <http://www.w3.org/2000/01/rdf-schema#label> "Gurke"@de .
Here I would like to search for "Gurke", but not in any triples other than the ones with the rdfs:label property.
So my question is: how do I build indices and search only triples with the rdfs:label property?
I have already looked at http://jena.sourceforge.net/ARQ/lucene-arq.html but it's either not detailed enough or too difficult for me.
http://jena.sourceforge.net/ is the old home for Jena -- the project is now http://jena.apache.org/ (how did you manage to find that old page?)
The project recently introduced a replacement for LARQ.
http://jena.apache.org/documentation/query/text-query.html
and this is now part of the main codebase. It will be released with the 2.10.2 release; for the moment you must use the development build from https://repository.apache.org/content/repositories/snapshots/org/apache/jena/. You either need to be using Fuseki or add it as a dependency to your project.
This new text search subsystem works much better with TDB and Fuseki.
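For the rdfs:label-only indexing asked about above, the jena-text entity map lets you restrict the Lucene index to a single predicate. A minimal assembler sketch, assuming the vocabulary from the text-query page linked above; the <#...> resource names and directory paths are placeholders:

```turtle
@prefix text: <http://jena.apache.org/text#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix tdb:  <http://jena.hpl.hp.com/2008/tdb#> .

# Text dataset = base TDB dataset + Lucene index
<#dataset> a text:TextDataset ;
    text:dataset <#tdbDataset> ;
    text:index   <#indexLucene> .

<#tdbDataset> a tdb:DatasetTDB ;
    tdb:location "path/to/tdb" .

# Index only triples whose predicate is rdfs:label
<#indexLucene> a text:TextIndexLucene ;
    text:directory <file:path/to/lucene> ;
    text:entityMap <#entMap> .

<#entMap> a text:EntityMap ;
    text:entityField "uri" ;
    text:defaultField "label" ;
    text:map ( [ text:field "label" ; text:predicate rdfs:label ] ) .
```

A query would then use something like ?s text:query (rdfs:label 'Gurke') and hit only the label index, which gives exactly the "index only the needed triples" behaviour the question asks for.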
So I have this .rdf file that I have loaded into Stardog, and I am using Pubby, running over Jetty, to browse the triple store.
In my rdf file, I have several blank nodes, each of which is given a blank node identifier by Stardog. So this is a snippet of the rdf file.
<kbp:ORG rdf:about="http://somehostname/resource/res1">
  <kbp:canonical_mention>
    <rdf:Description>
      <kbp:mentionStart rdf:datatype="http://www.w3.org/2001/XMLSchema#integer">1234</kbp:mentionStart>
      <kbp:mentionEnd rdf:datatype="http://www.w3.org/2001/XMLSchema#integer">1239</kbp:mentionEnd>
    </rdf:Description>
  </kbp:canonical_mention>
</kbp:ORG>
So basically I have some resource "res1" which links to a blank node that has mention start and mention end offset values.
The snippet of the config.ttl file for Pubby is shown below.
conf:dataset [
# SPARQL endpoint URL of the dataset
conf:sparqlEndpoint <http://localhost:5822/xxx/query>;
#conf:sparqlEndpoint <http://localhost:5822/foaf/query>;
# Default graph name to query (not necessary for most endpoints)
conf:sparqlDefaultGraph <http://dbpedia.org>;
# Common URI prefix of all resource URIs in the SPARQL dataset
conf:datasetBase <http://somehostname/>;
...
...
So the key thing is the datasetBase, which maps URIs to URLs.
When I try to map this, there is an "Anonymous node" link, but upon clicking it, nothing is displayed. My guess is that this is because the blank node has some identifier like _:bnode1234 which is not mapped by Pubby.
I wanted to know if anyone out there knows how to map these blank nodes.
(Note: If I load this rdf as a static rdf file directly onto Pubby, it works fine. But when I use stardog as a triple store, this mapping doesn't quite work)
It probably works in Pubby because it keeps the bnode ids available; generally, the SPARQL spec does not guarantee or require that bnode identifiers are persistent. That is, you can issue the same query multiple times and get back the same result set (including bnodes), yet the identifiers can be different each time. Similarly, a bnode identifier in a query is treated like a variable; it does not mean you are querying for that specific bnode.
Thus, Pubby is probably being helpful and making that work, which is why serving the file directly from Pubby works as opposed to going through a third-party database.
Stardog does support the Jena/ARQ trick of putting a bnode identifier in angle brackets, that is, <_:bnode1234> which is taken to mean, the bnode with the identifier "bnode1234". If you can get Pubby to use that syntax in queries for bnodes, it will probably work.
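If Pubby could be persuaded to emit queries in that angle-bracket form, a lookup for the blank node from the question would look like this (note this is a Jena/ARQ and Stardog extension, not standard SPARQL):

```sparql
SELECT ?p ?o
WHERE {
  # Non-standard: angle-bracketed bnode label, resolved to the stored bnode
  <_:bnode1234> ?p ?o .
}
```

In standard SPARQL the token _:bnode1234 in a query would just act as a fresh variable, which is exactly why the extension is needed here.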
But generally, I think this is something you will have to take up with the Pubby developers.
I'm working on a project using sparql & dbpedia.
I'm currently having an issue with a textual property with a slash in it.
Here is a working query with the property "discharge", which expresses the amount of water per unit of time of a river:
PREFIX dbp: <http://dbpedia.org/property/>
SELECT ?discharge
WHERE
{
<http://dbpedia.org/resource/Nile> dbp:discharge ?discharge .
FILTER(ISLITERAL(?discharge))
}
LIMIT 200
This request is working fine.
Still, if I use a similar property called "discharge_m3/s", it's not working anymore and I get this error, which incriminates the slash in the property name:
Virtuoso 37000 Error SP030: SPARQL compiler, line 3: syntax error at
'/' before 's'
Any idea how to get around this?
Do you mean you are trying to use the property in prefixed name form, i.e. dbp:discharge_m3/s?
If that is the case, you can't do that, because that is not a valid prefixed name according to the SPARQL grammar, hence the compiler error.
You would have to include the full URI instead of the prefixed name form e.g.
<http://dbpedia.org/property/discharge_m3/s>
In compliant SPARQL 1.1 systems, you can backslash-escape the slash: dbp:discharge_m3\/s. I'm not sure if Virtuoso supports that syntax yet. In the meantime, @RobV's solution will work.
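Putting the suggestions together, here is a version of the query using the full-URI form, which should parse everywhere; the escaped prefixed-name form is shown as a comment, since whether it works depends on the engine's SPARQL 1.1 support:

```sparql
PREFIX dbp: <http://dbpedia.org/property/>
SELECT ?discharge
WHERE
{
  # Full-URI form: the slash inside <...> is unproblematic
  <http://dbpedia.org/resource/Nile> <http://dbpedia.org/property/discharge_m3/s> ?discharge .
  # SPARQL 1.1 escaped local name, where supported:
  # <http://dbpedia.org/resource/Nile> dbp:discharge_m3\/s ?discharge .
  FILTER(ISLITERAL(?discharge))
}
LIMIT 200
```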
I'm pretty new to SPARQL, OWL and Jena, so please excuse me if I'm asking utterly stupid questions. I'm having a problem that has been driving me nuts for a couple of days. I'm using the following String as a query for Jena's QueryFactory.create(queryString):
queryString = "PREFIX foaf: <http://xmlns.com/foaf/0.1/>"+
"PREFIX ho: <http://www.flatlandfarm.de/fhtw/ontologies/2010/5/22/helloOwl.owl#>" +
"SELECT ?name ?person ?test ?group "+
"WHERE { ?person foaf:name ?name ; "+
" a ho:GoodPerson ; "+
" ho:isMemberOf ?group ; "+
"}";
Until this morning it worked as long as I only asked for properties from the foaf namespace. As soon as I asked for properties from my own namespace I always got empty results. While I was about to post this question here and did some final tests to be able to post it as precisely as possible, it suddenly worked. So, as I no longer knew what exactly to ask, I deleted my question before posting it.

A couple of hours later I used Protégé's Pellet plugin to create and export an inferred model. I called it helloOwlInferred.owl and uploaded it to the directory on my server where helloOwl.owl resided. I adjusted my method to load the inferred ontology and changed the above query so that the prefix ho: pointed to the inferred ontology as well. At once, nothing worked any more. To be exact, it was not that nothing worked; rather, I saw the same symptoms I had until this morning with my original query: my prefix did not work any more.

I did a simple test: I renamed all the helloOwlInferred.owl files (the one on my server for the prefix and my local copy which I loaded) to helloOwl.owl. Strangely enough, that fixed everything. Renaming them back to helloOwlInferred.owl broke everything again. And so on. What is going on there? Do I just need to wait a couple of weeks until my ontology gets "registered as a valid prefix"?
Maybe your OWL file contains the rdf:ID="something" construct (or some other form of relative URL, such as rdf:about="#something")?
rdf:ID and relative URLs are expanded into full absolute URLs, such as http://whatever/file.owl#something, by using the base URL of the OWL file. If the base URL is not explicitly specified in the file (using something like xml:base="http://whatever/file.owl"), then the location of the file on the web server (or in your file system if you load a local file) will be used as the base URI.
So if you move the file around or have copies in several locations, then the URIs in your file will change, and hence you'd have to change your SPARQL query accordingly.
Including an explicit xml:base, or avoiding relative URIs and rdf:ID, should fix the problem.
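As an illustration, a file with an explicit xml:base resolves the same way no matter where it is served from (namespace declarations shortened; the base URI is simply the one from the question):

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:owl="http://www.w3.org/2002/07/owl#"
         xml:base="http://www.flatlandfarm.de/fhtw/ontologies/2010/5/22/helloOwl.owl">
  <!-- rdf:ID="GoodPerson" expands against xml:base to
       http://www.flatlandfarm.de/fhtw/ontologies/2010/5/22/helloOwl.owl#GoodPerson -->
  <owl:Class rdf:ID="GoodPerson"/>
</rdf:RDF>
```

Without the xml:base attribute, that same rdf:ID would expand against wherever the file happens to live, which is exactly the renaming sensitivity described in the question.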
The whole idea of prefixes and QNames is just to compress URIs to save space and improve readability; the most common issue with them is spelling errors, either in the definitions themselves or in the QNames.
Most likely the prefix definition you are using in your query is causing URIs to be generated which don't match the actual URIs of properties in your ontology.
That being said, your issue may be due to something in Jena, so it may well be worth asking your question on the Jena Mailing List.
It looks like this was caused by a bug (or a feature?) in Protégé. When I exported the inferred ontology under a new name, Protégé changed the definitions of xmlns (blank) and xml:base to the name of the new file, but it did not change the definition of the actual namespace.
xmlns="http://xyz.com/helloOwl.owl" => xmlns="http://xyz.com/helloOwlInferred.owl"
xml:base="http://xyz.com/helloOwl.owl" => xml:base="http://xyz.com/helloOwlInferred.owl"
xmlns:helloOwl="http://xyz.com/helloOwl.owl" => xmlns:helloOwl="http://xyz.com/helloOwl.owl"
<!ENTITY helloOwl "http://xyz.com/helloOwl.owl#" > => <!ENTITY helloOwl "http://xyz.com/helloOwl.owl#" >
Since I fixed that, it seems to work.
My fault for not having examined the actual source with the necessary attention.
You have to define a precise URI prefix for ho:, then tell Protégé about it (there is a panel for namespaces; define the same URI as the ontology prefix), so that, when you define GoodPerson in Protégé, it assumes you mean http://www.flatlandfarm.de/fhtw/ontologies/2010/5/22/helloOwl.owl#GoodPerson, which is the same as ho:GoodPerson only if you have used the same URI prefix in both places.
If you don't do so, Protégé (or some other component, like a web server) will do silly things like composing the ontology's URI and its default URI prefix (the one that goes in front of GoodPerson when you don't specify any prefix) from the file name (or, even worse, a URI like file:///home/user/...).
Remember, the ontology's URI is technically different from the URI prefix that you use for the entities associated with the ontology itself (classes, properties, etc.), and ho: is just a shortcut with a local meaning, which depends on what you define in documents like files or SPARQL queries.
The ontology URI can also be different from the URL from which the ontology file can be fetched, although it is good to make them the same. Usually you need to play with URL rewriting in Apache to make that happen, but sometimes the ontology file isn't physically published, since the ontology is loaded into a SPARQL endpoint and its URI is resolved to an RDF document with the help of the endpoint itself, by rewriting the ontology URI into a SPARQL request that issues a DESCRIBE statement. The same trick can be used to resolve any other URI (e.g., your ontology-instantiating data), as long as the associated data are accessible from your SPARQL endpoint (i.e., are in your triple store).