Jena SPARQL API using inference rules file

I am working with Jena SPARQL API, and I want to execute queries on my RDF files after applying inference rules. I created a .rul file that contains all my rules; now I want to run those rules and execute my queries. When I used OWL, I proceeded this way:
OntModel model1 = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM_MICRO_RULE_INF);
// read the RDF/XML file
model1.read( "./files/ontology.owl", "RDF/XML" );
model1.read( "./files/data.rdf", "RDF/XML" );
// Create a new query
String queryString =
".....my query";
Query query = QueryFactory.create(queryString);
QueryExecution qe = QueryExecutionFactory.create(query, model1);
ResultSet results = qe.execSelect();
ResultSetFormatter.out(System.out, results, query);
I want to do the same thing with inference rules, i.e., load my .rul file like this:
model1.read( "./files/rules.rul", "RDF/XML" );
This doesn't work with .rul files: the rules are not executed. Any ideas on how to load a .rul file? Thanks in advance.

Jena rules aren't RDF, and you don't read them into a model.
RDFS is RDF, and it is implemented internally using rules.
To build an inference model:
Model baseData = ...
List<Rule> rules = Rule.rulesFromURL("file:YourRulesFile") ;
Reasoner reasoner = new GenericRuleReasoner(rules);
Model infModel = ModelFactory.createInfModel(reasoner, baseData) ;
See ModelFactory for other ways to build models (e.g., RDFS inference) directly.
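Combining that with the query code from the question, a minimal sketch (assuming the file names used above; adjust the paths and query string to your setup) would look like:
// Needs imports from org.apache.jena.rdf.model, org.apache.jena.reasoner,
// org.apache.jena.reasoner.rulesys and org.apache.jena.query.
// Load the base data (same files as in the question).
Model baseData = ModelFactory.createDefaultModel();
baseData.read("./files/ontology.owl", "RDF/XML");
baseData.read("./files/data.rdf", "RDF/XML");
// Parse the Jena rules file (it is not RDF, so it is not read into a model).
List<Rule> rules = Rule.rulesFromURL("file:./files/rules.rul");
Reasoner reasoner = new GenericRuleReasoner(rules);
InfModel infModel = ModelFactory.createInfModel(reasoner, baseData);
// Query the inference model exactly as before.
String queryString = ".....my query";
Query query = QueryFactory.create(queryString);
QueryExecution qe = QueryExecutionFactory.create(query, infModel);
ResultSet results = qe.execSelect();
ResultSetFormatter.out(System.out, results, query);
qe.close();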

Apache Jena - Is it possible to write to output the BASE directive?

I just started using Apache Jena; in their introduction they explain how to write out the created model. As input I'm using a Turtle syntax file containing some data about some OWL ontologies, and I'm using the @base directive so I can use relative URIs in the syntax:
@base <https://valbuena.com/ontology-test/> .
And then writing my data as:
<sensor/AD590/1> a sosa:Sensor ;
    rdfs:label "AD590 #1 temperature sensor"@en ;
    sosa:observes <room/1#Temperature> ;
    ssn:implements <MeasureRoomTempProcedure> .
Apache Jena is able to read that @base directive and expands the relative URIs to their full versions, but when I write the model out, Jena doesn't write the @base directive or the relative URIs. The output is shown as:
<https://valbuena.com/ontology-test/sensor/AD590/1> a sosa:Sensor ;
    rdfs:label "AD590 #1 temperature sensor"@en ;
    sosa:observes <https://valbuena.com/ontology-test/room/1#Temperature> ;
    ssn:implements <https://valbuena.com/ontology-test/MeasureRoomTempProcedure> .
My code is the following:
Model m = ModelFactory.createOntologyModel();
String base = "https://valbuena.com/ontology-test/";
InputStream in = FileManager.get().open("src/main/files/example.ttl");
if (in == null) {
    System.out.println("file error");
    return;
} else {
    m.read(in, null, "TURTLE");
}
m.write(System.out, "TURTLE");
There are multiple read and write methods that take the base as a parameter:
On read() I've found that if @base isn't declared in the data file, it must be supplied on the read call; otherwise it can be set to null.
On write() the base parameter is optional; whether or not I specify a base (even as null or as a URI), the output is always the same: the @base directive doesn't appear and all relative URIs are written as full URIs.
I'm not sure if this is a bug or it's just not possible.
First, consider using a prefix like ":" -- this is not the same as base, but it makes the output nicer as well.
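For example, a quick sketch using the base URI from the question as the empty (":") prefix -- just an illustration:
// Register the base URI as the default prefix; the writer can then abbreviate
// URIs under it wherever Turtle syntax allows.
m.setNsPrefix("", "https://valbuena.com/ontology-test/");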
You can configure the base with (current version of Jena):
RDFWriter.create()
    .source(model)
    .lang(Lang.TTL)
    .base("http://base/")
    .output(System.out);
It seems the Jena RDF API introduction tutorial is not up to date: it shows the reading method I used above (FileManager), which has since been replaced by RDFDataMgr. The FileManager approach doesn't handle the base directive well.
After experimenting I've found that the base directive works well with:
Model model = ModelFactory.createDefaultModel();
RDFDataMgr.read(model,"src/main/files/example.ttl");
model.write(System.out, "TURTLE", base);
or
Model model = ModelFactory.createDefaultModel();
model.read("src/main/files/example.ttl");
model.write(System.out, "TURTLE", base);
Although model.write() is described as legacy in the RDF output documentation (whereas model.read() is considered common usage in the RDF input documentation -- I don't understand why), it is the only method I have found that accepts the base parameter (required to put the @base directive back into the output); the RDFDataMgr write methods don't include it.
Thanks to @AndyS for providing a simpler way to read the data, which led to fixing the problem.
@AndyS's answer allowed me to write relative URIs to the file, but did not include the base in use for the RDF/XML variations. To get the xml:base directive added correctly, I had to use the following:
RDFDataMgr.read(graph, is, Lang.RDFXML);
Map<String, Object> properties = new HashMap<>();
properties.put("xmlbase", "http://example#");
Context cxt = new Context();
cxt.set(SysRIOT.sysRdfWriterProperties, properties);
RDFWriter.create()
    .source(graph)
    .format(RDFFormat.RDFXML_PLAIN)
    .base("http://example#")
    .context(cxt)
    .output(os);

How to create and use geospatial indexes in MarkLogic from SPARQL

I have loaded the geospatial data from geonames.org into MarkLogic using RDF import.
When using the Query Console to explore the data, I see the data has been loaded into an xml document and looks like this:
<sem:triple>
<sem:subject>http://sws.geonames.org/2736540/</sem:subject>
<sem:predicate>http://www.w3.org/2003/01/geo/wgs84_pos#lat</sem:predicate>
<sem:object datatype="http://www.w3.org/2001/XMLSchema#string">40.41476</sem:object>
</sem:triple>
<sem:triple>
<sem:subject>http://sws.geonames.org/2736540/</sem:subject>
<sem:predicate>http://www.w3.org/2003/01/geo/wgs84_pos#long</sem:predicate>
<sem:object datatype="http://www.w3.org/2001/XMLSchema#string">-8.54304</sem:object>
</sem:triple>
I am able to do a SPARQL DESCRIBE and see data. Here is an example.
@prefix geonames: <http://www.geonames.org/ontology#> .
@prefix xs: <http://www.w3.org/2001/XMLSchema#> .
@prefix p0: <http://www.w3.org/2003/01/geo/wgs84_pos#> .
<http://sws.geonames.org/2736540/> geonames:parentCountry <http://sws.geonames.org/2264397/> ;
geonames:countryCode "PT"^^xs:string ;
p0:long "-8.54304"^^xs:string ;
geonames:featureCode <http://www.geonames.org/ontology#P.PPL> ;
geonames:parentADM1 <http://sws.geonames.org/2742610/> ;
geonames:parentFeature <http://sws.geonames.org/2742610/> ;
<http://www.w3.org/2000/01/rdf-schema#isDefinedBy> "http://sws.geonames.org/2736540/about.rdf"^^xs:string ;
a geonames:Feature ;
geonames:locationMap <http://www.geonames.org/2736540/pedreira-de-vilarinho.html> ;
geonames:name "Pedreira de Vilarinho"^^xs:string ;
geonames:nearbyFeatures <http://sws.geonames.org/2736540/nearby.rdf> ;
geonames:featureClass geonames:P ;
p0:lat "40.41476"^^xs:string .
I want to query over this data using SPARQL QUERY as my query type, in a way that takes advantage of the geospatial indexes that MarkLogic can create.
I have been having trouble with two aspects of this:
How do I correctly create the geospatial indexes for the wgs84_pos#lat and wgs84_pos#long predicates?
How do I access those indexes from SPARQL QUERY?
I would like to have a SPARQL query that can find subjects within some range of a point.
=====================================
Followup:
After following David Ennis's answer (which worked nicely, thanks!) I ended up with this sample XQuery that selects data out of documents via a geospatial search and then uses those IRIs in a SPARQL values query.
Example:
xquery version "1.0-ml";
import module namespace sem = "http://marklogic.com/semantics"
at "/MarkLogic/semantics.xqy";
let $matches := cts:search(//rdf:RDF,
cts:element-pair-geospatial-query (
fn:QName("http://www.geonames.org/ontology#","Feature"),
fn:QName("http://www.w3.org/2003/01/geo/wgs84_pos#", "lat"),
fn:QName ("http://www.w3.org/2003/01/geo/wgs84_pos#","long"),
cts:circle(10, cts:point(19.8,99.8))))
let $iris := sem:iri($matches//@rdf:about)
let $bindings := (fn:map(function($n) { map:entry("featureIRI", $n) }, $iris))
let $sparql := '
PREFIX wgs: <http://www.w3.org/2003/01/geo/wgs84_pos#>
SELECT *
WHERE {
?featureIRI wgs:lat ?lat;
wgs:long ?long.
}
'
return sem:sparql-values($sparql, $bindings)
This XQuery queries the geospatial index, finds matching documents, and then selects the IRI in the rdf:about attribute of each XML document.
It then maps over all of those IRIs and creates map entries that can be passed in the bindings parameter of the sem:sparql-values function.
I do not believe you can do what you want via just native SPARQL. Geospatial queries in any SPARQL implementation are extensions, like GeoSPARQL, Apache Jena's geospatial queries, etc.
My suggested approach in MarkLogic:
1. Insert the geonames subjects into MarkLogic as unmanaged triples (an XML or JSON document with embedded triples for each one).
2. In the same document, include the geospatial data in one of the acceptable MarkLogic formats. This essentially adds geospatial metadata to the triples, since it is in the same fragment.
3. Add geospatial path range indexes for the geospatial data.
4. Use SPARQL inside of MarkLogic with a cts query restriction.
The building blocks for the above:
Understanding unmanaged triples
Understanding geospatial region types
Understanding geospatial indexes
Understanding geospatial queries
Understanding semantics with cts search
Another approach to the final query could be the Optic API, but I do not see how it would negate the need to do steps 1-3.

SPARUL query to drop most graphs, using Jena API

I am trying to clear most of the graphs contained in my local Virtuoso triple store, using Apache Jena, as part of my clean up process before and after my unit tests. I think that something like this should be done. First, I retrieve the graph URIs to be deleted; then I execute a SPARUL Drop operation.
String sparqlEndpointUsername = ...;
String sparqlEndpointPassword = ...;
String sparqlQueryString = ...; // Returns the URIs of the graphs to be deleted
HttpAuthenticator authenticator = new SimpleAuthenticator(sparqlEndpointUsername,
        sparqlEndpointPassword.toCharArray());
ResultSet resultSetToReturn = null;
try (QueryEngineHTTP queryEngine = new QueryEngineHTTP(sparqlEndpoint, sparqlQueryString, authenticator)) {
    resultSetToReturn = queryEngine.execSelect();
    resultSetToReturn = ResultSetFactory.copyResults(resultSetToReturn);
    while (resultSetToReturn.hasNext()) {
        String graphURI = resultSetToReturn.next().getResource("?g").getURI();
        UpdateRequest request = UpdateFactory.create();
        request.add("DROP GRAPH <" + graphURI + ">");
        Dataset dataset = ...; // how can I create a default dataset pointing to my local virtuoso installation?
        // And perform the operations.
        UpdateAction.execute(request, dataset);
    }
}
Questions:
As shown in this example, ARQ needs a dataset to operate on. How would I create this dataset pointing to my local Virtuoso installation for an update operation?
Is there perhaps an alternative to my approach? Would using another approach (apart from Jena) be a better idea?
Please note that I am not trying to delete all graphs. I am deleting only the graphs whose names are returned through the SPARQL query defined in the beginning (3rd line).
Your question appears to be specific to Virtuoso, and meant to remove all RDF data, so you could use Virtuoso's built-in RDF_GLOBAL_RESET() function.
This is not a SPARQL/SPARUL query; it is usually issued through an SQL connection -- which could be JDBC, ODBC, ADO.NET, OLE DB, iSQL, etc.
That said, as you are connecting through a SPARUL-privileged connection, you should be able to use Virtuoso's (limited) SQL-in-SPARQL support, a la --
SELECT
( bif:RDF_GLOBAL_RESET() AS reset )
WHERE
{ ?s ?p ?o }
LIMIT 1
(Executing this through an unprivileged connection like the default SPARQL endpoint will result in an error like Virtuoso 37000 Error SP031: SPARQL compiler: Function bif:RDF_GLOBAL_RESET() can not be used in text of SPARQL query due to security restrictions.)
(ObDisclaimer: OpenLink Software produces Virtuoso, and employs me.)
You can build a single SPARQL Update request:
DROP GRAPH <g1> ;
DROP GRAPH <g2> ;
DROP GRAPH <g3> ;
... ;
because in SPARQL Update, one HTTP request can carry several update operations, separated by ;.
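With the Jena API, a rough sketch of that could look like the following (the endpoint URL is only an assumed example; reuse the authenticated connection from the question as appropriate):
// Collect all graph URIs first, then send one update request with one DROP per graph.
UpdateRequest request = UpdateFactory.create();
while (resultSetToReturn.hasNext()) {
    String graphURI = resultSetToReturn.next().getResource("?g").getURI();
    request.add("DROP GRAPH <" + graphURI + ">");
}
// Execute the combined request against the SPARQL update endpoint
// ("http://localhost:8890/sparql" is a placeholder for your Virtuoso endpoint).
UpdateProcessor processor = UpdateExecutionFactory.createRemote(request, "http://localhost:8890/sparql");
processor.execute();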

Query TriG file with SPARQL

I have a .trig file which I want to query without pushing to Jena Fuseki.
However, when I try to load the model using:
Model model = FileManager.get().loadModel("filepath/demo.trig");
certain links in the original TriG file are lost.
This is the code snippet:
FileManager.get().addLocatorClassLoader(RDFProject.class.getClassLoader());
Model model= FileManager.get().loadModel("filePath/demo.trig");
model.write(System.out);
Is there any alternate way to do this?
Use RDFDataMgr to load a dataset (not a model) and query that.
Dataset ds = RDFDataMgr.loadDataset("filepath/demo.trig");
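For example, a minimal sketch that loads the TriG file as a dataset and queries across its named graphs (the query is only an illustration):
Dataset ds = RDFDataMgr.loadDataset("filepath/demo.trig");
// Named graphs are preserved in the dataset, so GRAPH patterns work as expected.
String queryString = "SELECT * WHERE { GRAPH ?g { ?s ?p ?o } }";
Query query = QueryFactory.create(queryString);
QueryExecution qe = QueryExecutionFactory.create(query, ds);
ResultSet results = qe.execSelect();
ResultSetFormatter.out(System.out, results, query);
qe.close();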

TDB Jena Querying

I am trying to query in Java with Jena using TDB. Basically I have an N3 file named song.n3 that I want to use with TDB. I created a directory, which is generated in my Java1 folder (NetBeans project folder), and I have the path to the actual N3 file. After running this code I get the error "java.lang.NoClassDefFoundError". Debugging shows the error is caused by the line Dataset dataset = TDBFactory.createDataset(directory);. I am not sure why this error occurs; could it be because my directory is empty, with no model?
public static void main(String[] args) throws IOException {
    String directory = "./tdb";
    Dataset dataset = TDBFactory.createDataset(directory);
    Model tdb = dataset.getDefaultModel();
    String source = "C:\\Users\\Name\\Documents\\NetBeansProjects\\Java1\\src\\song.n3";
    FileManager.get().readModel(tdb, source, "N3");
    String queryString = "PREFIX owl: <http://www.w3.org/2002/07/owl#> SELECT * WHERE { ?x owl:sameas ?y }";
    Query query = QueryFactory.create(queryString);
    QueryExecution qe = QueryExecutionFactory.create(query, tdb);
    ResultSet results = qe.execSelect();
    ResultSetFormatter.out(System.out, results, query);
    qe.close();
}
This looks like a problem with your CLASSPATH. When I use TDB I have the following script to load the Jena TDB libraries into my classpath:
#!/bin/bash
CP="."
for i in ./TDB-0.8.9/lib/*.jar ; do
    CP="$CP:$i"
done
export CLASSPATH=$CP
It is bash but very easy to translate into a Windows script. Bottom line: make sure that all the jars in the /lib/ directory are on the CLASSPATH. It would also help if you gave the complete java.lang.NoClassDefFoundError message showing which class was not found; that would give a hint of what is missing -- probably one of the logging libs that are not shipped inside the Jena distribution.
Also, watch out for that owl:sameas predicate. SPARQL and RDF are case-sensitive, and the correct predicate is owl:sameAs.
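With that fix, the query string from the question becomes:
// owl:sameAs with a capital A; owl:sameas silently matches nothing.
String queryString = "PREFIX owl: <http://www.w3.org/2002/07/owl#> SELECT * WHERE { ?x owl:sameAs ?y }";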