Query a local ttl file using SPARQL on Virtuoso Conductor? - sparql

I am trying to learn SPARQL, and I want to query a local .ttl file that is in my Downloads folder.
The path is : C:/Users/abc/Downloads/human-instructions-english-wikihow/en_0_rdf_result.ttl
SELECT ?s ?p ?o
FROM <C:/Users/abc/Downloads/human-instructions-english-wikihow/en_0_rdf_result.ttl>
WHERE {?s ?p ?o}
LIMIT 1000
So I am trying to execute a very simple query like this, but it does not return any output.
I understand we have to put a SPARQL endpoint or something with 'http' in the FROM clause, but this file is in my Downloads folder, and I can't figure out what the endpoint would be.
Please help me with this. Thanks.

(If you haven't already, you need to install the Virtuoso Sponger Middleware module, cartridges_dav.vad, for your version of Virtuoso Enterprise/Commercial Edition or Open Source Edition.)
First, you need to add this line to the top of your SPARQL query --
define get:soft "replace"
That "define pragma" is a SPARQL extension, which tells Virtuoso to resolve remote URLs it encounters in the rest of the query.
Then, you need to use a full URI for the target file. This may be a file: scheme URI, if and only if --
- the URI is properly constructed
- the target file is accessible through the filesystem where Virtuoso is running
- the directory holding the target file is included in the DirsAllowed parameter in the virtuoso.ini file
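Putting those pieces together, the asker's query could be rewritten like this (a sketch, assuming Virtuoso runs on the same Windows machine and the Downloads directory is listed in DirsAllowed):

```sparql
define get:soft "replace"
SELECT ?s ?p ?o
FROM <file:///C:/Users/abc/Downloads/human-instructions-english-wikihow/en_0_rdf_result.ttl>
WHERE { ?s ?p ?o }
LIMIT 1000
```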
Also see How to import JSON-LD into Virtuoso.

Related

jena doesn't use LocationMapper on owl-imports

I have a ttl file with owl-imports clause like
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
<http://test/data.ttl>
a owl:Ontology ;
owl:imports <file:///Users/tht/workspace/jenatest/test_course.ttl> ;
owl:versionInfo "tht testing owl:imports"^^xsd:string .
When the test_course.ttl file exists, FileManager.get().readModel loads the model, the other ttl file is imported, and SPARQL queries work fine. But if I remove the file and use FileManager.get().setLocationMapper().addAltEntry() to redirect to another existing file, the model is not what I expect and the SPARQL queries return no results.
So owl:imports works fine, but it seems like Jena is not using the LocationMapper when importing? Or could my mapping URIs be incorrect? I'm using something like
mapper.addAltEntry("file:///Users/tht/workspace/jenatest/test_course.ttl",
"file:///Users/tht/workspace/jenatest/test_course.redirected.ttl");
OntModels have their own FileManager for handling owl:imports.
This, and the LocationMapper, are accessed via the OntModel's DocumentManager:
model.getDocumentManager().addAltEntry(..., ...)
and other API calls.

Pure-SPARQL migration of data from one endpoint to another?

It looks like this question has been raised before, but subsequently deleted?!
For data in one SQL table, I can easily replicate the structure and then migrate the data to another table (or database?).
CREATE TABLE new_table
AS (SELECT * FROM old_table);
SELECT *
INTO new_table [IN externaldb]
FROM old_table
WHERE condition;
Is there something analogous for RDF/SPARQL? Something that combines a select and an insert into one SPARQL statement?
Specifically, I use Karma, which publishes data to an embedded OpenRDF/Sesame endpoint. There's a text box on the GUI for the endpoint, so I can change it to a free-standing RDF4J, since RDF4J is a fork of Sesame.
Unfortunately, I get an error like invalid SPARQL endpoint from Karma when I put the address for a Virtuoso, Stardog or Blazegraph endpoint in the endpoint text box. I suspect it might be possible to modify and recompile Karma, or (more realistically), I could write a small tool with the Jena or RDF4J libraries to select into RAM or scratch disk space and then insert into the other endpoint.
But if there's a pure-SPARQL solution, I'd sure like to hear it.
In SPARQL, you can only specify the source endpoint. Therefore, a partial pure-SPARQL solution would be to run the following update on your target triplestore:
INSERT { ?s ?p ?o }
WHERE {
  SERVICE <http://source/sparql> {
    ?s ?p ?o
  }
}
This will copy over all triples from the (remote) source's default graph to your target store, but it doesn't copy over any named graphs. To copy over any named graphs as well, you can execute this in addition:
INSERT { GRAPH ?g { ?s ?p ?o } }
WHERE {
  SERVICE <http://source/sparql> {
    GRAPH ?g { ?s ?p ?o }
  }
}
If you're not hung up on pure SPARQL, though, different toolkits and frameworks offer all sorts of options. For example, using RDF4J's Repository API you could wrap both source and target in a SPARQLRepository proxy (or use an HTTPRepository if either one is an actual RDF4J store), and then run copy operations via the API. There are many different ways to do that; one possible approach (disclaimer: I didn't test this code fragment) is this:
SPARQLRepository source = new SPARQLRepository("http://source/sparql");
source.initialize();
SPARQLRepository target = new SPARQLRepository("http://target/sparql");
target.initialize();
try (RepositoryConnection sourceConn = source.getConnection();
RepositoryConnection targetConn = target.getConnection()) {
sourceConn.export(new RDFInserter(targetConn));
}

Apache Marmotta Delete All Graphs using SPARQL Update

I am trying to clear all graphs loaded into my Apache Marmotta instance. I have tried several SPARQL queries, but I am not able to remove the RDF/XML graph that I imported. What is the appropriate syntax to do so?
Try this query:
DELETE WHERE { ?x ?y ?z }
Be careful: it deletes every triple in the database, including Marmotta's built-in ones.
A couple of things I did for experimenting:
I downloaded the source code of Marmotta and used the Silver Searcher tool for searching for DELETE queries with the following command:
ag '"DELETE '
This did not help much.
I navigated to the Marmotta installation directory and watched the debug log:
tail -f marmotta-home/log/marmotta-main.log
This showed that the parser is not able to process the query DELETE DATA { ?s ?p ?o }. The exception behind the "error while executing update" was:
org.openrdf.sail.SailException: org.openrdf.rio.RDFParseException: Expected an RDF value here, found '?' [line 8]
[followed by a long stacktrace]
This shows that the parser does not allow variables in the query after DELETE DATA.
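For contrast, a DELETE DATA request is only valid with concrete triples, e.g. (the example.org IRIs here are placeholders):

```sparql
DELETE DATA {
  <http://example.org/s> <http://example.org/p> "some value"
}
```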
Based on a related StackOverflow answer, I tried CLEAR / CLEAR GRAPH / DROP / DROP GRAPH, but they did not work.
I tried many combinations of DELETE, *, and ?s ?p ?o, and accidentally got it working with the DELETE WHERE construct. According to the W3C documentation:
The DELETE WHERE operation is a shortcut form for the DELETE/INSERT operation where bindings matched by the WHERE clause are used to define the triples in a graph that will be deleted.
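So the shortcut query above is equivalent to spelling out the full DELETE/INSERT form with the same pattern in both clauses:

```sparql
DELETE { ?x ?y ?z }
WHERE  { ?x ?y ?z }
```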

How to suppress objectProperty in Fuseki?

I have a problem deleting one objectProperty in a graph on my Fuseki server.
I have tried many ways to delete the objectProperty, without results.
I tried to use s-delete to do it:
./s-delete http://localhost:3030/ds 'DELETE {?s <http://www.semanticweb.org/ds/dependsOfExchange> ?o}'
or
./s-delete http://localhost:3030/ds 'DELETE {GRAPH ?g {?s <http://www.semanticweb.org/ds/dependsOfExchange> ?o} WHERE{?s <http://www.semanticweb.org/ds/dependsOfExchange> ?o}}'
I tried to find information on how to use s-delete correctly to delete an objectProperty or dataProperty from data stored in my Fuseki server, but I haven't found anything useful. And there are no update or delete tools accessible via the browser.
Thanks in advance.
s-delete executes part of the SPARQL Graph Store Protocol, which operates on whole graphs only. It is an HTTP DELETE on a named graph.
To modify part of a graph, you need SPARQL Update, so use s-update instead.
The delete command, using the DELETE WHERE short form, might be:
DELETE WHERE {?s <http://www.semanticweb.org/ds/dependsOfExchange> ?o}
or the GRAPH version.
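That GRAPH version, which removes the property from every named graph rather than just the default graph, might look like this (a sketch reusing the predicate URI from the question; run it through s-update against the dataset's update endpoint):

```sparql
DELETE WHERE {
  GRAPH ?g { ?s <http://www.semanticweb.org/ds/dependsOfExchange> ?o }
}
```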

How to create a SPARQL endpoint using Virtuoso?

I have just set up Virtuoso, and I have loaded an OWL file (created with the Protégé software) from my local machine into Virtuoso using the following code:
SQL> DB.DBA.RDF_LOAD_RDFXML_MT (file_to_string_output ('antibiotics.owl'), '', 'http://myexample.com');
Now, my question is how do I access the URI myexample.com ? How do I create a SPARQL endpoint in Virtuoso so that I can query it?
No need to create a SPARQL endpoint, since it's already there.
Check the inserted RDF data on your Virtuoso instance's SPARQL endpoint at http://cname:port/sparql (usually http://localhost:8890/sparql). To run some test queries, use Virtuoso's web interface (the Conductor) at http://localhost:8890/conductor and go to the 'Linked Data' tab.
Enter a query like:
SELECT ?s ?p ?o
FROM <http://myexample.com>
WHERE {?s ?p ?o}
LIMIT 1000
to get started.
You can also query directly from the vsql command line by prefixing your SPARQL query with 'SPARQL '.
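For example, against the graph loaded earlier (assuming the same graph IRI):

```sql
SQL> SPARQL SELECT * FROM <http://myexample.com> WHERE {?s ?p ?o} LIMIT 10;
```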
To get results in a specific format directly via an HTTP GET request:
http://localhost:8890/sparql?query=(YourQueryUriEncodedWithout())&format=json
For a more detailed answer, consult the documentation here:
http://docs.openlinksw.com/virtuoso/rdfsparql.html
Points of interest:
16.2.3.15. SPARQL Endpoint with JSON/P Output Option: Curl Example
16.2.5. SPARQL Inline in SQL
If you still want your own endpoint:
16.2.3.4.6. Creating and Using a SPARQL-WebID based Endpoint
Best regards...