Save triples in a remote SPARQL endpoint using the Jena library?

How can Jena be used to save triples in a SPARQL endpoint?
I could use the SPARQL RESTful API directly, but I wonder if this is also doable using Jena classes.

For SPARQL Update you can do the following:
UpdateRequest update = UpdateFactory.create("# Your SPARQL Updates");
UpdateProcessor processor = UpdateExecutionFactory.createRemote(update, "http://your-domain/update");
processor.execute();
If you are talking about the SPARQL Graph Store Protocol, i.e. uploading entire graphs at once, then you can use the DatasetAccessor API, e.g.
DatasetAccessor accessor = DatasetAccessorFactory.createHTTP("http://your-domain/ds");
accessor.putModel(m);

If you are talking about MarkLogic specifically (you tagged the question with marklogic), then this github project will likely interest you:
https://github.com/marklogic/marklogic-jena
This library integrates the MarkLogic Semantics feature into the Jena RDF Framework as a persistence and query layer.
Note: it's not officially released yet, but close. Might be worth a look.
HTH!

Related

RDF4J SAIL API implementation

I am trying to build a federated RDF application based on rdf4j and FedX. What I need is to be able to:
Optimize the querying plan and joining strategies.
To expose different and heterogeneous databases (A timeseries or a relational DB for example) in a federated fashion.
I went through the rdf4j documentation a little and got a general grasp of it, so I have a few questions:
Is there any documentation that explains how to implement the SAIL API? I tried to debug and follow the flow of execution of an example query using an RDF memory store, and I got lost.
Suppose I want to expose a relational database in my data center: should I implement a SPARQL repository or an HTTP repository? Should I in any way implement the SAIL API?
Concerning FedX, how can I make it possible to use the SERVICE and VALUES keywords as proposed in SPARQL 1.1 federated queries? How can I change the joining strategies? The query plan?
I know this could be answered if I dove deeply into the code, but I wonder if someone has already exposed some kind of database using the RDF4J API, or has worked with and tuned RDF4J.
Thanks to you all!
Is there any documentation that explains how to implement the SAIL API? I tried to debug and follow the flow of execution of an example query using an RDF memory store and I got lost.
There is a basic design draft, but it's incomplete. A more comprehensive how-to has been in the planning for a while, but it never quite gets the priority it needs.
That said, I don't think you need to implement your own SAIL for what you have in mind. There are plenty of existing implementations that can do what you need.
Suppose I want to expose a relational database in my data center: should I implement a SPARQL repository or an HTTP repository?
I don't understand the question. HTTPRepository is a client-side proxy for an RDF4J Server. SPARQLRepository is a client-side proxy for a (non-RDF4J) SPARQL endpoint. Neither has anything to do with relational databases.
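To make the distinction concrete, here is a sketch of how the two client-side proxies are constructed in RDF4J (the server URL and repository ID are placeholders, not real deployments):

```java
import org.eclipse.rdf4j.repository.http.HTTPRepository;
import org.eclipse.rdf4j.repository.sparql.SPARQLRepository;

// Proxy for a repository managed by an RDF4J Server (placeholder URL and ID):
HTTPRepository rdf4jRepo =
    new HTTPRepository("http://localhost:8080/rdf4j-server", "myRepo");

// Proxy for any plain SPARQL endpoint, RDF4J-based or not:
SPARQLRepository sparqlRepo = new SPARQLRepository("https://dbpedia.org/sparql");
sparqlRepo.init();
// ... open a RepositoryConnection on either and run queries as usual ...
sparqlRepo.shutDown();
```

Note that neither constructor talks to a relational database; both speak HTTP to an already-running RDF service.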
Should I in any way implement the SAIL API?
It depends on your use case, but I doubt it - at least not right at the outset. I'd probably use an existing R2RML library that is compatible with RDF4J, for example the R2RML API or CARML - either a live mapping or an offline batch mapping between the relational data and your triplestore may solve your problem.
Concerning FedX, how can I make it possible to use the SERVICE and VALUES keywords as proposed in SPARQL 1.1 federated queries?
You don't need to "make it possible" to do that, FedX supports this out of the box.
How can I change the joining strategies? The query plan?
You can't (at least not easily), nor should you want to. Quite a lot of research and development went into RDF4J's and FedX's query planning strategies. I'm not saying either is perfect, but you're unlikely to do better.
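For completeness, a minimal sketch of standing up a FedX federation over two public endpoints (the endpoint URLs are just examples; this assumes the FedX module of RDF4J is on the classpath):

```java
import java.util.Arrays;
import org.eclipse.rdf4j.federated.FedXFactory;
import org.eclipse.rdf4j.federated.repository.FedXRepository;

// Federation over two SPARQL endpoints; FedX handles source selection
// and join ordering internally, and SERVICE/VALUES work out of the box.
FedXRepository federation = FedXFactory.createSparqlFederation(Arrays.asList(
        "https://dbpedia.org/sparql",
        "https://query.wikidata.org/sparql"));
// ... query it through a normal RepositoryConnection ...
federation.shutDown();
```

Queries against the federation are written as ordinary SPARQL 1.1; there is no FedX-specific query syntax.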

SPARQL over custom representation of semantic data

I have a non-standard way of storing and representing semantic data, and I was looking into possibilities for supporting SPARQL queries. It seems that the best solution is to implement a so-called driver for a standard API framework such as Apache Jena, but at least for Jena it's not clear how this can be done. The following image, taken from the official documentation, suggests that I should implement the Store API; however, I couldn't find any documentation about it. Furthermore, the Javadocs of TDB, Jena's native triple store, imply that there is no Store API.
A secondary question: is there a Python alternative to Jena (which is written in Java)?
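For what it's worth, Jena's usual extension point for custom storage is the Graph SPI rather than a separate "Store API": subclass GraphBase and implement the triple-pattern lookup, then wrap the result in a Model. A self-contained sketch, where an in-memory list stands in for the non-standard representation:

```java
import java.util.Arrays;
import java.util.List;
import org.apache.jena.graph.NodeFactory;
import org.apache.jena.graph.Triple;
import org.apache.jena.graph.impl.GraphBase;
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.util.iterator.ExtendedIterator;
import org.apache.jena.util.iterator.WrappedIterator;

// Adapter exposing a custom data source as a Jena Graph.
class MyGraph extends GraphBase {
    // Stand-in for the non-standard store: a fixed list of triples.
    private final List<Triple> data = Arrays.asList(
            Triple.create(NodeFactory.createURI("http://example.org/s"),
                          NodeFactory.createURI("http://example.org/p"),
                          NodeFactory.createLiteral("o")));

    @Override
    protected ExtendedIterator<Triple> graphBaseFind(Triple pattern) {
        // Translate the triple-pattern lookup into calls on your own store;
        // here we just filter the list against the pattern.
        return WrappedIterator.create(
                data.stream().filter(pattern::matches).iterator());
    }
}

// Wrap the Graph in a Model and run SPARQL over it:
Model model = ModelFactory.createModelForGraph(new MyGraph());
try (QueryExecution qe = QueryExecutionFactory.create(
        "SELECT * WHERE { ?s ?p ?o }", model)) {
    System.out.println(qe.execSelect().next());
}
```

Everything else (the SPARQL engine, result formatting) comes for free once find-by-pattern works. On the Python side, rdflib plays a roughly comparable role, though this answer does not cover it.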

Where can I find some RDF and SPARQL queries to practice writing SPARQL?

I am trying to practice writing SPARQL queries. Does anybody know where I can find the best material - some RDF files and some tasks, so I can try to write my own queries? I am good with SQL and just need some material to learn to write SPARQL.
All sample RDF and queries from the O'Reilly book "Learning SPARQL" are available on the book's home page at learningsparql.com. (Full disclosure: I wrote it.)
data.gov and DataHub have a lot of downloadable RDF data sets. If a public SPARQL endpoint is available, DataHub usually lists it. For example: the Rijksmuseum page offers RDF downloads and a link to the endpoint.
My Experiment has a tutorial with examples and a working endpoint.
If you download Jena, you get their example RDF files and SPARQL queries.
Uniprot has a SPARQL endpoint with examples. The RDF is available for download; some of the files are quite large.
There's a large number of downloadable ontologies in RDF format at the OBO Foundry.
Watch this: Probe the Semantic Web with SPARQL
SPARQL Cheat Sheet Slide Deck
As mentioned above: the website for Bob DuCharme's excellent Learning SPARQL Book
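If you'd rather practice without a remote endpoint at all, Jena can run SPARQL over a small local model; a self-contained sketch (the FOAF data is made up):

```java
import java.io.StringReader;
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.ResultSet;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

// A tiny Turtle document to query against.
String turtle =
    "@prefix foaf: <http://xmlns.com/foaf/0.1/> .\n"
  + "<http://example.org/alice> foaf:name \"Alice\" ; foaf:knows <http://example.org/bob> .\n"
  + "<http://example.org/bob> foaf:name \"Bob\" .\n";

Model model = ModelFactory.createDefaultModel();
model.read(new StringReader(turtle), null, "TURTLE");

String query =
    "PREFIX foaf: <http://xmlns.com/foaf/0.1/>\n"
  + "SELECT ?name WHERE { ?person foaf:name ?name } ORDER BY ?name";

try (QueryExecution qe = QueryExecutionFactory.create(query, model)) {
    ResultSet results = qe.execSelect();
    while (results.hasNext()) {
        System.out.println(results.next().getLiteral("name").getString());
    }
}
```

Swap in any of the downloadable datasets above for the inline Turtle once the basics click.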

Java source to RDF conversion

Is there a way to convert Java source directly to equivalent RDF? I am aware of manually creating a Java object to RDF/OWL object mapping with the Jena API, but I need the mapping of Java source code to RDF/OWL objects to be automated. Is there any tool available for that?
Thanks in advance
You might check out Empire, which is an integration between JPA and SPARQL, letting you build an application around standard POJOs that are stored in an RDF triplestore. It handles round-tripping between RDF and Java for you and abstracts away most of the details of RDF - though some SPARQL knowledge is still ideal.
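For comparison, this is roughly the kind of mapping Empire automates for you - a hand-written Jena mapping of a hypothetical POJO (the Person class and URIs are invented for illustration):

```java
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.vocabulary.VCARD;

// Hypothetical POJO to be persisted as RDF.
class Person {
    String uri;
    String fullName;
}

Person p = new Person();
p.uri = "http://example.org/alice";
p.fullName = "Alice";

// The manual field-by-field mapping that a JPA-style layer would generate:
Model m = ModelFactory.createDefaultModel();
Resource r = m.createResource(p.uri).addProperty(VCARD.FN, p.fullName);
```

With Empire, the equivalent mapping is declared once on the class rather than repeated for every field at every call site.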

How to reflect the semantic web benefits in Enterprise Information System?

I am developing a demo of a semantic web-based information system, which just uses SPARQL instead of traditional SQL to manipulate the dataset. How can the application demonstrate Semantic Web benefits?
I did the following steps:
1. The client gets parameters from the web UI.
2. It requests a web service.
3. The service generates a SPARQL command according to the given parameters.
4. The service uses the Jena/SDB API to execute the SPARQL command.
5. It retrieves or persists data from or to MySQL.
6. It parses the returned result set.
7. It returns a JSON object to the client.
8. The client uses JavaScript + HTML to display the data.
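For step 3, one safe way to build the SPARQL command from user parameters in Jena is ParameterizedSparqlString, which escapes the injected values; a small sketch (the FOAF query and the "Alice" value are made up):

```java
import org.apache.jena.query.ParameterizedSparqlString;

// Template with a placeholder variable for the user-supplied value.
ParameterizedSparqlString pss = new ParameterizedSparqlString(
    "PREFIX foaf: <http://xmlns.com/foaf/0.1/>\n"
  + "SELECT ?person WHERE { ?person foaf:name ?name }");
pss.setLiteral("name", "Alice");  // value taken from the web UI

String sparql = pss.toString();  // ready to hand to QueryExecutionFactory
```

Concatenating raw request parameters into the query string instead would open the service to SPARQL injection, the same way string-built SQL does.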
Currently, the application just has CRUD operations. The only difference from a traditional IS is that it uses SPARQL instead of SQL, so no obviously semantic features are visible. I'm thinking of two points:
Demonstrating data federation through SPARQL. On this point, can I imagine the system being broken down into several subsystems that work on their own independent datasets but can communicate with each other via SPARQL, because they all work on the RDF specification?
Reasoning over datasets. I use ontologies to describe the data schema; should my reasoning operations be based on them? In my application, I get an RDF model and use Pellet to do inference. Is that the correct way?
Basically, if the application can demonstrate data federation and reasoning, it can be seen as a semantic web-based application. Do I understand that right?
Hopefully, the application can also combine services together automatically through semantic descriptions. Furthermore, any other third-party data source could then communicate with the system and work immediately.
Yes, you are right. The benefit of the semantic web is that you can write separate sets of ontologies that describe the domains (e.g. product, user) and then combine them using inference and reasoning, making the data much more useful (e.g. product types and user preferences).
The difference is that the rules for the data are now written with the data and not in the business logic layer.
Hope this helps. :)
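To make the reasoning point concrete, here is a small self-contained sketch using Jena's built-in RDFS reasoner (Pellet would plug in the same way via an InfModel; all URIs are made up):

```java
import org.apache.jena.rdf.model.InfModel;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.vocabulary.RDF;
import org.apache.jena.vocabulary.RDFS;

// Schema: every Book is a Product.
Model schema = ModelFactory.createDefaultModel();
Resource product = schema.createResource("http://example.org/Product");
Resource book = schema.createResource("http://example.org/Book");
schema.add(book, RDFS.subClassOf, product);

// Data: one instance typed only as Book.
Model data = ModelFactory.createDefaultModel();
Resource item = data.createResource("http://example.org/item1");
data.add(item, RDF.type, book);

// The reasoner derives the extra type from the ontology -
// the rule lives with the data, not in the business logic layer.
InfModel inf = ModelFactory.createRDFSModel(schema, data);
System.out.println(inf.contains(item, RDF.type, product));  // true
```

A SQL-backed CRUD app would need an explicit join or hard-coded rule to get the same answer, which is exactly the contrast the demo should show.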