Java source to RDF conversion - semantic-web

Is there a way to convert Java source directly to equivalent RDF? I am aware of manually creating a Java object to RDF/OWL object mapping with the Jena API, but I need the mapping from Java source code to RDF/OWL objects to be automated. Is there any tool available for that?
Thanks in advance

You might check out Empire, which integrates JPA with SPARQL, letting you build an application around standard POJOs that are stored in an RDF triplestore. It handles round-tripping between RDF and Java for you and abstracts away most of the details of RDF -- though some SPARQL knowledge is still ideal.
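For illustration, a rough sketch of an Empire-annotated POJO (annotation names follow Empire's documentation; the FOAF mapping and the Person class here are just examples):
@Namespaces({"foaf", "http://xmlns.com/foaf/0.1/"})
@Entity
@RdfsClass("foaf:Person")
public class Person implements SupportsRdfId {
    @RdfProperty("foaf:name")
    private String name;
    // getters/setters and the SupportsRdfId implementation omitted
}
// Empire persists this through the standard JPA EntityManager as RDF triples:
// em.persist(new Person());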

Related

SPARQL over custom representation of semantic data

I have a non-standard way of storing and representing semantic data, and I was looking into some possibilities of supporting SPARQL queries. It seems that the best solution is to implement a so-called driver for a standard API framework, such as Apache Jena, but at least for Jena it's not so clear how this can be done. The following image, taken from the official documentation, suggests that I should implement the Store API; however, I couldn't find any documentation concerning it. Furthermore, the Javadocs of TDB, Jena's native triple store, imply that there is no Store API.
A secondary question: is there a Python alternative to Jena (which is written in Java)?

Save triples in a SPARQL remote endpoint using Jena library?

How can Jena be used to save triples in a SPARQL endpoint?
I could use a SPARQL RESTful API, but I wonder whether this is also doable using Jena classes.
For SPARQL Update you can do the following:
// Parse your update operations, then execute them against the remote update endpoint
UpdateRequest update = UpdateFactory.create("# Your SPARQL Updates");
UpdateProcessor processor = UpdateExecutionFactory.createRemote(update, "http://your-domain/update");
processor.execute();
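For example, a minimal sketch inserting one placeholder triple:
UpdateRequest insert = UpdateFactory.create(
    "PREFIX ex: <http://example.org/> " +
    "INSERT DATA { ex:subject ex:predicate \"object\" }");
UpdateExecutionFactory.createRemote(insert, "http://your-domain/update").execute();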
If you are talking about the SPARQL Graph Store Protocol, i.e. uploading entire graphs at once, then you can use the DatasetAccessor API, e.g.:
Model m = ModelFactory.createDefaultModel(); // the model holding the triples to save
DatasetAccessor accessor = DatasetAccessorFactory.createHTTP("http://your-domain/ds");
accessor.putModel(m); // replaces the default graph; accessor.add(m) merges instead
If you are talking about MarkLogic specifically (you tagged the question with marklogic), then this GitHub project will likely interest you:
https://github.com/marklogic/marklogic-jena
This library integrates the MarkLogic Semantics feature into the Jena RDF Framework as a persistence and query layer.
Note: not officially released yet, but close. Might be worth a look.
HTH!

What can be done using OWL reasoning?

I'm working on an OWL ontology and I need to address some specific issues.
I only need the ontology schema (the TBox), and I got lost: which operations can be performed using reasoning, SPARQL, and the OWL API?
More specifically, I need the following:
1- check cardinalities between classes and properties.
2- find subsumption relationships for a specific class.
3- check whether specific facts hold (e.g. whether two classes are disjoint)
4- find the paths (a class-property series) between a set of classes.
What is each of reasoning, SPARQL, and the OWL API used for, and which one is suitable for my situation?
Actually, I don't know how to start or which technique to use.
In addition, would you please refer me to some references?
Thanks.
Number 1 is not clear: do you want to know which cardinality axioms are asserted? This can be done without a reasoner. Number 4 is a bit vague as well; can you provide an example?
Tasks 2 and 3 require a reasoner to be performed accurately.
A reasoner is a program that makes implicit information explicit: subsumption, realisation, and consistency checking are all operations for which a reasoner is needed. Among your tasks, subsumption clearly requires one.
OWLAPI is a Java API for manipulating OWL ontologies; in your case, it could be useful for writing the connecting code that uses a reasoner for your tasks. Compatible reasoners include Pellet, HermiT, FaCT++, and a few more.
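As a rough sketch (the ontology file and class IRIs are placeholders; the built-in structural reasoner stands in here, and you would plug in Pellet or HermiT via their reasoner factories for full reasoning), tasks 2 and 3 with the OWLAPI might look like:
OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
OWLOntology ontology = manager.loadOntologyFromOntologyDocument(new File("ontology.owl"));
OWLReasoner reasoner = new StructuralReasonerFactory().createReasoner(ontology);
OWLDataFactory df = manager.getOWLDataFactory();
OWLClass a = df.getOWLClass(IRI.create("http://example.org/onto#A"));
OWLClass b = df.getOWLClass(IRI.create("http://example.org/onto#B"));
// Task 2: direct (inferred) subclasses of A
for (OWLClass sub : reasoner.getSubClasses(a, true).getFlattened()) {
    System.out.println(sub);
}
// Task 3: does the ontology entail that A and B are disjoint?
boolean disjoint = reasoner.isEntailed(df.getOWLDisjointClassesAxiom(a, b));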
SPARQL is an RDF query language. OWLAPI does not support it. You could use it for your tasks, but they look more OWL-oriented than RDF-oriented to me. Jena is a Java library supporting RDF, OWL, and SPARQL, and it interfaces with reasoners such as Pellet. Depending on how you decide to solve the above tasks, it might fit more of your requirements than the OWLAPI.
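If you go the Jena/SPARQL route instead, here is a minimal sketch (file name and class IRI are placeholders) of querying the TBox for subclasses; note that this only sees asserted triples unless you wrap the model in one of Jena's inference models:
Model model = ModelFactory.createDefaultModel();
model.read("ontology.owl");
String q = "PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> " +
           "SELECT ?sub WHERE { ?sub rdfs:subClassOf <http://example.org/onto#A> }";
try (QueryExecution qe = QueryExecutionFactory.create(q, model)) {
    ResultSet results = qe.execSelect();
    while (results.hasNext()) {
        System.out.println(results.next().get("sub"));
    }
}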
Jena tutorials:
https://jena.apache.org/tutorials/index.html
OWLAPI documentation:
https://github.com/owlcs/owlapi/wiki/Documentation

J2SE desktop applications - JPA database vs Collections?

I come from a web development background and haven't done anything significant in Java in quite some time.
I'm doing a small project, most of which involves some models with relationships and straightforward CRUD operations with those objects.
JPA/EclipseLink seems to suit the problem, but this is the kind of app that has File->Open and File->Save features; i.e., the data will be stored in files by the user rather than persisting in a database between sessions.
The last time I worked on a project like this, I stored the objects in ArrayLists, but having worked with MVC frameworks since, that seems a bit primitive. On the other hand, with JPA, opening a file would require loading a whole bunch of objects into the database just for the convenience of not having to write code to manage the objects.
What's the typical approach for managing model data with Java SE desktop applications?
JPA was specifically built with databases in mind. This means it typically operates on a big datastore with objects belonging to many different users.
In a file-based scenario, files are often not that big, and all objects in the file belong to the same user and the same document. In that case, I'd say that for a binary format the old Java serialization still works for temporary files.
For longer-term or interchangeable formats, XML is better suited. Using JAXB (included in the standard Java library) you can marshal and unmarshal Java objects to and from XML using an annotation-based approach that on the surface resembles JPA. In fact, I've worked with model objects that carry both JPA and JAXB annotations, so they can be stored in a database as well as in an XML file.
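A minimal sketch of such a dual-annotated class (class and field names are illustrative):
@Entity                 // JPA: maps to a database table
@XmlRootElement         // JAXB: maps to an XML document root
public class Customer {
    @Id @GeneratedValue // JPA primary key
    private Long id;
    private String name;
    // getters/setters omitted
}
// Saving to a user-chosen file via JAXB:
JAXBContext ctx = JAXBContext.newInstance(Customer.class);
Marshaller marshaller = ctx.createMarshaller();
marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
marshaller.marshal(customer, new File("customer.xml"));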
If your desktop app uses files that represent potentially huge datasets for which you need paging and querying, however, then using JPA might still be the better option. There are various small embedded DBs available for Java, although I don't know how simple it is to point a data source at a user-selected file. Normally a persistence unit in Java is mapped to a fixed data source, and you can't yet create persistence units on the fly.
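One possible partial workaround (a sketch only: it assumes an embedded database such as H2, and the persistence unit name is a placeholder; the property key is the standard JPA 2.0 one) is to override the JDBC URL when creating the EntityManagerFactory so that it points at the user-selected file:
// Point the persistence unit at the file the user picked in the File->Open dialog
Map<String, String> props = new HashMap<>();
props.put("javax.persistence.jdbc.url", "jdbc:h2:" + chosenFile.getAbsolutePath());
EntityManagerFactory emf = Persistence.createEntityManagerFactory("myPU", props);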
Yet another option would be to use JDO, which is a mapping technology like JPA, but not an ORM. It's much more independent of the backend persistence technology being used, and it can indeed map to files as well.
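As a rough sketch of that file-backed idea (using DataNucleus's XML datastore; the PersistenceManagerFactory class name and the xml:file: connection URL scheme are assumptions based on its documentation):
Properties props = new Properties();
props.setProperty("javax.jdo.PersistenceManagerFactoryClass",
    "org.datanucleus.api.jdo.JDOPersistenceManagerFactory");
props.setProperty("javax.jdo.option.ConnectionURL", "xml:file:data.xml");
PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory(props);
PersistenceManager pm = pmf.getPersistenceManager();
pm.makePersistent(myModelObject); // persisted to data.xml rather than a database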
Sorry that this is not a real answer but more a set of things to take into account; I hope it's helpful in some way.

XStream <-> Alternative binary formats (e.g. protocol buffers)

We currently use XStream for encoding our web service inputs/outputs in XML. However, we are considering switching to a binary format with code generators for multiple languages (protobuf, Thrift, Hessian, etc.) to make supporting new clients easier and less reliant on hand-coding (and also to better support our message formats, which include binary data).
However, most of our objects on the server are POJOs, with XStream handling the serialization via reflection and annotations, and most of these libraries assume they will be generating the POJOs themselves. I can think of a few ways to interface with an alternative library:
1. Write an XStream marshaller for the target format.
2. Write custom code to marshal the POJOs to/from the classes generated by the alternative library.
3. Subclass the generated classes to implement the POJO logic. May require some rewriting. (Also, did I mention we want to use Terracotta?)
4. Use another library that supports both reflection (like XStream) and code generation.
However I'm not sure which serialization library would be best suited to the above techniques.
(1) might not be that much work since many serialization libraries include a helper API that knows how to read/write primitive values and delimiters.
(2) probably gives you the widest choice of tools: https://github.com/eishay/jvm-serializers/wiki/ToolBehavior (some are language-neutral). Flawed but hopefully not totally useless benchmarks: https://github.com/eishay/jvm-serializers/wiki
Many of these tools generate classes, which would require writing code to convert to/from your POJOs. Tools that work with POJOs directly typically aren't language-neutral.
(3) seems like a bad idea (not knowing anything about your specific project). I normally keep my message classes free of any other logic.
(4) The Protostuff library (which supports the Protocol Buffer format) lets you write a "schema" to describe how you want your POJOs serialized. But writing this schema might end up being more work and more error-prone than just writing code to convert between your POJOs and some tool's generated classes.
Protostuff can also automatically generate a schema via reflection, but this might yield a message format that feels a bit Java-centric.
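For what it's worth, a minimal sketch of that reflection-based path (class and field names are illustrative; this uses the io.protostuff runtime APIs):
static class Person { String name; int age; }

// Derive a schema from the POJO at runtime via reflection -- no generated classes
Schema<Person> schema = RuntimeSchema.getSchema(Person.class);

Person in = new Person();
in.name = "Alice";
in.age = 30;

// Serialize to bytes (ProtobufIOUtil would give the strict protobuf wire format)
byte[] bytes = ProtostuffIOUtil.toByteArray(in, schema, LinkedBuffer.allocate(512));

// Deserialize into a fresh instance
Person out = schema.newMessage();
ProtostuffIOUtil.mergeFrom(bytes, out, schema);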