I have a non-standard way of storing and representing semantic data, and I was looking into possibilities for supporting SPARQL queries. It seems that the best solution is to implement a so-called driver for a standard API framework, such as Apache Jena, but at least for Jena it's not clear how this can be done. The following image, taken from the official documentation, suggests that I should implement the Store API; however, I couldn't find any documentation about it. Furthermore, the Javadocs of TDB, Jena's native triple store, imply that there is no Store API.
A secondary question: is there a Python alternative to Jena (which is written in Java)?
I am trying to build a federated RDF application based on rdf4j and FedX. What I need is to be able to:
Optimize the query plan and join strategies.
Expose different, heterogeneous databases (a time-series or a relational DB, for example) in a federated fashion.
I went through the rdf4j documentation a bit and got a rough grasp of it, so I have a few questions:
Is there any documentation that explains how to implement the SAIL API? I tried to debug and follow the flow of execution of an example query using an RDF memory store, and I got lost.
Suppose I want to expose a relational database in my data center. Should I implement a SPARQL repository or an HTTP repository? Should I implement the SAIL API in any case?
Concerning FedX, how can I make it possible to use the SERVICE and VALUES keywords as proposed in SPARQL 1.1 federated queries? How can I change the join strategies? The query plan?
I know this could be answered by diving deep into the code, but I wonder whether someone has already exposed some kind of database using the RDF4J API, or has worked with and tuned RDF4J.
Thanks to you all!
Is there any documentation that explains how to implement the SAIL API? I tried to debug and follow the flow of execution of an example query using an RDF memory store and I got lost.
There is a basic design draft, but it's incomplete. A more comprehensive how-to has been in the planning for a while, but it never quite gets the priority it needs.
That said, I don't think you need to implement your own SAIL for what you have in mind. There are plenty of existing implementations that can do what you need.
Suppose I want to expose a relational database in my data center. Should I implement a SPARQL repository or an HTTP repository?
I don't understand the question. HTTPRepository is a client-side proxy for an RDF4J Server. SPARQLRepository is a client-side proxy for a (non-RDF4J) SPARQL endpoint. Neither has anything to do with relational databases.
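For reference, here is a minimal sketch of how a SPARQLRepository is used as a client-side proxy for a remote SPARQL endpoint (the endpoint URL is a placeholder, and init() is initialize() in older RDF4J versions):

import org.eclipse.rdf4j.query.BindingSet;
import org.eclipse.rdf4j.query.TupleQuery;
import org.eclipse.rdf4j.query.TupleQueryResult;
import org.eclipse.rdf4j.repository.RepositoryConnection;
import org.eclipse.rdf4j.repository.sparql.SPARQLRepository;

public class SparqlRepositoryExample {
    public static void main(String[] args) {
        // Client-side proxy for a remote (non-RDF4J) SPARQL endpoint
        SPARQLRepository repo = new SPARQLRepository("http://example.org/sparql");
        repo.init();
        try (RepositoryConnection conn = repo.getConnection()) {
            TupleQuery query = conn.prepareTupleQuery("SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10");
            try (TupleQueryResult result = query.evaluate()) {
                while (result.hasNext()) {
                    BindingSet bs = result.next();
                    System.out.println(bs.getValue("s") + " " + bs.getValue("p") + " " + bs.getValue("o"));
                }
            }
        } finally {
            repo.shutDown();
        }
    }
}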
Should I implement the SAIL API in any case?
It depends on your use case, but I doubt it - at least not right at the outset. I'd probably use an existing R2RML library that is compatible with RDF4J, for example the R2RML API or CARML - either a live mapping or an offline batch mapping between the relational data and your triplestore may solve your problem.
Concerning FedX, how can I make it possible to use the SERVICE and VALUES keywords as proposed in SPARQL 1.1 federated queries?
You don't need to "make it possible" to do that, FedX supports this out of the box.
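For illustration, a minimal sketch of setting up a federation with FedX and running a query that uses VALUES (the endpoint URLs are placeholders; package names assume the FedX that ships with recent RDF4J releases):

import java.util.Arrays;
import org.eclipse.rdf4j.federated.FedXFactory;
import org.eclipse.rdf4j.federated.repository.FedXRepository;
import org.eclipse.rdf4j.query.TupleQuery;
import org.eclipse.rdf4j.query.TupleQueryResult;
import org.eclipse.rdf4j.repository.RepositoryConnection;

public class FedXExample {
    public static void main(String[] args) {
        // Federation over two SPARQL endpoints (placeholder URLs)
        FedXRepository repo = FedXFactory.createSparqlFederation(Arrays.asList(
                "http://endpoint1.example.org/sparql",
                "http://endpoint2.example.org/sparql"));
        try (RepositoryConnection conn = repo.getConnection()) {
            // VALUES and SERVICE are plain SPARQL 1.1 here; FedX handles the federation
            TupleQuery query = conn.prepareTupleQuery(
                    "SELECT ?s WHERE { VALUES ?type { <http://example.org/TypeA> } ?s a ?type } LIMIT 10");
            try (TupleQueryResult result = query.evaluate()) {
                while (result.hasNext()) {
                    System.out.println(result.next().getValue("s"));
                }
            }
        } finally {
            repo.shutDown();
        }
    }
}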
How can I change the join strategies? The query plan?
You can't (at least not easily), nor should you want to. A lot of research and development has gone into RDF4J's and FedX's query planning strategies. I'm not saying either is perfect, but you're unlikely to do better.
I need your help with the following situation.
I have a local relational database that contains information about several places in a city. These places can be any kind of attraction: a museum, a cathedral, or even a square.
As an example, I have information about "Square Victoria" (https://en.wikipedia.org/wiki/Victoria_Square,_Montreal).
A simple Google search gave me the Wikipedia URL above, but I want to be able to do this programmatically.
For each place in the database I have also its category (square, museum, church, ....). These categories are local only and do not match any standardized categorization.
My goal is to improve this database by associating each place with its DBpedia URI.
My question is: what is the best way to do that? I have some theoretical background in Semantic Web technologies, but I don't yet have the practical skills to figure out how to do it.
More specific questions:
Is it possible to determine the DBpedia URI using SPARQL only?
If it is not possible with SPARQL only, what other technologies would I need to accomplish that?
Thank you
First of all, if you have not done so yet, I would recommend having a look at Wikidata. This project is a semantic extension of Wikipedia, but unlike DBpedia, the data is not extracted from Wikipedia; it is created by contributors, and therefore appears (or will appear, as the project is still growing) to be more relevant.
The service offers many ways to access the data (including a SPARQL endpoint), and its main advantage is that the underlying software is MediaWiki, the same software used for Wikipedia and other Wikimedia Foundation projects. The MediaWiki API offers an opensearch action that should allow you to search more efficiently than with SPARQL queries.
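To make that last point concrete, here is a rough sketch (the class name and search string are made up, and it requires Java 11+ for the HTTP client) of calling the MediaWiki opensearch action and then mapping the returned Wikipedia title to a DBpedia URI:

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class PlaceLookup {
    public static void main(String[] args) throws Exception {
        String place = "Victoria Square Montreal"; // a place name from the local database
        String url = "https://en.wikipedia.org/w/api.php?action=opensearch&limit=5&format=json&search="
                + URLEncoder.encode(place, StandardCharsets.UTF_8);
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // The JSON response lists candidate article titles and URLs; a Wikipedia title such as
        // "Victoria Square, Montreal" maps to http://dbpedia.org/resource/Victoria_Square,_Montreal
        System.out.println(response.body());
    }
}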
Putting everything together, I think it is worth having a look at Wikidata plus the Wikipedia API to get pivot data for aligning your local database.
No direct answer, but I hope this helps.
How can Jena be used to save triples to a SPARQL endpoint?
I could use the SPARQL RESTful API, but I wonder whether this is also doable using Jena classes.
For SPARQL Update you can do the following:
UpdateRequest update = UpdateFactory.create("# Your SPARQL Updates");
UpdateProcessor processor = UpdateExecutionFactory.createRemote(update, "http://your-domain/update");
processor.execute();
If you are talking about the Graph Store Protocol, i.e. uploading entire graphs at once, then you can use the DatasetAccessor API, e.g.
DatasetAccessor accessor = DatasetAccessorFactory.createHTTP("http://your-domain/ds");
accessor.putModel(m);
If you are talking about MarkLogic specifically (you tagged the question with marklogic), then this GitHub project will likely interest you:
https://github.com/marklogic/marklogic-jena
This library integrates MarkLogic Semantics feature into the Jena RDF Framework as a persistence and query layer.
Note: it is not officially released yet, but it's close. Might be worth a look.
HTH!
I'm currently investigating options for extracting person names, locations, tech terms and categories from text (a lot of articles from the web), which will then be fed into a Lucene/Elasticsearch index. The additional information is then added as metadata and should increase the precision of the search.
E.g. when someone queries 'wicket', they should be able to decide whether they mean the cricket sport or the Apache project. I have tried to implement this on my own with minor success so far. Now I have found a lot of tools, but I'm not sure whether they are suited for this task, which of them integrate well with Lucene, and whether the precision of entity extraction is high enough.
Dbpedia Spotlight, the demo looks very promising
OpenNLP requires training. Which training data to use?
OpenNLP tools
Stanbol
NLTK
balie
UIMA
GATE -> example code
Apache Mahout
Stanford CRF-NER
maui-indexer
Mallet
Illinois Named Entity Tagger (not open source but free)
wikipedianer data
My questions:
Does anyone have experience with some of the tools listed above and their precision/recall? Or do you know whether training data is required and available for them?
Are there articles or tutorials where I can get started with entity extraction (NER) for each of these tools?
How can they be integrated with Lucene?
Here are some questions related to that subject:
Does an algorithm exist to help detect the "primary topic" of an English sentence?
Named Entity Recognition Libraries for Java
Named entity recognition with Java
The problem you are facing in the 'wicket' example is called entity disambiguation, not entity extraction/recognition (NER). NER can be useful, but only when the categories are specific enough. Most NER systems don't have enough granularity to distinguish between a sport and a software project (both types would fall outside the typically recognized types: person, organization, location).
For disambiguation, you need a knowledge base against which entities are disambiguated. DBpedia is a typical choice due to its broad coverage. See my answer to How to use DBPedia to extract Tags/Keywords from content? where I provide more explanation and mention several tools for disambiguation, including:
Zemanta
Maui-indexer
Dbpedia Spotlight
Extractiv (my company)
These tools often use a language-independent API like REST, and I do not know whether they directly provide Lucene support, but I hope my answer has been helpful for the problem you are trying to solve.
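To illustrate the REST style of integration, here is a rough sketch of calling the public DBpedia Spotlight annotate endpoint (the endpoint URL and parameters reflect the public service as I know it and may change; requires Java 11+):

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class SpotlightExample {
    public static void main(String[] args) throws Exception {
        String text = "Wicket is a component-based web framework from the Apache Software Foundation.";
        String url = "https://api.dbpedia-spotlight.org/en/annotate?confidence=0.5&text="
                + URLEncoder.encode(text, StandardCharsets.UTF_8);
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Accept", "application/json")
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // Each annotation in the response carries a DBpedia URI (e.g. .../resource/Apache_Wicket),
        // which is what disambiguates "wicket" the framework from "wicket" the cricket term
        System.out.println(response.body());
    }
}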
You can use OpenNLP to extract names of people, places and organisations without training. You just use pre-existing models, which can be downloaded from here: http://opennlp.sourceforge.net/models-1.5/
For an example of how to use one of these models, see: http://opennlp.apache.org/documentation/1.5.3/manual/opennlp.html#tools.namefind
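A minimal sketch of using one of those pre-trained models with the OpenNLP Java API (the model file name assumes the 1.5 person-name model from the download page above):

import java.io.FileInputStream;
import java.io.InputStream;
import java.util.Arrays;
import opennlp.tools.namefind.NameFinderME;
import opennlp.tools.namefind.TokenNameFinderModel;
import opennlp.tools.tokenize.SimpleTokenizer;
import opennlp.tools.util.Span;

public class PersonFinder {
    public static void main(String[] args) throws Exception {
        try (InputStream modelIn = new FileInputStream("en-ner-person.bin")) {
            TokenNameFinderModel model = new TokenNameFinderModel(modelIn);
            NameFinderME finder = new NameFinderME(model);
            String[] tokens = SimpleTokenizer.INSTANCE.tokenize(
                    "Pierre Vinken joined the board of Elsevier in Amsterdam.");
            // find() returns token spans for the detected person names
            for (Span name : finder.find(tokens)) {
                System.out.println(String.join(" ",
                        Arrays.copyOfRange(tokens, name.getStart(), name.getEnd())));
            }
        }
    }
}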
Rosoka is a commercial product that provides a computation of "salience", which measures the importance of a term or entity to the document. Salience is based on linguistic usage and not on frequency. Using the salience values you can determine the primary topic of the document as a whole.
The output is in your choice of XML or JSON, which makes it very easy to use with Lucene.
It is written in Java.
There is an Amazon Cloud version available at https://aws.amazon.com/marketplace/pp/B00E6FGJZ0. The cost to try it out is $0.99/hour. The Rosoka Cloud version does not have all of the Java API features that the full Rosoka does.
Yes, both versions perform entity and term disambiguation based on linguistic usage.
Disambiguation, whether done by a human or by software, requires enough contextual information to be able to tell the difference. The context may be contained within the document, within a corpus constraint, or within the context of the users; the former is more specific, while the latter carries greater potential ambiguity. For instance, typing the keyword "wicket" into a Google search could refer to cricket, the Apache software, or the Star Wars Ewok character (i.e. an entity). The sentence "The wicket is guarded by the batsman" has contextual clues within the sentence to interpret it as an object. "Wicket Wystri Warrick was a male Ewok scout" should interpret "Wicket" as the given name of the person entity "Wicket Wystri Warrick". "Welcome to Apache Wicket" has the contextual clues that "Wicket" is part of a project name, etc.
Lately I have been fiddling with Stanford CRF NER. They have released quite a few versions: http://nlp.stanford.edu/software/CRF-NER.shtml
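For reference, a small sketch of loading one of the classifiers distributed with it (the model path follows the layout of the Stanford NER download; adjust as needed):

import edu.stanford.nlp.ie.crf.CRFClassifier;
import edu.stanford.nlp.ling.CoreLabel;

public class StanfordNerExample {
    public static void main(String[] args) {
        // Pre-trained 3-class model (PERSON, LOCATION, ORGANIZATION) shipped with Stanford NER
        CRFClassifier<CoreLabel> classifier = CRFClassifier.getClassifierNoExceptions(
                "classifiers/english.all.3class.distsim.crf.ser.gz");
        String text = "The Apache Software Foundation is based in Wilmington, Delaware.";
        // classifyWithInlineXML wraps recognized entities in tags such as <ORGANIZATION>...</ORGANIZATION>
        System.out.println(classifier.classifyWithInlineXML(text));
    }
}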
The good thing is that you can train your own classifier. You should follow this link, which has guidelines on how to train your own NER: http://nlp.stanford.edu/software/crf-faq.shtml#a
Unfortunately, in my case, the named entities are not efficiently extracted from the document. Most of the entities go undetected.
Just in case you find it useful.
I am developing a demo of a semantic web-based information system, which simply uses SPARQL instead of traditional SQL to manipulate the dataset. How can the application demonstrate the benefits of the Semantic Web?
The steps are as follows:
The client gets parameters from the web UI.
Requests a web service.
The service generates a SPARQL command according to the given parameters.
The service uses the Jena/SDB API to execute the SPARQL command (a sketch of this step follows the list).
Retrieves or persists data from or to MySQL.
Parses the returned result set.
Returns a JSON object to the client.
The client uses JavaScript + HTML to display the data.
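A rough sketch of how the query execution step looks with Jena's ARQ API (shown here over an in-memory dataset; in the real application the dataset would come from the SDB store, and older Jena releases use the com.hp.hpl.jena packages instead):

import org.apache.jena.query.Dataset;
import org.apache.jena.query.DatasetFactory;
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.ResultSet;
import org.apache.jena.query.ResultSetFormatter;

public class QueryStep {
    public static void main(String[] args) {
        // With SDB this dataset would be obtained from the SDB-backed store instead
        Dataset dataset = DatasetFactory.create();
        String sparql = "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10";
        try (QueryExecution qexec = QueryExecutionFactory.create(sparql, dataset)) {
            ResultSet results = qexec.execSelect();
            // Serialize the result set as JSON for the client (as in the last two steps)
            ResultSetFormatter.outputAsJSON(System.out, results);
        }
    }
}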
Currently, the application just has CRUD operations. The only difference from a traditional IS is that it uses SPARQL instead of SQL, so no obviously semantic features can be seen. I'm thinking of two points:
Demonstrating data federation through SPARQL. On this point, can I imagine the system being broken down into several subsystems that work on their own independent datasets but communicate with each other via SPARQL, because they all work on the RDF specification?
Reasoning over datasets. I use ontologies to describe the data schema; should my reasoning operations be based on them? In my application, I get an RDF model and use Pellet to do inference. Is that the correct way?
Basically, if the application can demonstrate data federation and reasoning, can it be seen as a semantic web-based application? Do I understand this right?
Hopefully, the application will also be able to combine services automatically through semantic descriptions. Furthermore, any other third-party data source could communicate with the system and work immediately.
Yes, you are right. The benefit of the Semantic Web is that you can write separate sets of ontologies that describe the domains (e.g. product, user) and then combine them using inference and reasoning to make the data much more useful (e.g. product types and user preferences).
The difference is that the rules for the data are now written with the data and not in the business logic layer.
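As a small illustration of that point, here is a hedged sketch using Jena's built-in OWL reasoner (the ontology and data file names are placeholders; a Pellet reasoner instance could be swapped in the same way):

import org.apache.jena.rdf.model.InfModel;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.reasoner.Reasoner;
import org.apache.jena.reasoner.ReasonerRegistry;

public class InferenceExample {
    public static void main(String[] args) {
        // Placeholder file names: the ontology describes the schema, the data file holds the instances
        Model schema = ModelFactory.createDefaultModel().read("ontology.ttl");
        Model data = ModelFactory.createDefaultModel().read("data.ttl");
        // Bind schema and data to a reasoner; queries against infModel also see inferred triples
        Reasoner reasoner = ReasonerRegistry.getOWLReasoner().bindSchema(schema);
        InfModel infModel = ModelFactory.createInfModel(reasoner, data);
        infModel.listStatements().forEachRemaining(System.out::println);
    }
}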
Hope this helps. :)