I have an OWL ontology and I am using Pellet to do reasoning over it. Like most ontologies it starts by including various standard ontologies:
<rdf:RDF
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:xsd="http://www.w3.org/2001/XMLSchema#"
xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
xmlns:owl="http://www.w3.org/2002/07/owl#">
I know that some reasoners have these standard ontologies 'built-in', but Pellet doesn't. Is there any way I can continue to use Pellet when I am offline and can't access them? (Or if their URLs go offline, as dublincore.org did last week for routine maintenance.)
Pellet recognizes all of these namespaces when loading and should not attempt to dereference the URIs. If it does, it suggests the application using Pellet is doing something incorrectly.
You may find more help on the pellet-users mailing list.
A generalized solution to this problem (access to ontologies without public Web access) is described in Local Ontology Repositories with Pellet. Enjoy.
Make local copies of the four files and replace the remote URLs with local URIs (e.g. file://..., or serve them from your own box at http://localhost...).
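If you are loading the ontology through Jena (which Pellet's Jena bindings sit on top of), one way to wire in those local copies is Jena's OntDocumentManager, which redirects a public ontology IRI to an alternative location before anything is fetched. A minimal sketch, assuming local copies of the schema documents exist at the paths shown (the file paths are placeholders):

import org.apache.jena.ontology.OntDocumentManager;
import org.apache.jena.ontology.OntModel;
import org.apache.jena.ontology.OntModelSpec;
import org.apache.jena.rdf.model.ModelFactory;

public class LocalOntologyCopies {
    public static void main(String[] args) {
        // Redirect the well-known schema IRIs to local copies so that
        // nothing needs to be fetched from the network at load time.
        OntDocumentManager dm = OntDocumentManager.getInstance();
        dm.addAltEntry("http://www.w3.org/2000/01/rdf-schema",
                       "file:///opt/ontologies/rdf-schema.rdf");   // placeholder path
        dm.addAltEntry("http://www.w3.org/2002/07/owl",
                       "file:///opt/ontologies/owl.rdf");          // placeholder path

        // Attach the document manager to the model spec before loading.
        OntModelSpec spec = new OntModelSpec(OntModelSpec.OWL_MEM);
        spec.setDocumentManager(dm);

        OntModel model = ModelFactory.createOntologyModel(spec);
        model.read("file:///opt/ontologies/my-ontology.owl");      // placeholder path
    }
}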
I've been trying to figure out how to set up a SPARQL endpoint for a couple of days, but no matter how much I read I can't make sense of it.
To explain my intention: I have an open data server running on CKAN, and my goal is to be able to run SPARQL queries over the data. I know I can't do that directly on the datasets themselves; I would have to define my own OWL ontology and convert the data I want to use from CSV (the format it is currently in) into RDF triples (to be used as linked data).
The idea was to start by testing with the repository metadata that can be generated automatically with the ckanext-dcat extension, but I really don't know where to begin. I've searched for information on how to install a Virtuoso server for the SPARQL endpoint, but what I've found leaves a lot to be desired, and I can find nothing that explains how to actually load my own OWL and RDF files into Virtuoso itself.
Could someone lend me a hand with how to get started? Thank you.
I'm a little confused. Maybe this is two or more questions?
1. How to convert tabular data, like CSV, into the RDF semantic format?
This can be done with an R2RML approach. Karma is a great GUI for that purpose. Like you say, a conversion like that can really be improved with an underlying OWL ontology. But it can be done without creating a custom ontology, too.
I have elaborated on this in the answer to another question.
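If you just want to see the shape of such a conversion before committing to an R2RML toolchain, a hand-rolled sketch with Jena might look like the following (the namespace, class name, and CSV layout are made up for illustration):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.Writer;

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.vocabulary.RDF;
import org.apache.jena.vocabulary.RDFS;

public class CsvToRdf {
    public static void main(String[] args) throws Exception {
        String ns = "http://example.org/dataset#";   // made-up namespace
        Model model = ModelFactory.createDefaultModel();
        model.setNsPrefix("ex", ns);

        // Assume a two-column CSV (id,label) with no header row.
        try (BufferedReader in = new BufferedReader(new FileReader("data.csv"))) {
            String line;
            while ((line = in.readLine()) != null) {
                String[] cols = line.split(",");
                Resource row = model.createResource(ns + cols[0]);
                row.addProperty(RDF.type, model.createResource(ns + "Record"));
                row.addProperty(RDFS.label, cols[1]);
            }
        }

        // Write the result out as Turtle, ready to load into a triplestore.
        try (Writer out = new FileWriter("data.ttl")) {
            model.write(out, "TURTLE");
        }
    }
}

A real mapping tool does the same thing declaratively and scales much better, but the principle is identical: one resource per row, one property per column.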
2. Now that I have some RDF formatted data, how can I expose it with a SPARQL endpoint?
Virtuoso is a reasonable choice. There are multiple ways to deploy it and multiple ways to load the data, and therefore lots of tutorials on the subject. Here's a good one, from DBpedia.
If you'd like a simpler path to starting an RDF triplestore with a SPARQL endpoint, Stardog and Blazegraph are available as JARs, and RDF4J can easily be deployed within a container like Tomcat.
All provide web-based graphical interfaces for loading data and running queries, in addition to SPARQL REST endpoints. At least Stardog also provides command-line tools for bulk loading.
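Whichever store you choose, once the data is loaded you can check the endpoint programmatically as well as through the web UI. A small Jena sketch, assuming a SPARQL endpoint at the URL shown (Virtuoso's default is often http://localhost:8890/sparql; adjust for your installation):

import org.apache.jena.query.QuerySolution;
import org.apache.jena.rdfconnection.RDFConnection;
import org.apache.jena.rdfconnection.RDFConnectionFactory;

public class EndpointSmokeTest {
    public static void main(String[] args) {
        String endpoint = "http://localhost:8890/sparql";   // example endpoint URL

        try (RDFConnection conn = RDFConnectionFactory.connect(endpoint)) {
            // List a handful of triples to confirm the data arrived.
            conn.querySelect("SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10",
                (QuerySolution row) ->
                    System.out.println(row.get("s") + " " + row.get("p") + " " + row.get("o")));
        }
    }
}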
I'm trying to design an ontology, and I'm forced to use SemFacet as part of the project.
SemFacet is an open-source search engine built on Semantic Web technology. It works as follows: I create an ontology using Protégé, upload it to SemFacet, and then search over my ontology.
My ontology has courses and a predicate that describes what these courses are about. For example, suppose I have an individual course CS101 that is an instance of the Course class. The Course class has a data property called description whose range is xsd:string.
My problem is that whenever the predicate (i.e. the description property) is prefixed with a URI (an "imaginary URI"), SemFacet can't find what I'm talking about. But if I remove the URI, everything seems to work just fine.
I told my professor about the issue, and he said it is because I am using a URI that does not exist. To be honest, I'm not convinced that the problem is using a URI that does not exist.
What do you think?
Chances are, SemFacet does not correctly support blank nodes (the proper name for "imaginary URIs").
Unless SemFacet tries to resolve the resources the URI points to, you don't need to create a live URI (i.e. one that returns an HTTP 200 OK response), only a valid one.
Make sure that you don't leave empty IRIs in Protégé.
#berezovskiy I think the OP did not mean blank nodes by "imaginary URIs"; he meant URIs he created himself that do not resolve, like http://mysuperfancyuri.com.
So maybe your professor just wants you to be more standards-conformant and reuse existing predicates instead of creating your own. You could, for example, look at dcterms:description (http://purl.org/dc/terms/description) for a description predicate.
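For illustration, this is roughly what reusing dcterms:description looks like when the triples are built with Jena (the course IRI is made up; in Protégé you would simply pick the existing property instead of creating a new one):

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.vocabulary.DCTerms;

public class CourseDescriptionExample {
    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();

        // Made-up course IRI; the predicate is the well-known dcterms:description.
        Resource cs101 = model.createResource("http://example.org/courses#CS101");
        cs101.addProperty(DCTerms.description, "Introduction to Computer Science");

        model.write(System.out, "TURTLE");
    }
}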
I know that on the Web you have lots and lots of pages linked to one another, and you can go from page to page and so on.
How does the Semantic Web work? I understand that it uses the concept of Linked Data, where the data itself is identified and linked by URIs or IRIs, not the web pages. But I don't understand how the data is linked across the web when all of it is stored in local triplestores and linked internally within those stores. Are browsers able to go from triplestore to triplestore behind the scenes and bring back all kinds of data? Or how is the data actually linked? Is there a mechanism for going from data to data all across the web and using the meaning of that data in real-life situations, or a tool that does something like this?
Also, anybody can create ontologies and define and describe anything in all kinds of different ways. Won't this lead to a big mess of data?
So, main question:
How do the Semantic Web and Linked Data actually work?
It's a tricky and multifaceted question.
First I'll answer some of your questions.
But I don't understand how the data is linked across the web when all of the data is stored in local triplestores and are linked internally in the triplestores
First of all, it is important to realize that triplestores are not a necessity. You could have SQL servers with a D2RQ/R2RML mapping on top to translate queries dynamically. Or plain RDF files. Or simple JSON documents in MongoDB, etc., which you extend by adding a JSON-LD @context.
What is important is that you serve data in one of the RDF formats, such as Turtle or JSON-LD.
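For example, with Jena the same graph can be written out in either serialization; which one you serve is just a matter of content negotiation (the input file name is an example):

import org.apache.jena.rdf.model.Model;
import org.apache.jena.riot.RDFDataMgr;
import org.apache.jena.riot.RDFFormat;

public class ServeEitherFormat {
    public static void main(String[] args) {
        // Load any RDF document and emit it as Turtle and as JSON-LD.
        Model model = RDFDataMgr.loadModel("data.ttl");   // example file name
        RDFDataMgr.write(System.out, model, RDFFormat.TURTLE);
        RDFDataMgr.write(System.out, model, RDFFormat.JSONLD);
    }
}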
Are browsers capable to go from triplestore to triplestore behind the scenes and get back all kinds of data?
See, they don't have to, because, as you mention, URIs are used so that a client (not necessarily a web browser) can download the data. Of course, that only works if these URIs are URLs and are dereferenceable; otherwise they are just identifiers.
Or how is the data actually linked?
The data is linked simply by reusing identifiers for objects and properties. That's why URIs (IRIs) are used: the identifiers are globally unique yet can be created privately within one's own domain. Of course, there is a risk of mischief through creating URIs in someone else's domain, but that's a separate topic.
Is there a mechanism to go from data to data all across the web and use the meaning of data in real life situations, or a tool that does something like this?
One simple mechanism is to crawl RDF data and download it into a local store. The mere occurrence of matching identifiers will combine the data into a larger dataset with less mapping effort required. That is the theory, of course; in practice the data can be corrupt, incorrect, or duplicated, so you need some curation. Technology exists to help you do that, and it is nothing you wouldn't also encounter in traditional data warehousing. Search engines harvest semantic markup from HTML pages (RDFa/Microdata) in a similar manner.
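A toy version of that crawling idea, assuming the URIs dereference to RDF (the DBpedia URIs are just examples; real crawlers add politeness, curation, and provenance tracking):

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.riot.RDFDataMgr;

public class TinyHarvest {
    public static void main(String[] args) {
        Model local = ModelFactory.createDefaultModel();

        // Dereference two Linked Data URIs and merge the returned triples;
        // shared identifiers are what stitches the two descriptions together.
        local.add(RDFDataMgr.loadModel("http://dbpedia.org/resource/Berlin"));
        local.add(RDFDataMgr.loadModel("http://dbpedia.org/resource/Germany"));

        System.out.println("Harvested " + local.size() + " triples");
    }
}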
Another option is federated queries: with the SERVICE keyword, part of a SPARQL query is delegated to a remote endpoint. SPARQL engines can also download remote RDF documents and run queries over them in memory.
Last but not least, there are federated queries using Triple Pattern Fragments.
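For concreteness, the SERVICE-based federation mentioned above looks like this with Jena ARQ: everything inside the SERVICE block is evaluated by the remote endpoint, the rest locally (the DBpedia endpoint and query are only an example):

import org.apache.jena.query.Query;
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QueryFactory;
import org.apache.jena.query.ResultSetFormatter;
import org.apache.jena.rdf.model.ModelFactory;

public class FederatedQueryExample {
    public static void main(String[] args) {
        String queryString =
            "PREFIX dbo: <http://dbpedia.org/ontology/> " +
            "SELECT ?city ?population WHERE { " +
            "  SERVICE <https://dbpedia.org/sparql> { " +
            "    ?city dbo:populationTotal ?population . " +
            "  } " +
            "} LIMIT 10";

        Query query = QueryFactory.create(queryString);
        // The local model is empty here; only the SERVICE block produces bindings.
        try (QueryExecution qe = QueryExecutionFactory.create(query, ModelFactory.createDefaultModel())) {
            ResultSetFormatter.out(System.out, qe.execSelect(), query);
        }
    }
}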
Now, about the Semantic Web.
As I wrote, the question is not that simple. You mostly ask about Linked Data, but there is more to the Semantic Web than that:
ontologies/taxonomies
inferencing
rules
semantic/faceted search
I hope I answered your question to some extent.
I am new to Jena, and I am implementing an application to manipulate RDF data. I have already implemented some basic functions to inspect classes, properties, and other things.
I wonder whether it is possible to load multiple ontologies with Jena and then run SPARQL queries over them. (I already know it is possible, because Protégé does it.)
Thanks for your interest.
I'm open to any questions or requests for clarification.
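For what it's worth, this is straightforward with plain Jena: read each ontology into a model, merge them, and run ARQ over the union. A minimal sketch (file names and the query are placeholders):

import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.ResultSetFormatter;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.riot.RDFDataMgr;

public class MultiOntologyQuery {
    public static void main(String[] args) {
        // Load two ontology files and merge them into a single model.
        Model merged = RDFDataMgr.loadModel("ontology1.owl");
        merged.add(RDFDataMgr.loadModel("ontology2.owl"));

        // Query the union with ARQ: here, list up to 20 OWL classes.
        String query = "SELECT ?c WHERE { ?c a <http://www.w3.org/2002/07/owl#Class> } LIMIT 20";
        try (QueryExecution qe = QueryExecutionFactory.create(query, merged)) {
            ResultSetFormatter.out(System.out, qe.execSelect());
        }
    }
}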
I'm new to the Semantic Web.
I'm trying to build a sample application where I can query data from different data sources in one query.
I have created a small RDF file which contains references to DBpedia resources for defining localities. My question is: how can I get both the data contained in my file and other information found in the description of the remote resource (for example, the name of a person from the local file, and the total population of a city, dbpedia-owl:populationTotal, from the remote RDF data)?
I don't really understand the SPARQL query language. I tried to use the Jena ARQ API with the SERVICE keyword, but it didn't solve the problem.
Any help please?
I guess you are looking for something like the Semantic Web Client Library, which tries to leverage the Giant Global Graph (GGG). Admittedly, the standard exploration algorithm of this framework simply follows rdfs:seeAlso links. Nevertheless, the general approach seems to be what you are looking for: you would create a local graph that starts from your seed graph, traverse the relations up to a certain depth (e.g. three steps), resolve the URIs, and load that content into your local triplestore. Utilising advanced techniques like SPARQL federation might be something for later ;)
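A rough sketch of that traversal with Jena, for concreteness: start from the local seed file, collect the object URIs it mentions, dereference them, merge whatever RDF comes back, and repeat up to a fixed depth (the file name and depth are placeholders, and the error handling is deliberately crude):

import java.util.HashSet;
import java.util.Set;

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.NodeIterator;
import org.apache.jena.rdf.model.RDFNode;
import org.apache.jena.riot.RDFDataMgr;

public class LinkTraversal {
    public static void main(String[] args) {
        // Seed graph: the local file with references to DBpedia resources.
        Model graph = RDFDataMgr.loadModel("local-data.ttl");   // placeholder file name

        Set<String> visited = new HashSet<>();
        for (int depth = 0; depth < 2; depth++) {
            // Collect object URIs mentioned in the graph that we haven't fetched yet.
            Set<String> toFetch = new HashSet<>();
            NodeIterator objects = graph.listObjects();
            while (objects.hasNext()) {
                RDFNode node = objects.next();
                if (node.isURIResource() && !visited.contains(node.asResource().getURI())) {
                    toFetch.add(node.asResource().getURI());
                }
            }
            // Dereference each URI and merge whatever RDF the server returns.
            for (String uri : toFetch) {
                visited.add(uri);
                try {
                    graph.add(RDFDataMgr.loadModel(uri));
                } catch (Exception e) {
                    // Not every URI resolves to RDF; skip the ones that fail.
                }
            }
        }
        System.out.println("Local graph now has " + graph.size() + " triples");
    }
}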
I have retrieved data from two different sources using a SPARQL query with named graphs.
I used Jena ARQ to execute the SPARQL query.