SPARQL search by auto-ID generated in Protégé 5

I am new to SPARQL and Protégé. I have followed the YouTube tutorials by Dr. Noureddin Sadawi and part of the Pizza example tutorial on the Protégé wiki, where it says:
"In the Entity URI pane, select auto ID. When you create a new class, property or individual P4 will give it a meaningless URI and a readable label. That way if you exchange ontologies, correcting spelling mistakes (by merely changing labels) won’t cause the links between the ontologies to break"
Afterwards, I created a small ontology with a "user" class and three data properties: age, email, and gender. Then I created two individuals in order to run some tests.
In the final stage I loaded the ontology into a Fuseki server as in the YouTube tutorials (but with v2) and tried to search for one of the users. However, the OWL file is generated with the auto-IDs assigned when I created the classes and properties, so I don't know how to proceed with SPARQL.
For instance, this is one of the entities:
<!-- http://localhost/ontologies/2015/user-test#OWLNamedIndividual_84a421a4_9a25_4960_9e94_c8df6e58053c -->
<owl:NamedIndividual rdf:about="&user-test;OWLNamedIndividual_84a421a4_9a25_4960_9e94_c8df6e58053c">
    <rdf:type rdf:resource="&user-test;OWLClass_a3e98b7d_52be_4a6b_af80_64ea7697332b"/>
    <rdfs:label xml:lang="es">user02</rdfs:label>
    <OWLDataProperty_9ade37f6_4e7c_4693_a830_2e0d895b7127 rdf:datatype="&xsd;string">25-34</OWLDataProperty_9ade37f6_4e7c_4693_a830_2e0d895b7127>
    <OWLDataProperty_4492a8ab_183f_4e0d_b456_cbffdaeead79 rdf:datatype="&xsd;string">female</OWLDataProperty_4492a8ab_183f_4e0d_b456_cbffdaeead79>
    <OWLDataProperty_2da03b48_c32a_4c74_91c7_fb552c9c39e5 rdf:datatype="&xsd;string">sus02#hotmail.com</OWLDataProperty_2da03b48_c32a_4c74_91c7_fb552c9c39e5>
</owl:NamedIndividual>
Is this the correct way of proceeding? What is the best way to work with Protégé, the Jena engine, Fuseki, and so on?
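Since the auto-ID scheme keeps the IRIs opaque but gives every entity a human-readable rdfs:label, one way to query (a sketch, assuming the ontology above is loaded into the Fuseki dataset) is to match on the label rather than the generated IRI. Note that the label in the snippet carries the language tag es:

```sparql
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

# Find the individual labelled "user02" and list its data-property values.
SELECT ?user ?prop ?value
WHERE {
  ?user rdfs:label "user02"@es .
  ?user ?prop ?value .
}
```

The auto-generated property IRIs (e.g. OWLDataProperty_9ade…) will still appear in the ?prop column, but you can join each one with its own rdfs:label in the same way to make the results readable.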
Thanks in advance

Related

Linking my own ontology with DBPedia's populated ontology

I have this assignment where I have created an RDFS ontology with artists, their songs, their genres, etc. using Protégé, and populated it with some data. I am now supposed to connect my ontology and its data with DBpedia's, so that I can use SPARQL to get my artists' birth dates. I have successfully imported http://dbpedia.org/ontology/ and http://dbpedia.org/property/ into Protégé, but I can't find a way to import the data (http://dbpedia.org/resource/).
Can anyone point me in the right direction?
DBpedia datasets are stored and available for download here (http://wiki.dbpedia.org/Downloads2015-04#h96554-1), but Protégé isn't going to perform well (or at all) when you load a dataset of this size into it.
You may be better off downloading individual entities by hand, such as http://dbpedia.org/data/Blink-182.ntriples, and then loading them into Protégé. You can go to any DBpedia page, e.g. http://dbpedia.org/page/Ras_Barker, and find download links at the bottom.
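Alternatively, you don't need to import DBpedia's data at all: a federated query can fetch the birth dates live from the public endpoint. A sketch, assuming your local individuals are linked to DBpedia resources via owl:sameAs (that linking property is an assumption; adjust it to whatever your ontology actually uses):

```sparql
PREFIX owl: <http://www.w3.org/2002/07/owl#>
PREFIX dbo: <http://dbpedia.org/ontology/>

# For each local artist linked to a DBpedia resource,
# fetch the birth date from the public DBpedia endpoint.
SELECT ?artist ?birthDate
WHERE {
  ?artist owl:sameAs ?dbpediaArtist .
  SERVICE <http://dbpedia.org/sparql> {
    ?dbpediaArtist dbo:birthDate ?birthDate .
  }
}
```

This runs against your local store; only the SERVICE block is evaluated remotely.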

Web scraping wikipedia data table, but from dbpedia, and examples/very basic, elementary tutorial resources to build queries

I wanted to ask about the Semantic Web part, in particular using DBpedia. In general, what can and can't DBpedia do? I roughly understand the subject-verb-object model behind something like DBpedia. Practically and concretely speaking, I want to scrape the technical data (mass, thrust, etc.) found on the Wikipedia page of the Long March rocket family.
Right now, as far as I know, the way to find what DBpedia has is to find what I'm interested in on Wikipedia, copy the last part of the URL, and paste that into a DBpedia URL (is there any method more sophisticated than that?), resulting in this page.
Looking at that page, I only see links to related articles, links, and the abstract.
Other than my smaller questions above, my main question is this: so does DBpedia not have the data table that I want?
Next, could someone give me some tips or pointers for building a SPARQL query string for DBpedia? It seems to me that one can't know how to build one, as there's no "directory" of what can or can't be asked. Thanks.
DBpedia is an active project, and DBpedia extractors are continuing to evolve. Contributions that might help you would include adding infoboxes to Wikipedia pages, and data extractors to DBpedia. Check the DBpedia website for info, or write to dbpedia-discussion to get started.
As for finding DBpedia content, there are several interfaces you can work with --
Faceted Browse and Search
direct SPARQL query interface
iSPARQL, a drag-and-drop SPARQL query builder
SNORQL, another SPARQL query interface
so does dbpedia not have the data table that I want?
No, it doesn't. Usually, DBpedia gets its data from infoboxes. Your article doesn't have one, so DBpedia can't get much information out of it.
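As for the "no directory" problem: a common trick is to simply ask DBpedia for everything it knows about a resource and read off the property names from the results. A sketch (the exact resource IRI is a guess; check the DBpedia page for the article to confirm it):

```sparql
# List every property and value DBpedia holds for a given resource --
# a quick way to see what data is (and isn't) available before
# writing a more specific query.
SELECT ?property ?value
WHERE {
  <http://dbpedia.org/resource/Long_March_(rocket_family)> ?property ?value .
}
```

Run this at the endpoint (http://dbpedia.org/sparql); the properties you see in the output are exactly the ones you can then query for across many resources.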

How to get part of a jena OntModel?

I'm working on a project in which the ontology is very large and cannot be split into multiple files. I need to analyse the ontology and show part of it on a web page, but I am not sure how to extract that part. I've loaded the ontology into an OntModel; should I use the Jena API or SPARQL to query part of it? Ideally the sub-ontology would also be convertible to an OntModel. Could you please give me an example? Any answer would be appreciated, thank you!
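One approach (a sketch, not the only option) is to run a SPARQL CONSTRUCT query against the model; in Jena the resulting Model can then be wrapped back into an OntModel with ModelFactory.createOntologyModel. The class IRI below is hypothetical, standing in for whichever part of the ontology you want to display:

```sparql
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

# Extract one class's direct subclasses and their labels as a small sub-graph.
CONSTRUCT {
  ?sub rdfs:subClassOf <http://example.org/ontology#SomeClass> .
  ?sub rdfs:label ?label .
}
WHERE {
  ?sub rdfs:subClassOf <http://example.org/ontology#SomeClass> .
  OPTIONAL { ?sub rdfs:label ?label . }
}
```

CONSTRUCT returns a new graph containing only the triples in the template, which is exactly the "part of the ontology" you can hand to the web page.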

semantic web + linked data integration

I'm new to the Semantic Web.
I'm trying to build a sample application where I can query data from different data sources in one query.
I have created a small RDF file which contains references to DBpedia resources for defining localities. My question is: how can I get both the data contained in my file and information from the description of the remote resource (for example, the name of a person from the local file, and the total population of their city, dbpedia-owl:populationTotal, from the remote RDF data)?
I don't really understand the SPARQL query language. I tried to use the Jena ARQ API with the SERVICE keyword, but it didn't solve the problem.
Any help please?
I guess you are looking for something like the Semantic Web Client Library, which tries to leverage the GGG (Giant Global Graph). Admittedly, the standard exploration algorithm of this framework only follows rdfs:seeAlso links. Nevertheless, the general approach seems to be what you are looking for: you create a local graph that starts from your seed graph, traverse the relations up to a certain depth (e.g., three steps), resolve the URIs, and load that content into your local triple store. Utilising more advanced techniques like SPARQL federation might be something for later ;)
I have retrieved data from two different sources using a SPARQL query with named graphs.
I used Jena ARQ to execute the SPARQL query.
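For the concrete case described in the question, a federated query with SERVICE can do this in one step. A sketch, assuming the local file is the default graph and uses a hypothetical ex:livesIn property to point at DBpedia city resources (both the prefix and the property are assumptions about your local vocabulary):

```sparql
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX dbo:  <http://dbpedia.org/ontology/>
PREFIX ex:   <http://example.org/vocab#>

# The person's name comes from the local file;
# the city's population is fetched from DBpedia.
SELECT ?name ?population
WHERE {
  ?person foaf:name  ?name ;
          ex:livesIn ?city .
  SERVICE <http://dbpedia.org/sparql> {
    ?city dbo:populationTotal ?population .
  }
}
```

When run through Jena ARQ, the local patterns are matched against your file and the SERVICE block is sent to the DBpedia endpoint, with the ?city bindings joining the two halves.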

How can I do offline reasoning with Pellet?

I have an OWL ontology and I am using Pellet to do reasoning over it. Like most ontologies it starts by including various standard ontologies:
<rdf:RDF
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:xsd="http://www.w3.org/2001/XMLSchema#"
xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
xmlns:owl="http://www.w3.org/2002/07/owl#">
I know that some reasoners have these standard ontologies 'built-in', but Pellet doesn't. Is there any way I can continue to use Pellet when I am offline & can't access them? (Or if their URL goes offline, like dublincore.org did last week for routine maintenance)
Pellet recognizes all of these namespaces when loading and should not attempt to dereference the URIs. If it does, it suggests the application using Pellet is doing something incorrectly.
You may find more help on the pellet-users mailing list.
A generalized solution to this problem -- access to ontologies w/out public Web access -- is described in Local Ontology Repositories with Pellet. Enjoy.
Make local copies of the four files and replace the remote URLs with local URIs (i.e. file://... or serve them from your own box: http://localhost...).
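If you are accessing Pellet through Jena, one way to do that redirection without editing the ontology files is Jena's location mapping: a small configuration graph that maps remote ontology URLs to local copies. A sketch (the local file paths are placeholders; Jena picks up a file named location-mapping or one configured via its FileManager):

```turtle
@prefix lm: <http://jena.hpl.hp.com/2004/08/location-mapping#> .

# Redirect remote ontology URLs to local copies on disk.
[] lm:mapping
   [ lm:name    "http://www.w3.org/2002/07/owl" ;
     lm:altName "file:ontologies/owl.rdf" ] ,
   [ lm:name    "http://www.w3.org/2000/01/rdf-schema" ;
     lm:altName "file:ontologies/rdf-schema.rdf" ] .
```

This keeps the original import URIs in your ontology intact, so it still works unchanged for anyone who is online.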