SPARQL Query on GeoNames Ontology using nearby

I'm trying several queries on the site http://geosparql.org/. I'm very interested in trying the nearby clause, for example with this query:
PREFIX spatial: <http://jena.apache.org/spatial#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
PREFIX gn: <http://www.geonames.org/ontology#>

SELECT *
WHERE {
  ?object spatial:nearby (40.74 -73.989 1 'mi') .
  ?object rdfs:label ?label
}
LIMIT 10
When I execute the query on http://geosparql.org/ everything works, but now I'd like to download the GeoNames ontology and run the query on my PC.
Here I found the ontology for download: http://www.geonames.org/ontology/documentation.html
It says that the GeoNames ontology is available in OWL:
http://www.geonames.org/ontology/ontology_v3.1.rd
I downloaded it, but when I open the ontology with Protégé, or with SPARQL Droid on my Android smartphone, and execute the same query, I get no data. Maybe the ontology is empty?
How do I fill the ontology with data so that I can run this query?
Thank you very much to anyone who can help.

The ontology is the vocabulary (i.e., the definitions of classes, properties, etc.). An ontology doesn't necessarily include the individuals (e.g., the places, locations, etc.) that you might be interested in. In this case, I think you've downloaded the ontology, which is relatively small, but you're probably interested in the data dumps that the same page describes later. I think the fourth option is the one that you want:
Entry Points into the GeoNames Semantic Web
There are several ways how you can enter the GeoNames Semantic Web :
…
RDF dump with 10113356 features and about 150 mio rdf triples (2015 04 21). The dump has one rdf document per toponym on every line of the file. Note: The file is pretty large. Make sure the tool you use to uncompress is able to deal with the size and does not stop after 2GB, an issue that happens with some old (windows) tool versions.
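Once you have actual data, a rough sketch of the local setup with Jena's (since retired) jena-spatial module, which backs the spatial:nearby property function, might look like the following. All paths, Lucene field names, and the input file are placeholders, and note that the dump itself is one RDF/XML document per line, so it has to be converted to a single parseable serialization (e.g., Turtle or N-Triples) before loading:

// A minimal sketch, assuming Jena 3.x with the jena-spatial module on the
// classpath. Field names ("uri", "geo"), the TDB path, and the input file
// are placeholders.
import org.apache.jena.query.*;
import org.apache.jena.query.spatial.EntityDefinition;
import org.apache.jena.query.spatial.SpatialDatasetFactory;
import org.apache.jena.riot.RDFDataMgr;
import org.apache.jena.tdb.TDBFactory;
import org.apache.lucene.store.RAMDirectory;

public class GeoNamesNearby {
    public static void main(String[] args) {
        Dataset base = TDBFactory.createDataset("tdb-geonames");
        EntityDefinition entDef = new EntityDefinition("uri", "geo");
        // In-memory Lucene index for the sketch; use an FSDirectory to persist it.
        Dataset ds = SpatialDatasetFactory.createLucene(base, new RAMDirectory(), entDef);

        // geo:lat / geo:long pairs are indexed as the triples are loaded.
        ds.begin(ReadWrite.WRITE);
        try {
            RDFDataMgr.read(ds.getDefaultModel(), "geonames-sample.ttl");  // placeholder file
            ds.commit();
        } finally { ds.end(); }

        String q =
            "PREFIX spatial: <http://jena.apache.org/spatial#> " +
            "PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> " +
            "SELECT * WHERE { " +
            "  ?object spatial:nearby (40.74 -73.989 1 'mi') . " +
            "  ?object rdfs:label ?label " +
            "} LIMIT 10";
        ds.begin(ReadWrite.READ);
        try (QueryExecution qe = QueryExecutionFactory.create(q, ds)) {
            ResultSetFormatter.out(qe.execSelect());
        } finally { ds.end(); }
    }
}

With the data (not just the ontology) in the store, the same nearby query from the question should return results locally.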

Open the GeoNames web page, then click the given OWL URI; the GeoNames ontology is then downloaded. Open Protégé, then use File > Open to import the OWL document; the ontology's concepts are added to Protégé. Use it to add your own instances.

Related

Using the dotNetRDF library to query a large RDF file with SPARQL

I want to query an ontology defined in an RDF file using SPARQL and the dotNetRDF library. The problem is that the file is large, so it's not very practical to load the entire file into memory. What should I do?
Thanks in advance
As AKSW says in the comment, the best approach would be to load your RDF file into a triple store and then run your SPARQL queries against that. dotNetRDF ships with support for several triple stores, as listed at https://github.com/dotnetrdf/dotnetrdf/wiki/UserGuide-Storage-Providers. However, all you really need is a triple store that supports the SPARQL protocol; you will then be able to run your queries from dotNetRDF code using the SparqlRemoteEndpoint class, as described at https://github.com/dotnetrdf/dotnetrdf/wiki/UserGuide-Querying-With-SPARQL#remote-query.
As for which triple store to use, Jena with Fuseki is probably a good open-source choice.
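dotNetRDF itself is C#, but the remote-query pattern is the same in any SPARQL-protocol client. For illustration, here it is with Jena in Java against a hypothetical local Fuseki endpoint (the dataset name "ds" is a placeholder); SparqlRemoteEndpoint in dotNetRDF works analogously:

// A minimal sketch, assuming a Fuseki server running at
// http://localhost:3030/ds with your data already loaded into it.
import org.apache.jena.query.*;

public class RemoteQueryExample {
    public static void main(String[] args) {
        String endpoint = "http://localhost:3030/ds/query";  // hypothetical endpoint
        String query = "SELECT * WHERE { ?s ?p ?o } LIMIT 10";

        // sparqlService sends the query over HTTP, so the large file stays
        // in the store instead of being loaded into client memory.
        try (QueryExecution qe = QueryExecutionFactory.sparqlService(endpoint, query)) {
            ResultSetFormatter.out(qe.execSelect());
        }
    }
}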

Plug-in for Protégé to Create/Edit SPIN Constraints and Constructors?

Is there a plug-in or other means to create and edit SPARQL/SPIN constraints and constructors in Protégé?
As I understand it, to capture SPIN constraints in RDF, the SPARQL code for the ASK or CONSTRUCT queries needs to be parsed and encoded. It's not stored as an opaque string. Therefore, it would seem that some plugin with knowledge of SPARQL and SPIN would be required.
I've loaded RDF from TopBraid Composer, including SPIN constraints, into Protégé 4.3.0, and it seems to see the constraints as annotations, but I cannot seem to find all of the details, critically including all of the underlying SPARQL code. I do see it when text-editing the RDF file.
In the broad sense, I'm trying to find a way to create/edit SPIN constraints and constructors and load them into Sesame to have them operate on individuals instantiated from my classes. I posted another question about the path from TopBraid Composer into Sesame. I'm trying to keep my questions more specific since I'm a newbie on Stack Overflow.
BTW, no I don't want to use SWRL instead. I've had trouble expressing the constraints I need using SWRL. I've had success using SPARQL.
Thanks.
In some versions TopBraid Composer will store SPIN constraints in RDF by default. Given that the queries are stored as RDF triples, there should be no problem storing them in any RDF data store. Applying the SPIN constraints is a different issue, as the system will need to know how to interpret the queries for different SPIN properties.
Are you certain you cannot "see" them in Protégé or Sesame? The constraints are defined on the class using the property spin:constraint and should appear as a bnode. Make sure you also import http://spinrdf.org/spin, or at least define a property named spin:constraint. At the very least, the following should always work to find your constraints:
SELECT ?constraint ?class
WHERE {
  ?class <http://spinrdf.org/spin#constraint> ?constraint
}
...where ?constraint is bound to a bnode representing the constraint in RDF and ?class is the class the constraint is defined for.
Also, if you would rather store the constraints as SPARQL strings, see Preferences > TopBraid Composer > SPIN and check one of the boxes in "Generate sp:text...". Then you can get the query text via the following query:
SELECT ?query ?class
WHERE {
  ?class <http://spinrdf.org/spin#constraint> ?constraint .
  ?constraint <http://spinrdf.org/sp#text> ?query
}
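If you want to sanity-check the export outside of TopBraid, a minimal Jena (Java) sketch along these lines would list the stored query text (the file name is a placeholder; this assumes sp:text generation was enabled as described above):

// Minimal sketch: load the exported RDF file and print each class /
// constraint-text pair. "model.rdf" is a placeholder file name.
import org.apache.jena.query.*;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.riot.RDFDataMgr;

public class ListSpinConstraints {
    public static void main(String[] args) {
        Model model = RDFDataMgr.loadModel("model.rdf");
        String q =
            "SELECT ?class ?query WHERE { " +
            "  ?class <http://spinrdf.org/spin#constraint> ?constraint . " +
            "  ?constraint <http://spinrdf.org/sp#text> ?query " +
            "}";
        try (QueryExecution qe = QueryExecutionFactory.create(q, model)) {
            ResultSetFormatter.out(qe.execSelect());
        }
    }
}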

Inference over linked data SPARQL endpoints

When querying some linked data SPARQL endpoints via SPARQL queries, what is the type of reasoning provided (if any)?
For example, the DBpedia SNORQL endpoint doesn't provide even basic subclass inference (if A subClassOf B and B subClassOf C, then A subClassOf C), while the FactForge SPARQL endpoint provides some inference (though it is not clear what kind) and the possibility to switch that inference on and off.
My question:
How is it possible to identify the kind of inference applied? And if the inference support is limited, can it be extended using the endpoint only?
Inference controls will vary with the engine as well as the endpoint.
The public DBpedia SPARQL endpoint (powered by Virtuoso, from my employer, OpenLink Software) does provide various inference rules (accessible through the "Inference rules" link at the top right corner of the SPARQL endpoint query form page). These are controlled by pragmas in your SPARQL (not SNORQL, the form to which you linked), such as:
DEFINE input:inference 'urn:rules.skos'
You can see the content of any predefined ruleset via SPARQL; for the above:
SELECT *
FROM <urn:rules.skos>
WHERE { ?s ?p ?o }
You can see the live query and results.
See this tutorial containing many examples.
While inference is not universally supported across SPARQL endpoints, most of the entailments provided by the RDFS, RDFS+, and OWL 2 RL profiles can be expressed in SPARQL itself. For example, querying for instances of :A using your subClassOf entailment can be done with SPARQL property paths:
SELECT ?inst
WHERE {
  ?cls rdfs:subClassOf* :A .
  ?inst a ?cls .
}
The first triple pattern gets all subclasses of :A, including :A itself (use + instead of * if you want only proper subclasses of :A), and the second triple pattern finds all instances of all those classes.
To see how most of OWL 2 can be implemented with SPARQL, see Reasoning in OWL 2 RL and RDF Graphs using Rules. With a couple of exceptions, all of these rules can be implemented in SPARQL, and in fact you probably won't need some of them, such as eq-ref (which merely asserts that every term is owl:sameAs itself; skipping it is a computational shortcut that logicians may scoff at).
There are few use cases, beyond heavy-lifting classification problems, that can't be solved with a subset of the OWL 2 RL rules.
So, in the end, the recommendation is to understand which entailments you actually need. Chances are that OWL has overthought the issue and you can live with a few SPARQL patterns. Then you can hit the SPARQL endpoints without having to worry about whether specific inference profiles are supported.
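For instance, here is a minimal Jena (Java) sketch running the subclass pattern against the public DBpedia endpoint; the class dbo:Band is just an illustrative choice:

// Minimal sketch: emulate rdfs:subClassOf entailment with a standard
// SPARQL 1.1 property path, no endpoint-side inference required.
import org.apache.jena.query.*;

public class SubclassInferenceByPath {
    public static void main(String[] args) {
        String q =
            "PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> " +
            "PREFIX dbo: <http://dbpedia.org/ontology/> " +
            "SELECT ?inst WHERE { " +
            "  ?cls rdfs:subClassOf* dbo:Band . " +   // :A and all its subclasses
            "  ?inst a ?cls " +                        // instances of any of them
            "} LIMIT 10";
        try (QueryExecution qe = QueryExecutionFactory.sparqlService(
                "http://dbpedia.org/sparql", q)) {
            ResultSetFormatter.out(qe.execSelect());
        }
    }
}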

Linking my own ontology with DBpedia's populated ontology

I have this assignment where I have created an RDFS ontology of artists, their songs, their genres, etc. using Protégé and populated it with some data. I am now supposed to connect my ontology and its data with DBpedia's, so that I can SPARQL for my artists' birthdates. I have successfully imported http://dbpedia.org/ontology/ and http://dbpedia.org/property/ into Protégé, but I can't find a way to import the data (http://dbpedia.org/resource/).
Can anyone point me in the right direction?
DBpedia datasets are stored and available for download here (http://wiki.dbpedia.org/Downloads2015-04#h96554-1), but Protégé isn't going to perform well (or at all) when you load a dataset of this size into it.
You may be better off downloading various entities by hand, such as http://dbpedia.org/data/Blink-182.ntriples, and then loading those into Protégé. You can go to any DBpedia page, e.g. http://dbpedia.org/page/Ras_Barker, and find download links at the bottom.
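If you're willing to step outside Protégé, a minimal Jena (Java) sketch along these lines merges your ontology with per-entity downloads and queries the combined model (the local file name is a placeholder; the Blink-182 document is the one linked above):

// Minimal sketch: combine a local ontology file with entity data fetched
// from DBpedia, then query the merged model. "my-artists.rdf" is a
// placeholder; for birthdates you would fetch each artist's /data/ document
// and query dbo:birthDate the same way.
import org.apache.jena.query.*;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.riot.Lang;
import org.apache.jena.riot.RDFDataMgr;

public class MergeWithDBpedia {
    public static void main(String[] args) {
        Model model = RDFDataMgr.loadModel("my-artists.rdf");  // local ontology + data
        RDFDataMgr.read(model, "http://dbpedia.org/data/Blink-182.ntriples", Lang.NTRIPLES);

        String q =
            "SELECT ?p ?o WHERE { " +
            "  <http://dbpedia.org/resource/Blink-182> ?p ?o " +
            "} LIMIT 10";
        try (QueryExecution qe = QueryExecutionFactory.create(q, model)) {
            ResultSetFormatter.out(qe.execSelect());
        }
    }
}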

Semantic web + linked data integration

I'm new to the semantic web.
I'm trying to build a sample application where I can query data from different data sources in one query.
I have created a small RDF file which contains references to DBpedia resources for defining localities. My question is: how can I get the data contained in my file together with information from the description of the remote resource (for example, the name of a person from the local file, and the total population of a city, dbpedia-owl:populationTotal, from the remote RDF)?
I don't really understand the SPARQL query language. I tried to use the Jena ARQ API with the SERVICE keyword, but it didn't solve the problem.
Any help please?
I guess you are looking for something like the Semantic Web Client Library, which tries to leverage the GGG (Giant Global Graph). The standard exploration algorithm of this framework follows rdfs:seeAlso links. Nevertheless, the general approach seems to be what you are looking for: you would create a local graph that consists of your seed graph, traverse the relations up to a certain level (e.g., three steps), resolve the URIs, and load that content into your local triple store. Utilising advanced technologies like SPARQL federation might be something for later ;)
I have retrieved data from two different sources using a SPARQL query with named graphs.
I used Jena ARQ to execute the SPARQL query.
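By way of illustration, here is a minimal ARQ (Java) sketch of the SERVICE approach; the local file name, the ex: vocabulary, and the property linking a person to a DBpedia city are all placeholders for whatever your own data uses:

// Minimal sketch: match triples in the local file and, in the same query,
// look up each referenced city at the DBpedia endpoint via SERVICE.
import org.apache.jena.query.*;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.riot.RDFDataMgr;

public class FederatedQueryExample {
    public static void main(String[] args) {
        Model local = RDFDataMgr.loadModel("localities.rdf");  // placeholder file

        String q =
            "PREFIX foaf: <http://xmlns.com/foaf/0.1/> " +
            "PREFIX dbpedia-owl: <http://dbpedia.org/ontology/> " +
            "PREFIX ex: <http://example.org/> " +               // placeholder vocabulary
            "SELECT ?name ?population WHERE { " +
            "  ?person foaf:name ?name ; ex:livesIn ?city . " + // local triples
            "  SERVICE <http://dbpedia.org/sparql> { " +        // remote lookup
            "    ?city dbpedia-owl:populationTotal ?population " +
            "  } " +
            "}";
        try (QueryExecution qe = QueryExecutionFactory.create(q, local)) {
            ResultSetFormatter.out(qe.execSelect());
        }
    }
}

The key point is that the ?city bindings come from your local file, while the population values are fetched from the remote endpoint inside the SERVICE block of the same query.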