Does Jena's model.add method save to SDB using SPARQL? In other words, how are triples actually saved to SDB?
I'm just asking out of curiosity.
I know that the SDB project has been discontinued.
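For reference, writes to SDB go through the ordinary Model API over an SDB-backed model, and SDB persists them as SQL statements against the underlying database rather than via SPARQL Update. Below is a minimal sketch; the store description file sdb.ttl and the example URIs are assumptions, and older Jena releases use the com.hp.hpl.jena.sdb packages instead of org.apache.jena.sdb.

```java
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ResourceFactory;
import org.apache.jena.sdb.SDBFactory;
import org.apache.jena.sdb.Store;

public class SdbAddTriple {
    public static void main(String[] args) {
        // "sdb.ttl" is an assumed store description pointing at the SQL database
        Store store = SDBFactory.connectStore("sdb.ttl");
        Model model = SDBFactory.connectDefaultModel(store);

        // model.add(...) is persisted by SDB as SQL against the backing
        // database; SPARQL is not used on the write path
        model.add(
            ResourceFactory.createResource("http://example.org/book/1"),
            ResourceFactory.createProperty("http://purl.org/dc/elements/1.1/title"),
            "An example title");

        store.close();
    }
}
```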
Is it possible to infer new knowledge about an ontology only from a query in SPARQL?
I have a question about the use of the SPARQL language with ontologies. So far I have thought of SPARQL as the equivalent of SQL for relational databases, that is, that with SPARQL it is only possible to query the data that is explicitly in the ontology, without access to the data that can be inferred, leaving the responsibility for inference to the reasoners.
However, I have read documents from which I gather that SPARQL does have the capacity to infer implicit, non-explicit knowledge from the ontology. Is my impression correct? That is, is it possible to infer knowledge through a SPARQL query without the need for a reasoner? If so, what advantages does the use of a reasoner have over the use of SPARQL?
Regards, Manuel Puebla.
Yes, on-the-fly inference may be a feature of the SPARQL processor, so you can get the benefits of inference/reasoning directly from a SPARQL query. (See the Virtuoso documentation on SPARQL endpoint inference rules for some discussion of how this is done in Virtuoso, for example.)
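If you are working locally with Jena rather than against such an endpoint, you can get the same effect by wrapping the data in an inference model before querying it. A minimal sketch, using Jena's built-in RDFS reasoner and a made-up vocabulary:

```java
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.ResultSetFormatter;
import org.apache.jena.rdf.model.InfModel;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.vocabulary.RDF;
import org.apache.jena.vocabulary.RDFS;

public class InferenceQuery {
    public static void main(String[] args) {
        Model data = ModelFactory.createDefaultModel();
        String ns = "http://example.org/";
        // Explicit facts: Car is a subclass of Vehicle, myCar is a Car
        data.add(data.createResource(ns + "Car"), RDFS.subClassOf, data.createResource(ns + "Vehicle"));
        data.add(data.createResource(ns + "myCar"), RDF.type, data.createResource(ns + "Car"));

        // Wrap the raw data in an RDFS inference model
        InfModel inf = ModelFactory.createRDFSModel(data);

        // The query only mentions Vehicle; the inference layer supplies
        // the entailed triple "myCar rdf:type Vehicle"
        String query = "PREFIX ex: <http://example.org/> SELECT ?v WHERE { ?v a ex:Vehicle }";
        try (QueryExecution qe = QueryExecutionFactory.create(query, inf)) {
            ResultSetFormatter.out(qe.execSelect());
        }
    }
}
```

The query itself stays a plain SPARQL query; whether the inferred answers show up depends entirely on whether a reasoner sits between the query and the raw data.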
I want to query an ontology defined in an RDF file using SPARQL and the dotNetRDF library. The problem is that the file is large, so it's not practical to load the entire file into memory. What should I do?
Thanks in advance.
As AKSW says in the comment, the best approach would be to load your RDF file into a triple store and then run your SPARQL queries against that. dotNetRDF ships with support for several triple stores, as listed at https://github.com/dotnetrdf/dotnetrdf/wiki/UserGuide-Storage-Providers. However, all you really need is a triple store that supports the SPARQL protocol; you can then run your queries from dotNetRDF code using the SparqlRemoteEndpoint class, as described at https://github.com/dotnetrdf/dotnetrdf/wiki/UserGuide-Querying-With-SPARQL#remote-query.
As for which triple store to use, Jena with Fuseki is probably a good open-source choice.
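If you do go the Fuseki route, querying it remotely keeps the large file out of your application's memory entirely. Here is a minimal sketch of the remote-query pattern in Jena (Java), with an assumed endpoint URL; the dotNetRDF SparqlRemoteEndpoint code linked above follows the same shape.

```java
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.ResultSetFormatter;

public class RemoteQuery {
    public static void main(String[] args) {
        // Assumed Fuseki dataset endpoint; adjust to your installation
        String endpoint = "http://localhost:3030/ds/query";
        String query = "SELECT * WHERE { ?s ?p ?o } LIMIT 10";

        // The query is sent over the SPARQL protocol, so the data stays
        // in the store instead of being loaded into client memory
        try (QueryExecution qe = QueryExecutionFactory.sparqlService(endpoint, query)) {
            ResultSetFormatter.out(qe.execSelect());
        }
    }
}
```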
I have a pizza ontology that defines different types of pizzas, ingredients and relations among them.
I just want to understand several basic things:
1. Is it correct that I should use SPARQL if I want to obtain information without reasoning, e.g. which pizzas contain onion?
2. What is the difference between SPARQL and reasoners such as Pellet? Which queries cannot be answered by SPARQL but can be answered by Pellet? Some example queries (in question form) over the pizza ontology would be helpful.
3. As far as I understand, to use SPARQL from Java with Jena I should save my ontology in RDF/XML format. However, to use Pellet with Jena, which format do I need to select? Pellet uses OWL 2...
SPARQL is a query language, that is, a language for formulating questions in. Reasoning, on the other hand, is the process of deriving new information from existing data. These are two different, complementary processes.
To retrieve information from your ontology you use SPARQL, yes. You can do this without reasoning, or in combination with a reasoner, too. If you have a reasoner active it means your queries can be simpler, and in some cases reasoners can derive information that is not really retrievable at all with just a query.
Reasoners like Pellet don't really answer queries; they just reason: they figure out what implicit information can be derived from the raw facts, and they can do things like verify that your data is consistent (i.e. that there are no logical contradictions in it). Pellet can figure out that if you own a Toyota, which is of type Car, you own a Vehicle (because a Car is a type of Vehicle). Or it can figure out that if you define a pizza to have the ingredient "Parmesan", you have a pizza of type "Cheesy" (because it knows Parmesan is a type of Cheese). So you use a reasoner like Pellet to derive this kind of implicit information, and then you use a query language like SPARQL to actually ask: "OK, give me an overview of all Cheesy pizzas that also have anchovies".
APIs like Jena are toolkits that treat RDF as an abstract model. Which syntax format you save your file in is immaterial: Jena can read almost any RDF syntax. Once you have read the file into a Jena model, you can run the Pellet reasoner over it; it doesn't matter which syntax the original file was in. Details on how to do this can be found in the Jena documentation.
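To make that concrete, here is a minimal Jena sketch. The file name, the pizza namespace and the CheesyPizza class are assumptions based on the standard pizza ontology, and Jena's built-in OWL rule reasoner stands in for Pellet, which you would attach through its own reasoner factory in the same way.

```java
import org.apache.jena.ontology.OntModel;
import org.apache.jena.ontology.OntModelSpec;
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.ResultSetFormatter;
import org.apache.jena.rdf.model.ModelFactory;

public class PizzaQuery {
    public static void main(String[] args) {
        // The syntax of the file is detected when it is read; RDF/XML,
        // Turtle, etc. all work the same way.
        OntModel model = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM_MICRO_RULE_INF);
        model.read("pizza.owl");   // assumed file name

        // Ask for instances of CheesyPizza; with a reasoner attached,
        // entailed class memberships are visible to the query
        String query =
            "PREFIX pizza: <http://www.co-ode.org/ontologies/pizza/pizza.owl#> " +
            "SELECT ?p WHERE { ?p a pizza:CheesyPizza }";
        try (QueryExecution qe = QueryExecutionFactory.create(query, model)) {
            ResultSetFormatter.out(qe.execSelect());
        }
    }
}
```

Note that Jena's built-in rule reasoners only cover part of OWL; for the full classification described above (for example, inferring CheesyPizza membership from a restriction on toppings) you would plug in a DL reasoner such as Pellet.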
I am new to Jena and I am implementing an application to manipulate RDF data. I have already implemented some basic functions to inspect classes, properties and other things.
I wonder whether it is possible to load multiple ontologies with Jena and then run SPARQL queries over them. (I already know it is possible, because Protégé does it.)
Thanks for your interest.
I'm open to any questions or requests for clarification.
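For what it's worth, reading several ontology files into the same Jena model and then querying the merged data with SPARQL is a common pattern (separate named graphs in a Dataset are an alternative if you need to keep them apart). A minimal sketch with assumed file names:

```java
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.ResultSetFormatter;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

public class MultiOntologyQuery {
    public static void main(String[] args) {
        // Each read() call merges the file's triples into the same model;
        // the file names are assumptions
        Model model = ModelFactory.createDefaultModel();
        model.read("ontology1.owl");
        model.read("ontology2.ttl");

        // SPARQL now sees the union of both ontologies
        String query = "SELECT ?class WHERE { ?class a <http://www.w3.org/2002/07/owl#Class> } LIMIT 20";
        try (QueryExecution qe = QueryExecutionFactory.create(query, model)) {
            ResultSetFormatter.out(qe.execSelect());
        }
    }
}
```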
Currently I have found that, after loading a model from a file in Jena, I can query the RDF using the Model API, and it gives me the same output as a SPARQL query. So I want to know: is it a good approach to do this without SPARQL? I have only tested it with a small RDF file so far. I also want to know whether, if I use Virtuoso, I can manipulate data with the Model API without SPARQL.
Thanks in advance.
I'm not quite sure if I understand your question. If I can paraphrase, I think you're asking:
Is it OK to query and manipulate RDF data using the Jena Model API instead of using SPARQL? Does it make a difference if the back-end store is Virtuoso?
Assuming that's the right re-phrasing of the question, then the first part is definitively yes: you can manipulate RDF data through the Model and OntModel APIs. In fact, I would say that's what the majority of Jena users do, particularly for small queries or updates. I find personally that going direct to the API is more succinct up to a certain point of complexity; after that, my code is clearer and more concise if I express the query in SPARQL. Obviously circumstances will have an effect: if you're working with a mixture of local stores and remote SPARQL endpoints (for which sending a query string is your only option) then you may find the consistency of always using SPARQL makes your code clearer.
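To illustrate the trade-off, here is a small sketch of the same lookup done both ways, with a made-up vocabulary; for something this simple the Model API call is arguably the more direct of the two.

```java
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QuerySolution;
import org.apache.jena.query.ResultSet;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.rdf.model.StmtIterator;

public class ApiVersusSparql {
    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        String ns = "http://example.org/";
        Property hasTopping = model.createProperty(ns + "hasTopping");
        Resource onion = model.createResource(ns + "Onion");
        model.add(model.createResource(ns + "VegetarianPizza"), hasTopping, onion);

        // 1. Direct Model API: iterate over matching statements
        StmtIterator it = model.listStatements(null, hasTopping, onion);
        while (it.hasNext()) {
            System.out.println("API:    " + it.next().getSubject());
        }

        // 2. The same question expressed as a SPARQL query
        String query = "PREFIX ex: <http://example.org/> SELECT ?pizza WHERE { ?pizza ex:hasTopping ex:Onion }";
        try (QueryExecution qe = QueryExecutionFactory.create(query, model)) {
            ResultSet rs = qe.execSelect();
            while (rs.hasNext()) {
                QuerySolution row = rs.next();
                System.out.println("SPARQL: " + row.getResource("pizza"));
            }
        }
    }
}
```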
Regarding Virtuoso, I don't have any direct experience to offer. As far as I know, the Virtuoso Jena Provider fully implements the features of the Model API using a Virtuoso store as the storage layer. Whether the direct API or using SPARQL queries gives you a performance advantage is something you should measure by benchmark with your data and your typical query patterns.