A SPARQL query returns results containing restrictions with allValuesFrom and unionOf. I need to concatenate these values, but when I use BIND or STR, the result is blank.
I tried BIND, STR and GROUP_CONCAT, but none of them worked. GROUP_CONCAT returns a blank node.
SELECT DISTINCT ?source ?is_succeeded_by
WHERE {
?source rdfs:subClassOf ?restriction .
?restriction owl:onProperty j.0:isSucceededBy .
?restriction owl:allValuesFrom ?is_succeeded_by .
FILTER (REGEX(STR(?source), 'gatw-Invoice_match'))
}
Result of the SPARQL query in Protégé:
You can hardly obtain strings like 'xxx or yyy' programmatically in Jena:
that is Manchester Syntax, an OWL-API native format, which Jena does not support.
In raw RDF any class expression is just a b-node; there is no built-in symbol such as 'or'.
To render an anonymous class expression as a string you can use ONT-API,
which is a Jena-based implementation of the OWL-API, so both SPARQL and Manchester Syntax are available there.
Here is an example based on the pizza ontology:
// use pizza, since no example data provided in the question:
IRI pizza = IRI.create("https://raw.githubusercontent.com/owlcs/ont-api/master/src/test/resources/ontapi/pizza.ttl");
// get OWLOntologyManager instance from ONT-API
OntologyManager manager = OntManagers.createONT();
// as extended Jena model:
OntModel model = manager.loadOntology(pizza).asGraphModel();
// prepare query that looks like the original, but for pizza
String txt = "SELECT DISTINCT ?source ?is_succeeded_by\n" +
"WHERE {\n" +
" ?source rdfs:subClassOf ?restriction . \n" +
" ?restriction owl:onProperty :hasTopping . \n" +
" ?restriction owl:allValuesFrom ?is_succeeded_by .\n" +
" FILTER (REGEX(STR(?source), 'Am'))\n" +
"}";
Query q = new Query();
q.setPrefixMapping(model);
q = QueryFactory.parse(q, txt, null, Syntax.defaultQuerySyntax);
// from owlapi-parsers package:
OWLObjectRenderer renderer = new ManchesterOWLSyntaxOWLObjectRendererImpl();
// from ont-api (although it is a part of internal API, it is public):
InternalObjectFactory iof = new SimpleObjectFactory(manager.getOWLDataFactory());
// exec SPARQL query:
try (QueryExecution exec = QueryExecutionFactory.create(q, model)) {
    ResultSet res = exec.execSelect();
    while (res.hasNext()) {
        QuerySolution qs = res.next();
        List<Resource> vars = Iter.asStream(qs.varNames()).map(qs::getResource).collect(Collectors.toList());
        if (vars.size() != 2)
            throw new IllegalStateException("Can't happen: the query binds exactly two variables");
        // Resource (Jena) -> OntClass (ONT-API) -> ONTObject (ONT-API) -> OWLClassExpression (OWL-API)
        OWLClassExpression ex = iof.getClass(vars.get(1).inModel(model).as(OntClass.class)).getOWLObject();
        // format: 'class local name' ||| 'superclass expression in Manchester Syntax'
        System.out.println(vars.get(0).getLocalName() + " ||| " + renderer.render(ex));
    }
}
The output:
American ||| MozzarellaTopping or PeperoniSausageTopping or TomatoTopping
AmericanHot ||| HotGreenPepperTopping or JalapenoPepperTopping or MozzarellaTopping or PeperoniSausageTopping or TomatoTopping
Used env: ont-api:2.0.0, owl-api:5.1.11, jena-arq:3.13.1
I am running some SPARQL queries using the class ForwardChainingRDFSInferencer, which basically constructs an inferencer. For my examples I use the schema.org ontology.
My code looks like the following example:
MemoryStore store = new MemoryStore();
ForwardChainingRDFSInferencer inferencer = new ForwardChainingRDFSInferencer(store); //the inference class
Repository repo = new SailRepository(inferencer);
repo.initialize();
URL data = new URL("file:/home/user/Documents/schemaorg-current-https.nt");
RDFFormat fileRDFFormat = RDFFormat.NTRIPLES;
RepositoryConnection con = repo.getConnection();
con.add(data, null, fileRDFFormat);
System.out.println("Repository loaded");
String queryString =
" PREFIX schema: <https://schema.org/>" +
" SELECT DISTINCT ?subclassName_one" +
" WHERE { " +
" ?type rdf:type rdfs:Class ." +
" ?type rdfs:subClassOf/rdfs:subClassOf? schema:Thing ." +
" ?type rdfs:label ?subclassName_one ." +
" }";
TupleQuery tupleQuery = con.prepareTupleQuery(QueryLanguage.SPARQL, queryString);
TupleQueryResult result = tupleQuery.evaluate();
while (result.hasNext()) {
    BindingSet bindingSet = result.next();
    System.out.println(bindingSet.toString());
}
repo.close();
What I want is to print the classes that are two subclass levels down from the class Thing. So if, for example, we have
Thing > Action (sub-class 1) > ConsumeAction (sub-class 2) > DrinkAction
I want to return the class ConsumeAction, which is two levels (subclasses) down from the class Thing, while using the inference Java class.
The current query, as given in the code sample above, returns all the classes and subclasses of every class in the schema.org ontology. Thus, there must be something I am doing wrong while using the inference class.
You could remove the classes you are not interested in with FILTER NOT EXISTS:
FILTER NOT EXISTS {
?type rdfs:subClassOf+/rdfs:subClassOf/rdfs:subClassOf schema:Thing .
}
Examples
For DrinkAction (level 3):
FILTER NOT EXISTS {
?type
rdfs:subClassOf+ / # schema:ConsumeAction
rdfs:subClassOf / # schema:Action
rdfs:subClassOf schema:Thing .
}
For WearAction (level 4):
FILTER NOT EXISTS {
?type
rdfs:subClassOf+ / # schema:UseAction / schema:ConsumeAction
rdfs:subClassOf / # schema:Action
rdfs:subClassOf schema:Thing .
}
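Putting the filter together with the original query, the complete SELECT for the two-level case might look like the following (a sketch, untested against the inferencer setup above; the rdf: and rdfs: prefixes are the standard ones):
PREFIX schema: <https://schema.org/>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT DISTINCT ?subclassName_one
WHERE {
  ?type rdf:type rdfs:Class .
  ?type rdfs:subClassOf/rdfs:subClassOf? schema:Thing .
  ?type rdfs:label ?subclassName_one .
  FILTER NOT EXISTS {
    ?type rdfs:subClassOf+/rdfs:subClassOf/rdfs:subClassOf schema:Thing .
  }
}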
I want to get the sources of a dbo:Cheese. Some dbo:Cheese have a dbo:Animal as dbp:source; others have only text. I would like to get the source information in the ?sources variable regardless of whether the source is a dbo:Animal or text. How should I do it?
SELECT ?cheese
CONCAT(GROUP_CONCAT(DISTINCT ?source; SEPARATOR=", "), ", ", GROUP_CONCAT(DISTINCT ?source_label; SEPARATOR=", ")) AS ?sources
WHERE {
?cheese a dbo:Cheese .
optional { # some cheeses don't have a source informed
?cheese dbp:source ?source .
}
optional { # some sources are dbo:Animal, some others are xsd:string
?source a dbo:Animal ;
rdfs:label ?source_label .
FILTER(langMatches(lang(?source_label), "EN"))
}
}
LIMIT 10
I tried this. The problem is that I get both http://dbpedia.org/resource/Sheep and Sheep in ?sources.
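One possible direction, purely as a sketch (this is not from the original thread and is untested against the live DBpedia endpoint): build a single display value per source, preferring the English label when the source is a resource and falling back to the raw value otherwise, and aggregate only that:
SELECT ?cheese (GROUP_CONCAT(DISTINCT ?source_display; SEPARATOR=", ") AS ?sources)
WHERE {
  ?cheese a dbo:Cheese .
  OPTIONAL {
    ?cheese dbp:source ?source .
    OPTIONAL {
      ?source a dbo:Animal ;
              rdfs:label ?source_label .
      FILTER(langMatches(lang(?source_label), "EN"))
    }
    # label if the source is a resource with an English label, the raw value otherwise
    BIND(COALESCE(?source_label, STR(?source)) AS ?source_display)
  }
}
GROUP BY ?cheese
LIMIT 10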
Can anyone please point me to some simple examples of semantic tagging and of querying semantically tagged documents in MarkLogic?
I am fairly new to this area, so some beginner-level examples will do.
When you say "semantically tagged" do you mean regular XML documents that happen to have some triples in them? The discussion and examples at http://docs.marklogic.com/guide/semantics/embedded are pretty good for that.
Start by enabling the triple index in your database. Then insert a test doc. This is just XML, but the sem:triple element represents a semantic fact.
xdmp:document-insert(
'test.xml',
<test>
<source>AP Newswire</source>
<sem:triple date="1972-02-21" confidence="100">
<sem:subject>http://example.org/news/Nixon</sem:subject>
<sem:predicate>http://example.org/wentTo</sem:predicate>
<sem:object>China</sem:object>
</sem:triple>
</test>)
Then query it. The example query is pretty complicated. To understand what's going on I'd insert variations on that sample document, using different URIs instead of just test.xml, and see how the various query terms match up. Try using just the SPARQL component, without the extra cts query (a stripped-down variant is shown after the full example below). Try cts:search with no SPARQL, just the cts:query.
xquery version "1.0-ml";
import module namespace sem = "http://marklogic.com/semantics"
at "/MarkLogic/semantics.xqy";
sem:sparql('
SELECT ?country
WHERE {
<http://example.org/news/Nixon> <http://example.org/wentTo> ?country
}
',
(),
(),
cts:and-query((
cts:path-range-query( "//sem:triple/#confidence", ">", 80) ,
cts:path-range-query( "//sem:triple/#date", "<", xs:date("1974-01-01")),
cts:or-query((
cts:element-value-query( xs:QName("source"), "AP Newswire"),
cts:element-value-query( xs:QName("source"), "BBC"))))))
In case you are talking about enriching your content using semantic technology, that is not directly provided by MarkLogic.
You can enrich your content externally, for instance by calling a public service like the one provided by OpenCalais, and then add the enrichments to the content before inserting it.
You can also build lists of lookup values, and then use cts:highlight to mark such terms within your content. That could be as simple as:
let $labels := ("MarkLogic", "StackOverflow")
return
cts:highlight($doc, cts:word-query($labels), <b>{$cts:text}</b>)
Or, with a more dynamic replacement using SPARQL:
let $labels := map:new()
let $_ :=
for $result in sem:sparql('
PREFIX demo: <http://www.marklogic.com/ontologies/demo#>
SELECT DISTINCT ?label
WHERE {
?s a demo:person.
{
?s demo:fullName ?label
} UNION {
?s demo:initialsName ?label
} UNION {
?s demo:email ?label
}
}
')
return
map:put($labels, map:get($result, 'label'), 'person')
return
cts:highlight($doc, cts:word-query(map:keys($labels)),
let $result := sem:sparql(concat('
PREFIX demo: <http://www.marklogic.com/ontologies/demo#>
SELECT DISTINCT ?s ?p
{
?s a demo:', map:get($labels, $cts:text), ' .
?s ?p "', $cts:text, '" .
}
'))
return
if (map:contains($labels, $cts:text))
then
element { xs:QName(fn:concat("demo:", map:get($labels, $cts:text))) } {
attribute subject { map:get($result, 's') },
attribute predicate { map:get($result, 'p') },
$cts:text
}
else ()
)
HTH!
I can get the algebra form of a SPARQL query string using the ARQ algebra (com.hp.hpl.jena.sparql.algebra):
String queryStr =
"PREFIX foaf: <http://xmlns.com/foaf/0.1/>" +
"SELECT DISTINCT ?name ?nick" +
"{?x foaf:mbox <mailt:person#server> ." +
"?x foaf:name ?name" +
"OPTIONAL { ?x foaf:nick ?nick }}";
Query query = QueryFactory.create(queryStr);
Op op = Algebra.compile(query);
Printing the returned value of op gives:
(distinct
(project (?name ?nick)
(join
(bgp
(triple ?x <http://xmlns.com/foaf/0.1/mbox> <mailt:person#server>)
(triple ?x <http://xmlns.com/foaf/0.1/name> ?nameOPTIONAL)
)
(bgp (triple ?x <http://xmlns.com/foaf/0.1/nick> ?nick)))))
The returned value is of type Op, but I can't find any direct methods that parse the Op into its elements, e.g., the basic graph patterns of s, p, o and the relations between those graph patterns.
Any hint is appreciated, thanks.
Why serialise out the algebra at all?
If your aim is to walk the algebra tree and extract the BGPs, then you can do this using the OpVisitor interface, of which there are various implementations that will get you started. The particular method you care about is visit(OpBGP opBGP), since from there you can access the methods of the OpBGP class to extract the pattern information.
It might be too late, but since I don't see a final answer yet, here is code that prints all the BGPs.
Create a class that extends OpVisitorBase and override the public void visit(final OpBGP opBGP) method, as given below.
Then, from your code, simply call the function:
MyOpVisitorBase.myOpVisitorWalker(op);
// imports for Jena 2.x (ARQ); for Jena 3.x use the corresponding org.apache.jena packages
import java.util.List;
import com.hp.hpl.jena.graph.Triple;
import com.hp.hpl.jena.sparql.algebra.Op;
import com.hp.hpl.jena.sparql.algebra.OpVisitorBase;
import com.hp.hpl.jena.sparql.algebra.OpWalker;
import com.hp.hpl.jena.sparql.algebra.op.OpBGP;

public class MyOpVisitorBase extends OpVisitorBase
{
    public static void myOpVisitorWalker(Op op)
    {
        // 'this' is not available in a static method; walk with a fresh visitor instance
        OpWalker.walk(op, new MyOpVisitorBase());
    }

    @Override
    public void visit(final OpBGP opBGP) {
        final List<Triple> triples = opBGP.getPattern().getList();
        for (final Triple triple : triples) {
            System.out.println("Triple: " + triple.toString());
        }
    }
}
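A minimal usage sketch, tying this back to the query string from the question:
Query query = QueryFactory.create(queryStr);
Op op = Algebra.compile(query);
// walks the algebra tree and prints every BGP triple it finds
MyOpVisitorBase.myOpVisitorWalker(op);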
I have some RDF & RDFS files and I want to use the Jena SPARQL implementation to query them. My code looks like:
//model of my rdf file
Model model = ModelFactory.createMemModelMaker().createDefaultModel();
model.read(inputStream1, null);
//model of my ontology (word net) file
Model onto = ModelFactory.createOntologyModel( OntModelSpec.RDFS_MEM_RDFS_INF);
onto.read( inputStream2,null);
String queryString =
"PREFIX rdf:<http://www.w3.org/1999/02/22-rdf-syntax-ns#> "
+ "PREFIX wn:<http://www.webkb.org/theKB_terms.rdf/wn#> "
+ "SELECT ?person "
+ "WHERE {"
+ " ?person rdf:type wn:Person . "
+ " }";
Query query = QueryFactory.create(queryString);
QueryExecution qe = QueryExecutionFactory.create(query, ????);
ResultSet results = qe.execSelect();
ResultSetFormatter.out(System.out, results, query);
qe.close();
I have a WordNet ontology in an RDF file and I want to use this ontology in my query to do inferencing automatically (when I query for Person, the query should return e.g. Man, Woman).
So how do I link the ontology to my query? Please help me.
Update: now I have two models; against which one should I run my query?
QueryExecution qe = QueryExecutionFactory.create(query, ????);
Thanks in advance.
The key is to recognise that, in Jena, Model is one of the central abstractions. An inferencing model is just a Model in which some of the triples are present because they are entailed by inference rules rather than read in from the source document. Thus you only need to change the first line of your example, where you create the model initially.
While you can create inference models directly, it's often easiest just to create an OntModel with the required degree of inference support:
Model model = ModelFactory.createOntologyModel( OntModelSpec.RDFS_MEM_RDFS_INF );
If you want a different reasoner, or OWL support, you can select a different OntModelSpec constant. Be aware that large and/or complex models can make for slow queries.
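For instance, a sketch using OWL rule inference instead of RDFS (the constants are from Jena's OntModelSpec; the heavier rule sets are correspondingly slower):
// micro OWL rule reasoner; OWL_MEM_MINI_RULE_INF and OWL_MEM_RULE_INF are heavier alternatives
Model model = ModelFactory.createOntologyModel( OntModelSpec.OWL_MEM_MICRO_RULE_INF );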
Update (following edit of original question)
To reason over two models, you want the union. You can do this through OntModel's sub-model facility. I would change your example as follows (note: I haven't tested this code, but it should work):
String rdfFile = "... your RDF file location ...";
Model source = FileManager.get().loadModel( rdfFile );
String ontFile = "... your ontology file location ...";
Model ont = FileManager.get().loadModel( ontFile );
Model m = ModelFactory.createOntologyModel( OntModelSpec.RDFS_MEM_RDFS_INF, ont );
m.addSubModel( source );
String queryString =
"PREFIX rdf:<http://www.w3.org/1999/02/22-rdf-syntax-ns#> "
+ "PREFIX wn:<http://www.webkb.org/theKB_terms.rdf/wn#> "
+ "SELECT ?person "
+ "WHERE {"
+ " ?person rdf:type wn:Person . "
+ " }";
Query query = QueryFactory.create(queryString);
QueryExecution qe = QueryExecutionFactory.create(query, m);
ResultSet results = qe.execSelect();
ResultSetFormatter.out(System.out, results, query);
qe.close();