Gremlin with AWS Neptune Lambda function - tinkerpop

I'm trying to use Gremlin with AWS Neptune from a Lambda function in JavaScript.
The problem is that I don't know how to import "has" and "both" for use in my query. Or is there another way to do this?

When you have an "anonymous" traversal (one not connected to other steps by a dot) you need to prefix the step with __. (double underscore). In the Gremlin JavaScript client, __ comes from gremlin.process.statics, so your query becomes:
const gremlin = require('gremlin');
const __ = gremlin.process.statics;

g.V().
  has("person", "first_name", fromName).
  until(__.has("person", "first_name", toName)).
  repeat(__.both("friends").simplePath())
In case you need help finding the common imports for the Gremlin JavaScript client, some key ones are listed here: https://tinkerpop.apache.org/docs/current/reference/#gremlin-javascript-imports

Related

How to run SPARQL queries in R (WBG Topical Taxonomy) without parallelization

I am an R user and I am interested in using the World Bank Group (WBG) Topical Taxonomy through SPARQL queries.
This can be done directly against the API at https://vocabulary.worldbank.org/PoolParty/sparql/taxonomy , but it can also be done through R by using the function load.rdf (to load the taxonomy.rdf RDF/XML file downloaded from https://vocabulary.worldbank.org/ ) and then sparql.rdf to perform the query. These functions are available in the "rrdf" package.
These are the three lines of code:
taxonomy_file <- load.rdf("taxonomy.rdf")
query <- "SELECT DISTINCT ?nodeOnPath WHERE { <http://vocabulary.worldbank.org/taxonomy/435> <http://www.w3.org/2004/02/skos/core#narrower>* ?nodeOnPath }"
result_query_1 <- sparql.rdf(taxonomy_file,query)
What I obtain in result_query_1 is exactly the same as what I get through the API.
However, the load.rdf function uses all the cores available on my computer, not only one. The function somehow parallelizes the load task over all the cores available on my machine, and I do not want that. I haven't found any option on that function to force serialized usage.
Therefore, I am trying to find other solutions. For instance, I have tried rdf_parse and rdf_query from the package "rdflib", but without any encouraging result. These are the code lines I have used:
taxonomy_file <- rdf_parse("taxonomy.rdf")
query <- "SELECT DISTINCT ?nodeOnPath WHERE { <http://vocabulary.worldbank.org/taxonomy/435> <http://www.w3.org/2004/02/skos/core#narrower>* ?nodeOnPath }"
result_query_2 <- rdf_query(taxonomy_file , query = query)
Is there any other function that performs this task? The objective of my work is to run several queries simultaneously using foreach.
Thank you very much for any suggestions you can provide.

Elasticsearch connector and owl:sameAs on graphdb

I'm using the OWL-RL-Optimized ruleset and the Elasticsearch connector for search.
All I want is to recognize that two entities share the same value and merge them into one document in ES.
I'm doing this by modeling:
Person - hasPhone - Phone, with owl:InverseFunctionalProperty declared on the relation hasPhone.
Example:
http://example.com#1 http://example.com#hasPhone http://example.com#111.
http://example.com#2 http://example.com#hasPhone http://example.com#111.
=> #1 owl:sameAs #2
When I search via ES, I receive two results, #1 and #2. But when I repair the connector I get only one result (which is what I want).
1. Is there a way for the ES connector to automatically merge the documents and delete the previous one? I don't want to repair the connector all the time. When I set manageIndex: false, it always returns two results when searching.
2. How can I receive only one record via SPARQL, excluding the others that are owl:sameAs with it?
3. Is there a better ruleset for owl:sameAs and InverseFunctionalProperty for reference?
The connector watches for changes to property paths (as specified by the index definition), but I don't think it can detect the merging ("smushing") caused by sameAs; that's why you need the rebuilding. If this is an important case, I can post an improvement issue for you, but please email graphdb-support and vladimir.alexiev at ontotext with a description of your business case (and a link to this question).
If you have "sameAs optimization" enabled for the repo (which it is by default) and do NOT have "expand sameAs URLs" enabled in the query, you should get only one result for queries like ?x <http://example.com#hasPhone> <http://example.com#111>.
OWL-RL-Optimized is good for your case. (The rulesets supporting InverseFunctionalProperty are OWL-RL, OWL-QL, rdfsPlus and their optimized variants.)
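For the second question, GraphDB also lets an individual query opt out of sameAs expansion through the onto:disable-sameAs pseudo-graph, so only one representative of each sameAs class is returned. A minimal sketch against the example data above (check this against your GraphDB version's documentation):

```sparql
PREFIX onto: <http://www.ontotext.com/>

SELECT ?person
FROM onto:disable-sameAs   # suppress owl:sameAs expansion for this query only
WHERE {
  ?person <http://example.com#hasPhone> <http://example.com#111> .
}
```

With the pseudo-graph in place, ?person binds to a single canonical IRI instead of every member of the sameAs equivalence class.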

Creating Titan indexed types for elasticsearch

I am having problems getting the Elasticsearch indexes to work correctly with Titan Server. I currently have a local Titan/Cassandra setup using Titan Server 0.4.0 with Elasticsearch enabled. I have a test graph 'bg' with the following properties:
Vertices have two properties, "type" and "value".
Edges have a number of other properties with names like "timestamp", "length" and so on.
I am running titan.sh with the rexster-cassandra-es.xml config, and my configuration looks like this:
storage.backend = "cassandra"
storage.hostname = "127.0.0.1"
storage.index.search.backend = "elasticsearch"
storage.index.search.directory = "db/es"
storage.index.search.client-only = "false"
storage.index.search.local-mode = "true"
This configuration is the same in the bg config in Rexster and in the Groovy script that loads the data.
When I load up Rexster client and type in g = rexster.getGraph("bg"), I can perform an exact search using g.V.has("type","ip_address") and get the correct vertices back. However when I run the query:
g.V.has("type",CONTAINS,"ip_")
I get the error:
Data type of key is not compatible with condition
I think this is something to do with the type "value" not being indexed. What I would like to do is make all vertex and edge attributes indexable so that I can use any of the string matching functions on them as necessary. I have already tried making an indexed key using the command
g.makeKey("type").dataType(String.class).indexed(Vertex.class).indexed("search",Vertex.class).make()
but to be honest I have no idea how this works. Can anyone help point me in the right direction with this? I am completely unfamiliar with elastic search and Titan type definitions.
Thanks,
Adam
The Wiki page "Indexing Backend Overview" should answer every little detail of your questions.
Cheers,
Daniel
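As a starting point with the Titan 0.4 type-definition API, here is a minimal sketch of declaring string keys against both the standard (exact-match) index and the external "search" (Elasticsearch) index. The key names simply mirror the question, and which text predicates are actually available depends on the index backend:

```groovy
// "search" must match the index name configured under storage.index.search
g.makeKey("type").dataType(String.class).indexed(Vertex.class).indexed("search", Vertex.class).make()
g.makeKey("value").dataType(String.class).indexed("search", Vertex.class).make()
g.commit()

// Text predicates then go against the Elasticsearch-backed index, e.g.:
// g.V.has("type", Text.CONTAINS, "ip_address")
```

Keys must be defined before any data using them is loaded; the standard index only supports exact matches, which is why the Text predicates need the external index.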

SPARQL-DL query with owl-api

I'm writing an application using the OWL-API and the HermiT reasoner. I would like to query data using SPARQL-DL by submitting queries like:
PREFIX wine: <http://www.w3.org/TR/2003/PR-owl-guide-20031209/wine#>
SELECT ?i
WHERE { Type(?i, wine:PinotBlanc) }
OR WHERE { Type(?i, wine:DryRedWine) }
Can I do this directly with the OWL-API or should I use an external library (http://www.derivo.de/en/resources/sparql-dl-api/ )? (I need something like
queryEngine.query(my_query); )
As of July 2013, the OWL-API does not natively support SPARQL-DL. You need to plug in a third-party library to make it work.
I am aware of two implementations (there may be more): one by Derivo (your link) and another by Pellet.
I have used the OWL-API with both HermiT and Pellet; both worked fine. The advantage of Pellet over HermiT is that it supports built-ins.
For instance, in Pellet, for some class Teenager, you can get seventeen-year-old persons using the following class expression:
Person and (hasAge value "17.0"^^double)
If you (or somebody else) are still interested, I can provide the Java class for it.
A pure OWL-API implementation cannot support SPARQL without workarounds, since it is not a graph-based solution.
Starting with v5, however, there is ONT-API, a Jena-based OWL-API implementation.

Endeca UrlENEQuery java API search

I'm currently trying to create an Endeca query using the Java API for a URLENEQuery. The current query is:
collection()/record[CONTACT_ID = "xxxxx" and SALES_OFFICE = "yyyy"]
I need it to be:
collection()/record[(CONTACT_ID = "xxxxx" or CONTACT_ID = "zzzzz") and
SALES_OFFICE = "yyyy"]
Currently this is being done with an ERecSearchList, with CONTACT_ID and the string I'm trying to match in an ERecSearch object, but I'm having difficulty figuring out how to get the UrlENEQuery to generate the or in the correct fashion, as above. Does anyone know how I can do this?
One of us is confused on multiple levels; let me try to explain why:
If CONTACT_ID and SALES_OFFICE are different dimensions, where CONTACT_ID is a multi-or dimension, then you don't need to use EQL (the XPath-like language) to do anything. Just select the appropriate dimension values and your navigation state will reflect the query you are trying to build with XPath, i.e. the CONTACT_IDs ORed together and ANDed with SALES_OFFICE.
If you do have to use EQL, then the only way to modify it (provided that you have to modify it from the returned results) is via string manipulation.
ERecSearchList gives you the ability to use the "Search Within" functionality, which works completely differently from EQL filtering, though you can achieve similar results with tricks like searching only a specified field (separate from the generic search interface). I am still not sure what the connection is between the ERecSearchList and the EQL expression above.
Having expressed my confusion, I think what you need to do is to use String manipulation to dynamically build the EQL expression and add it to the Query.
A code example of what you are doing would be extremely helpful as well.
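Since the answer comes down to string manipulation, here is a minimal sketch of assembling the OR-ed EQL expression dynamically. EqlBuilder and buildEql are just illustrative names, not part of the Endeca API:

```java
import java.util.List;
import java.util.stream.Collectors;

public class EqlBuilder {
    // Builds an EQL record filter: ORs the contact IDs together,
    // then ANDs the result with the sales office constraint.
    static String buildEql(List<String> contactIds, String salesOffice) {
        String orClause = contactIds.stream()
                .map(id -> "CONTACT_ID = \"" + id + "\"")
                .collect(Collectors.joining(" or ", "(", ")"));
        return "collection()/record[" + orClause
                + " and SALES_OFFICE = \"" + salesOffice + "\"]";
    }

    public static void main(String[] args) {
        // Reproduces the target expression from the question
        System.out.println(buildEql(List.of("xxxxx", "zzzzz"), "yyyy"));
    }
}
```

The resulting string can then be set on the query the same way the original single-contact expression was.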