I have a script that works when I run it on a local instance of GraphDB. But when I try to run it on the online GraphDB at https://cloud.ontotext.com/, I get an org.eclipse.rdf4j.query.QueryEvaluationException, with nothing indicating which part of the script caused it.
The only change from the script that works locally is the line
SERVICE <https://rdf.ontotext.com/4136450524/pedtermsDB/repositories/pedagogyrepo>{
which was
SERVICE <http://localhost:7200/rdf-bridge/1967855032375>{
Any help is appreciated.
The script:
PREFIX spif: <http://spinrdf.org/spif#>
PREFIX pdb: <http://example.org/pedtermsDB/>

# CLEAR GRAPH <http://example.org/pedtermsDB/>

# INSERT query that maps the raw RDF data from OntoRefine to user-specified
# RDF data (different IRIs, types, property names and dates as date-typed literals)
# and inserts the data into the current GraphDB repository.
INSERT { GRAPH <http://example.org/pedtermsDB/> {
    ?subtermIRI a pdb:SubTerm ;
        pdb:label ?subterm ;
        pdb:replyList ?sb_id ;
        pdb:hasParent ?termIRI .
    ?termIRI a pdb:Term ;
        pdb:label ?term ;
        pdb:replyList ?sb_id ;
        pdb:topic ?topicIRI .
    ?topicIRI a pdb:Topic ;
        pdb:label ?topic ;
        pdb:mapping ?sb_topic ;
        pdb:replyList ?sb_id ;
        pdb:subject ?subjectIRI .
    # ?sb_topicIRI a pdb:StudyBotTopic ;
    #     pdb:mapping ?term ;
    #     pdb:label ?sb_topic .
    ?subjectIRI a pdb:Subject ;
        pdb:label ?subject ;
        pdb:field ?fieldIRI .
    ?fieldIRI a pdb:Field ;
        pdb:label ?field ;
        pdb:faculty ?facultyIRI .
    ?facultyIRI a pdb:Faculty ;
        pdb:label ?faculty .
} }
WHERE {
    # Uses SERVICE to fetch the raw RDF data from OntoRefine
    SERVICE <https://rdf.ontotext.com/4136450524/pedtermsDB/repositories/pedagogyrepo> {
        ?termRow a pdb:Row ;
            pdb:rowNumber ?termRowNumber .
        OPTIONAL { ?termRow pdb:term ?term }
        OPTIONAL { ?termRow pdb:sb_id ?sb_id }
        OPTIONAL { ?termRow pdb:sb_topic ?sb_topic }
        OPTIONAL { ?termRow pdb:subterm ?subterm }
        OPTIONAL { ?termRow pdb:topic ?topic }
        ?termRow pdb:subject ?subject .
        ?termRow pdb:field ?field .
        ?termRow pdb:faculty ?faculty .
        BIND(iri(concat("http://example.org/pedtermsDB/terms/", ?term)) AS ?termIRI)
        BIND(iri(concat("http://example.org/pedtermsDB/subterms/", ?subterm)) AS ?subtermIRI)
        # BIND(iri(concat("http://example.org/pedtermsDB/studybotTopics/", ?sb_topic)) AS ?sb_topicIRI)
        # BIND(IF(?topic=?term,iri(concat("http://example.org/pedtermsDB/topics/", ?topic)),"") AS ?topicIRI)
        BIND(iri(concat("http://example.org/pedtermsDB/topics/", ?topic)) AS ?topicIRI)
        BIND(iri(concat("http://example.org/pedtermsDB/subjects/", ?subject)) AS ?subjectIRI)
        BIND(iri(concat("http://example.org/pedtermsDB/fields/", ?field)) AS ?fieldIRI)
        BIND(iri(concat("http://example.org/pedtermsDB/faculties/", ?faculty)) AS ?facultyIRI)
    }
}
There are two different types of SPARQL endpoints mixed in your query:
GraphDB endpoint - the repository endpoint serving indexed RDF triples:
http://localhost:7200/repositories/pedagogyrepo - the local endpoint
https://rdf.ontotext.com/4136450524/pedtermsDB/repositories/pedagogyrepo - the cloud endpoint
OntoRefine endpoint - a virtual endpoint serving OpenRefine's internal tabular model:
http://localhost:7200/rdf-bridge/1967855032375 - the local endpoint
https://*rdf-bridge// - the cloud endpoint created for this project
You have to find the correct OntoRefine endpoint on the cloud through the Workbench interface.
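For instance, once you have that URL, the update itself is still submitted to the repository endpoint; only the SERVICE clause points at the rdf-bridge. A minimal RDF4J sketch (insertScript and cloudRdfBridgeUrl are placeholders for your update string and the cloud rdf-bridge URL, which are not shown in the question):

import org.eclipse.rdf4j.query.QueryLanguage;
import org.eclipse.rdf4j.repository.RepositoryConnection;
import org.eclipse.rdf4j.repository.http.HTTPRepository;

// The update goes to the GraphDB repository endpoint...
HTTPRepository repo = new HTTPRepository(
        "https://rdf.ontotext.com/4136450524/pedtermsDB/repositories/pedagogyrepo");
try (RepositoryConnection conn = repo.getConnection()) {
    // ...while the OntoRefine rdf-bridge URL appears only inside the SERVICE clause
    String update = insertScript.replace(
            "http://localhost:7200/rdf-bridge/1967855032375",
            cloudRdfBridgeUrl); // found through the Workbench, as described above
    conn.prepareUpdate(QueryLanguage.SPARQL, update).execute();
}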
I followed the instructions on https://graphdb.ontotext.com/documentation/9.4/free/shacl-validation.html, and it worked as documented. However, once this is done, I found no way to inspect the Shape graph configured for my repository.
The special graph <http://rdf4j.org/schema/rdf4j#SHACLShapeGraph> is nowhere to be found; it does not appear in the "Graphs overview" screen, and it is not accessible via SPARQL queries.
Shape graphs currently cannot be queried with SPARQL inside GraphDB, as they are not part of the data. One way to inspect the shape graph is to connect to the GraphDB repository with an RDF4J client. You can retrieve all the statements inside the shape graph with the following code snippet:
import java.util.stream.Collectors;
import org.eclipse.rdf4j.model.Model;
import org.eclipse.rdf4j.model.impl.LinkedHashModel;
import org.eclipse.rdf4j.model.vocabulary.RDF4J;
import org.eclipse.rdf4j.repository.RepositoryConnection;
import org.eclipse.rdf4j.repository.http.HTTPRepository;

HTTPRepository repository = new HTTPRepository("http://address:port/", "repositoryname");
try (RepositoryConnection connection = repository.getConnection()) {
    // Collect every statement from the SHACL shape graph into a Model
    Model statementsCollector = new LinkedHashModel(connection
            .getStatements(null, null, null, RDF4J.SHACL_SHAPE_GRAPH).stream().collect(Collectors.toList()));
}
For more information about accessing and updating SHACL shape graphs, you can also take a look at https://rdf4j.org/documentation/programming/shacl/ .
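The same special graph can be used to update the shapes as well. A minimal sketch along the lines of that RDF4J documentation (the repository address and the shapes.ttl file name are placeholders for this example):

import java.io.FileInputStream;
import java.io.InputStream;
import org.eclipse.rdf4j.model.vocabulary.RDF4J;
import org.eclipse.rdf4j.repository.RepositoryConnection;
import org.eclipse.rdf4j.repository.http.HTTPRepository;
import org.eclipse.rdf4j.rio.RDFFormat;

HTTPRepository repository = new HTTPRepository("http://address:port/", "repositoryname");
try (RepositoryConnection connection = repository.getConnection();
        InputStream shapes = new FileInputStream("shapes.ttl")) { // placeholder shapes file
    connection.begin();
    connection.clear(RDF4J.SHACL_SHAPE_GRAPH);                             // drop the old shapes
    connection.add(shapes, "", RDFFormat.TURTLE, RDF4J.SHACL_SHAPE_GRAPH); // load the new ones
    connection.commit();
}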
In Amazon Neptune I would like to run multiple Gremlin commands in Java as a single transaction. The documentation says that tx.commit() and tx.rollback() are not supported. It suggests this: multiple statements separated by a semicolon (;) or a newline character (\n) are included in a single transaction.
The example from the documentation shows that Gremlin is supported in Java, but I don't understand how to send "multiple statements separated by a semicolon":
GraphTraversalSource g = traversal().withRemote(DriverRemoteConnection.using(cluster));
// Add a vertex.
// Note that a Gremlin terminal step, e.g. next(), is required to make a request to the remote server.
// The full list of Gremlin terminal steps is at https://tinkerpop.apache.org/docs/current/reference/#terminal-steps
g.addV("Person").property("Name", "Justin").next();
// Add a vertex with a user-supplied ID.
g.addV("Custom Label").property(T.id, "CustomId1").property("name", "Custom id vertex 1").next();
g.addV("Custom Label").property(T.id, "CustomId2").property("name", "Custom id vertex 2").next();
g.addE("Edge Label").from(g.V("CustomId1")).to(g.V("CustomId2")).next();
The doc you are referring to covers the "string" mode for query submission. In your approach you are using the "bytecode" mode, because you work with a remote instance of the graph traversal source (the "g" object). Instead, you should submit a string script via the client object:
Client client = gremlinCluster.connect();
client.submit("g.V()...iterate(); g.V()...iterate(); g.V()...");
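For completeness, here is a minimal end-to-end sketch of the string mode (the endpoint, port, and vertex data are placeholders, not from the Neptune docs). Both statements travel to the server in one request, so Neptune runs them in a single transaction:

import java.util.List;
import org.apache.tinkerpop.gremlin.driver.Client;
import org.apache.tinkerpop.gremlin.driver.Cluster;
import org.apache.tinkerpop.gremlin.driver.Result;

Cluster cluster = Cluster.build("your-neptune-endpoint").port(8182).enableSsl(true).create();
Client client = cluster.connect();
// Two statements in one string, separated by ';' -> executed as one transaction
List<Result> results = client.submit(
        "g.addV('Person').property('Name', 'Justin').iterate(); " +
        "g.addV('Person').property('Name', 'Anna').iterate()").all().get();
client.close();
cluster.close();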
Gremlin sessions
Java Example
After getting the cluster object:
String sessionId = UUID.randomUUID().toString();
Client client = cluster.connect(sessionId);
client.submit(query1);
client.submit(query2);
.
.
.
client.submit(query3);
client.close();
When you run .close(), all the mutations get committed.
You can also capture the response from a query:
List<Result> results = client.submit(query);
results.stream()...
You can also use the SessionedClient, which will run all queries in the same transaction upon close().
More information is here: https://docs.aws.amazon.com/neptune/latest/userguide/access-graph-gremlin-sessions.html#access-graph-gremlin-sessions-glv
Let's say I have applicationA, which has 3 property files:
-> applicationA
- datasource.properties
- security.properties
- jms.properties
How do I move all the properties to a Spring Cloud Config server and keep them separate?
As of today, I have configured the config server so that it only reads ONE property file, as this seems to be the standard way. The file the config server picks up seems to be resolved using spring.application.name. In my case it will only read ONE file with this name:
-> applicationA.properties
How can I add the other files to be resolved by the config server?
This is not possible in the way you requested. Spring Cloud Config Server uses NativeEnvironmentRepository, which is:
"Simple implementation of {@link EnvironmentRepository} that uses a SpringApplication and configuration files located through the normal protocols. The resulting Environment is composed of property sources located using the application name as the config file stem (spring.config.name) and the environment name as a Spring profile."
See: https://github.com/spring-cloud/spring-cloud-config/blob/master/spring-cloud-config-server/src/main/java/org/springframework/cloud/config/server/environment/NativeEnvironmentRepository.java
So basically, every time a client requests properties from the Config Server, it creates a ConfigurableApplicationContext using SpringApplicationBuilder, and launches it with the following configuration property:
String config = application;
if (!config.startsWith("application")) {
    config = "application," + config;
}
list.add("--spring.config.name=" + config);
So the possible names for the property files are only application.properties (or .yml) and the name of the config client application that requests the configuration - in your case, applicationA.properties.
But you can "cheat".
In the config server configuration you can add a property like this:
spring:
  cloud:
    config:
      server:
        git:
          search-paths: '{application}, {application}/your-subdirectory'
In this case the Config Server will search for the same property file names, but in several directories, and you can use subdirectories to keep your properties separate.
So with the configuration above you will be able to load configuration from:
applicationA/application.properties
applicationA/your-subdirectory/application.properties
This can be done.
You need to create your own EnvironmentRepository, which loads your property files.
org.springframework.cloud.config.server.support.AbstractScmAccessor#getSearchLocations
searches for the property files to load:
for (String prof : profiles) {
    for (String app : apps) {
        String value = location;
        if (app != null) {
            value = value.replace("{application}", app);
        }
        if (prof != null) {
            value = value.replace("{profile}", prof);
        }
        if (label != null) {
            value = value.replace("{label}", label);
        }
        if (!value.endsWith("/")) {
            value = value + "/";
        }
        output.addAll(matchingDirectories(dir, value));
    }
}
There you could add custom code that reads the required property files. The above code matches exactly the behaviour described in the Spring docs.
The NativeEnvironmentRepository does NOT access Git/SCM in any way, so you should use JGitEnvironmentRepository as the base for your own implementation.
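For illustration, a minimal sketch of such a custom repository (the class name, the on-disk layout, and the three file stems are assumptions for this example, not Spring API):

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.List;
import java.util.Properties;
import org.springframework.cloud.config.environment.Environment;
import org.springframework.cloud.config.environment.PropertySource;
import org.springframework.cloud.config.server.environment.EnvironmentRepository;

public class MultiFileEnvironmentRepository implements EnvironmentRepository {

    @Override
    public Environment findOne(String application, String profile, String label) {
        Environment environment = new Environment(application, new String[] { profile }, label, null, null);
        // Serve each file as its own property source so the files stay separate
        for (String stem : List.of("datasource", "security", "jms")) {
            environment.add(new PropertySource(stem + ".properties", load(application, stem)));
        }
        return environment;
    }

    private Properties load(String application, String stem) {
        Properties properties = new Properties();
        // Assumed layout: <application>/<stem>.properties on the server's file system
        try (InputStream in = new FileInputStream(application + "/" + stem + ".properties")) {
            properties.load(in);
        } catch (IOException e) {
            // Missing file: serve an empty source rather than fail the whole request
        }
        return properties;
    }
}

Spring Cloud Config picks up a custom EnvironmentRepository bean in place of the default one, so registering this class as a @Bean in the config server application should be enough to wire it in.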
As @nmyk pointed out, NativeEnvironmentRepository boots a mini app in order to collect the properties, feeding it with, so to speak, "hardcoded" {appname}.* and application.* supported property file names. (@Stefan Isele - prefabware.com: JGitEnvironmentRepository ends up using NativeEnvironmentRepository as well, for that matter.)
I have issued a pull request for spring-cloud-config-server 1.4.x that supports defining additional file names through a spring.cloud.config.server.searchNames environment property, in the same sense one can do for a single Spring Boot app, as described in the "Externalized Configuration: Application Property Files" section of the documentation, using the spring.config.name environment property. I hope they review it soon, since many people have asked about this feature on Stack Overflow, and surely many more search for it and read the currently advised solutions.
It is worth mentioning that many people advise "abusing" the profile feature to achieve this, which is a bad practice, in my humble opinion, as I describe in this answer.
I'm new to graphs in general.
I'm attempting to store a TinkerPop graph that I've created dynamically in Gremlin Server, so that I can issue Gremlin queries against it.
Consider the following code:
Graph inMemoryGraph;
inMemoryGraph = TinkerGraph.open();
inMemoryGraph.io(IoCore.graphml()).readGraph("test.graphml");
GraphTraversalSource g = inMemoryGraph.traversal();
List<Result> results =
client.submit("g.V().valueMap()").all().get();
I need some glue code. The Gremlin query here is issued against the modern graph, which is the default binding for the g variable. I would like to somehow store my inMemoryGraph so that when I run a Gremlin query, it is run against my graph.
All graph configurations in Gremlin Server must occur through its YAML configuration file. Since you say you're connected to the modern graph, I'll assume that you're using the default "modern" configuration file that ships with the standard distribution of Gremlin Server. If that is the case, then you should look at conf/gremlin-server-modern.yaml. You'll notice this:
graphs: {
graph: conf/tinkergraph-empty.properties}
That creates a Graph reference in Gremlin Server called "graph" which you can reference from scripts. Next, note this second configuration:
org.apache.tinkerpop.gremlin.jsr223.ScriptFileGremlinPlugin: {files: [scripts/generate-modern.groovy]}}}
Specifically, pay attention to scripts/generate-modern.groovy, which is a Gremlin Server initialization script. Opening that up, you will see this:
// an init script that returns a Map allows explicit setting of global bindings.
def globals = [:]

// Generates the modern graph into an "empty" TinkerGraph via LifeCycleHook.
// Note that the name of the key in the "global" map is unimportant.
globals << [hook : [
    onStartUp: { ctx ->
        ctx.logger.info("Loading 'modern' graph data.")
        org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerFactory.generateModern(graph)
    }
] as LifeCycleHook]

// define the default TraversalSource to bind queries to - this one will be named "g".
globals << [g : graph.traversal()]
The comments should do most of the explaining. The connection here is that you need to inject your graph initialization code into this script and assign your inMemoryGraph.traversal() to g or whatever variable name you wish to use to identify it on the server. All of this is described in the Reference Documentation.
There is a way to make this work in a more dynamic fashion, but it involves extending Gremlin Server through its interfaces. You would have to build a custom GraphManager (the interface can be found here), then set the graphManager key in the server configuration file to the fully qualified name of your implementation.
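Either way, once the server binds your graph's traversal source to g, the client-side glue is just the standard driver connection; a minimal sketch (host and port are placeholders):

import java.util.List;
import org.apache.tinkerpop.gremlin.driver.Client;
import org.apache.tinkerpop.gremlin.driver.Cluster;
import org.apache.tinkerpop.gremlin.driver.Result;

Cluster cluster = Cluster.build("localhost").port(8182).create();
Client client = cluster.connect();
// "g" now resolves to the traversal source your init script bound on the server
List<Result> results = client.submit("g.V().valueMap()").all().get();
cluster.close();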
I am currently using the following plugin for Gremlin: GitHub - it basically converts SPARQL to Gremlin. It works perfectly fine in the console, but I am trying to execute commands via REST.
Is there a workaround for prepending a command with ":>" via REST?
Gremlin Console:
gremlin> :> SELECT * WHERE { }
==> ...
==> ...
.
.
.
Gremlin REST:
POST
{"gremlin": ":> SELECT * WHERE {}"}
RESPONSE
{"message": "startup failed:\nScript7.groovy: 1: unexpected token: : # line 1, column 1.\n :> SELECT * WHERE {}\n ^\n\n1 error\n",
"Exception-Class": "org.codehaus.groovy.control.MultipleCompilationErrorsException"}
Gremlin Server doesn't know how to process raw SPARQL, and I don't think the plugin you referenced supports the server in any way. As a result, your attempts to send SPARQL to Gremlin Server are failing. The plugin would need to be modified in some way to make that work.