Storing a graph to Gremlin Server from an in-memory graph - TinkerPop

I'm new to graphs in general.
I'm attempting to store a TinkerGraph that I've created dynamically in Gremlin Server so that I can issue Gremlin queries against it.
Consider the following code:
Graph inMemoryGraph = TinkerGraph.open();
inMemoryGraph.io(IoCore.graphml()).readGraph("test.graphml");
GraphTraversalSource g = inMemoryGraph.traversal();

// "client" is a Gremlin driver Client connected to Gremlin Server
List<Result> results = client.submit("g.V().valueMap()").all().get();
I need some glue code. The Gremlin query here is issued against the modern graph, which is the default binding for the g variable on the server. I would like to somehow store my inMemoryGraph so that when I run a Gremlin query, it's run against my graph.

All graph configurations in Gremlin Server must occur through its YAML configuration file. Since you say you're connected to the modern graph, I'll assume that you're using the default "modern" configuration file that ships with the standard distribution of Gremlin Server. If that is the case, then you should look at conf/gremlin-server-modern.yaml. You'll notice this:
graphs: {
  graph: conf/tinkergraph-empty.properties}
That creates a Graph reference in Gremlin Server called "graph" which you can reference from scripts. Next, note this second configuration:
org.apache.tinkerpop.gremlin.jsr223.ScriptFileGremlinPlugin: {files: [scripts/generate-modern.groovy]}}}
Specifically, pay attention to scripts/generate-modern.groovy which is a Gremlin Server initialization script. Opening that up you will see this:
// an init script that returns a Map allows explicit setting of global bindings.
def globals = [:]
// Generates the modern graph into an "empty" TinkerGraph via LifeCycleHook.
// Note that the name of the key in the "global" map is unimportant.
globals << [hook : [
  onStartUp: { ctx ->
    ctx.logger.info("Loading 'modern' graph data.")
    org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerFactory.generateModern(graph)
  }
] as LifeCycleHook]
// define the default TraversalSource to bind queries to - this one will be named "g".
globals << [g : graph.traversal()]
The comments should do most of the explaining. The connection here is that you need to inject your graph initialization code into this script and assign your inMemoryGraph.traversal() to g or whatever variable name you wish to use to identify it on the server. All of this is described in the Reference Documentation.
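As a concrete illustration, here is a minimal sketch of what an adapted scripts/generate-modern.groovy could look like if it loaded your GraphML file instead of generating the modern graph (the data/test.graphml path is an assumption; use whatever path is valid relative to the server's working directory):

// an init script that returns a Map allows explicit setting of global bindings.
def globals = [:]

// Load the GraphML file into the "graph" binding on server startup.
globals << [hook : [
  onStartUp: { ctx ->
    ctx.logger.info("Loading GraphML data.")
    // assumed path - adjust to where the file actually lives on the server
    graph.io(org.apache.tinkerpop.gremlin.structure.io.IoCore.graphml()).readGraph("data/test.graphml")
  }
] as LifeCycleHook]

// define the default TraversalSource to bind queries to - this one will be named "g".
globals << [g : graph.traversal()]

With that in place, your client-side client.submit("g.V().valueMap()") runs against the data you loaded rather than the modern graph.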
There is a way to make this work in a more dynamic fashion, but it involves extending Gremlin Server through its interfaces. You would have to build a custom GraphManager - the interface can be found here. Then you would set the graphManager key in the server configuration file to the fully qualified name of your implementation.
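For example, the relevant line in the server YAML would look something like this (the class name is hypothetical):

graphManager: com.example.MyDynamicGraphManager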

Related

ANSYS Mechanical Workbench Scripting - Accessing Parameters

Ansys gurus,
My project is a static structural analysis using ANSYS Workbench Mechanical. I have created the parametrized geometry (via DesignModeler) and material properties in Workbench, and used ACT scripting to configure the model. However, I can't find much information on how to access the parameters via ACT scripting.
I have confirmed that the geometric parameters are successfully created in Workbench, e.g.

ID | Parameter Name | Value | Unit
P1 | diameter       | 50    | um
The documentation LINK suggests that I can obtain a parameter using Analysis.GetParameter(); however, the following code didn't work for me and resulted in the error below.
Code:
STATIC_STRUCTURAL = ExtAPI.DataModel.AnalysisByName("Static Structural")
HEIGHT = STATIC_STRUCTURAL.GetParameter('height')
Error:
Property not found.
Do you have any suggestions on the cause of this error? Is it because the parameters were not imported from the Workbench "project schematic" into "Model", or is the code I used to retrieve the parameters incorrect? In either case, could you advise the correct method to access the parameters? Thank you!
hawkoli1987
If you want to access a parameter from the "project schematic" page, you can create a list. If you then want to do something with it inside of Mechanical, you have to send the commands to your model:
# Access the geometric parameters from the project schematic
allParameters = Parameters.GetAllParameters()
for parameter in allParameters:
    print parameter.DisplayText
    if parameter.DisplayText == 'height':
        heightParameter = parameter

# Loop over all systems in the project
for system in GetAllSystems():
    # Get the Model container
    model = system.GetContainer(ComponentName="Model")
    # Refresh and edit the model component
    system.Refresh()
    model.Edit(Interactive=True)

    # code to be sent to ANSYS Mechanical
    cmd = '''
here goes your ACT script as a string. Make sure there are no leading spaces or tabs.
'''

    # send the code and exit Mechanical
    model.SendCommand(Language='Python', Command=cmd)
    model.Exit()

print "Finished script execution."

Can I inspect the shape graph

I followed the instructions on https://graphdb.ontotext.com/documentation/9.4/free/shacl-validation.html, and it worked as documented. However, once this is done, I found no way to inspect the shape graph configured for my repository.
The special graph <http://rdf4j.org/schema/rdf4j#SHACLShapeGraph> is nowhere to be found; it does not appear in the "Graphs overview" screen, and it is not accessible via SPARQL queries.
Shape graphs currently cannot be queried with SPARQL inside GraphDB, as they are not part of the data. One way to inspect the graph is to use an RDF4J client to connect to the GraphDB repository. You can find all the statements inside the shape graph with the following code snippet:
import org.eclipse.rdf4j.model.Model;
import org.eclipse.rdf4j.model.impl.LinkedHashModel;
import org.eclipse.rdf4j.model.vocabulary.RDF4J;
import org.eclipse.rdf4j.repository.RepositoryConnection;
import org.eclipse.rdf4j.repository.http.HTTPRepository;
import java.util.stream.Collectors;

HTTPRepository repository = new HTTPRepository("http://address:port/", "repositoryname");
try (RepositoryConnection connection = repository.getConnection()) {
    Model statementsCollector = new LinkedHashModel(
        connection.getStatements(null, null, null, RDF4J.SHACL_SHAPE_GRAPH)
            .stream()
            .collect(Collectors.toList()));
}
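To actually look at the shapes, you can print each collected statement inside the try block, for instance:

statementsCollector.forEach(System.out::println);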
For more information regarding accessing and updating Shacl shape graphs you can also take a look here https://rdf4j.org/documentation/programming/shacl/ .

Building Terraform Custom Variables when using Count for AWS Lambda Functions

I am trying to generate a variable for a Lambda function based on a setting from an API Gateway created at the same time using Terraform. I am using trimprefix and trimsuffix to modify the setting I get from the API Gateway, which I then set as an environment variable to be used by the Lambda function code.
I had this working initially using output statements, as I was originally using modules. I have since decided to move away from modules to simplify the code. However, my real issue is how to perform the trimprefix and trimsuffix actions when I am also using the count feature.
Here is my original code when I was still using modules, and it successfully created the final "invoke_url" after trimming "https://" from the beginning, and "/default" from the end.
## Obtain the rest_api_id
output "rest_api_id" {
value = aws_api_gateway_deployment.retaildiscount[count.index].rest_api_id
}
## Trim the https prefix from the invoke URL and store in var.invoke_url_tmp
output "invoke_url_tmp" {
value = trimprefix(aws_api_gateway_deployment.retaildiscount[count.index].invoke_url, "https://")
}
## Trim the /default suffix from var.invoke_url_tmp and output as var.invoke_url to be used
## by the retailorderprice function
output "invoke_url" {
value = trimsuffix(var.invoke_url_tmp, "/default")
}
I am now trying to do the same, but using "count" to create multiple copies of the same Lambda functions and API Gateways (this is to create multiple instances for a lab-style workshop; each function will have a unique name pulled from an auto.tfvars file).
For the life of me I cannot work out how to generate the modified variable and link it back to the appropriate function.
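One way to approach this (a sketch under assumptions: var.function_names is hypothetical, the other required Lambda arguments are omitted, and the functions are assumed to share the same count as the gateways) is to drop the intermediate outputs and apply both trims inline, indexed by count.index:

resource "aws_lambda_function" "retailorderprice" {
  count         = length(var.function_names)
  function_name = var.function_names[count.index]
  # ... role, runtime, handler, and other required arguments omitted ...

  environment {
    variables = {
      # trim "https://" and "/default" from the matching gateway's invoke URL
      INVOKE_URL = trimsuffix(
        trimprefix(aws_api_gateway_deployment.retaildiscount[count.index].invoke_url, "https://"),
        "/default"
      )
    }
  }
}

Because the environment variable is computed per index, each function instance gets the invoke URL of its own API Gateway deployment.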

How to run multiple gremlin commands as a single transaction?

In Amazon Neptune I would like to run multiple Gremlin commands in Java as a single transaction. The documentation says that tx.commit() and tx.rollback() are not supported. It suggests this: multiple statements separated by a semicolon (;) or a newline character (\n) are included in a single transaction.
The example from the documentation shows that Gremlin is supported in Java, but I don't understand how to submit "multiple statements separated by a semicolon":
GraphTraversalSource g = traversal().withRemote(DriverRemoteConnection.using(cluster));
// Add a vertex.
// Note that a Gremlin terminal step, e.g. next(), is required to make a request to the remote server.
// The full list of Gremlin terminal steps is at https://tinkerpop.apache.org/docs/current/reference/#terminal-steps
g.addV("Person").property("Name", "Justin").next();
// Add a vertex with a user-supplied ID.
g.addV("Custom Label").property(T.id, "CustomId1").property("name", "Custom id vertex 1").next();
g.addV("Custom Label").property(T.id, "CustomId2").property("name", "Custom id vertex 2").next();
g.addE("Edge Label").from(g.V("CustomId1")).to(g.V("CustomId2")).next();
The doc you are referring to covers the "string" mode for query submission. In your approach you are using the "bytecode" mode, via the remote instance of the graph traversal source (the "g" object). Instead, you should submit a string script via the client object:
Client client = gremlinCluster.connect();
client.submit("g.V()...iterate(); g.V()...iterate(); g.V()...");
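Putting that together, a minimal sketch of a string-mode submission (the endpoint is a placeholder and the two statements are just illustrative mutations) might look like:

import org.apache.tinkerpop.gremlin.driver.Client;
import org.apache.tinkerpop.gremlin.driver.Cluster;
import org.apache.tinkerpop.gremlin.driver.Result;
import java.util.List;

public class NeptuneBatchExample {
    public static void main(String[] args) throws Exception {
        // placeholder endpoint - use your Neptune cluster endpoint
        Cluster cluster = Cluster.build("your-neptune-endpoint").port(8182).enableSsl(true).create();
        Client client = cluster.connect();

        // two mutations separated by a semicolon are applied in one transaction
        String script = "g.addV('Person').property('Name', 'Justin').iterate(); " +
                        "g.addV('Person').property('Name', 'Kelvin').iterate()";
        List<Result> results = client.submit(script).all().get();

        client.close();
        cluster.close();
    }
}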
Gremlin sessions
Java Example
After getting the cluster object,
String sessionId = UUID.randomUUID().toString();
Client client = cluster.connect(sessionId);
client.submit(query1);
client.submit(query2);
// ...
client.submit(query3);
client.close();
When you call .close(), all the mutations get committed.
You can also capture the response from each query:
List<Result> results = client.submit(query);
results.stream()...
You can also use the SessionedClient, which will run all queries in the same transaction upon close().
More information is here: https://docs.aws.amazon.com/neptune/latest/userguide/access-graph-gremlin-sessions.html#access-graph-gremlin-sessions-glv

How to set Neo4J config keys in gremlin-scala?

When running a Neo4J database server standalone (on Ubuntu 14.04), configuration options are set for the global installation in /etc/neo4j/neo4j.conf or possibly $NEO4J_HOME/conf/neo4j.conf.
However, when instantiating a Neo4j database from Java or Scala using Apache's Neo4jGraph class (org.apache.tinkerpop.gremlin.neo4j.structure.Neo4jGraph), there is no global installation, and the constructor does not (as far as I can tell) look for any configuration files.
In particular, when running the test suite for my application, I end up with many simultaneous instances of Neo4jGraph, which ends up throwing a java.net.BindException: Address already in use because all of these instances are trying to communicate over a small range of ports for online backup, which I don't actually need. These channels are set with config options dbms.backup.address (default value: 127.0.0.1:6362-6372) and dbms.backup.enabled (default value: true).
My problem would be solved by setting dbms.backup.enabled to false, or expanding the port range.
Things that have not worked:
Creating /etc/neo4j/neo4j.conf containing the line dbms.backup.enabled=false.
Creating the same file in my project's src/main/resources directory.
Creating the same file in src/main/resources/neo4j.
Manually setting the configuration property inside the Scala code:
val db = new Neo4jGraph(dataDirectory)
db.configuration.addProperty("dbms.backup.enabled",false)
or
db.configuration.addProperty("neo4j.conf.dbms.backup.enabled",false)
or
db.configuration.addProperty("gremlin.neo4j.conf.dbms.backup.enabled",false)
How should I go about setting this property?
Neo4jGraph configuration through TinkerPop is accomplished by a pass-through of configuration keys. In TinkerPop 3.x, that means all Neo4j keys prefixed with gremlin.neo4j.conf that are provided via a Configuration object to Neo4jGraph.open() or GraphFactory.open() will be passed down directly to the Neo4j instance. You can see examples of this in the TinkerPop documentation on high availability configuration.
In TinkerPop 2.x, the same approach was taken however the key prefix was instead blueprints.neo4j.conf.* as discussed here.
Manipulating db.configuration after the database connection had already been opened was definitely futile.
stephen mallette's answer was on the right track, but this particular configuration doesn't appear to pass through in the way his linked example does. There is a naming mismatch between the configuration keys expected in neo4j.conf and those expected by org.neo4j.backup.OnlineBackupKernelExtension. Instead of dbms.backup.address and dbms.backup.enabled, that class looks for the config keys online_backup_server and online_backup_enabled.
I was not able to get these keys passed down to the underlying Neo4jGraphAPI instance correctly. What I had to do instead was the following:
import org.apache.tinkerpop.gremlin.neo4j.structure.Neo4jGraph
import org.neo4j.tinkerpop.api.impl.Neo4jFactoryImpl
import scala.collection.JavaConverters._

val factory = new Neo4jFactoryImpl()
val config = Map(
  "online_backup_enabled" -> "true",
  "online_backup_server" -> "0.0.0.0:6350-6359"
).asJava
val db = Neo4jGraph.open(factory.newGraphDatabase(dataDirectory, config))
With this initialization, the instance correctly listened for backups on port 6350; changing "true" to "false" disabled backup listening.
Using Neo4j 3.0.0, the following disables port listening for me (Java code):
import org.apache.commons.configuration.BaseConfiguration;
import org.apache.tinkerpop.gremlin.neo4j.structure.Neo4jGraph;

BaseConfiguration conf = new BaseConfiguration();
conf.setProperty(Neo4jGraph.CONFIG_DIRECTORY, "/path/to/db");
conf.setProperty(Neo4jGraph.CONFIG_CONF + "." + "dbms.backup.enabled", "false");
Neo4jGraph graph = Neo4jGraph.open(conf);