How to intercept remote nodes in Riak using the riak_test module?

I have a problem when using the Erlang testing framework riak_test to simulate connections among remote nodes.
Within a test it is possible to connect remote nodes to the local nodes deployed by rt:deploy_nodes, but it is not possible to call functions of the rt module for the remote nodes, in particular to add interceptors, without getting an error.
Is there a solution or method to intercept remote nodes as well using the Riak testing module?
I need interceptors on the remote nodes to retrieve some information about their state.
More specifically: riak@10.X.X.X is my remote node.
In the test it is possible to connect this node to the local devX@127.0.0.1 nodes deployed by the test, but when my test program calls rt_intercept:add(riak@10.X.X.X, {}) I get this error:
{{badmatch,
  {badrpc,
   {'EXIT',
    {undef,
     [{intercept,add,
       [riak_kv_get_fsm,riak_kv_get_fsm_intercepts,
        [{{waiting_vnode_r,2},waiting_vnode_r_tracing},
         {{client_info,3},client_info_tracing},
         {{execute,2},execute_preflist}]],
       []},
      {rpc,'-handle_call_call/6-fun-0-',5,
       [{file,"rpc.erl"},{line,203}]}]}}}},
 [{rt_intercept,add,2,[{file,"src/rt_intercept.erl"},{line,57}]},
  {remoteRiak,'-confirm/0-lc$^2/1-2-',1,
   [{file,"tests/remoteRiak.erl"},{line,49}]},
  {remoteRiak,'-confirm/0-lc$^2/1-2-',1,
   [{file,"tests/remoteRiak.erl"},{line,49}]},
  {remoteRiak,confirm,0,[{file,"tests/remoteRiak.erl"},{line,49}]}]}

The rt_intercept:add function uses rpc:call to run the intercept:add function in the target node's VM. This means that the target node must either have the intercept module loaded or have it in its code path. You can add a path using add_paths in the config for the target node.
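A minimal sketch of such a config entry, assuming a typical ~/.riak_test.config; the section name and path are placeholders for your own setup:
%% ~/.riak_test.config (sketch; section name and path are examples)
{rtdev, [
    %% make intercept.beam and the *_intercepts modules visible
    %% on the target (remote) node's code path
    {add_paths, ["/path/to/riak_test/ebin"]}
]}.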

Related

What to do in a PyMQI program that does an IBM MQ PUT so that it connects remotely via the MQSERVER environment variable?

Hello! I am able to find short samples that use PyMQI, but the connect method has the name of the queue manager, the server-connection channel and the host (port) hard-coded. Instead, I want a generic connect method and to pass the connection information by setting the MQSERVER environment variable.
Is there anything that needs to be configured or set up? I looked at the available documentation but could not find references on how to do this.
My hope is that, once the MQSERVER issue is resolved, I can also use the two environment variables for reading a CCDT file: MQCHLLIB and MQCHLTAB.
Thank you in advance!
I set up the MQSERVER variable to point to a remote queue manager. This works fine with the C-based sample amqsputc.
But I have experimented with different ways of calling connect() in my small pymqi program, and the MQ client traces I captured show that MQSERVER is read but its contents are ignored by connect().
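For reference, this is the kind of generic connect I am trying to get working; a minimal sketch, assuming the MQ client resolves the channel from MQSERVER when no channel or connection info is passed explicitly (queue manager and queue names are only examples):
# Sketch only: rely on MQSERVER instead of hard-coding channel/conn_info.
# Assumes MQSERVER is set, e.g. MQSERVER="DEV.APP.SVRCONN/TCP/10.0.0.5(1414)",
# and that the MQ client libraries are installed.
import pymqi

queue_manager = "QM1"        # example queue manager name
queue_name = "DEV.QUEUE.1"   # example queue

# Passing only the queue manager name leaves channel resolution to the
# MQ client, which is where MQSERVER (or MQCHLLIB/MQCHLTAB) should apply.
qmgr = pymqi.connect(queue_manager)

q = pymqi.Queue(qmgr, queue_name)
q.put(b"hello via MQSERVER")
q.close()
qmgr.disconnect()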

Naming a RabbitMQ node with a preconfigured name

I am setting up a single-node RabbitMQ container built from a Docker image. The image is configured to persist to an NFS-mounted disk.
I ran into an issue when the container is restarted: every time the node restarts it gets a unique name, and the restarted node then searches for the old node names it reads from the cluster_nodes.config file.
Error dump shows:
Error during startup: {error,
{failed_to_cluster_with,
[rabbit#9c3bfb851ba3],
"Mnesia could not connect to any nodes."}}
How can I configure my image to use the same node name each time it is restarted, instead of the arbitrary node name given by the Kubernetes cluster?
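One common approach, sketched here with example values (RABBITMQ_NODENAME is a standard RabbitMQ environment variable; the node and pod names below are placeholders), is to pin the node name instead of letting it follow the generated container hostname:
# container environment (example values)
RABBITMQ_NODENAME=rabbit@rabbitmq-0
# RabbitMQ derives the host part of the default name from the hostname,
# so the container also needs a stable hostname across restarts, e.g. by
# running it as a Kubernetes StatefulSet, whose pod names are fixed.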

How to get a graph with transaction support from a remote Gremlin server?

I have the following configuration: a remote Gremlin Server (TinkerPop 3.2.6) with JanusGraph as the graph database.
I have a Gremlin Console (with the JanusGraph plugin) plus this configuration in remote.yaml:
hosts: [10.1.3.2] # IP of the gremlin-server host
port: 8182
serializer: { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0, config: { serializeResultToString: true }}
I want to make the connection through the Gremlin Server (not to JanusGraph directly via graph = JanusGraphFactory.build().set("storage.backend", "cassandra").set("storage.hostname", "127.0.0.1").open();) and get a graph which supports transactions.
Is that possible? Because as far as I can see, the TinkerFactory graphs do not support transactions.
As I understand it, to use the JanusGraph graph through the Gremlin Server you should:
Define the IP and port in the config file of the Gremlin Console:
conf/remote.yaml
Connect from the Gremlin Console to the Gremlin Server:
:remote connect tinkerpop.server conf/remote.yaml
==>Configured localhost/10.1.23.113:8182
...and work in remote mode (using :> or :remote console), i.e. send ALL commands (or #script) to the Gremlin Server.
:> graph.addVertex(...)
or
:remote console
==>All scripts will now be sent to Gremlin Server - [10.1.2.222/10.1.2.222:818]
graph.addVertex(...)
You don't need to define variables for the graph and the traversal; instead use
graph. - for the graph
g. - for the traversal
In this case you can use all the graph features provided by JanusGraph.
TinkerPop provides a Cluster object to hold the connection configuration. From a Cluster object a GraphTraversalSource can be spawned:
this.cluster = Cluster.build()
.addContactPoints("192.168.0.2","192.168.0.1")
.port(8082)
.credentials(username, password)
.serializer(new GryoMessageSerializerV1d0(GryoMapper.build().addRegistry(JanusGraphIoRegistry.getInstance())))
.maxConnectionPoolSize(8)
.maxContentLength(10000000)
.create();
this.gts = AnonymousTraversalSource
.traversal()
.withRemote(DriverRemoteConnection.using(cluster));
The gts object is thread-safe. With a remote connection each query is executed in a separate transaction. Ideally gts should be a singleton object.
Make sure to call gts.close() and cluster.close() on application shutdown, otherwise it may lead to a connection leak.
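A short usage sketch with the gts spawned above (the person label and name property are just example data):
// Each remote traversal runs in its own transaction on the server.
List<Object> names = gts.V().hasLabel("person").values("name").toList();

// Writes work the same way; no explicit tx().commit() is needed with a
// remote connection.
gts.addV("person").property("name", "alice").iterate();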
I believe that connecting a Java application to a running Gremlin Server using withRemote() will not support transactions. I have had trouble finding information on this as well, but as far as I can tell, if you want to do anything but read the graph, you need to use embedded JanusGraph and have your remotely hosted persistent data stored in a storage backend that you connect to from your application, as you describe in the second half of your question.
https://groups.google.com/forum/#!topic/janusgraph-users/t7gNBeWC844
Some discussion I found around it here ^^ mentions auto-committing single transactions in remote mode, but it doesn't seem to do that when I try.

I am trying OpenShift Origin and I cannot create an application

I am trying OpenShift Origin on a RHEL Atomic Host. I spun up the Origin master as a container following this guide: https://docs.openshift.org/latest/getting_started/administrators.html
After attaching a shell to the master container, I cannot deploy an app.
# oc new-app openshift/deployment-example
error: can't look up Docker image "openshift/deployment-example": Internal error occurred: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection
error: no match for "openshift/deployment-example"
The 'oc new-app' command will match arguments to the following types:
1. Images tagged into image streams in the current project or the 'openshift' project
- if you don't specify a tag, we'll add ':latest'
2. Images in the Docker Hub, on remote registries, or on the local Docker engine
3. Templates in the current project or the 'openshift' project
4. Git repository URLs or local paths that point to Git repositories
--allow-missing-images can be used to point to an image that does not exist yet.
See 'oc new-app -h' for examples.
The host needs a proxy to access the Internet. I have configured the proxy in /etc/sysconfig/docker, and that is how I could pull the Origin image on the same host.
I have tried setting the proxy for the master and node, with no luck:
https://docs.openshift.org/latest/install_config/http_proxies.html
It is possible that your proxy is terminating the connection. You can test this by creating an internal registry, pushing the image to it, and then using:
oc new-app your.internal.registry/openshift/deployment-example

ClassNotFoundException while deploying new application version (with changed session object) in active Apache Ignite grid

We are currently integrating Apache Ignite in our application to share sessions in a cluster.
At this point we can successfully share sessions between two local tomcat instances, but there's one use case, which is not working so far.
When running the two local instances with the exact same code, it all works great. But once the Ignite logic is integrated into our production cluster, we will encounter the following use case:
Node 1 and Node 2 run version 1 of the application
At this point we'd like to deploy version 2 of the application
Tomcat is stopped at Node 1, version 2 is deployed, and at the end of the deployment Tomcat at Node 1 is started again.
We now have Node 1 with version 2 of the code and Node 2, still with version 1
Tomcat is stopped at Node 2, version 2 is deployed, and at the end of the deployment Tomcat at Node 2 is started again.
We now have Node 1 with version 2 of the code and Node 2, with version 2
Deployment is finished
When reproducing this use case locally with two Tomcat instances in the same grid, the Ignite web session clustering fails. What I tested was removing one String property from a class (Profile) which resided in the user's session. When starting Node 1 with this changed class, I get the following exception:
Caused by: java.lang.ClassNotFoundException:
Optimized stream class checksum mismatch
(is same version of marshalled class present on all nodes?) [expected=4981, actual=-27920, cls=class nl.package.profile.Profile]
This will be a common use case for our deployments. My question is: how should we handle this use case? Are there ways in Ignite to resolve or work around this kind of issue?
In my understanding, your use case is a perfect fit for Ignite binary objects [1].
This feature allows storing objects in a class-free format and modifying an object's structure at runtime, without a full cluster restart, when the version of the object changes.
Your session class (Profile in your example) should implement the org.apache.ignite.binary.Binarylizable interface, which gives you full control over the serialization and deserialization logic. With this interface you can even have two nodes in the cluster that use different versions of the class at both serialization and deserialization time, by reading/writing only the required fields from/to the binary format; a sketch follows below the link.
[1] https://apacheignite.readme.io/docs/binary-marshaller
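A minimal sketch of what that could look like for the session class from the question (the field names are just examples):
import org.apache.ignite.binary.BinaryObjectException;
import org.apache.ignite.binary.BinaryReader;
import org.apache.ignite.binary.BinaryWriter;
import org.apache.ignite.binary.Binarylizable;

// Version-tolerant session object: fields are written and read by name,
// so nodes with an older or newer class definition can still deserialize it.
public class Profile implements Binarylizable {
    private String name;
    private String email;   // example of a field that may be dropped in v2

    @Override
    public void writeBinary(BinaryWriter writer) throws BinaryObjectException {
        writer.writeString("name", name);
        writer.writeString("email", email);
    }

    @Override
    public void readBinary(BinaryReader reader) throws BinaryObjectException {
        name = reader.readString("name");
        // A field that the writing node no longer serializes simply
        // comes back as null here instead of breaking deserialization.
        email = reader.readString("email");
    }
}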