Error while trying to configure ArangoDB replication

While trying to configure asynchronous replication on an ArangoDB database (following the documentation at https://docs.arangodb.com/3.3/Manual/Administration/Replication/Asynchronous/Components.html), I got this error:
JavaScript exception in file
'/usr/share/arangodb3/js/client/modules/@arangodb/replication.js' at 209,7:
ArangoError 1470: replication API is not supported on a coordinator
! throw err;
! ^
stacktrace: ArangoError: replication API is not supported on a coordinator
at Object.exports.checkRequestResult
(/usr/share/arangodb3/js/client/modules/@arangodb/arangosh.js:96:21)
at waitForResult
(/usr/share/arangodb3/js/client/modules/@arangodb/replication.js:207:16)
at setup
(/usr/share/arangodb3/js/client/modules/@arangodb/replication.js:310:10)
at Object.setupReplication
(/usr/share/arangodb3/js/client/modules/@arangodb/replication.js:313:51)
at <shell command>:1:34
Any idea what could have caused it? I'm on the latest 3.3.3 version with a cluster up and running on 3 different machines.

You are accessing a coordinator. In an ArangoDB cluster, the replication API is only available on the DB servers; coordinators do not expose it. You will see different behavior if you connect to the DB servers directly.
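A minimal sketch of pointing the replication setup at a DB server instead of the coordinator (hostnames, ports, and credentials below are placeholders, not values from this cluster; the option names follow the 3.3 replication module):

```
# connect arangosh directly to a DB-server endpoint (placeholder host/port),
# not to the coordinator endpoint you normally use
arangosh --server.endpoint tcp://dbserver1.example:8530 --server.username root

# then, inside arangosh, run the setup against that DB server:
# require("@arangodb/replication").setupReplication({
#   endpoint: "tcp://other-dbserver.example:8530",  // placeholder leader endpoint
#   username: "root",
#   password: "secret"                              // placeholder credentials
# });
```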

Related

Keep ActiveMQ running when losing connection to database

I have an instance of ActiveMQ 5.16.4 running that is using MySQL as a persistent data storage. Recently the MySQL server had some issues, and ActiveMQ lost its connection to MySQL. That caused multiple Spring microservices to throw errors because ActiveMQ wasn't working.
Is it possible to run ActiveMQ master/slave where the master and slave use separate persistence storage?
I have done some research and found "pure master slave", but the documentation says it is deprecated, not recommended for use, and removed in 5.8. It says to use shared storage instead, which I am trying to avoid (because my problem is precisely what happens if the storage itself goes down).
What are my options to keep running ActiveMQ if it loses connection to database?
If you're using ActiveMQ "Classic" (i.e. 5.x) then your only option is to use shared storage between the master and the slave. This could be a shared file system or a relational database. This, of course, is a single point of failure.
However, there are both file system and database technologies that can mitigate this risk. For example you could use a replicated file system (e.g. Ceph or GlusterFS) or a replicated database (e.g. MySQL).
You might also consider using ActiveMQ Artemis (i.e. the next-generation broker from ActiveMQ) which supports replication natively.
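As a sketch of the shared-storage approach for ActiveMQ "Classic" (broker names and paths here are placeholders): both brokers point their persistence adapter at the same directory, and whichever broker acquires the file lock becomes the active master.

```xml
<!-- broker configuration used on BOTH brokers; master election
     happens via the lock file inside the shared KahaDB directory -->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="brokerA">
  <persistenceAdapter>
    <!-- /shared/kahadb is a placeholder for a shared (ideally replicated) mount -->
    <kahaDB directory="/shared/kahadb"/>
  </persistenceAdapter>
</broker>
```

Clients would then use a failover URL such as `failover:(tcp://brokerA:61616,tcp://brokerB:61616)` so they automatically reconnect to whichever broker currently holds the lock.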

Dynamic GemFire Region Creation with PCC

I am using Pivotal GemFire 9.1.1 through Pivotal Cloud Cache 1.3.1 and ran into the following error while using the @EnableClusterConfiguration SDG annotation:
2018-11-17T16:30:35.279-05:00 [APP/PROC/WEB/0] [OUT] org.springframework.context.ApplicationContextException: Failed to start bean 'gemfireClusterSchemaObjectInitializer'; nested exception is org.apache.geode.cache.client.ServerOperationException: remote server on ac62ca98-0ec5-4a30-606b-1cc9(:8:loner):47710:a6159523:: The function is not registered for function id CreateRegionFunction
2018-11-17T16:30:35.279-05:00 [APP/PROC/WEB/0] [OUT] at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:184)
Finally, I ran into this post: https://github.com/spring-projects/spring-boot-data-geode/issues/15
Is there any other annotation I can use with Spring Boot 2+ that will help me create GemFire Regions dynamically?
Thanks!
Unfortunately, no; there is currently no other way to "dynamically" push cluster/server-side configuration from a Spring/GemFire cache client to a cluster of PCC servers running in PCF using SDG/SBDG.
This is now also due to an underlying issue, SBDG Issue #16 - "HTTP client does not authenticate when pushing cluster config from client to server using @EnableClusterConfiguration with PCC 1.5."
For the time being, you must manually create Regions (and Indexes) using the documentation provided by PCC.
I am sorry for any inconvenience or trouble this has caused you. This will be resolved very soon.
This does work in a local, non-managed context, even when starting your cluster (servers) using Gfsh. It just does not work in PCF using PCC, yet.
Regards.
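For the manual route, Regions and Indexes can be created up front with Gfsh against the cluster (all names and URLs below are placeholders; on PCC you typically reach Gfsh through the credentials and URL exposed in the service key):

```
gfsh> connect --use-http --url=https://cloudcache.example.com/gemfire/v1 --user=cluster_operator --password=secret
gfsh> create region --name=Customers --type=PARTITION_REDUNDANT
gfsh> create index --name=CustomerIdIdx --expression=id --region=/Customers
```

The client-side `@ClientRegionFactoryBean` / `ClientRegionShortcut.PROXY` beans then simply reference the pre-created server-side Regions by name.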

Meld error when setting up a new cluster

I am evaluating DataStax OpsCenter on a virtual machine to start managing/monitoring Cassandra. I am following the online docs to create cluster topology models via OpsCenter LCM, but the error message doesn't provide much information for me to continue. The job status shows:
error - MeldError, 400 Client Error: Bad Request for url: http://[ip_address]:8888/api/v1/lcm/internal/nodes/6185c776-9034-45b4-a54f-6eb9511274a2/package_information
Meld failed on name="testnode1" ssh-management-address="[ip_address]" node-id="6185c776-9034-45b4-a54f-6eb9511274a2" node-name="testnode1" job-id="1b792c69-bcca-489f-ad12-a6285ba84d59" stdout="Meld has started..." stderr=""
My question is: what might be wrong, and is there any hint on how to resolve it?
I am new to the Cassandra and DataStax communities, so please forgive me if I ask a silly question!
Q: I used to be a Buildbot user, and the DataStax agent looks like a Buildbot slave. Why don't we need to set up the agent on the remote machine for it to work with OpsCenter? Is the agent's working directory configured in OpsCenter?
The opscenterd.log: https://pastebin.com/TJsvmr6t
According to the tool compatibility matrix at https://docs.datastax.com/en/landing_page/doc/landing_page/compatibility.html#compatibilityDocument__opsc-compatibility , I actually used OpsCenter v5.2 for monitoring and basic DB operations. After some trial and error with the agent's .yaml and Cassandra 2.2's .conf, the Dashboard works!
Lessons learned:
OpsCenter 5.2 actually works with Cassandra 2.2, even though this combination is not listed in the compatibility table.
For beginners unsure where to start: install all the components on one machine first to get an idea of the minimal viable working setup, then configure the actual dev/test/production environment from there.
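For reference, the agent-side .yaml mentioned above is the agent's address.yaml; a minimal sketch (the IPs are placeholders, and the exact path depends on how the agent was installed):

```yaml
# address.yaml on each monitored node
stomp_interface: "10.0.0.10"   # IP where opscenterd is reachable (placeholder)
# hosts: ["10.0.0.21"]         # optional: local Cassandra node the agent connects to
```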

SAP HANA Vora distributed log service refused to start

I installed SAP HANA Vora on a 3-node MapR cluster. While trying to bring up the Vora services via the Vora Manager UI, I get the following error:
Error occurred while starting all services: vora-dlog refused to
start. Cannot continue Start All Jobs. Error: There are no health
checks registered for service vora-dlog.
The vora-manager log file displays the following error:
vora.vora-dlog: [c.xxxxxxx] : Error while creating dlog store.
nomad[xxxxx]: client: failed to query for node allocations: no known servers
nomad[xxxxx]: client:rpcproxy: No servers available.
All 3 nodes in the cluster have 2 IPs in different subnets. Can anyone suggest how to configure a health check for Consul? And what else could be wrong here?
The messages from the Vora Manager log file are not sufficient to understand the actual problem. Are there other messages from dlog before 'Error while creating dlog store.'? I have seen that message, for example, when the disk was full and the dlog could not create its local persistency.
Also, the 2 different subnets could cause an issue like the one you described. You can configure different network interface names on different nodes; however, on each node, all Vora services as well as the Vora Manager must use the same network interface name. If you use 2 different subnets, the configuration must allow network traffic between them. Could you give some additional information on your topology and network configuration?

Azure Cloud Service Deployment Error

I am trying to deploy a moderately sized project to Azure as a Cloud Service, but it gives me a fatal error. I am not able to figure out what the error means or what caused it.
Azure deployment error:
Role instances recycled for a certain amount of times during an update or upgrade operation.
This indicates that the new version of your service or the configuration settings you provided
when configuring the service prevent role instances from running.
The most likely reason for this is that your code throws an unhandled exception.
Please consider fixing your service or changing your configuration settings so that
role instances do not throw unhandled exceptions.
Then start another update or upgrade operation. Until you start another update or upgrade
operation, Windows Azure will continue trying to update your service to the new version or
configuration you provided
For this kind of deployment error, it is best to run your application in the Azure Compute Emulator first and only then deploy to the cloud; that way you can see the details of the unhandled exception. Also, don't forget to wrap your role's startup code in try/catch so that exceptions are logged instead of silently recycling the instance.