Aerospike migrations issue

We have a 10-node cluster running version 3.9 with cold-start-empty false, on which we did the following activity:
Added a node, 10.0.29.212, with community build 3.13.0.10.
Waited for migrations to finish (new cluster size 11). There were incoming migrations only on the 10.0.29.212 node, as expected.
Added 2 nodes, 10.0.29.190 and 10.0.29.135, simultaneously with community build 3.13.0.10.
Waited for migrations to finish (new cluster size 13). Incoming migrations were only on these two nodes, as expected.
Added a node, 10.0.29.214, a few hours later with community build 3.13.0.10.
Immediately after the node was added, the total master objects in the cluster dropped, incoming migrations started on all nodes, and we started getting timeouts on the cluster.
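For context, this is roughly how migration progress can be watched while such an activity is going on (a sketch using the standard Aerospike tools; exact counter names vary between 3.x builds):
# cluster-wide statistics filtered to migration counters (asadm interactive tool)
asadm -e "show statistics like migrate"
# raw migrate_* counters reported by a single node
asinfo -h 10.0.29.212 -v "statistics" | tr ';' '\n' | grep migrate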

Creating Table in Ignite using sqlline

I am new to Ignite and am trying to set it up on macOS, and then I want to create a table in Ignite using sqlline. I am using the following steps:
1. Download Ignite.
2. Set the Ignite path: export IGNITE_HOME="/Users/username/apps/apache-ignite-2.7.5-bin"
3. Change to the Ignite folder: cd /Users/username/apps/apache-ignite-2.7.5-bin
4. Start the Ignite node:
apache-ignite-2.7.5-bin % bin/ignite.sh
I can see the following in the last lines of the log:
[13:11:12] Ignite node started OK (id=c5598906)
[13:11:12] Topology snapshot [ver=1, locNode=c5598906,servers=1, clients=0, state=ACTIVE, CPUs=12, offheap=3.2GB, heap=3.6GB]
5. Open another new terminal:
pwd
/Users/username/apps/apache-ignite-2.7.5-bin
6. Start sqlline:
apache-ignite-2.7.5-bin % ./sqlline.sh --verbose=true -u jdbc:ignite:thin://127.0.0.1/
But I am getting the error message below:
zsh: no such file or directory: ./sqlline.sh
apache-ignite-2.7.5-bin %
Could someone please guide me on why I am not able to start sqlline, and how I should do it?
I updated the version from 2.7 to 2.8 and it worked.
Try specifying ./bin/sqlline.sh :)
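In other words, sqlline.sh lives under bin/ in the distribution, so the command from the question becomes (same thin-driver JDBC URL, only the path changes):
apache-ignite-2.7.5-bin % ./bin/sqlline.sh --verbose=true -u jdbc:ignite:thin://127.0.0.1/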

Flink job on EMR runs only on one TaskManager

I am running an EMR cluster with 3 m5.xlarge nodes (1 master, 2 core) and Flink 1.8 installed (emr-5.24.1).
On the master node I start a Flink session within the YARN cluster using the following command:
flink-yarn-session -s 4 -jm 12288m -tm 12288m
That is the maximum memory and number of slots per TaskManager that YARN lets me set up based on the selected instance types.
During startup there is a log line:
org.apache.flink.yarn.AbstractYarnClusterDescriptor - Cluster specification: ClusterSpecification{masterMemoryMB=12288, taskManagerMemoryMB=12288, numberTaskManagers=1, slotsPerTaskManager=4}
This shows that there is only one TaskManager. Also, when looking at the YARN NodeManager I see that there is only one container running on one of the core nodes. The YARN ResourceManager shows that the application is using only 50% of the cluster.
With the current setup I would assume that I can run a Flink job with parallelism set to 8 (2 TaskManagers * 4 slots), but if the submitted job has its parallelism set to more than 4, it fails after a while because it cannot get the desired resources.
If the job parallelism is set to 4 (or less), the job runs as it should. Looking at CPU and memory utilisation with Ganglia shows that only one node is utilised, while the other stays flat.
Why does the application run on only one node, and how can I utilise the other node as well? Do I need to set up something in YARN so that it starts Flink on the other node as well?
In previous versions of Flink there was a startup option -n which was used to specify the number of TaskManagers. That option is now obsolete.
When you're starting a 'Session Cluster', you should see only one container which is used for the Flink Job Manager. This is probably what you see in the YARN Resource Manager. Additional containers will automatically be allocated for Task Managers, once you submit a job.
How many cores do you see available in the Resource Manager UI?
Don't forget that the Job Manager also uses cores out of the available 8.
You need to do a little "Math" here.
For example, if you had set the number of slots to 2 per TM, with less memory per TM, and then submitted a job with parallelism of 6, it should have worked with 3 TMs.
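A sketch of that suggestion (the memory values below are assumptions picked so that three TaskManagers fit on the two core nodes; adjust them to what YARN actually offers, and replace the jar path with your own job):
# session with 2 slots per TaskManager and a smaller TM container, started detached
flink-yarn-session -s 2 -jm 2048m -tm 6144m -d
# a job submitted with parallelism 6 then needs 6 / 2 = 3 TaskManager containers
flink run -p 6 path/to/your-job.jar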

Does Ignite on YARN support Node Labels?

I know Ignite still does not support setting up custom YARN queues, per this JIRA ticket - https://issues.apache.org/jira/browse/IGNITE-2738 . I cannot find any information on whether Ignite supports running its containers within specified YARN node labels.
Currently in our cluster we have labelled all of our nodes, and when attempting to start an Ignite application, the app is stuck in the Pending state because it is waiting for resources to be assigned to the AM, with the AM container's node label expression defaulting to <DEFAULT_PARTITION> .
Is there a way to supply node labels for Ignite on YARN?
ignite-yarn doesn't seem to set node labels.
Have you tried specifying them externally?
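One external workaround to try (an assumption, not something ignite-yarn documents) is to put a default node label expression on the queue the Ignite application actually lands in; since ignite-yarn cannot target a custom queue per the JIRA above, that would be the default queue, e.g. in capacity-scheduler.xml:
<!-- hypothetical label name "ignite_label"; the queue stays root.default because ignite-yarn cannot pick another queue -->
<property>
  <name>yarn.scheduler.capacity.root.default.accessible-node-labels</name>
  <value>ignite_label</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.default.accessible-node-labels.ignite_label.capacity</name>
  <value>100</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.default.default-node-label-expression</name>
  <value>ignite_label</value>
</property>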

Ignite on YARN - IGNITE_RUN_CPU_PER_NODE not obeyed

When I start the Ignite on YARN application with one CPU per node, it works as expected and launches containers; however, when I try to start it with 8 cores per node, the containers in YARN launch and get killed immediately, leaving only the AM running.
These settings work:
IGNITE_RUN_CPU_PER_NODE=1
IGNITE_MEMORY_PER_NODE=122880
IGNITE_NODE_COUNT=3
These settings don't work:
IGNITE_RUN_CPU_PER_NODE=8
IGNITE_MEMORY_PER_NODE=122880
IGNITE_NODE_COUNT=3
I have also tried other numbers of CPUs per node, but it only works with 1.
How can I make this work? The logs are non-existent for each of the containers that were launched and killed. I can only see from the YARN RM that they were allocated and released.
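One thing I am trying to rule out (an assumption on my part, not a confirmed cause) is whether YARN's per-container or per-node vcore limits are below 8, since the scheduler will not hand out containers larger than its configured maximums; the settings can be inspected like this (the config path is the usual Hadoop one and may differ on your cluster):
# per-container vcore ceiling, if set explicitly (otherwise the default applies)
grep -A1 yarn.scheduler.maximum-allocation-vcores /etc/hadoop/conf/yarn-site.xml
# vcores each NodeManager advertises to the ResourceManager, if set explicitly
grep -A1 yarn.nodemanager.resource.cpu-vcores /etc/hadoop/conf/yarn-site.xml
# capacity actually reported for a node (replace <node-id> with an id from "yarn node -list")
yarn node -status <node-id>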

JBoss Cluster setup with Hudson?

I want to have a Hudson setup that has two cluster nodes with JBoss. There is already a test machine with Hudson, and it is running the nightly build and tests. At the moment the application is deployed on the Hudson box.
There are a couple of options in my mind. One could be to use the SCP plugin for Hudson to copy the EAR file from the master to the cluster nodes. The other option could be to set up Hudson slaves on the cluster nodes.
Any opinions, experiences or other approaches?
edit: I set up a slave, but it seems that I can't make a job run on more than one slave without copying the job. Am I missing something?
You are right. You can't run different build steps of one job on different nodes. However, a job can be configured to run on different slaves; Hudson then determines at execution time which node the job will run on.
You need to configure labels for your nodes. A node can have more than one label, and a job can also require more than one label.
Example:
Node 1 has the labels maven and db2
Node 2 has the labels maven and ant
Job 1 requires the label maven -> it can run on Node 1 and Node 2
Job 2 requires the label ant -> it can run on Node 2
Job 3 requires the labels maven and db2 -> it can run on Node 1
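For reference, the label requirement ends up in the job's config.xml along these lines (a rough excerpt from memory, not a full config; the expression syntax depends on the Hudson version):
<!-- job tied to nodes that carry both labels instead of roaming freely -->
<assignedNode>maven&amp;&amp;db2</assignedNode>
<canRoam>false</canRoam>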
If you need different build steps of one job to run on different nodes, you have to create more than one job and chain them. You only trigger the first job, which then triggers the subsequent jobs. Each following job can access the artifacts of the previous one. You can even run two jobs in parallel and, when both are done, automatically trigger the next job; you will need the Join Plugin for the parallel jobs.
If you want load balancing and central administration from Hudson (i.e. configuring projects, seeing which builds are currently running, etc.), you must run slaves.