JBoss Cluster setup with Hudson? - testing

I want to have a Hudson setup that has two cluster nodes with JBoss. There is already a test machine with Hudson and it is running the nightly build and tests. At the moment the application is deployed on the Hudson box.
There are a couple of options in my mind. One could be to use the SCP plugin for Hudson to copy the ear file from the master to the cluster nodes. The other could be to set up Hudson slaves on the cluster nodes.
Any opinions, experiences or other approaches?
edit: I set up a slave, but it seems that I can't make a job run on more than one slave without copying the job. Am I missing something?

You are right: you can't run different build steps of one job on different nodes. However, a job can be configured to run on different slaves; Hudson then determines at execution time which node the job will run on.
You need to configure labels for your nodes. A node can have more than one label, and every job can also require more than one label.
Example:
Node 1 has the labels maven and db2
Node 2 has the labels maven and ant
Job 1 requires the label maven: it can run on Node 1 and Node 2
Job 2 requires the label ant: it can run on Node 2
Job 3 requires the labels maven and db2: it can run on Node 1
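For instance, Job 3's restriction corresponds to a label expression; in the job's config.xml it would look roughly like this (a sketch of the Jenkins/Hudson config format, not taken from the original answer):

    <!-- "Restrict where this project can be run" with expression maven&&db2 -->
    <assignedNode>maven&amp;&amp;db2</assignedNode>
    <canRoam>false</canRoam>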
If you need different build steps of one job to run on different nodes, you have to create more than one job and chain them. You trigger only the first job, which then triggers the subsequent jobs. Each following job can access the artifacts of the previous one. You can even run two jobs in parallel and automatically trigger the next job when both are done; for the parallel jobs you will need the Join Plugin.

If you want load balancing and central administration from Hudson (i.e. configuring projects, seeing which builds are currently running, etc.), you must run slaves.

Related

What is the best practice to run 2 pipelines on Gitlab

What is the best practice to run 2 pipelines on the same project?
pipeline_1: builds the jars and runs the test jobs; should run on each merge request.
pipeline_2: builds the jars and runs the e2e test jobs; should run once a day.
Can I create 2 pipelines on the same project, where one is scheduled and the other runs on each merge request? Part of the build jobs are common to both pipelines, but the test jobs are different.
Each "stage" in the .gitlab-ci.yml file is considered a pipeline, so this should just be a matter of adding the correct scripting for each stage.
On pipeline_2, you could set it to a pipeline schedule and make it dependent on the success of pipeline_1. That's what I would do.
Reference: https://docs.gitlab.com/ee/ci/parent_child_pipelines.html
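As an illustration, a minimal .gitlab-ci.yml sketch along those lines (job names and scripts are placeholders) shares the build job and splits the test jobs by pipeline source:

    stages:
      - build
      - test

    build-jars:                 # common to both pipelines
      stage: build
      script: ./gradlew assemble
      rules:
        - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
        - if: '$CI_PIPELINE_SOURCE == "schedule"'

    unit-tests:                 # pipeline_1: every merge request
      stage: test
      script: ./gradlew test
      rules:
        - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'

    e2e-tests:                  # pipeline_2: nightly schedule
      stage: test
      script: ./run-e2e-tests.sh
      rules:
        - if: '$CI_PIPELINE_SOURCE == "schedule"'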

How to interrupt triggered gitlab pipelines

I'm using a webhook to trigger my GitLab pipeline. Sometimes this trigger fires a bunch of times, but only the last run needs to happen (static site generation). Right now, it runs as many pipelines as were triggered. Each pipeline takes 20 minutes, so sometimes they keep running for the rest of the day, which is completely unnecessary.
https://docs.gitlab.com/ee/ci/yaml/#interruptible and https://docs.gitlab.com/ee/user/project/pipelines/settings.html#auto-cancel-pending-pipelines only work on pushed commits, not on triggers
A similar problem is discussed in gitlab-org/gitlab-foss issue 41560
Example of a use-case:
I want to always push the same Docker "image:tag", for example: "myapp:dev-CI". The idea is that "myapp:dev-CI" should always be the latest Docker image of the application that matches the HEAD of the develop branch.
However, if 2 commits are pushed, then 2 pipelines are triggered and executed in parallel, and the latest triggered pipeline often finishes before the oldest one.
As a consequence the pushed Docker image is not the latest one.
Proposition:
As a workaround on *nix, you can fetch the running pipelines from the API and wait until they have finished, or cancel them with the same API.
In the example below, the script checks for running pipelines with lower IDs on the same branch and sleeps.
The jq package is required for this code to work.
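A sketch of such a script, using GitLab's predefined CI variables; it assumes an access token with read_api scope is exposed to the job as API_TOKEN, and jq must be installed:

    # Wait while any older pipeline on the same branch is still running.
    while true; do
      OLDER=$(curl -s --header "PRIVATE-TOKEN: $API_TOKEN" \
        "$CI_API_V4_URL/projects/$CI_PROJECT_ID/pipelines?ref=$CI_COMMIT_REF_NAME&status=running" \
        | jq "[.[] | select(.id < $CI_PIPELINE_ID)] | length")
      [ "$OLDER" -eq 0 ] && break
      echo "Waiting for $OLDER older pipeline(s) on $CI_COMMIT_REF_NAME ..."
      sleep 30
    done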
Or:
Create a new runner instance
Configure it to run jobs marked as deploy with concurrency 1
Add the deploy tag to your CD job.
It's now impossible for two deploy jobs to run concurrently.
To guard against a situation where an older pipeline may run after a new one, add a check in your deploy job to exit if the current pipeline ID is less than the current deployment.
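Such a guard might look like the following sketch, assuming the id of the last deployed pipeline is recorded on the deployment target (the host and path here are hypothetical):

    # Skip deployment if an equal or newer pipeline has already deployed.
    LAST_DEPLOYED=$(ssh deploy@target cat /var/lib/myapp/deployed_pipeline_id 2>/dev/null || echo 0)
    if [ "$CI_PIPELINE_ID" -le "$LAST_DEPLOYED" ]; then
      echo "Pipeline $LAST_DEPLOYED already deployed; skipping."
      exit 0
    fi
    ./deploy.sh
    ssh deploy@target "echo $CI_PIPELINE_ID > /var/lib/myapp/deployed_pipeline_id"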
Slight modification:
For me, one slight change: I kept the global concurrency setting the same (8 runners on my machine so concurrency: 8).
But, I tagged one of the runners with deploy and added limit: 1 to its config.
I then updated my .gitlab-ci.yml to use the deploy tag in my deploy job.
Works perfectly: my code_tests job can run simultaneously on 7 runners but deploy is "single threaded" and any other deploy jobs go into pending state until that runner is freed up.
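For reference, the job side of this setup is just a tag in .gitlab-ci.yml (job name and script are placeholders; the limit: 1 setting lives in the runner's config.toml):

    deploy:
      stage: deploy
      tags:
        - deploy        # routes the job to the single-concurrency runner
      script:
        - ./deploy.sh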

Flink job on EMR runs only on one TaskManager

I am running EMR cluster with 3 m5.xlarge nodes (1 master, 2 core) and Flink 1.8 installed (emr-5.24.1).
On master node I start a Flink session within YARN cluster using the following command:
flink-yarn-session -s 4 -jm 12288m -tm 12288m
That is the maximum memory and number of slots per TaskManager that YARN lets me set up, based on the selected instance types.
During startup there is a log:
org.apache.flink.yarn.AbstractYarnClusterDescriptor - Cluster specification: ClusterSpecification{masterMemoryMB=12288, taskManagerMemoryMB=12288, numberTaskManagers=1, slotsPerTaskManager=4}
This shows that there is only one TaskManager. Also, when looking at the YARN NodeManager, I see that there is only one container running on one of the core nodes. The YARN ResourceManager shows that the application is using only 50% of the cluster.
With the current setup I would assume that I can run a Flink job with parallelism set to 8 (2 TaskManagers * 4 slots), but if the submitted job has a parallelism higher than 4, it fails after a while because it could not get the desired resources.
If the job parallelism is set to 4 (or less), the job runs as it should. Looking at CPU and memory utilisation with Ganglia shows that only one node is utilised, while the other stays flat.
Why does the application run on only one node, and how can I utilise the other node as well? Do I need to configure something in YARN so that Flink is set up on the other node too?
In previous versions of Flink there was a startup option -n which was used to specify the number of TaskManagers. That option is now obsolete.
When you're starting a 'Session Cluster', you should see only one container which is used for the Flink Job Manager. This is probably what you see in the YARN Resource Manager. Additional containers will automatically be allocated for Task Managers, once you submit a job.
How many cores do you see available in the Resource Manager UI?
Don't forget that the Job Manager also uses cores out of the available 8.
You need to do a little math here.
For example, if you had set the number of slots to 2 per TM and used less memory per TM, then a job submitted with a parallelism of 6 should have worked with 3 TMs.
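As an illustration of that sizing (the memory values here are hypothetical, not tuned):

    # Smaller TaskManagers with 2 slots each let YARN fit more TMs:
    flink-yarn-session -s 2 -jm 2048m -tm 6144m -d
    # A job with parallelism 6 then occupies 3 TMs x 2 slots:
    flink run -p 6 my-job.jar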

Jenkins - How to run two jobs in parallel (1 functional-testing job and 1 Selenium job) on the same slave node

I want to run two jobs in parallel on the same slave.
Job 1 is a functional-testing job that doesn't require a browser, and Job 2 is a Selenium job which requires a browser for testing.
As for running the jobs on the same slave, you can use the option Restrict where this project can be run, assuming you have the Jenkins slave configured in your setup.
For running the jobs in parallel: are you trying to do this via a Jenkinsfile or via freestyle jobs? For a Jenkinsfile, you can use the parallel stages feature, as sketched below. For freestyle jobs, I would suggest adding one more job (for example, a setup job) and using it to trigger your two jobs at the same time.
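A minimal declarative Jenkinsfile sketch of the parallel-stages approach (the label and script names are placeholders; the node needs at least two executors for the branches to actually run side by side):

    pipeline {
        agent { label 'my-slave' }
        stages {
            stage('Tests') {
                parallel {
                    stage('Functional Tests') {
                        steps { sh './run-functional-tests.sh' }
                    }
                    stage('Selenium Tests') {
                        steps { sh './run-selenium-tests.sh' }
                    }
                }
            }
        }
    }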

Jenkins multiconfiguration project handle concurrent device usage

Case
I have a Jenkins slave which runs Calabash tests on mobile devices (Android, iOS). To control on which machines the tests run (the Mac for iOS or the Linux box for Android), I also use the Throttle Concurrent Builds plugin. This way I separate between the Android and Mac Jenkins slaves that the devices are hooked to.
I use a mapping table and a self-written bash script to address a device by name and execute a test on that specific slave. The mapping table maps the name to the device id (or IP for iOS).
The architecture is as follows:
[Master]--(Slave-iOS)---------iPhone6
   |           |--------------iPhone5
   |
   |------(Slave-Android)-----HTCOne
               |--------------Nexus
               |--------------G4
To hand over the device to the bash script I use the Jenkins Matrix Project Plugin, which lets me create a list of devices and test cases like:
            HTCOne   Nexus   G4
Run            x       x      x
Delete         x       x      x
CreateUser     x       x      x
Sadly, this list can only be executed sequentially. I now also want to run tests on multiple devices in parallel, in any combination.
Question
I am looking for a Jenkins plugin that handles device allocation. If a trigger needs a specific device, it should wait until that device is available and the test can be executed. The plugin should integrate with the shell execution in Jenkins.
A big plus would be, if it can be combined with the Matrix Project Plugin!
What I looked into so far:
Exclusion Plugin,
Throttle Concurrent Builds Plug-in [used to specify the slave],
Locks and Latches plugin:
for all of the above, I don't know how to link them to the matrix configuration and obtain a device dynamically, nor how to get the locked-resource information into my script.
Port Allocator Plugin: not tested, but it seems to have the same problem.
External Resource Dispatcher: seems to allocate only one resource, and doesn't find anything if it is a matrix configuration.
Related questions I found, which helped but didn't solve the problem:
How to prevent certain Jenkins jobs from running simultaneously?
Jenkins: group jobs and limit build processors for this group
Jenkins to not allow the same job to run concurrently on the same node?
How do I ensure that only one of a certain category of job runs at once in Hudson?
Disable Jenkins Job from Another Job
If the Throttle Concurrent Builds plugin doesn't work as required in your multi-configuration project, try the
Exclusion Plugin with a dynamic resource name, like: SEMAPHORE_MATRIX_${NODE_NAME}
Then add a build step "Critical block start" (and an optional "Critical block end" step), which will hold the execution of this build block until SEMAPHORE_MATRIX_${NODE_NAME} is no longer in use by any other job, including the current matrix's child jobs.
(... Build steps to be run only when SEMAPHORE_MATRIX_${NODE_NAME} is available ...)