Jenkins multi-configuration project: handling concurrent device usage in testing

Case
I have a Jenkins slave which runs calabash tests on mobile devices (Android, iOS). To control on which machine the tests run (the Mac for iOS or the Linux box for Android), I also use the Throttle Concurrent Builds Plug-in. This way I distinguish between the Android and Mac Jenkins slaves the devices are hooked to.
I use a mapping table and a self-written bash script to call a device by name and execute a test on that specific slave. The mapping table maps the name to the device ID (or IP for iOS).
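A minimal sketch of that mapping script (the device names, IDs/IPs and the test-runner call here are placeholders, just to illustrate the name-to-device lookup):

#!/bin/bash
# Sketch of the name -> device-id lookup; names, IDs/IPs and the
# run_calabash_test.sh helper are placeholders, not the real values.
declare -A DEVICE_MAP=(
  [HTCOne]="HT4A1JT01234"     # adb serial
  [Nexus]="064c81a3"          # adb serial
  [iPhone6]="192.168.1.42"    # IP used for the iOS run
)

DEVICE_NAME="$1"
TEST_CASE="$2"
DEVICE_ID="${DEVICE_MAP[$DEVICE_NAME]:?unknown device: $DEVICE_NAME}"

# Hand the resolved ID/IP over to whatever actually drives calabash:
./run_calabash_test.sh "$DEVICE_ID" "$TEST_CASE"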
The architecture is as follows:
[Master]--(Slave-iOS)---------iPhone6
    |           |-------------iPhone5
    |
    |-----(Slave-Android)-----HTCOne
                |-------------Nexus
                |-------------G4
To hand over the device to the bash script I use the Jenkins Matrix Project Plugin, which lets me create a list of devices and test cases like:
            HTCOne   Nexus   G4
Run            x       x      x
Delete         x       x      x
CreateUser     x       x      x
Sadly, this list can only be executed sequentially. I now want to run the tests on multiple devices in parallel and in any combination.
Question
I am looking for a Jenkins plugin that handles device allocation. If a trigger needs a specific device, it should wait until that device is available and the test can be executed. The plugin should integrate with shell execution in Jenkins.
A big plus would be if it could be combined with the Matrix Project Plugin!
What I have looked into so far:
- Exclusion Plugin
- Throttle Concurrent Builds Plug-in [used to specify the slave]
- Locks and Latches plugin
For the three listed above, I don't know how to link them to the matrix configuration and obtain a device dynamically, nor how to get the locked-resource information into my script.
- Port Allocator Plugin: not tested, but it seems to have the same problem.
- External Resource Dispatcher: seems to allocate only one resource and does not find anything if it is a matrix configuration.
Related questions I found, which helped but didn't solve the problem:
How to prevent certain Jenkins jobs from running simultaneously?
Jenkins: group jobs and limit build processors for this group
Jenkins to not allow the same job to run concurrently on the same node?
How do I ensure that only one of a certain category of job runs at once in Hudson?
Disable Jenkins Job from Another Job

If the Throttle Concurrent Builds Plugin doesn't work as required in your multi-configuration project, try the
Exclusion Plugin with a dynamic resource name, like: SEMAPHORE_MATRIX_${NODE_NAME}
Then add a build step "Critical block start" (and an optional "Critical block end" step), which will hold execution of this build block until SEMAPHORE_MATRIX_${NODE_NAME} is no longer in use by any other job, including the current matrix's child jobs.
(... Build steps to be run only when SEMAPHORE_MATRIX_${NODE_NAME} is available ...)
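As a rough sketch of what such a build step could contain, assuming a hypothetical DEVICE matrix axis and calling the device script from the question (the script name here is a placeholder):

#!/bin/bash
# Runs only once SEMAPHORE_MATRIX_${NODE_NAME} has been acquired.
# DEVICE and TEST_CASE are hypothetical matrix axes; the script name is a placeholder.
echo "Lock held on ${NODE_NAME}; running ${TEST_CASE} on ${DEVICE}"
./run_device_test.sh "${DEVICE}" "${TEST_CASE}"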

Related

How to run multiple configurations in IntelliJ in sequence

My application has configurations/SBT Tasks for
- building UI
- building server
- running UI tests
- running server-side tests
- creating a distribution version of the application.
I want to create a configuration which runs all of these in order but stops if any of the previous configurations fails.
I created a configuration which uses other configurations, but it doesn't move on to the next configuration after the first one.
I also tried to create a Compound configuration but it is not possible to specify order/sequence as it runs all configurations in parallel.
Is there a way to run multiple configurations in sequence depending on the outcome of previous configuration?

How to interrupt triggered GitLab pipelines

I'm using a webhook to trigger my GitLab pipeline. Sometimes this trigger fires a bunch of times, but my pipeline only has to run the last one (static site generation). Right now, it will run as many pipelines as I have triggered. My pipeline takes 20 minutes, so sometimes it's running for the rest of the day, which is completely unnecessary.
https://docs.gitlab.com/ee/ci/yaml/#interruptible and https://docs.gitlab.com/ee/user/project/pipelines/settings.html#auto-cancel-pending-pipelines only work on pushed commits, not on triggers
A similar problem is discussed in gitlab-org/gitlab-foss issue 41560
Example of a use-case:
I want to always push the same Docker "image:tag", for example: "myapp:dev-CI". The idea is that "myapp:dev-CI" should always be the latest Docker image of the application that matches the HEAD of the develop branch.
However, if 2 commits are pushed, then 2 pipelines are triggered and executed in parallel. The latest triggered pipeline then often finishes before the oldest one.
As a consequence the pushed Docker image is not the latest one.
Proposal:
As a workaround on *nix, you can get the running pipelines from the API and wait until they have finished, or cancel them with the same API.
In the example below, the script checks for running pipelines with lower IDs on the same branch and sleeps until they are done.
The jq package is required for this code to work.
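A minimal sketch of such a script, assuming curl and jq are available and an API token with read access is exposed as GITLAB_API_TOKEN (the CI_* variables are GitLab's predefined ones):

#!/bin/bash
# Sketch only: wait until no older pipeline is still running for this branch.
# GITLAB_API_TOKEN is an assumed variable holding a token with API read access.
set -euo pipefail

API="${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/pipelines"

while true; do
  # Count running pipelines on this branch that are older than the current one.
  older=$(curl --silent --header "PRIVATE-TOKEN: ${GITLAB_API_TOKEN}" \
      "${API}?ref=${CI_COMMIT_REF_NAME}&status=running&per_page=100" \
    | jq --argjson me "${CI_PIPELINE_ID}" '[.[] | select(.id < $me)] | length')

  if [ "${older}" -eq 0 ]; then
    break
  fi

  echo "Waiting for ${older} older running pipeline(s) on ${CI_COMMIT_REF_NAME}..."
  sleep 30
done
# (Cancelling instead of waiting would use POST .../pipelines/<id>/cancel.)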
Or:
- Create a new runner instance.
- Configure it to run jobs tagged deploy with concurrency 1.
- Add the deploy tag to your CD job.
It is now impossible for two deploy jobs to run concurrently.
To guard against a situation where an older pipeline might run after a newer one, add a check to your deploy job that exits if the current pipeline ID is lower than that of the current deployment, as in the sketch below.
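A minimal sketch of that guard, assuming the ID of the last deployed pipeline is recorded somewhere the job can read it (the file path is a made-up placeholder):

#!/bin/bash
# Sketch only: skip the deployment if a newer pipeline has already deployed.
# /srv/myapp/LAST_DEPLOYED_ID is a hypothetical location for the recorded ID.
last_deployed=$(cat /srv/myapp/LAST_DEPLOYED_ID 2>/dev/null || echo 0)

if [ "${CI_PIPELINE_ID}" -lt "${last_deployed}" ]; then
  echo "Pipeline ${CI_PIPELINE_ID} is older than the deployed ${last_deployed}; skipping."
  exit 0
fi

# ... perform the actual deployment here ...

echo "${CI_PIPELINE_ID}" > /srv/myapp/LAST_DEPLOYED_ID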
Slight modification:
For me, only one thing changed: I kept the global concurrency setting the same (8 runners on my machine, so a global concurrency of 8).
But, I tagged one of the runners with deploy and added limit: 1 to its config.
I then updated my .gitlab-ci.yml to use the deploy tag in my deploy job.
Works perfectly: my code_tests job can run simultaneously on 7 runners but deploy is "single threaded" and any other deploy jobs go into pending state until that runner is freed up.
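For reference, registering such a dedicated runner might look roughly like this (URL and token are placeholders; the per-runner cap lives in config.toml):

# Sketch only: register a runner dedicated to deploy jobs (placeholder URL/token).
gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "REGISTRATION_TOKEN" \
  --executor "shell" \
  --description "deploy-runner" \
  --tag-list "deploy"

# Then, in that runner's [[runners]] section of config.toml, set:
#   limit = 1
# while leaving the global `concurrent` value (e.g. 8) untouched.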

How do I build the OpenThread stack for the Thread Leader role?

I am new to OpenThread. I am trying to build Thread Leader and end devices.
End devices should not have routing capability. I built the Thread stack for an NXP target with BORDER_ROUTER=1. Under the output directory there are five binaries (ot-cli-ftd, ot-cli-mtd, ot-ncp-ftd, ot-ncp-mtd, ot-ncp-radio). I would like to know which binary should be placed on the Thread Leader and which on the end device.
procedure followed:
./configure --enable-commissioner
make
make -f examples/Makefile-kw41z BORDER_ROUTER=1
If my procedure is wrong (I'm pretty sure it is), how do I build for the Thread Leader and end device roles? Which switches should be used when I run make?
All Thread Routers support the Leader role. The Full Thread Device (FTD) builds support the Router and Leader roles. The FTD binaries are generated using the default build configuration - no need to specify any additional build parameters.
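A plain example build is enough (output paths below are approximate for the KW41Z example); flash the FTD binary on devices that may become a Router/Leader and the MTD binary on end devices:

# Default example build for the KW41Z platform; no BORDER_ROUTER or other
# role-specific switches are needed for the Router/Leader and End Device roles.
make -f examples/Makefile-kw41z

# output/kw41z/bin/ot-cli-ftd  -> Full Thread Device: can act as Router and become Leader
# output/kw41z/bin/ot-cli-mtd  -> Minimal Thread Device: end device, no routing capability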

Why would a native program run fine when executed directly, but fail with a seg fault when submitted through Condor?

I have a third-party library that I'm attempting to incorporate into a simulation. We have the static library (.a), along with all of its runtime dependencies (shared objects). I've created a very simple application (in C) that is linked against the library. All it does is call an initialization function that is part of the third-party library's API, and then it exits. When I run it directly from the command line, it works fine. If I submit the executable to our Condor grid, it fails with a seg fault in strncpy (libc.so.6). I've forced Condor to only run the executable on a particular machine, and if I run it directly on that machine, it works fine.
I'm mostly a Java programmer... limited amount of native coding experience. I'm familiar with tools such as nm, ldd, catchsegv, etc... to the point where I can run them. I don't really know where to start looking for an issue though.
I've run ldd directly on the executing machine, and via a script submitted through condor, along with my executable. ldd reports the same files in both cases.
I don't understand how running it directly would work, but it would fail being run by condor. The process that ultimately executes the program, condor_startd, is a process that starts as root, and changes its effective uid to the submitter. Perhaps this has something to do with it?
I don't know why this would cause an issue, but the culprit was the LANG environment variable. It was not set when running under Condor, but was set to en_US.UTF-8 when running locally. Adding this value to the Condor execution environment fixed the problem.
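One way to do that is in the submit description (e.g. with the environment command, or getenv = True to copy the submitter's environment); a wrapper script works as well. A minimal sketch of such a wrapper, with the binary name as a placeholder:

#!/bin/bash
# Sketch only: wrapper submitted to Condor instead of the raw binary, so the job
# sees the same locale as an interactive shell. "my_simulation" is a placeholder.
export LANG=en_US.UTF-8
exec ./my_simulation "$@"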

JBoss Cluster setup with Hudson?

I want to have a Hudson setup that has two cluster nodes with JBoss. There is already a test machine with Hudson and it is running the nightly build and tests. At the moment the application is deployed on the Hudson box.
There are a couple of options in my mind. One could be to use the SCP plugin for Hudson to copy the EAR file from the master to the cluster nodes. The other option could be to set up Hudson slaves on the cluster nodes.
Any opinions, experiences or other approaches?
Edit: I set up a slave, but it seems that I can't make a job run on more than one slave without copying the job. Am I missing something?
You are right: you can't run different build steps of one job on different nodes. However, a job can be configured to run on different slaves; Hudson then determines at execution time which node the job will run on.
You need to configure labels for your nodes. A node can have more than one label, and every job can also require more than one label.
Example:
Node 1 has labels maven and db2
Node 2 has labels maven and ant

Job 1 requires label maven
  -> can run on Node 1 and Node 2
Job 2 requires label ant
  -> can run on Node 2
Job 3 requires labels maven and db2
  -> can run on Node 1
If you need different build steps of one job to run on different nodes, you have to create more than one job and chain them. You only trigger the first job, which then triggers the subsequent jobs. A downstream job can access the artifacts of the previous job. You can even run two jobs in parallel and, when both are done, automatically trigger the next job. You will need the Join Plugin for the parallel jobs.
If you want load balancing and central administration from Hudson (i.e., configuring projects, seeing which builds are currently running, etc.), you must run slaves.