My application has configurations/SBT Tasks for
- building UI
- building server
- running UI tests
- running server-side tests
- creating a distribution version of the application.
I want to create a configuration which runs all of these in order but stops if any previous configuration fails.
I created a configuration which uses the other configurations, but it doesn't move on to the next configuration after the first one.
I also tried to create a Compound configuration, but it is not possible to specify an order/sequence there, as it runs all configurations in parallel.
Is there a way to run multiple configurations in sequence, depending on the outcome of the previous configuration?
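On the sbt side, one hedged option (a minimal sketch, assuming sbt 0.13.8+ and that each step is exposed as a task; the task names below are made up) is to chain the steps with Def.sequential, which runs them strictly in order and stops the chain as soon as one task fails:

// build.sbt -- hypothetical task names; assume these tasks are
// already defined elsewhere in your build
lazy val buildUI     = taskKey[Unit]("Builds the UI")
lazy val buildServer = taskKey[Unit]("Builds the server")
lazy val testUI      = taskKey[Unit]("Runs the UI tests")
lazy val testServer  = taskKey[Unit]("Runs the server-side tests")
lazy val dist        = taskKey[Unit]("Creates the distribution")
lazy val buildAll    = taskKey[Unit]("Runs all of the above in order")

// Def.sequential evaluates the tasks in the given order;
// a failure in any task aborts the remaining ones.
buildAll := Def.sequential(buildUI, buildServer, testUI, testServer, dist).value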
I have a gitlab-ci.yml file which I have inherited, and I have a local GitLab server running on my laptop. I have managed to create several GitLab runners and kick off this inherited pipeline, which gets immediately stuck. The error I am getting is:
...because you don't have any active runners online or available with any of these tags assigned to them: sometag
I have pieced together that the gitlab-ci.yml file references several tags, and if there is a runner with a given tag, that runner will pick up my pipeline. But why do I need this control (or hassle, more like it)? What does it matter which runner runs my pipeline? Do I need to closely examine the gitlab-ci.yml file and, based on that, make some special runner for it?
After modifying my runners and giving them the missing tags, I am still getting the same error. Looking at the runner API results, where it says "online" it shows "null". What does that mean? How do I make this runner "online"?
There may be several runners, which will have different executors set up and thus different capabilities. So the best practice is to use tags in the gitlab-ci.yml file to run each job on a particular runner.
In order to bring your runner online, go to the server where that particular GitLab runner is installed and restart the gitlab-runner service using gitlab-runner restart, either as the user who installed the runner or as root.
Sometimes it can happen that you have changed or added tags for the runner through the GitLab UI, but the same tags have not been saved in the config.toml file. (The config.toml file stores the gitlab-runner configuration. More details here: https://docs.gitlab.com/runner/configuration/advanced-configuration.html)
In this particular case, you have to go to the server where the gitlab-runner is installed, modify the tags in the config.toml file, and then restart the gitlab-runner service. If everything goes well, you will see the runner online in the GitLab UI.
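On the runner host, two standard gitlab-runner subcommands are useful here (a sketch; run them as the user the runner was installed under, or as root):

# run on the machine where the runner is installed
sudo gitlab-runner verify    # checks that the registered runners can connect to the GitLab server
sudo gitlab-runner restart   # restarts the service so config.toml changes take effect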
How can I run different CI deployment scripts on merge to master depending on the labels attached to the merge request?
I have a repository from which I build different versions of my software. I keep it in one repository, as the systems share 90% of the code, but there are differences that definitely need code modifications. On merge requests, all versions are built and a suite of tests is run. Usually I want to deploy on accepting the merge request.
As the changes are not always relevant for all systems, I would like to attach labels to the merge request that decide which deployment scripts are run on accepting the merge request. I already tried to decide automatically based on the changed code parts, but this is not possible, as I often expand a shared library even though the change is only relevant for one of the systems.
I am aware of variables, but I don't know how to apply them on merge accept in YAML like this:
deploy:
  stage: deploy
  script:
    - ...
  only:
    - master
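For reference, a hedged sketch of gating a job on a merge request label inside a merge-request pipeline (where CI_MERGE_REQUEST_LABELS is set; the job name and label below are made up) using only:variables expressions:

deploy_beta:
  stage: deploy
  script:
    - ... deployment script for this system
  only:
    refs:
      - merge_requests
    variables:
      - $CI_MERGE_REQUEST_LABELS =~ /MyCoolSystem/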
Update on strategy:
As CI_MERGE_REQUEST_LABELS is not available with only: master, I will try to do a beta deployment depending on merge request labels in only: merge_requests. In only: master I will then deploy the betas that have changed. This will most likely fit my needs. I will add it as a solution once it works.
I finally solved it this way:
My YML script has three stages:
stages:
  - buildtest
  - createbeta
  - deploy

buildtest:
  stage: buildtest
  script:
    - ... run unit tests
    - ... build all systems
    - ... run scripted tests on all systems
  only:
    refs:
      - merge_requests

createbeta:
  stage: createbeta
  script:
    - ... run setup and update package creation with parameter $CI_MERGE_REQUEST_LABELS
    - ... run update package tests with parameter $CI_MERGE_REQUEST_LABELS
    - ... run beta deployment scripts with parameter $CI_MERGE_REQUEST_LABELS (see text)
  only:
    refs:
      - merge_requests

deploy:
  stage: deploy
  script:
    - ... run production deployment scripts (see text)
  only:
    refs:
      - master
The first two stages run on merge request creation.
As changes to shared libraries might affect all systems, all builds and tests are run in the "buildtest" stage.
The scripts in the "createbeta" stage check for the existence of the merge request label for the corresponding system and are skipped if the system is not covered by the labels.
If it runs, the beta deployment script creates a signal file "deploy_me" in the beta folder (important).
When the request is merged, the deployment script runs in the "deploy" stage. It checks for the existence of the "deploy_me" file and only deploys and informs via mail if the file exists.
This way I can easily decide which systems I want to deploy by applying labels to the merge request. I can thoroughly test the new feature with the beta version and make sure that changes do not break the other systems, as unit tests and system tests are run for all systems.
As the GitLab runner runs in a Windows environment (yes, this makes sense, as I work with Delphi), here is how I find the system label in a Windows cmd file, for those who are interested. I use %*, as the labels are separated by spaces and treated as individual command line parameters.
echo %* | findstr /i /c:"MyCoolSystem" > nul
if %ERRORLEVEL% EQU 0 goto runit
rem If the label is not supplied with the merge request, do nothing
goto ok
:runit
... content
:ok
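A matching sketch for the check in the "deploy" stage could look like this (the beta folder path is made up):

rem Deploy only if the beta step left its deploy_me signal file
if not exist "betas\MyCoolSystem\deploy_me" goto ok
rem ... run the production deployment and send the notification mail
del "betas\MyCoolSystem\deploy_me"
:ok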
Perhaps this helps someone with a similar environment and similar workflow.
I am a beginner in the field of DevOps.
I have created a simple web application (JSP), using three Jenkins jobs to store the code in Git, to deploy this app into Tomcat, and to site-monitor this app (respectively).
Now I have been trying, with no success, to automate a simple test with two verifications of my app's functionality, using Selenium IDE and triggering it via a Jenkins job (the fourth one in my project).
In order to do this, I created the following job in Jenkins, with the needed plugin added (SeleniumHQ htmlSuite Run).
Here is the job:
https://i.stack.imgur.com/6Pm6a.png
The job fails with the following error, which I can't handle:
https://i.stack.imgur.com/hXGmI.png
Any help would be much appreciated.
Just tick the Delete workspace before build starts box under the Build Environment section.
If you don't have the option in your Jenkins job configuration, make sure that the Workspace Cleanup Plugin is installed.
If you're running your Selenium tests via a Jenkins Pipeline, all you need to do is put a cleanWs() step somewhere in your pipeline code or Jenkinsfile.
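A minimal pipeline sketch (the stage name is a placeholder, and the actual test invocation is left as a comment):

pipeline {
    agent any
    stages {
        stage('Selenium tests') {
            steps {
                cleanWs()   // provided by the Workspace Cleanup Plugin
                // ... run the Selenium HTML suite here
            }
        }
    }
}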
Case
I have a Jenkins slave which runs Calabash tests on mobile devices (Android, iOS). To control on which machines the tests run (the Mac for iOS or Linux for Android), I also use the Throttle Concurrent Builds Plug-in. This way I separate between the Android and Mac Jenkins slaves that the devices are hooked to.
I use a mapping table and a self-written bash script to call a device by name and execute a test on that specific slave. The mapping table maps the name to the device ID (or IP for iOS).
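For illustration, a minimal sketch of such a lookup (the mapping file format, names, and test command are assumptions, not the author's script):

#!/bin/bash
# devices.map lines look like: <device-name> <device-id-or-ip>
device_id=$(awk -v name="$1" '$1 == name { print $2 }' devices.map)
[ -n "$device_id" ] || { echo "unknown device: $1" >&2; exit 1; }
# calabash-android reads the target device serial from ADB_DEVICE_ARG
ADB_DEVICE_ARG="$device_id" calabash-android run my.apk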
The architecture is as follows:
[Master]--(Slave-iOS)---------iPhone6
    |           |--------------iPhone5
    |
    |------(Slave-Android)-----HTCOne
                |--------------Nexus
                |--------------G4
To hand the device over to the bash script I use the Jenkins Matrix Project Plugin, which lets me create a list of devices and test cases like:
            HTCOne   Nexus   G4
Run            x       x      x
Delete         x       x      x
CreateUser     x       x      x
Sadly, this list can only be executed sequentially. Now I also want to run tests on multiple devices in parallel, and in any combination of device and test case.
Question
I am looking for a Jenkins plugin which handles device allocation. If one trigger needs a specific device, it should wait until that device is available and the test can be executed. The plugin should integrate with shell execution in Jenkins.
A big plus would be if it could be combined with the Matrix Project Plugin!
What I looked into so far:
- Exclusion Plugin
- Throttle Concurrent Builds Plug-in [used to specify the slave]
- Locks and Latches plugin
For all the ones listed so far, I don't know how to link them to the matrix configuration and obtain a device dynamically, nor how to get the locked-resource information into my script.
- Port Allocator Plugin: not tested, but it seems to have the same problem
- External Resource Dispatcher: seems to allocate only one resource, and finds nothing if it is a matrix configuration.
Related questions I found which helped but didn't solve the problem:
How to prevent certain Jenkins jobs from running simultaneously?
Jenkins: group jobs and limit build processors for this group
Jenkins to not allow the same job to run concurrently on the same node?
How do I ensure that only one of a certain category of job runs at once in Hudson?
Disable Jenkins Job from Another Job
If the Throttle Concurrent Builds Plugin doesn't work as required in your multi-configuration project, try the Exclusion Plugin with a dynamic resource name, like SEMAPHORE_MATRIX_${NODE_NAME}.
Then add a build step "Critical block start" (and an optional "Critical block end" step), which will hold this build block's execution until SEMAPHORE_MATRIX_${NODE_NAME} is not in use by any other job, including the current matrix child jobs.
(... Build steps to be run only when SEMAPHORE_MATRIX_${NODE_NAME} is available ...)
I have a few EJBs compiled with WebLogic's EJBC, compliant with WebLogic 9.2.1.
Our customer uses Weblogic 9.2.3.
During server start Weblogic gives the following message:
<BEA-010087> <The EJB deployment named: YYY.jar is being recompiled within the WebLogic Server. Please consult the server logs if there are any errors. It is also possible to run weblogic.appc as a stand-alone tool to generate the required classes. The generated source files will be placed in .....>
Consequently, server start takes 1.5 hours instead of 20 min. The next server start takes exactly the same time, meaning Weblogic does not cache the products of the recompilation. Needless to say, we cannot recompile all our EJBs to 9.2.3 just for this specific customer, so we need an on-site solution.
My questions are:
1. Is there any way of telling Weblogic to leave those EJB jars as they are and avoid the re-compilation during server start?
2. Can I tell Weblogic to cache the recompiled EJBs to avoid prolonged restarts?
Our current workaround is a script that does this recompilation manually before the EAR's creation and deployment (by simply running java weblogic.appc <jar-name>), but we would rather avoid using this solution in production.
I FIXED this problem by spending a great deal of time researching and decompiling some classes. I encountered it when migrating from WebLogic 8 to 10. By this time you might have understood the pain of dealing with Oracle WebLogic tech support; unfortunately they did not have a server configuration setting to disable this.
You need to do two things.
Step 1. If you open the EJB jar files, you can see entries like
ejb-jar.xml=3435671213
com.mycompany.myejbs.ejb.DummyEJBService=2691629828
weblogic-ejb-jar.xml=3309609440
WLS_RELEASE_BUILD_VERSION_24=10.0.0.0
You see these hashcodes for each of your EJB names. Set these hashcodes to zero, repack the jar file, and deploy it on the server:
com.mycompany.myejbs.ejb.DummyEJBService=0
weblogic-ejb-jar.xml=0
This is just a marker file that weblogic.appc keeps in each EJB jar to trigger the recompilation during server boot-up. I automated the process of setting these hashcodes to zero. The hashcodes remain the same for each EJB even if you execute appc more than once; if you add a new EJB class or delete a class, the corresponding entries are added to this marker file.
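A minimal shell sketch of that automation (my own reconstruction, using the WL_GENERATED marker file name mentioned further below):

#!/bin/sh
# zero the hashcodes in the marker file of one EJB jar
jar=$(cd "$(dirname "$1")" && pwd)/$(basename "$1")
tmp=$(mktemp -d) && cd "$tmp"
unzip -q "$jar" WL_GENERATED                                      # pull the marker file out of the jar
sed 's/=[0-9]\{5,\}$/=0/' WL_GENERATED > t && mv t WL_GENERATED   # zero only the long numeric hashcodes
zip -q "$jar" WL_GENERATED                                        # write the zeroed marker back into the jar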
Note 1:
Where do you get this file? If you open domains/yourdomain/servers/yourServerName/cache/EJBCompilerCache/XXXXXXXXX, you will see this file for each EJB. WebLogic sets the hashcodes to zero after it recompiles.
Note 2:
When you generate EJBs using appc, generate them into an exploded directory using -output C:\myejb instead of C:\myejb.jar. This way you can play around with the marker file.
Step 2. You also need a patch from WebLogic. When you install the patch, you see a message like "PATCH CRXXXXXX installed successfully. Eliminates EJB recompilation for appc." I don't remember the patch number, but you can request it from WebLogic.
You need both steps to fix the problem; the patch fixes only part of it.
Good luck!
cheers
raj
The marker file in the EJBs is WL_GENERATED.
Just to update on the solution we went with: eventually we opted to recompile the EJBs once at the customer's site instead of messing with the EJBs' internal markers (we don't want Oracle saying they cannot support problems derived from this scenario).
We created two KSH scripts: the first iterates over all the EJB jars, copies them to a temp dir and then recompiles them in parallel by running several instances of the second script, which does only one thing: java -Drecompiler=yes -cp $CLASSPATH weblogic.appc $1 (with error handling, of course :))
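A sketch of the driver loop (paths and variable names are illustrative, not the actual script):

#!/bin/ksh
# recompile every EJB jar in the temp dir, several instances in parallel
for jar in "$TMP_DIR"/*.jar; do
    java -Drecompiler=yes -cp "$CLASSPATH" weblogic.appc "$jar" &
done
wait   # error handling omitted for brevity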
This solution reduced compilation time from 70 min to 15 min. After this we re-create the EAR file and redeploy it with the new EJBs. We do this once per several UAT environment creations, so we save quite a lot of time here (55 min x number of envs per drop x number of drops).