How to use one container in a pipeline? - gitlab-ci

The situation is that we are moving from Jenkins to GitLab CI. Every time a stage runs in the pipeline, a new container is created. I would like to know whether it is possible to reuse the container from the previous stage, i.e. run everything in a single one. The GitLab executor is Docker.
I want to preserve the state of a single container across the pipeline.

No, this is not possible in a practical way with the docker executor. Each job is executed in its own container. There is no setting to change this behavior.
Keep in mind that jobs (even across stages) can run concurrently and can land on runners on completely different underlying machines, so sharing a single container is not feasible.
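What you can do instead is pass the state you need between jobs with artifacts (or cache). A minimal .gitlab-ci.yml sketch, where the job names, scripts, and the output/ path are placeholders:

build:
  stage: build
  script:
    - ./build.sh                 # placeholder build command
  artifacts:
    paths:
      - output/                  # files saved here are carried over to later jobs

deploy:
  stage: deploy
  script:
    - ./deploy.sh output/        # runs in a fresh container, with output/ restored

Each job still runs in its own container, but the files you care about travel along with the pipeline.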

Related

Can spinnaker prevent out-of-order deployments?

Currently
We use a CI platform to build, test, and release new code when a new PR is merged into master. The "release" step is quite simple/stupid, and essentially runs kubectl patch with the tag of the newly-pushed docker image.
The Problem
When two PRs merge at about the same time (ex: A, then B -- B includes A's commits, but not vice-versa), it may happen that B finishes its build/test first and begins its release step first. When this happens, A releases second, even though it has older code. The result is a steady state in which B's code has effectively been rolled back by A's deployment.
We want to keep our CI/CD as continuous as possible, ideally without:
serializing our CI pipeline (so that only one workflow runs at a time)
delaying/batching our deployments
Does Spinnaker have functionality or best-practice that solves for this?
Best practices for your issue are widely described under Message Ordering for Asynchronous Systems. The simplest solution would be to implement the FIFO principle for your CI/CD pipeline.
It will save you from implementing checks between the CI and CD parts.
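If strict FIFO ordering is not achievable, the check between the CI and CD parts can be as simple as comparing commit ancestry right before the kubectl patch, so that an older pipeline never overwrites a newer release. A rough shell sketch; the deployment name, annotation key, image, and variables are placeholders, not taken from the question:

NEW_COMMIT="$1"                   # the commit this pipeline is about to release
DEPLOYED=$(kubectl get deployment myapp -o jsonpath='{.metadata.annotations.releaseCommit}')
if [ -n "$DEPLOYED" ] && git merge-base --is-ancestor "$NEW_COMMIT" "$DEPLOYED"; then
    echo "Skip: $NEW_COMMIT is already contained in the deployed commit $DEPLOYED"
else
    kubectl patch deployment myapp -p "{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"myapp\",\"image\":\"registry.example.com/myapp:$NEW_COMMIT\"}]}}}}"
    kubectl annotate deployment myapp releaseCommit="$NEW_COMMIT" --overwrite
fi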

Is there a way to schedule the shutdown of Selenium nodes without breaking tests?

I have set up a Selenium Grid with 3 different servers running nodes as Windows services. I need to restart those machines regularly to avoid memory leaks and degraded performance. To do this I need to schedule a job that shuts down the nodes on one server and restarts it while tests are pushed to the remaining servers, then repeat the same process with scheduled jobs on the other servers.
Is there a way to shut down a Selenium node once the current test finishes? Or to get the status of a particular node, so I can check each one myself in a scheduled job and make sure the node is not running a test before I shut it down?
You could check the number of active sessions by requesting each node with the /sessions command:
http://127.0.0.1:4444/wd/hub/sessions
Response :
{"state":"success","sessionId":null,"hCode":3217742,"value":[],"class":"org.openqa.selenium.remote.Response","status":0}
#Sh3mm
Some time back I wrote a blog post which talks about how to go about building a "Self Healing Grid", which is essentially what you are after.
You can read through my blog post on that from here.
We used essentially the same approach when we were working on building the SeLion Grid. The SeLion Grid packs in a few more sophistications. Read more about it here.
There's another flavor of essentially the same functionality that was built by Groupon as part of their Grid Extras. You can take a look at it here.

Running multiple Kettle transformation on single JVM

We want to use pan.sh to execute multiple Kettle transformations. After exploring the script I found that it internally calls the spoon.sh script which runs in PDI. Now the problem is that every time a new transformation starts, it creates a separate JVM for its execution (invoked via a .bat file); however, I want to group them to use a single JVM to overcome the memory constraints that the multiple JVMs are putting on the batch server.
Could somebody guide me on how I can achieve this, or share documentation/resources with me?
Thanks for the good work.
Use Carte. This is exactly what it is for. You can start up a server (on the local box if you like) and then submit your jobs to it. One JVM, one heap, shared resources.
The benefit of that is scalability: when your box becomes too busy, just add another one, also running Carte, and start sending some of the jobs to that other server.
There's an old but still current blog here:
http://diethardsteiner.blogspot.co.uk/2011/01/pentaho-data-integration-remote.html
There is also documentation on the Pentaho website.
Starting the server is as simple as:
carte.sh <hostname> <port>
There is also a status page, which you can use to query your carte servers, so if you have a cluster of servers, you can pick a quiet one to send your job to.
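To give a feel for it, starting a server and checking its status page is just (the host, port, and the default cluster/cluster credentials are whatever your Carte configuration defines):

carte.sh localhost 8081
# the status page lists running and finished transformations/jobs on that server
curl -u cluster:cluster http://localhost:8081/kettle/status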

Multiple docker containers

I am reading about docker and I am trying to understand whether or not this is something I should learn to use.
From what I read, best practice states that you should have one process per container. Now, this means that I need one container for JBoss, one for the database, one for file storage, one for the build server, ...
Now would I manually have to start each of these containers? Or are there some kind of dependencies you can set up?
What about the order and the requirements that a process in one container can have on another? For example, JBoss needs the database to be started before it starts.
Is this handled?
one process per container
This advice is valid if you want to follow a microservices architecture. Microservices have advantages but also drawbacks. Depending on your situation you might find it more convenient to have a container running multiple processes.
Running multiple containers on one single host
If you want to start multiple containers together on a single Docker host, the easiest way is to use fig. The fig configuration file is very easy to understand, as its syntax mimics docker commands. This video gives you a nice presentation of fig (by one of fig's authors, Aanand Prasad).
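For example, a minimal fig.yml for the JBoss-plus-database case from the question could look like this (the image names and ports are illustrative):

web:
  image: jboss/wildfly           # illustrative application image
  links:
    - db                         # the database container is reachable as "db"
  ports:
    - "8080:8080"
db:
  image: postgres

Running fig up then starts both containers together on the host.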
Note that tools such as fig, AFAIK, won't wait for a first container to start and finish initializing before starting another container that depends on it. The way to handle this is to have the second container implement some kind of test and loop until the dependency is ready, then start its process. This can be achieved by different means (a wrapper script, straight in your application code, ...).
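A typical wrapper script for the dependent container looks something like this (the host name, port, and start command are placeholders, assuming a Postgres database linked as db):

#!/bin/sh
# block until the database accepts TCP connections...
until nc -z db 5432; do
    echo "waiting for the database..."
    sleep 2
done
# ...then replace the shell with the real process
exec /opt/jboss/bin/standalone.sh -b 0.0.0.0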
Running multiple processes in one container
As a Docker container will stop as soon as no process is running in the foreground, there are different techniques you can use (supervisor, running a first process as a daemon and a last one in the foreground, using phusion/baseimage, ...).
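The daemon-plus-foreground variant can be as small as an entrypoint script like this (the two commands are placeholders):

#!/bin/sh
# start the secondary process daemonized in the background...
service mysql start
# ...and keep the main process in the foreground so the container stays alive
exec /opt/jboss/bin/standalone.sh -b 0.0.0.0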

In Jenkins build flow plugin, terminate all parallel jobs if one of them failed

We are using the Jenkins build flow plugin (https://wiki.jenkins-ci.org/display/JENKINS/Build+Flow+Plugin) to run our test cases by dividing them into small sub test cases and testing them in parallel.
The current problem is that even if one of the jobs fails, the other parallel jobs and the hosting flow job will continue running, which is a big waste of resources.
I checked the doc there is no place to control the jobs inside the parallel {}. Any ideas how to deal with that?
Looking at the code, I don't see a way to achieve that. I would ask the user mailing list for help.
I am thinking of using Guard / Rescue embedded in Parallel to do this.
Adding failFast: true within the parallel block would cause the build to fail as soon as one of the parallel nodes fails.
You can view this as an example.
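For reference, failFast: true is documented for the Jenkins Pipeline parallel step; a minimal sketch in Pipeline (Groovy) syntax, with placeholder branch and job names:

parallel(
    unitTests: { build job: 'unit-tests' },   // placeholder downstream job
    uiTests:   { build job: 'ui-tests' },
    failFast:  true                            // abort the other branches as soon as one fails
)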