Run tests within a container through the Bluemix DevOps service

I am working on an application based on the Bluemix container service. To deploy the application I use the IBM Bluemix DevOps service.
I would like to add a test stage before deployment. The problem is that my tests need to run inside a Docker container using the image built for the application, because the tests depend on the environment that image provides (libraries, scripts, database, etc.).
However, the available "Test" stage in the DevOps service does not seem to allow running tests inside a Docker container. I would like to run my tests with something like
cf ic run --rm my_custom_image custom_test_script.sh
How could I do such a test run within the Bluemix DevOps service?

IDS (IBM DevOps Services) doesn't include a place to run dedicated sub-containers, and the container service is really intended for longer-running containers (i.e. -d, daemon style). You could do it by setting up a persistent container there, then using cf ic cp to copy up the changed pieces (i.e. whatever is specific to this run), then a cf ic exec -ti to run the tests in there, perhaps?
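A rough sketch of that persistent-container variant with the cf ic CLI (the container name, paths and script name here are illustrative, not taken from your setup):

    # once: start a long-running container from the application's image
    # (tail -f /dev/null just keeps it alive if the image has no daemon-style entrypoint)
    cf ic run -d --name test-runner my_custom_image tail -f /dev/null
    # per build: copy up the pieces that changed for this run, then run the tests inside
    cf ic cp ./build-output test-runner:/app
    cf ic exec -ti test-runner /app/custom_test_script.sh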
Or, if you'd rather, break it into a couple of pieces: make a "deploy the test container" step, then a test step that uses that container (or collects the results from it), then a cleanup step that removes that container.
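For the split-into-stages variant, the stage scripts could look roughly like this (the container name and the BUILD_NUMBER variable are assumptions about your pipeline):

    # "deploy the test container" stage
    cf ic run -d --name test-run-$BUILD_NUMBER my_custom_image tail -f /dev/null
    # "run the tests" stage
    cf ic exec -t test-run-$BUILD_NUMBER custom_test_script.sh
    # "cleanup" stage
    cf ic rm -f test-run-$BUILD_NUMBER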

Related

Dynatrace OneAgent container in ECS Fargate stops while the application container keeps running

I am trying to install Dynatrace OneAgent in my ECS Fargate task. Along with the application container I have added another container definition for OneAgent, with alpine:latest as the image, and used runtime injection.
While running the task, the OneAgent container is initially in the running state, but after about a minute it goes to the stopped state while the application container stays running.
In Dynatrace the same host is available but keeps getting recreated every 5-10 minutes.
The issue I actually had was that the task was in draining status because of an application problem, which is why the host kept getting recreated in Dynatrace. Also, since I used runtime injection for ECS Fargate, the OneAgent container is expected to stop once the binaries are downloaded and injected into the volume, while the application container keeps running and reporting to Dynatrace.
I have the same problem; after connecting to the cluster via SSH I saw that the agent needs to be privileged. The only thing that worked for me was sending traces and metrics through OpenTelemetry.
https://aws-otel.github.io/docs/components/otlp-exporter
Alternative:
use sleep infinity in the command field of your OneAgent container.
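As a sketch, that override sits in the container definition inside the task definition, roughly like this (the container name and image are just what the question describes; adjust to your own task definition):

    {
      "name": "oneagent",
      "image": "alpine:latest",
      "command": ["sleep", "infinity"]
    }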

Update a single container in an Azure container instance with multiple containers

We've got an Azure container instance running in Azure with multiple containers deployed to it (via a YAML file). When we run updates, we have to submit the full YAML file every time, with some of the values (e.g. the image id) amended.
We'd like to break up our code so that we have more of a microservice approach to development (separate repos, separate devops pipelines). Is it possible to instruct a container instance to update one container (from a set of 4 for example) without submitting values for all containers?
For example, it would be great if each repo contained a pipeline that only updates one container in the ACI. Note: what I think might happen is that we get an error when submitting an update for one container, because ACI thinks we are trying to raise 3 containers and update one of them (if we have a group of 4).
If it's not possible, is there any other way of achieving the same, without having to step up to Kubernetes? Ideally we'd like to not have to use Kubernetes just because of the management overhead required.
You cannot update a single container in a container group. All containers will restart whenever any part of the group is updated.
Every container you want to update separately needs to be in its own group. If you split the containers, the containers will no longer be running on the same host and you will lose the ability to access the other services via localhost (you will have to use the DNS name of the container group).
If some of your containers serve endpoints that are exposed as paths of a single server, you will need to set up something like Azure Front Door to enable path-based routing so traffic can hit the correct service via a single hostname.
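If you do split the group up, each repo's pipeline can then redeploy only its own single-container group, for example with the Azure CLI (resource group, names and image are hypothetical; re-running az container create with the same group name redeploys that group):

    # pipeline for one service: redeploy only its own container group
    az container create \
      --resource-group my-rg \
      --name billing-api \
      --image myregistry.azurecr.io/billing-api:1.3.0 \
      --dns-name-label billing-api \
      --ports 80
    # registry credential / managed identity flags omitted for brevity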

Dynamically created Jenkins slave using the Jenkins Docker Plugin gets removed in the middle of job execution

I am using the Jenkins Docker Plugin (https://wiki.jenkins.io/display/JENKINS/Docker+Plugin) to dynamically create containers and use them as Jenkins slaves. This works fine for some jobs. However, for some longer-running jobs (over 10 minutes) the Docker container gets removed midway, making the job fail.
I have tried increasing various timeout options in the plugin configuration, but with no result. Can anyone please help?
I know I am quite late to post an answer here, but I was able to find the root cause of the issue. The problem was running two Jenkins instances with the same Jenkins home directory. The Jenkins Docker plugin apparently runs a daemon that kills the Docker containers associated with its Jenkins master, so because we were running two Jenkins instances with (a copy of) the same Jenkins home directory, the containers started for CI work were deleted by each other's daemons.

Push code from VSTS repository to on-prem TFS?

This is my first post on here, so forgive me if I've missed an existing answer to this question.
Basically, my company conducts off-site development for various clients in government. Internally, we use cloud-hosted VSTS, Octopus Deploy and Selenium to ensure a continuous delivery pipeline in our internal Azure environments. We are looking to extend this pipeline into the on-prem environments of our clients to cut down on unnecessary deployment overhead. Unfortunately, due to security policies we are unable to use our VSTS/Octopus instances to push code directly into the client environment, so I'm looking for a way to get code from our VSTS environment into an on-prem instance of TFS hosted on their end.
What I'm after, really, is a system whereby the client logs into our VSTS environment, validates the code, then pushes some kind of button which will pull it to their local TFS, where a replica of our automated build and test process will manage the CI pipeline through their environments and into prod.
Is this at all possible? What are my options here?
There is no direct way to migrate source code with history from VSTS to an on-premises TFS. You would need a 3rd-party tool, like the Commercial Edition of OpsHub (note that it is not free).
It sounds like you need a new feature that is coming to Octopus Deploy; see https://octopus.com/blog/roadmap-2017 --> Octopus Release Promotions.
I quote:
Many customers work in environments where releases must flow between more than one Octopus server - the two most common scenarios being:
Agencies which use one Octopus for dev/test, but then need an Octopus server at each of their customer's sites to do production deployments
I will suggest the following, though it involves a small custom script.
Add a build agent to your VSTS which is physically located on the customer's premises. This is easy; just register the agent against the online endpoint.
Create a build definition in VSTS that gets the code from VSTS but, instead of building it, commits it to the local TFS. You will need a small PowerShell script here, which you can add as a custom PowerShell step in the build definition.
The local TFS orchestrates the rest.
Custom code:
Say your agent is on d:/agent.
1. Keep the local TFS mapped to some directory (say c:/tfs).
2. The script copies the new sources for this run from d:/agent/work/ over the code in c:/tfs.
3. It then checks them in from c:/tfs to the local TFS.
Note: you will need the /force option (and probably some more) to prevent conflicts.
I believe this is not as ugly as it sounds.
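A minimal sketch of that PowerShell copy-and-check-in step, assuming a local TFVC workspace is already mapped to c:/tfs on the agent machine and tf.exe is on the PATH (the source path and the exact tf options are assumptions):

    # mirror the sources fetched by the VSTS build into the mapped TFVC folder,
    # leaving the hidden $tf workspace metadata untouched
    robocopy "D:\agent\work\1\s" "C:\tfs" /MIR /XD '$tf'
    cd C:\tfs
    # pick up brand-new files, then check everything in without prompting
    tf add * /recursive /noprompt
    tf checkin /comment:VSTS-sync /noprompt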

Fuse Fabric8 Clustering

I am a noob in fabric8 and I have a doubt regarding clustering with Docker images.
I have pulled the Docker image for fabric8, fabric8/fabric8. I just want the containers I launch to automatically fall into the same cluster without using fabric:create and fabric:join.
Say I launch 3 containers of fabric8/fabric8; they should fall under the same cluster without manual configuration.
Please give some links or references. I'm lost.
Thanks in advance.
In fabric8 v1 the idea was that you create a fabric using the fabric:create command and then spin up Docker containers using the Docker container provider, in pretty much the same way as you would with child containers (either using the container-create-docker command or using hawtio and selecting Docker as the container type).
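As a rough illustration of that v1 workflow in the Fuse/Karaf console (exact options vary by version, so treat this as a sketch rather than exact syntax):

    # on the first container: create the fabric (the registry/ensemble)
    fabric:create
    # then create additional containers through the docker container provider
    fabric:container-create-docker --profile default node2
    fabric:container-create-docker --profile default node3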