GitLab CI image and services - gitlab-ci

In the .yml file of a GitLab CI pipeline there are image and services keywords. How are they related? Can I build a Docker image that can provide more for a GitLab stage via services? Does any of you have a Dockerfile that can be integrated both as an image and as a service?
build_stage:
  image: my_docker_image
  services:
    - service_from_docker_image
"The services keyword defines just another Docker image that is run during
your job and is linked to the Docker image that the image keyword defines.
This allows you to access the service image during build time."
The explanation on the gitlab page doesn't make that very clear to me yet.
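In practice, image is the container your job's script runs in, while each entry under services starts an extra container that is linked to the job container and reachable over the network by a hostname derived from the service image name. A minimal sketch of a common pattern, assuming a Python test job that needs PostgreSQL (image names and variables here are illustrative, not from the question):

test_stage:
  image: python:3.11          # the job's script runs inside this container
  services:
    - postgres:15             # started alongside the job, reachable as host "postgres"
  variables:
    POSTGRES_PASSWORD: example
  script:
    - pip install -r requirements.txt
    - pytest                  # tests can connect to postgres:5432

So services does not add software to your job image; it runs sidecar containers (databases, caches, etc.) that your job can talk to while it runs.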

Send ECS Container Logs to CloudWatch

We have a PHP application that is pushed to ECR and run on Fargate, and we've configured an ECS task definition for it; it works fine as a container in ECS.
I've configured awslogs for the application and it sends the app logs to CloudWatch normally, but I'm wondering how to also send the logs written to a file inside the container at
"/var/www/html/app/var/dev.log"
to the same log group that I configured when creating the task definition.
I found the answer on the following link:
https://aws.amazon.com/blogs/devops/send-ecs-container-logs-to-cloudwatch-logs-for-centralized-monitoring/
I just needed to install both syslog and awslogs in the PHP image, then use supervisord to start them along with our PHP app when the container starts. On the task definition side, create a volume and a mount point.
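A rough sketch of what that supervisord configuration can look like; the program commands and paths below are assumptions for illustration and depend on how php-fpm and the CloudWatch Logs agent are installed in your image:

; supervisord.conf - run several processes in one container
[supervisord]
nodaemon=true                         ; keep supervisord in the foreground as PID 1

[program:php-fpm]
command=php-fpm -F                    ; assumed entrypoint for the PHP app
autorestart=true

[program:awslogs]
; the CloudWatch Logs agent launcher path is an assumption - adjust to your install
command=/var/awslogs/bin/awslogs-agent-launcher.sh
autorestart=true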

Copy / update the code in docker container without stopping container

I have a docker-compose setup in which I am uploading source code for a server, say a Flask API. Now when I change my Python code, I have to follow steps like these:
stop the running containers (docker-compose stop)
build and load updated code in container (docker-compose up --build)
This takes quite a long time. Is there a better way? For example, updating the code in the running container and then restarting the Apache server without stopping the whole container?
There are a few dirty ways you can modify the file system of a running container.
First you need to find the path of the directory that is used as the runtime root for the container. Run docker container inspect id/name and look for the key UpperDir in the JSON output. You can edit/copy/delete files in that directory.
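For example, assuming the overlay2 storage driver and a container named my_container (a hypothetical name), this prints that directory directly:

docker container inspect --format '{{ .GraphDriver.Data.UpperDir }}' my_container
# prints something like /var/lib/docker/overlay2/<hash>/diff
# files edited there show up inside the running container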
Another way is to get the process ID of the process running within the container and go to the /proc/process_id/root directory. This is the root directory of the process running inside Docker; you can edit files there on the fly and the changes will appear in the container.
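A sketch of that approach; the container name and the destination path inside the container are hypothetical:

pid=$(docker inspect --format '{{ .State.Pid }}' my_container)
ls /proc/$pid/root                              # same file tree the container sees
# copy an updated source file straight into the container's filesystem
cp app.py /proc/$pid/root/var/www/app/app.py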
You can run the docker build while the old container is still running, and your downtime is limited to the changeover period.
It can be helpful for a couple of reasons to put a load balancer in front of your container. Depending on your application this could be a "dumb" load balancer like HAProxy, or a full Web server like nginx, or something your cloud provider makes available. This allows you to have multiple copies of the application running at once, possibly on different hosts (helps for scaling and reliability). In this case the sequence becomes:
docker build the new image
docker run it
Attach it to the load balancer (now traffic goes to both old and new containers)
Test that the new container works correctly
Detach the old container from the load balancer
docker stop && docker rm the old container
If you don't mind heavier-weight infrastructure, this sequence is basically exactly what happens when you change the image tag in a Kubernetes Deployment object, but adopting Kubernetes is something of a substantial commitment.
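If you do go that route, swapping the image tag is a one-liner; the deployment, container, and image names here are hypothetical:

kubectl set image deployment/flask-api web=registry.example.com/flask-api:v2
# Kubernetes then performs the rolling update described above automatically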

From custom dockerfile to kubernetes deploy with an apache started

I have a Dockerfile where I build an Apache web server with some custom configurations etc.
Building from the Dockerfile I create an image that can be used in a deployment yaml file with Kubernetes.
Everything is working properly but after deployment, my apache service is down in every container of every pod.
Obviously I can exec into every container and run /etc/init.d/apache2 start, but this solution is not very smart..
So my question is: how can I set my custom apache to be running during the execution of the deploy yaml file?
PS: I tried this solution: with the Dockerfile I created a Docker container, then I accessed it and started Apache. Then I created a new image from this container (docker commit + gcloud image push), but when I deploy the application I always find Apache down.
Well, first things first - I would very much recommend just using the official apache2 image and then making your custom configurations from there. Their documentation states this in the following paragraph:
Configuration
To customize the configuration of the httpd server, just COPY your custom configuration in as /usr/local/apache2/conf/httpd.conf.
FROM httpd:2.4
COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf
However, if you're dead-set on building everything yourself, you'll notice that inside the Dockerfile for the official image they copy in a Bash script and then set it as the CMD. This works because a Docker container should run a single foreground process; this is why, as you stated, starting Apache via its init service is a bad idea.
You can find the script they're running here; it's very short at 7 lines, so you shouldn't have too much trouble figuring out where to go from here.
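The essence of that script is just to keep httpd in the foreground as the container's main process; a minimal sketch of the same pattern (not the official script verbatim):

#!/bin/sh
set -e
# remove a stale pid file left behind by an unclean shutdown
rm -f /usr/local/apache2/logs/httpd.pid
# exec so httpd replaces the shell, becomes PID 1, and stays in the foreground
exec httpd -DFOREGROUND

With CMD pointing at something like this (or simply CMD ["httpd", "-DFOREGROUND"]), Apache is already running when the pod starts and no manual /etc/init.d/apache2 start is needed.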
Best of luck!

Dockerized Gitlab Container Backup

I am using a GitLab docker image for integration testing of a service I'm helping to develop. Ideally, the image would be a preconfigured snapshot of GitLab with different users and repos available to run tests against. So the problem ends up being, what is a good way to automate the creation of 'snapshots' of GitLab (that can then be versioned etc.)?
My current solution to this problem is to use GitLab's built in backup utility via gitlab-rake gitlab:backup:create after getting GitLab to a state that I want. This then lets me use GitLab's gitlab-rake gitlab:backup:restore in a hook when the container is starting up to get the container back to the state that I expect (the backup having been ADDed in the Dockerfile for the image). This has the advantage of being relatively lightweight (backups are on the order of MBs) and the backups can be checked in to version control.
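For reference, the two rake tasks look roughly like this (the default backup path and the force flag assume an Omnibus-based image; treat them as assumptions for your setup):

# after setting up users/repos in the running container, snapshot the state
gitlab-rake gitlab:backup:create
# backups are written to /var/opt/gitlab/backups/ by default (Omnibus)
# on container start, restore the backup that was ADDed into the image
gitlab-rake gitlab:backup:restore BACKUP=<timestamp_and_version> force=yes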
I have tried using docker export along with docker import to save the state of the container and then create an image based on that state. This has the advantage of being easy to automate since it is directly supported by Docker, but ends up being fairly expensive considering what the goal is (having users and repos available to test against). It also would require the images to be pushed to a registry of some kind in order to be easily distributed. Perhaps this is the best solution because it is well supported though.
I suppose my question is, what is the Docker way of approaching a problem like this?

What is the working directory of a Docker Golang application?

When I serve a Golang application from the official Docker Hub image, I wonder: what is the default working directory the application starts up in?
Background: I will have to map local Certificate Authority and server keys into the container to serve TLS/https, and I wonder where to map them so the application will be able to grab them in its current working directory from within the container.
If you are using the golang:1.X-onbuild image from Docker Hub (https://hub.docker.com/_/golang/), your sources will be copied into
/go/src/app
This means all files and directories from the directory where you run the
docker build
command will be copied into the container.
The WORKDIR of the base golang images is
/go
and the onbuild variant then sets the working directory to /go/src/app.
Go will return the current working directory using
currdir, _ := os.Getwd() // from the standard "os" package
Executed within a golang container and right after startup, the pwd is set to
/go/src/app
The current working directory of a golang application starting up within a Docker container is thus /go/src/app. In order to map a file/directory into a container you will have to use the -v switch as described in the documentation for run:
-v /local/file.pem:/go/src/app/file.pem
Will map a local file into the pwd of the dockerized golang app.
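To close the loop on the TLS question, here is a small sketch of a Go server that expects the mounted cert and key in its working directory (file.pem, key.pem, and the port are illustrative names, not from the question):

package main

import (
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello over TLS\n"))
	})
	// file.pem and key.pem are resolved relative to the working directory,
	// i.e. /go/src/app inside the container, where -v mounted them
	log.Fatal(http.ListenAndServeTLS(":8443", "file.pem", "key.pem", nil))
}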