Building Docker images with GitLab CI/CD on existing git-runner - gitlab-ci

I have to build and push docker image via gitlab-ci. I have gone through the official document.
https://docs.gitlab.com/ee/ci/docker/using_docker_build.html
I want to adopt the shell method, but my issue is that I already have a working GitLab runner on my server machine. What is the procedure for this? If I re-register the runner on the same machine, will it impact the old one?
Thanks in advance.

Assuming you installed gitlab-runner as a system service and not inside a container, you can easily register another shell runner on your server with the command gitlab-runner register (called gitlab-ci-multi-runner register in older versions). Registering a second runner does not affect the existing one.
This is indirectly confirmed by the advanced configuration documentation, which states that the config.toml of the gitlab-runner service may contain multiple [[runners]] sections.
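As a sketch of what that documentation describes, a config.toml holding two runners side by side could look like this (names, URLs, and tokens below are placeholders, not values from the question):

```toml
concurrent = 2

# Existing runner, untouched by the new registration.
[[runners]]
  name = "existing-runner"
  url = "https://gitlab.example.com/"
  token = "EXISTING_TOKEN"
  executor = "docker"

# Newly registered shell runner for docker build/push jobs.
[[runners]]
  name = "shell-docker-builder"
  url = "https://gitlab.example.com/"
  token = "NEW_TOKEN"
  executor = "shell"
```

Each registration appends its own [[runners]] section, so the existing entry keeps working unchanged.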
Note: To allow the shell runner to build docker images, you will need to add the gitlab-runner user to the docker group, e.g.:
sudo gpasswd --add gitlab-runner docker

Related

Create a virtual machine using gitlab-ci

How can I create a virtual machine (CentOS 8) using a gitlab-ci.yml file?
No software needed to build the VM should be installed on the GitLab runner, and binary software should not be checked into git.
I can create docker images - for example for use with Packer - in a prebuild step. This image can be cached in Artifactory.
But how should I handle that for the CentOS image?
During the installation using Packer, there's a reboot required...

Up-to-date Ignite web console docker image

I'm wondering if there is an up-to-date docker image for the Ignite web console?
When I pull "docker pull apacheignite/web-console-standalone" I only get an outdated version that isn't compatible with the current web agent.
Or is the Dockerfile available so I can build the image myself without starting from scratch?
Maybe there is even a Dockerfile that puts the web agent and console in one image?
Thanks for any help!
apacheignite/web-console-standalone will be updated soon.
You can also build the docker image yourself:
Check out Apache Ignite master: https://git-wip-us.apache.org/repos/asf/ignite
cd modules/web-console/docker/standalone/
./build.sh (you may need sudo).

Prepare Amazon Linux image for GitLab CI

I am running self-hosted GitLab CE with a GitLab-runner and a Docker executor. I want to build a binary for AWS Lambda, so I'm using the amazonlinux:latest image for my Docker executor.
Of course, not all packages that I need for building are available in the base amazonlinux image, so I install them via yum. Unfortunately, cmake is not available for Amazon Linux, so I build it from source.
At the moment, this takes place every time the pipeline runs, which is not optimal because cmake takes a relatively long time to build (compared to the binary I actually want to build).
My general question is: is there a clean and reproducible way to prepare an image for building, which is then used as the base image for GitLab CI? Since I'm relatively new to Docker and friends, is the correct approach to create an image locally on the runner host and use that in my gitlab-ci.yml? Or should I push it to a registry (perhaps even GitLab's own container registry)?
Yes, there is.
Nothing stops you from creating an image through a Dockerfile that does all the yum installs, and then pushing the image you built to a (private) Docker registry. Look at it as 'extending' the Amazon image and saving it for future use.
Since I don't expect it to be too exciting (it will not yet contain any application code), you can also store it for free on Docker Hub.
Custom image
So an example Dockerfile:
FROM amazonlinux:latest
RUN yum install -y <packages>
RUN <commands for cmake>
Then you build your custom amazonlinux-custom image with:
docker build -t mydockerhubuser/amazonlinux-custom:latest .
And push it to Docker Hub (after docker login):
docker push mydockerhubuser/amazonlinux-custom:latest
Gitlab CI usage
In your .gitlab-ci.yml you replace the image: amazonlinux:latest part that defines your job image with image: mydockerhubuser/amazonlinux-custom:latest so you don't have to install all your deps.
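A minimal .gitlab-ci.yml sketch of that replacement (the job name and build commands are placeholder assumptions, not from the question):

```yaml
# The custom image already contains the yum packages and cmake,
# so no per-pipeline installation is needed.
image: mydockerhubuser/amazonlinux-custom:latest

build-lambda:
  stage: build
  script:
    - cmake -S . -B build
    - cmake --build build
```

Because the dependencies are baked into the image, the job's script section shrinks to just the actual build steps.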
Note
Amazon will often rebuild its amazonlinux:latest image and push it to Docker Hub. Using a custom image based on theirs, you will have to take the following into account:
You will need to rebuild your image often too, to stay up to date with patches etc.
It may be smarter to pin a fixed version, e.g. FROM amazonlinux:2017.09, to avoid major version changes you don't expect.

Docker support intellij

I was running docker inside a VM and was using the Docker integration plugin in IntelliJ (IntelliJ on my host machine, not the VM). I upgraded my OS and now I am able to run my Docker containers directly on my host machine. I can't find how to use the Docker plugin anymore. How can I use the plugin when Docker is running natively? When it was running in a VM, I would go under Settings -> Build, Execution, Deployment -> Clouds and would enter MY_VM_IP:2376 under API URL, but now I have no idea what to put there (or even if that's where I configure it). I tried 127.0.0.1:2376 and also tried 192.168.99.100:2376. Both give me a 'Network is unreachable' error.
I found out how:
I edited the /etc/sysconfig/docker file and added "-H 0.0.0.0:2376 -H unix:///var/run/docker.sock" to OPTIONS. Then I put 127.0.0.1:2376 as the API URL under Settings -> Build, Execution, Deployment -> Clouds, and it works.
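For reference, the resulting OPTIONS line in /etc/sysconfig/docker would look like this (note that exposing the daemon over TCP without TLS is insecure on anything but a trusted, firewalled host):

```shell
# /etc/sysconfig/docker
# Listen on TCP port 2376 for the IntelliJ plugin, and keep the
# local Unix socket so the docker CLI keeps working.
OPTIONS="-H 0.0.0.0:2376 -H unix:///var/run/docker.sock"
```

After editing the file, restart the docker service for the change to take effect.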

View files in docker container

I created a docker container. I can ssh into the docker container. How can I view files in my docker container with a GUI (specifically the WebStorm IDE)?
I'm running a Mac on OS X Yosemite 10.10.5.
The usual pattern is to mount your source code into the container as a volume. Your IDE works with the files on your host machine, and the processes running in the container see the same files (see the Docker volumes documentation).
There might be a way to set up remote file access with WebStorm, but I'd recommend trying the volume approach first.
docker run -d -v /mycodedir:/mydockerdir {libraryname}/{imagename}
If you mount your work directory and map it into the container like this, you will be able to edit the files in WebStorm and the container will see your changes.