I am running self-hosted GitLab CE with a GitLab-runner and a Docker executor. I want to build a binary for AWS Lambda, so I'm using the amazonlinux:latest image for my Docker executor.
Of course, not all packages that I need for building are available in the base amazonlinux image, so I install them via yum. Unfortunately, cmake is not available for Amazon Linux, so I build it from source.
At the moment, this takes place every time the pipeline runs, which is not optimal because cmake takes a relatively long time to build (compared to the binary I actually want to build).
My general question is: is there a clean and reproducible way to prepare an image for building, which is then used as the base image in GitLab CI? Since I'm relatively new to Docker and friends: is the right approach to create an image locally on the runner host and use that in my gitlab-ci.yml, or should I put it in a registry (perhaps even GitLab's own container registry)?
Yes, there is.
Nothing is stopping you from creating an image from a Dockerfile that does all the yum installs and the cmake build, and then pushing the image you build to a (private) Docker registry. Look at it as 'extending' the Amazon image and saving it for future use.
Since I don't expect it to be too exciting (it will not yet contain any application code), you can also store it for free on Docker Hub.
Custom image
So an example Dockerfile:
FROM amazonlinux:latest
RUN yum install -y <packages>
RUN <commands for cmake>
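For the cmake part, a fuller (but still hedged) sketch could look like this; the cmake version, download URL and package list are assumptions, so adjust them to what your build actually needs:

FROM amazonlinux:latest

# Toolchain needed to compile cmake and the Lambda binary (assumed package list)
RUN yum install -y gcc gcc-c++ make tar gzip curl

# Build cmake from source, since it is not packaged for Amazon Linux
RUN curl -LO https://cmake.org/files/v3.12/cmake-3.12.0.tar.gz \
    && tar xzf cmake-3.12.0.tar.gz \
    && cd cmake-3.12.0 \
    && ./bootstrap \
    && make \
    && make install \
    && cd .. \
    && rm -rf cmake-3.12.0 cmake-3.12.0.tar.gz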
Then you build your custom amazonlinux-custom image with:
docker build -t mydockerhubuser/amazonlinux-custom:latest .
And push it to Docker Hub (after docker login):
docker push mydockerhubuser/amazonlinux-custom:latest
GitLab CI usage
In your .gitlab-ci.yml you replace the image: amazonlinux:latest part that defines your job image with image: mydockerhubuser/amazonlinux-custom:latest, so you don't have to install all your dependencies on every pipeline run.
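A minimal sketch of what the job definition could then look like (the job name and the script steps are assumptions; replace the script with your actual build commands):

build-lambda:
  image: mydockerhubuser/amazonlinux-custom:latest
  script:
    - cmake .
    - make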
Note
Amazon will often rebuild its amazonlinux:latest image and push it to Docker Hub. When using a custom image based on theirs, you will have to take the following into account:
You will need to rebuild your image often too, to stay up to date with patches etc.
It may be smarter to pin a fixed version, e.g. FROM amazonlinux:2017.09, to avoid major version changes you don't expect.
Related
I am developing Python web services for a local network (the servers are completely offline from the web), and the only way to add files to the servers is through flash drives. Using pip for Python packages or npm for Node packages is a headache and leads to a lot of dependency and build issues. What is the proper way of dealing with such an environment, so that development and deployment become easier?
There are two approaches you can take:
Download all your dependencies locally and ship them to the remote server. This includes all the pip and npm packages. Pay attention to the Python/Node.js/operating system versions and architecture.
Use Docker to create an image which packs everything, then ship the image to the remote server and finally spin up a container based on that image. (Both approaches are sketched below.)
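Both approaches, sketched as shell commands (the file and image names here are just examples to adapt):

# Approach 1: on a machine with internet access, download the packages
pip download -r requirements.txt -d ./wheels
# copy ./wheels to the flash drive, then install from it on the offline server
pip install --no-index --find-links ./wheels -r requirements.txt

# Approach 2: pack everything into a Docker image and ship it as a tarball
docker save mywebservice:latest -o mywebservice.tar
# copy the tarball over, then on the offline server
docker load -i mywebservice.tar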
You can use pypicache to run your own pip server and let it cache your dependencies wherever you have an internet connection (i.e. where you are developing the application).
Then you can copy the whole pypicache folder onto your flash drive, run the server wherever you want, and install the cached packages from it. The nice part is that in environments where you only get a network connection for a limited time, pypicache can download everything your Python applications depend on in advance; each instance can then install its dependencies from the offline pip server by adding a simple switch on the command line. Here is an example:
pip install -i http://localhost:8080/simple somepackage
More Information - pypicache
I'm attempting to use my WSL2 docker containers with VS Code, though I now regret this. I attempted to follow these directions to get everything installed and configured correctly.
After installing Docker Desktop, my previous containers and images are not shown by docker container ls and docker images when run from WSL2. However, there are still many GB of data under /var/lib/docker. Is there some way to attempt to recover this?
How can I create a virtual machine (CentOS 8) using a gitlab-ci.yml file?
No software needed to build the VM should be installed on the Gitlab runner. Binary software should also not be checked into git.
I can create Docker images - for example, one for running Packer - in a pre-build step. Such an image can be cached in Artifactory.
But how should I handle that for the CentOS image?
During the installation using Packer, there's a reboot required...
I have to build and push a Docker image via GitLab CI. I have gone through the official documentation:
https://docs.gitlab.com/ee/ci/docker/using_docker_build.html
I want to adopt the shell method, but my issue is that I already have a working GitLab runner on my server machine. So what is the procedure for this? If I register another runner on the same machine, will it impact the old one?
Thanks in advance.
Assuming that you installed gitlab-runner as a system service and not inside a container, you can easily register another shell runner on your server using the command gitlab-runner register (gitlab-ci-multi-runner register on older installations). It will not affect the existing runner.
This is indirectly confirmed by the advanced configuration documentation, which states that the config.toml of the gitlab-runner service may contain multiple [[runners]] sections.
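For example, registering an additional shell runner could look like this (the URL, token and description are placeholders from your own GitLab instance, and flag names may differ slightly between gitlab-runner versions):

sudo gitlab-runner register \
  --non-interactive \
  --url https://gitlab.example.com/ \
  --registration-token <TOKEN> \
  --executor shell \
  --description "shell-docker-build-runner"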
Note: To allow the shell runner to build docker images, you will need to add the gitlab-runner user to the docker group, e.g.:
sudo gpasswd --add gitlab-runner docker
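After registering, the config.toml will simply list both runners side by side, roughly like this (names and tokens are placeholders):

concurrent = 2

[[runners]]
  name = "docker-runner"
  url = "https://gitlab.example.com/"
  token = "<TOKEN-1>"
  executor = "docker"
  [runners.docker]
    image = "amazonlinux:latest"

[[runners]]
  name = "shell-docker-build-runner"
  url = "https://gitlab.example.com/"
  token = "<TOKEN-2>"
  executor = "shell"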
I'm wondering if there is an up-to-date Docker image for the Ignite web console?
When I run docker pull apacheignite/web-console-standalone, I only get an outdated version that isn't compatible with the current web agent.
Or is the Dockerfile available, so I can build the image myself without starting from scratch?
Maybe there is even a Dockerfile that puts the web agent and the console into one image?
Thanks for any help!
@dontequila,
apacheignite/web-console-standalone will be updated soon.
You can also build the Docker image yourself:
Check out Apache Ignite master from Git: https://git-wip-us.apache.org/repos/asf/ignite
cd modules/web-console/docker/standalone/
./build.sh (you may need sudo).
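Put together, the whole sequence is roughly the following (the clone directory name is whatever git creates, here assumed to be ignite):

git clone https://git-wip-us.apache.org/repos/asf/ignite
cd ignite/modules/web-console/docker/standalone/
sudo ./build.sh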