Kaniko: Build more than one dockerfile in pipeline - gitlab-ci

I have a GitLab repository that contains multiple Dockerfiles. I know that this is not ideal.
I now want to use my GitLab pipeline to create one image per Dockerfile using kaniko and push each one to its corresponding AWS ECR repository. Of course I could define a job for each Dockerfile, but that results in code duplication and runtime overhead. Is it possible to build multiple Dockerfiles at the same time in one job? A possible approach would be to pass an array of Dockerfile paths and iterate over them, running the corresponding kaniko command for each. However, since the kaniko image only provides a BusyBox shell, none of this is particularly nice and easy. Ideas?
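To make the idea concrete, a rough sketch of that iteration approach in a single job might look like the following (the Dockerfile paths, the repository-naming scheme, and the AWS_ACCOUNT_ID/AWS_REGION variables are placeholders; ECR authentication is not shown):

build-images:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  variables:
    # hypothetical list of Dockerfiles, one kaniko build per entry
    DOCKERFILES: "app1/Dockerfile app2/Dockerfile"
  script:
    - |
      for df in $DOCKERFILES; do
        name=$(dirname "$df")   # assumes each image is named after its Dockerfile's directory
        /kaniko/executor \
          --context "${CI_PROJECT_DIR}" \
          --dockerfile "${CI_PROJECT_DIR}/${df}" \
          --destination "${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/${name}:${CI_COMMIT_SHORT_SHA}" \
          --cleanup
      done

The --cleanup flag tells kaniko to reset the container filesystem after each build, which matters when running the executor more than once in the same container; whether this ends up nicer than one job per Dockerfile (or GitLab's parallel:matrix) is debatable.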

Related

Automating a GitHub workflow that involves a 3rd-party repo

I have a GitHub repo myRepo that scans the contents of another repo theirRepo and converts them to JSON files. The details aren't really that important, other than that myRepo uses Node.js and holds theirRepo as a submodule. License-wise this is not a problem.
What I'd like to achieve is that, when theirRepo merges into main, myRepo magically updates and builds the new files. I'd like to use existing infrastructure such as GitHub Actions, Netlify build processes, etc.
How would you approach this?
I don't expect a detailed solution for the magical part, but am rather looking for a few pointers, something that gets me started.
As GitHub Actions (AFAIK) does not currently allow triggering workflows based on changes in other repositories (unless you control the other repository's workflows), one might have to hack around it a little.
File changes in theirRepo
I'm not familiar with Node, but depending on the project's culture, the following files might change during a new release/main branch update:
package-lock.json
CHANGELOG.md (for semantic versioning)
This is a rough approximation; you might also want to identify several files that are likely to change with each merged PR.
Cron-based jobs
Run your job every N hours/minutes or another time interval to check for changes.
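For reference, such a scheduled trigger in the checking workflow could look like this (the six-hour interval is just an example):

on:
  schedule:
    - cron: '0 */6 * * *'   # run every 6 hours (GitHub Actions cron runs on UTC)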
Use caching
Run your action only when files in another repo change, something along these lines:
steps:
  - run: curl <path to file> -o output1
  - run: curl <path to file2> -o output2
  - name: Cache
    uses: actions/cache@v3
    id: cache
    with:
      # cache the downloaded files, keyed by their hash
      path: |
        output1
        output2
      key: ${{ hashFiles('output1', 'output2') }}
  - name: Update repo
    # a cache miss means the files changed since the last run
    if: steps.cache.outputs.cache-hit != 'true'
    run: <do your stuff>

Does gitlab-ci have a way to download a script and store it on the local file system so it could be run?

Does gitlab-ci have a way to download a script and store it on the local file system so it can be run? It looks like others have asked similar questions (see below).
One way to do it would be to use curl (but curl has to exist in the CI runner):
curl -o ./myscript -k https://example.com/myscript.sh
This was from https://stackoverflow.com/a/22800194/3281336.
I have a script that I would like to use in multiple CI pipelines, so I'd like a way to download it to the local file system and run it in the pipeline. NOTE: In my situation I don't have the ability to create a custom runner or Docker image.
If the script were available via Git or an HTTPS URL, what would my alternatives be?
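For concreteness, a minimal sketch of the curl approach inside a job (the URL and file name are placeholders, and the job's image must provide curl) would be something like:

run-shared-script:
  script:
    - curl -fsSL -o myscript.sh https://example.com/myscript.sh
    - chmod +x myscript.sh
    - ./myscript.sh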
Some search results
https://docs.gitlab.com/ee/ci/yaml/includes.html - GitLab supports a way to include files, even from Git repos. This might work; I just haven't read up on how.
How to run a script from file in another project using include in GitLab CI? - Similar, but the answer uses a multi-project pipeline and a trigger, which is really (I think) a different answer.
.gitlab-ci.yml to include multiple shell functions from multiple yml files - Similar, but that question deals with shell functions kept in YAML files, while I'm dealing with a standalone script.
How to include a PowerShell script file in a GitLab CI YAML file - So far this is the closest to my question, and some might consider it the same even though it asks about a PowerShell script. The answer said including a script this way wasn't possible (so maybe it can't be done with the GitLab CI syntax alone).
If it is possible, please let me know how to do this.

How to transfer a value from one build container to another in drone.io CI pipeline

I know I can write it to the mounted host file system which will be shared amongst the multiple build containers. But how can I make use of that file in a drone plugin container like docker-plugin?
Or, is there any other way to pass arbitrary data between build steps? Maybe through environment variables?
This is drone 0.5
It is only possible to share information between build steps via the filesystem. Environment variables are not an option, because there is no clean way to share environment variables between sibling Unix processes.
It is the responsibility of the plugin to decide how it wants to accept configuration parameters. Usually parameters are passed to the plugin as environment variables defined in the YAML configuration file. Some plugins, notably the docker plugin [1], can also read parameters from a file. For example, the docker plugin will read Docker tags from a .tags file in the root of your repository, which can be generated on the fly:
pipeline:
  build:
    image: golang
    commands:
      - go build
      - echo ${DRONE_COMMIT:0:8} > .tags
  publish:
    image: plugins/docker
    repo: octocat/hello-world
Not all plugins provide the option to read parameters from a file; it is up to the plugin author to include this capability. If the plugin does not have it, or it is not something the plugin author plans to implement, you can always fork the plugin and adjust it to meet your exact needs.
[1] https://github.com/drone-plugins/drone-docker

How to restrict runners to a specific branch and lock the .gitlab-ci.yml from changes?

Right now, anyone who creates a branch in my project and adds a .gitlab-ci.yml file to it can execute commands on my server using the runner. How can I make it so that only Masters or Owners can upload CI config files and make changes to them?
I'm using https://gitlab.com/gitlab-org/gitlab-ci-multi-runner running on bash.
The GitLab runner wasn't really designed for this scenario, so you can't do this directly. What you could do instead is create a new project containing just your .gitlab-ci.yml file and configure it to pull the original repository. From there you can do all the other things you want to do with your repository.
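As an illustration of that approach, the wrapper project's .gitlab-ci.yml could look something like this (the project URL and script name are placeholders; depending on your GitLab version and permissions you may need a deploy token or personal access token instead of the job token to clone):

deploy:
  script:
    - git clone https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.example.com/group/original-project.git
    - cd original-project
    - ./deploy.sh   # whatever you previously ran from the original repository's CI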

Singularity: What is the difference between an image, a container, and an instance?

I am starting to learn Singularity for reproducible analysis of scientific pipelines. A colleague explained that an image was used to instantiate a container. However, in reading through the documentation and tutorials, the term instance is also used and the usage of image and container seems somewhat interchangeable. So, I am not sure I precisely understand the difference between an image, container, and instance. I do get that a recipe is a text file for building one of these (I think an image?).
For example, on this page it explains:
Now we can build the definition file into an image! Simply run build and the image will be ready to go:
$ sudo singularity build url-to-pdf-api.img Singularity
Okay, so this uses the recipe Singularity to build an image, with the intuitive extension of .img. However, the help description of the build command states:
$ singularity help build
USAGE: singularity [...] build [build options...]

The build command compiles a container per a recipe (definition file) or based on a URI, location, or archive.
So this seems to indicate we are building a container?
Then, there are image and instance sub-commands.
Are all these terms used interchangeably? It seems sometimes they are and sometimes there is a difference between them.
A container is the general concept of a sandboxed run environment and can be used as a general term to refer to either Docker or Singularity images. However, it is sometimes also used to refer to the specific files being generated. This is probably not ideal, as it can clearly cause confusion for new users.
image is generally used to refer to the actual files created by singularity build ...
instance refers to a specific way of running Singularity images. Normally, if you singularity run some_image.sif or singularity exec some_image.sif some_command, you can't easily access its environment while it's running. However, if you instead run singularity instance start some_image.sif some_instance1, it creates a persistent service that you can access like a Docker container. The Singularity service/instance documentation has some good examples of how instances are used differently than the basic exec and run commands.
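For illustration, the basic instance workflow looks like this (the image and instance names are placeholders):

$ singularity instance start some_image.sif web1    # start a named, persistent instance
$ singularity instance list                         # list running instances
$ singularity exec instance://web1 ps aux           # run a command inside the running instance
$ singularity instance stop web1                    # shut it down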