How to transfer a value from one build container to another in drone.io CI pipeline

I know I can write it to the mounted host file system which will be shared amongst the multiple build containers. But how can I make use of that file in a drone plugin container like docker-plugin?
Or, is there any other way to pass arbitrary data between build steps? Maybe through environment variables?
This is drone 0.5

It is only possible to share information between build steps via the filesystem. Environment variables are not an option because there is no clean way to share environment variables between sibling unix processes.
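As a minimal sketch of that pattern, assuming a hypothetical version.txt file written into the shared workspace by one step and read back by the next:
pipeline:
  prepare:
    image: alpine
    commands:
      # write the value to a file in the shared workspace
      - echo "1.2.3" > version.txt
  use:
    image: alpine
    commands:
      # a later step reads the value back from the same workspace
      - cat version.txt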
It is the responsibility of the plugin to decide how it wants to accept configuration parameters. Usually parameters are passed to the plugin as environment variables, defined in the yaml configuration file. Some plugins, notably the docker plugin [1], have the ability to read parameters from a file. For example, the docker plugin will read docker tags from a .tags file in the root of your repository, which can be generated on the fly.
pipeline:
  build:
    image: golang
    commands:
      - go build
      - echo ${DRONE_COMMIT:0:8} > .tags
  publish:
    image: plugins/docker
    repo: octocat/hello-world
Not all plugins provide the option to read parameters from a file. It is up to the plugin author to include this capability. If the plugin does not have this capability, or it is not something the plugin author is planning to implement, you can always fork and adjust the plugin to meet your exact needs.
[1] https://github.com/drone-plugins/drone-docker

Related

Run Time Arguments in PCF

In order to run the application locally, I need to provide some VM arguments (basically the path of the file where it is located). Similarly, in PCF I also have to provide those arguments.
Currently I am keeping them in the application.yml file like below.
jaas:
  conf: /home/vcap/app/BOOT-INF/classes/nonprod_jaas.conf
krb5:
  conf: /home/vcap/app/BOOT-INF/classes/krb5.conf
trustore:
  conf: /home/vcap/app/BOOT-INF/classes/kafka_client_truststore.jks
When I deploy the application in PCF, will these files be read from that location?
Basically I want to know whether or not this is the correct way to provide the arguments in PCF.
How can I check whether the file is present at that location, /home/vcap/app/BOOT-INF/classes/?
You need to ssh into the container to check the location of the file.
cf ssh appname
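For example, assuming the application is named myapp (a hypothetical name), you could open a shell in the container and list the directory:
# open a shell in the app container (app name is hypothetical)
cf ssh myapp
# inside the container, check that the files are there
ls -l /home/vcap/app/BOOT-INF/classes/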
In Spring, @Value enables the use of the classpath: prefix to resolve files on the classpath (see https://www.baeldung.com/spring-classpath-file-access). It means you need to set this programmatically, not via the variables in yml. Then you don't need to provide the path the way you are doing.
Also, classpath: is a Spring-specific convention; the JVM doesn't understand it, which means you cannot use it directly in the application.yml file. If you need to set it in yml or as an environment variable, you need to give it a full or relative path. On PCF, you can use /app or /home/vcap/app (the former is a symlink to the latter) as the path to the root of your application.

Kaniko: Build more than one dockerfile in pipeline

I have a GitLab repository that contains multiple Dockerfiles. I know that this is not ideal.
I now want to use my GitLab pipeline to create one image per Dockerfile using kaniko and push each one to its corresponding AWS ECR. Of course I can define a job for each Dockerfile, but this results in duplicated code and runtime overhead. Is it possible to build multiple Dockerfiles at the same time in one job? A possible approach would be to pass a string array with the Dockerfiles and then iterate, running the corresponding kaniko command for each. However, since kaniko only provides a busybox shell, all of this is not really nice and easy. Ideas?
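One minimal sketch of that iteration idea in .gitlab-ci.yml, assuming hypothetical Dockerfile names (Dockerfile.api, Dockerfile.worker) and a hypothetical $AWS_ECR_URL variable; kaniko's --cleanup flag resets the filesystem so the executor can run more than once in the same container:
build-images:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    # loop over the Dockerfiles in busybox sh; names and registry are hypothetical
    - |
      for df in Dockerfile.api Dockerfile.worker; do
        /kaniko/executor \
          --context "$CI_PROJECT_DIR" \
          --dockerfile "$CI_PROJECT_DIR/$df" \
          --destination "$AWS_ECR_URL/${df#Dockerfile.}:$CI_COMMIT_SHORT_SHA" \
          --cleanup
      done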

How to export a container in Singularity

I would like to move an already built container from one machine to another. What is the proper way to migrate the container from one environment to another?
I can find here the image.export command, but this is for an older version of the software. I am using version 3.5.2.
The container I wish to export is a --sandbox container. Is something like that possible?
Singularity allows you to easily convert between a sandbox and a production build.
For example:
singularity build lolcow.sif docker://godlovedc/lolcow # pulls and builds a container
singularity build --sandbox lolcow_sandbox/ lolcow.sif # converts from container to a writable sandbox
singularity build lolcow2 lolcow_sandbox/ # converts from sandbox to container
Once you have a production SIF or SIMG, you can easily transfer the file and convert as necessary.
singularity build generates a file that you can copy between computers just like any other file. The only thing it needs is the singularity binary installed on the new host server.
The difference when using --sandbox is that you get a modifiable directory instead of a single file. It can still be run elsewhere, but you may want to tar it up first so you're only moving a single file. Then you can untar it and run it as normal on the new host.
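A minimal sketch of that, assuming the lolcow_sandbox/ directory from above and a hypothetical destination host:
tar czf lolcow_sandbox.tar.gz lolcow_sandbox/ # pack the sandbox directory into a single archive
scp lolcow_sandbox.tar.gz user@otherhost: # copy it to the new machine (host and user are hypothetical)
tar xzf lolcow_sandbox.tar.gz # on the new host: unpack
singularity run lolcow_sandbox/ # and run as before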

GitLab CI use untracked files in repository

I'm evaluating GitLab CI/CD at the moment and trying to get the pipelines working, however I am having a simple(?) problem.
I'm trying to automate the build process by automatically updating ISO images. However, these ISO images are not tracked by the repository and are ignored in a .gitignore file. This leads to the issue that when I try to run make, it can't find the ISO images...
I'm just using a simple .gitlab-ci.yml file:
stages:
  - build

build:
  stage: build
  script:
    - make
And when I try running this in the GitLab CI interface, it clones the repository (without the ISO images) and then fails, as there is no rule for that make target (because the ISO images are missing). I have tried moving the files into the "build" directory which GitLab creates, however that gives an error saying it has failed to remove ...
How do I use the local repository rather than having GitLab clone the repository to a "build" location?
You can't use GitLab CI with the files that are on your computer, or at least you shouldn't. You could do it with an SSH executor that logs in to a machine that stores the ISO files (and it should be a server rather than your development machine), or you could have the Docker executor pull them from FTP or an object store. For testing purposes you can run the builds locally on your machine.
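A minimal sketch of the second option, assuming the ISO is published at a hypothetical URL that the Docker executor can reach:
build:
  stage: build
  script:
    # fetch the untracked ISO before building (URL and filename are hypothetical)
    - wget -q https://storage.example.com/isos/base.iso -O base.iso
    - make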

Using secret api keys on travis-ci

I'd like to use travis-ci for one of my projects.
The project is an API wrapper, so many of the tests rely on the use of secret API keys. To test locally, I just store them as environment variables. What's a safe way to use those keys on Travis?
Travis has a feature to encrypt environment variables ("Encrypting environment variables"). This can be used to protect your secret API keys. I've successfully used this for my Heroku API key.
All you have to do is install the travis gem, encrypt the string you want, and add the encrypted string to your .travis.yml. The encryption is only valid for one repository. The travis command fetches your repository's public key and encrypts the value with it; Travis CI then decrypts the string with its private key during the build.
gem install travis
travis encrypt MY_SECRET_ENV=super_secret -r my_username/my_repo
This gives you the following output:
Please add the following to your .travis.yml file:
secure: "OrEeqU0z6GJdC6Sx/XI7AMiQ8NM9GwPpZkVDq6cBHcD6OlSppkSwm6JvopTR\newLDTdtbk/dxKurUzwTeRbplIEe9DiyVDCzEiJGfgfq7woh+GRo+q6+UIWLE\n3nowpI9AzXt7iBhoKhV9lJ1MROrnn4DnlKxAEUlHTDi4Wk8Ei/g="
According to the Travis CI documentation:
If you have both the Heroku and Travis CI command line clients installed, you can get your key, encrypt it and add it to your .travis.yml by running the following command from your project directory:
travis encrypt $(heroku auth:token) --add deploy.api_key
Refer to the following tutorial to install the Heroku client according to your OS.
You can also define secret variables in repository settings:
Variables defined in repository settings are the same for all builds, and when you restart an old build, it uses the latest values. These variables are not automatically available to forks.
Define variables in the Repository Settings that:
- differ per repository.
- contain sensitive data, such as third-party credentials.
To define variables in Repository Settings, make sure you’re logged in, navigate to the repository in question, choose “Settings” from the cog menu, and click on “Add new variable” in the “Environment Variables” section.
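If you prefer the command line, the travis gem can set such a repository variable as well (a sketch; the variable name, value, and repository slug are hypothetical):
travis env set MY_API_KEY super_secret -r my_username/my_repo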
Use a different set of API keys and do it the same way. Your Travis box gets set up for your build run and then completely torn down again after your build has finished. You have root access to your box during the build, so you can do whatever you want with it.