Bamboo: file path passed as string

I am setting up a pipeline on Bamboo to manage a Prometheus repo. I was previously using Drone.
On Drone, a Docker container would spawn and essentially run
docker run --volume $PWD:/data --workdir /data --rm --entrypoint=promtool prom/prometheus:v2.15.2 check rules files/thanos-rules/*.rules
On Bamboo, according to the logs, it seems to be running
docker run --volume $PWD:/data --workdir /data --rm --entrypoint=promtool prom/prometheus:v2.15.2 'check' 'rules' 'files/thanos-rules/*.rules'
where each argument is passed as a quoted string literal. Since no shell expands the glob, the wildcard pattern is passed through verbatim and breaks. How can I have the argument expanded as file paths instead of being passed as a literal string?

Setting the entrypoint to /bin/sh and the command to -c "promtool check rules files/thanos-rules/*.rules" works, because the shell inside the container expands the glob before promtool sees it.
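For reference, the resulting invocation would look something like this (a sketch based on the command in the question; the single quotes keep the host shell from expanding the glob, so the container's shell does it):

docker run --volume $PWD:/data --workdir /data --rm --entrypoint=/bin/sh prom/prometheus:v2.15.2 -c 'promtool check rules files/thanos-rules/*.rules'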

Related

gitlab-runner doesn't run ENTRYPOINT scripts in Dockerfile

I use gitlab-ci in my project. I have created an image and pushed it to the GitLab container registry.
To create the image and register it to the GitLab container registry, I have created a Dockerfile.
Dockerfile:
...
ENTRYPOINT [ "scripts/entry-gitlab-ci.sh" ]
CMD "app"
...
entry-gitlab-ci.sh:
#!/bin/bash
set -e
if [[ "$@" == 'app' ]]; then
  echo "Initialize image"
  rake db:drop
  rake db:create
  rake db:migrate
fi
exec "$@"
The image is created successfully, but when the gitlab-runner pulls and runs the created image, it doesn't run the entry-gitlab-ci.sh script.
What is the problem?
Image entrypoints definitely run in GitLab CI with the docker executor, both for services and for jobs, so long as this has not been overwritten by the job configuration.
There are two key problems if you're trying to use this image as your job image:
1. GitLab overrides the command for the image, so your if condition will never match here.
2. Your entrypoint should be prepared to start a shell. So, you should use something like exec /bin/bash, not exec "$@", for a job image.
Per the documentation:
The runner expects that the image has no entrypoint or that the entrypoint is prepared to start a shell command.
So your entrypoint might look something like this:
#!/usr/bin/env bash
# gitlab-entrypoint-script
echo "doing something before running commands"
if [[ -n "$CI" ]]; then
  echo "this block will only execute in a CI environment"
  echo "now running script commands"
  # this is how GitLab expects your entrypoint to end, if provided
  # will execute scripts from stdin
  exec /bin/bash
else
  echo "Not in CI. Running the image normally"
  exec "$@"
fi
This assumes you are using the docker executor and the runner is using a version of Docker >= 17.06.
You can also explicitly set the entrypoint for job images and service images in the job's image: configuration. This may be useful, for example, if your image normally has an entrypoint and you don't want to build it with GitLab CI in mind, or if you want to use a public image that has an incompatible entrypoint.
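For example, a minimal sketch that neutralizes the image's own entrypoint in the job config (your-image-name is a placeholder):

build:
  image:
    name: your-image-name
    entrypoint: [""]
  script:
    - echo "runs without the image's own entrypoint"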
From my experience and struggles, I couldn't get GitLab to run the entrypoint's exec automatically. The same goes for trying to get a login shell working easily to pick up environment variables. Instead, you have to run the script manually from the CI job:
# .gitlab-ci.yml
build:
  image: your-image-name
  stage: build
  script:
    - /bin/bash ./scripts/entry-gitlab-ci.sh

GitLab CI/CD: how to enter into a container for testing, i.e. getting an interactive shell

In Docker we can enter a container and get an interactive shell with
docker-compose exec containername /bin/bash
Similarly, can we do this in the script in GitLab CI/CD, so that it provides an interactive shell?
E.g.:
build:
  stage: build
  script:
    - pwd; ls -al
    # HERE I WANT TO HAVE AN INTERACTIVE SHELL SO THAT I CAN CHECK A FEW THINGS
I think we need to take a small detour here and explain how jobs work in GitLab CI.
Each job is an encapsulated Docker container. The container only executes what you want to be executed within the script directive. By default, jobs on shared runners use a ruby container image.
If you want to check what is available within your image, or you want to try things out locally, you can do so by running a container with this image locally and mounting your project folder into it.
docker run --rm -v "$(pwd):/build/project" -w "/build/project" -it <the job image> /bin/bash # or /bin/sh or whatever shell is available in the image.
# -v mounts the current directory into /build/project in your container
# -w changes the working directory to the mounting point
# /bin/bash starts the shell, it might be that there are others within the image
If you want to use a different Docker image, let's say because you are running some other build tool, you can specify this with the image directive, like:
build:
  image: maven:latest
  script:
    - echo "some output"
The functionality provided by the image is then available within your job, as the job runs inside a container of that image.
You can even use tools like https://github.com/firecow/gitlab-ci-local to verify this locally. But in the end those are just Docker images, and you can easily recreate the flow on your own.
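As a rough sketch of that local verification (assuming gitlab-ci-local is installed globally via npm and run from the repository root, next to .gitlab-ci.yml):

# install the CLI
npm install -g gitlab-ci-local
# list the jobs defined in .gitlab-ci.yml
gitlab-ci-local --list
# run a single job locally, e.g. the build job from above
gitlab-ci-local build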

Gitlab CI job failed: ERROR the input device is not a TTY

I've registered a GitLab Runner with the shell executor on Ubuntu 18.04, and also set up a Docker container with the command below:
docker run -it --gpus '"device=0"' --net=host -v /home/autotest/Desktop/ai_platform:/app --name=ai_platform_system nvcr.io/nvidia/pytorch:20.10-py3 "bash"
Then I tried to execute the following command from the gitlab-ci.yml in GitLab CI, but I got the error "the input device is not a TTY".
docker attach ai_platform_system
Any clues for this issue, other than using docker exec? I know docker exec works in the GitLab CI environment, but it creates a new session in the container, which is not desirable for me. Thanks!
According to this answer (for Jenkins, but it's the same problem), you need to remove the -it flag so that no TTY is requested:
docker run --gpus '"device=0"' --net=host -v /home/autotest/Desktop/ai_platform:/app --name=ai_platform_system nvcr.io/nvidia/pytorch:20.10-py3 "bash"
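If you still need to feed commands to the container's shell from the CI job, one common pattern (a sketch, not from the referenced answer) is to keep stdin open with -i while omitting -t, and pipe the commands in:

# -i keeps stdin attached; no pseudo-TTY is allocated, so CI won't complain
echo "nvidia-smi" | docker run -i --rm --gpus '"device=0"' nvcr.io/nvidia/pytorch:20.10-py3 bash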

Add a link to docker run

I am making a test file. I need to have a Docker image and run it like this:
docker run www.google.com
Every time that URL changes, I need to pass it into a file inside the container. Is that possible?
Sure. You need a custom docker image but this is definitely possible.
Let's say you want to execute the command "ping -c 3" and pass it the parameter you send in the command line.
You can build a custom image with the following Dockerfile:
FROM alpine:latest
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT /entrypoint.sh
The entrypoint.sh file contains the following:
#!/bin/sh
ping -c 3 "$WEBSITE"
Then, you have to build your image by running:
docker build -t pinger .
Now, you can run your image with this command:
docker run --rm -e WEBSITE=www.google.com pinger
By changing the value of the WEBSITE env variable in the last step, you can get what you requested.
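For instance, pointing the pinger at a different site only requires changing the variable (same image as built above; www.example.com is just an illustration):

docker run --rm -e WEBSITE=www.example.com pinger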
I just solved it by adding this:
--env="url=test"
to the docker run command, but I guess your way of doing it is better. Thank you!

Mounting user SSH key in container

I am building a script that will mount some local folders into the container, one of which is the user's ~/.ssh folder. That way, users can still utilize their SSH key for Git commits.
docker run -ti -v $HOME/.ssh/:$HOME/.ssh repo:tag
But that does not mount the SSH folder into the container. Am I doing it incorrectly?
The typical syntax is (from Mount a host directory as a data volume):
docker run -d -P --name web -v /src/webapp:/opt/webapp training/webapp python app.py
(you can skip the command part, here 'python app.py', if your image defines an entrypoint and a default command)
(-d does not apply in your case; it did in the case of that Python web server)
Try:
docker run -ti -v $HOME/.ssh:$HOME/.ssh repo:tag
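To verify the keys are actually visible inside the container, a quick check might look like this (assuming the image has ls available; appending :ro mounts the keys read-only, a common precaution):

docker run -ti -v $HOME/.ssh:$HOME/.ssh:ro repo:tag ls -la $HOME/.ssh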