Docker for Windows passing ENV variable to CMD in Dockerfile doesn't work - docker-for-windows

I have the following instructions in my Dockerfile:
ENV DB_CONN_STRING="Data Source=DbServer;Initial Catalog=Db;User ID=sa;Password=p#ssw0rd"
ENTRYPOINT []
CMD ["powershell", "c:\\RunAll.ps1 -NewConnString", "$DB_CONN_STRING"]
When I run this command
docker run --rm -d -e DB_CONN_STRING="Test" test
DB_CONN_STRING is always empty inside RunAll.ps1.
How can I pass ENV to CMD?
When I use CMD without parameters
CMD ["powershell", "c:\\RunAll.ps1"]
everything works correctly.
RunAll.ps1 code:
param(
[string]$NewConnString = "Data Source=DbServer;Initial Catalog=db;User ID=sa;Password=p#ssw0rd"
)
New-Item "C:\start_log.txt" -type file -force -value $NewConnString
.\ChangeConnString.ps1 -NewConnString $NewConnString
New-Item "C:\end_log.txt" -type file -force -value $NewConnString
# Run IIS container entry point
.\ServiceMonitor.exe w3svc
I tried several approaches: both the exec and shell command forms, and the $DB_CONN_STRING, ${DB_CONN_STRING} and $(DB_CONN_STRING) reference styles.
I tried the suggestions from these posts:
Use docker run command to pass arguments to CMD in Dockerfile
How do I use Docker environment variable in ENTRYPOINT array?
Nothing works for me.
Here is an example from Docker log:
[16:06:34.505][WindowsDockerDaemon][Info ] time="2017-03-27T16:06:34.505376100+02:00" level=debug msg="HCSShim::Container::CreateProcess id=0909937ce1130047b124acd7a5eb57664e05c6af7cbb446aa7c8015a44393232 config={\"ApplicationName\":\"\",\"CommandLine\":\"powershell c:\\\\RunAll.ps1 -NewConnString $DB_CONN_STRING\",\"User\":\"\",\"WorkingDirectory\":\"C:\\\\\",\"Environment\":{\"DB_CONN_STRING\":\"Test\"},\"EmulateConsole\":false,\"CreateStdInPipe\":true,\"CreateStdOutPipe\":true,\"CreateStdErrPipe\":true,\"ConsoleSize\":[0,0]}
Docker version 17.03.0-ce, build 60ccb22

The correct syntax for passing an ENV value to the (in this case) ENTRYPOINT command is
ENTRYPOINT powershell c:\RunAll.ps1 -NewConnString %DB_CONN_STRING%
So you need to use the shell form and the Windows cmd.exe variable format (%VAR%). The exec form of ENTRYPOINT did not work for me.
And to pass a long string with spaces as a parameter, you need to nest two different kinds of quotes, for example
docker.exe run -d --rm -e DB_CONN_STRING="'Data Source=DB2;Initial Catalog=Db;User ID=sa;Password=p#ssw0rd'"
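Putting both pieces together, a minimal sketch of the relevant Dockerfile lines (the base image is omitted because the question does not show it; the image name test and the RunAll.ps1 path come from the question):
# ENV default as in the question
ENV DB_CONN_STRING="Data Source=DbServer;Initial Catalog=Db;User ID=sa;Password=p#ssw0rd"
# Shell form: the default Windows shell (cmd /S /C) expands %DB_CONN_STRING% at container
# start, so a value supplied with docker run -e replaces the ENV default.
ENTRYPOINT powershell c:\RunAll.ps1 -NewConnString %DB_CONN_STRING%
And the corresponding run command, with the nested quotes shown above:
docker run -d --rm -e DB_CONN_STRING="'Data Source=DB2;Initial Catalog=Db;User ID=sa;Password=p#ssw0rd'" test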

Related

gitlab-runner doesn't run ENTRYPOINT scripts in Dockerfile

I use GitLab CI in my project. I have created an image and pushed it to the GitLab container registry.
To build the image and register it in the GitLab container registry, I created a Dockerfile.
Dockerfile:
...
ENTRYPOINT [ "scripts/entry-gitlab-ci.sh" ]
CMD "app"
...
entry-gitlab-ci.sh:
#!/bin/bash
set -e
if [[ "$1" == "app" ]]; then
  echo "Initialize image"
  rake db:drop
  rake db:create
  rake db:migrate
fi
exec "$@"
The image is created successfully, but when the gitlab-runner pulls and runs the created image, it doesn't run the entry-gitlab-ci script.
What is the problem?
Image entrypoints definitely run in GitLab CI with the docker executor, both for services and for jobs, so long as this has not been overwritten by the job configuration.
There are two key problems if you're trying to use this image as your job image:
GitLab overrides the command for the image, so your if condition will never match here.
Your entrypoint should be prepared to run a shell script. So you should use something like exec /bin/bash, not exec "$@", for a job image.
Per the documentation:
The runner expects that the image has no entrypoint or that the entrypoint is prepared to start a shell command.
So your entrypoint might look something like this:
#!/usr/bin/env bash
# gitlab-entrypoint-script
echo "doing something before running commands"
if [[ -n "$CI" ]]; then
echo "this block will only execute in a CI environment"
echo "now running script commands"
# this is how GitLab expects your entrypoint to end, if provided
# will execute scripts from stdin
exec /bin/bash
else
echo "Not in CI. Running the image normally"
exec "$#"
fi
This assumes you are using a docker executor and the runner is using a version of docker >= 17.06.
You can also explicitly set the entrypoint for job images and service images in the job configuration under image:. This may be useful, for example, if your image normally has an entrypoint and you don't want to build the image with GitLab CI in mind, or if you want to use a public image that has an incompatible entrypoint.
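For example, a sketch of overriding the entrypoint in the job configuration (the image name is a placeholder; entrypoint: [""] clears the image's own entrypoint on docker >= 17.06):
build:
  image:
    name: your-image-name
    entrypoint: [""]
  script:
    - echo "the job script now runs without the image's entrypoint"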
In my experience and struggles, I couldn't get GitLab to run the entrypoint's exec automatically, nor could I easily get a login shell working to pick up environment variables. Instead, you have to run the script manually from the CI:
# .gitlab-ci.yml
build:
  image: your-image-name
  stage: build
  script:
    - /bin/bash ./scripts/entry-gitlab-ci.sh

bamboo file path passed as string

I am setting up a pipeline on Bamboo to manage a Prometheus repo. I was previously using Drone.
On Drone, a Docker container would spawn and essentially run
docker run --volume $PWD:/data --workdir /data --rm --entrypoint=promtool prom/prometheus:v2.15.2 check rules files/thanos-rules/*.rules
On Bamboo, according to the logs, it seems to be running
docker run --volume $PWD:/data --workdir /data --rm --entrypoint=promtool prom/prometheus:v2.15.2 'check' 'rules' 'files/thanos-rules/*.rules'
where each argument is passed as a quoted string. This prevents the glob from being expanded, so the pattern is treated literally. How can I have the argument passed as a file path instead of a literal string?
Setting the entrypoint to /bin/sh and then the command to -c "promtool check rules files/thanos-rules/*.rules" works.
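Expressed as a plain docker run, that working setup corresponds to something like this (a sketch based on the command above; the glob is now expanded by /bin/sh inside the container):
docker run --volume $PWD:/data --workdir /data --rm --entrypoint=/bin/sh prom/prometheus:v2.15.2 -c 'promtool check rules files/thanos-rules/*.rules'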

wget authentication fails in docker-compose build but succeeds in docker build Dockerfile

I have a docker-compose.yml file which builds a node (service) from a Dockerfile. This Dockerfile has a wget command in a RUN instruction which needs authentication. The problem is that the authentication step doesn't succeed. I used echo to check the command before execution and it was correct; it works fine in the shell, but I still get Username/Password Authentication Failed.
This is the command:
wget --user $USER --password $PASS $URL
Any idea what generates this?
EDIT 1:
I have no problem executing the above command with docker build -t myimagename:myimagetag .
I've solved it. The issue was special characters in $PASS. I had to escape those characters with a \ when passing them to docker build through the shell. I had escaped the characters in my .env file too, but it turns out that the values in the .env file are passed raw (unescaped) to the Dockerfile referenced in docker-compose.yml.
Removing the \ escaping from the .env file solved everything.
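To illustrate the difference (hypothetical values; the --build-arg flag is an assumption, since the question does not show how the credentials reach the build):
# building directly: the shell interprets special characters, so quote or escape them
docker build -t myimagename:myimagetag --build-arg PASS='p$ssw0rd' .
# .env file read by docker-compose: values are passed on raw, so write them unescaped
PASS=p$ssw0rd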

Add a link to docker run

I am making a test file. I need to have a docker image and run it like this:
docker run www.google.com
Every time that URL changes, I need to pass it into a file inside the Docker container. Is that possible?
Sure. You need a custom docker image but this is definitely possible.
Let's say you want to execute the command "ping -c 3" and pass it the parameter you send in the command line.
You can build a custom image with the following Dockerfile:
FROM alpine:latest
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT /entrypoint.sh
The entrypoint.sh file contains the following:
#!/bin/sh
ping -c 3 "$WEBSITE"
Then, you have to build your image by running:
docker build -t pinger .
Now, you can run your image with this command:
docker run --rm -e WEBSITE=www.google.com pinger
By changing the value of the WEBSITE env variable in the last step you can get what you requested.
I just solved it by adding this:
--env="url=test"
to the docker run command, but I guess your way of doing it is better.
Thank you
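If you use the --env="url=..." form from that comment instead, the entrypoint script would read that variable; a sketch (the variable name url comes from the comment above):
#!/bin/sh
# reads the URL supplied with --env="url=..."
ping -c 3 "$url"
Run with, for example:
docker run --rm --env="url=www.google.com" pinger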

Starting a service inside of a Dockerfile

I'm building a Docker container, and in this container I am installing the Apache service. Is it possible to automatically start the Apache service at some point? systemctl start httpd does not work inside the Dockerfile.
Basically, I want the apache service to be started when the docker container is started.
FROM centos:7
MAINTAINER me <me@me.com>
RUN yum update -y && yum install -y httpd php
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
EXPOSE 80
EXPOSE 443
CMD ["/usr/sbin/init"]
Try using CMD ["/usr/sbin/httpd", "-DFOREGROUND"].
You can also run:
docker run -d <image name> /usr/sbin/httpd -DFOREGROUND
According to the Docker reference (Entrypoint reference), in the scenario you describe, you would use ENTRYPOINT, as you want your web server to "immutably" start. CMD is for commands or command-line options that are likely to change or be overwritten:
Command line arguments to docker run <image> will be appended after all elements in an exec form ENTRYPOINT, and will override all elements specified using CMD. This allows arguments to be passed to the entry point, i.e., docker run <image> -d will pass the -d argument to the entry point.
If you must override an ENTRYPOINT, e.g. for testing/diagnostics, use the specific --entrypoint option.
Further:
You can use the exec form of ENTRYPOINT to set fairly stable default commands and arguments and then use either form of CMD to set additional defaults that are more likely to be changed.
So, ENTRYPOINT for the fixed services/application part, CMD for overrideable commands or options.
Using both ENTRYPOINT and CMD allows you to set a "fixed" commands part (including options) and a "variable" part. Like so:
FROM ubuntu
ENTRYPOINT ["top", "-b"]
CMD ["-c"]
Which means, in your case you may consider to have:
ENTRYPOINT ["/usr/sbin/httpd"]
CMD ["-DFOREGROUND"]
Which allows you to do:
docker run -d <image name>
when you want to run your web server in the foreground, but
docker run -d <image name> -DBACKGROUND
if you want that same server to run with the -DBACKGROUND option overriding only the -DFOREGROUND part.
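Applied to the question's image, a minimal sketch (the systemd-related setup from the original Dockerfile is no longer needed once httpd runs directly in the foreground):
FROM centos:7
RUN yum update -y && yum install -y httpd php
EXPOSE 80
EXPOSE 443
# fixed part: always start Apache
ENTRYPOINT ["/usr/sbin/httpd"]
# default option, can be overridden at docker run time
CMD ["-DFOREGROUND"]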