Error while running docker container - ssh

I am running a Docker container using the following command.
docker run -it -p 8080:8080 -p 29418:29418 --rm \
-e AUTH_TYPE='DEVELOPMENT_BECOME_ANY_ACCOUNT' \
-v /home/gerrit-site:/home/gerrit/site \
-v /home/nidhi/.ssh/id_rsa.pub:/root/.ssh/id_admin_rsa.pub \
-v /home/nidhi/.ssh/id_rsa:/root/.ssh/id_admin_rsa \
-e GERRIT_ADMIN_USER='admin' \
-e GERRIT_ADMIN_EMAIL='admin@fabric8.io' \
-e GERRIT_ADMIN_FULLNAME='Administrator' \
-e GERRIT_ADMIN_PWD='mysecret' \
-e GERRIT_ADMIN_PRIVATE_KEY='/home/gerrit/ssh-keys/id_admin_rsa' \
-e GERRIT_PUBLIC_KEYS_PATH='/home/gerrit/ssh-keys' \
-v /home/nidhi/.ssh:/home/gerrit/ssh-keys \
--name gerrit admin_gerrit
I know the command is right because I have used it before and it worked perfectly fine. But now, when I run this command, I get the following error:
Error response from daemon: Cannot start container 2c9514c3b0d953344e66525d083c7ec3921cb9cde2185f43ec3bec2579597485: stat /home/nidhi/.ssh/id_rsa: permission denied
I checked the permissions on the SSH public and private keys. They are 700 and owned by nidhi. Can someone please point out what my error is?

When docker runs, the uid inside your container will likely not match the uid on the host. So a host volume containing files with 700 permissions will not be readable by the uid inside the container. Three options come to mind:
1. To keep the 700 permissions and the same image, chown the files on the host to match the uid inside the container (a sketch follows below).
2. Use a named volume instead of a host volume, add your credentials to that named volume, and then set permissions inside it to match the containers that will use the volume.
3. Use a different image that has been rebuilt to change the uid to match your own on the host.
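For the first two options, a minimal sketch (the uid 1000 and the alpine helper image are assumptions; check which uid your image actually runs as):
# find the uid the container process runs as
docker run --rm --entrypoint id admin_gerrit -u
# option 1: chown the host files to that uid (note: this changes who owns the key on the host)
sudo chown 1000 /home/nidhi/.ssh/id_rsa /home/nidhi/.ssh/id_rsa.pub
# option 2: copy the keys into a named volume and fix ownership there
docker volume create gerrit-keys
docker run --rm -v gerrit-keys:/keys -v /home/nidhi/.ssh:/src:ro alpine \
  sh -c 'cp /src/id_rsa /src/id_rsa.pub /keys && chown -R 1000 /keys'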

How to make tensorflow-serving example work

I am trying out the TensorFlow Serving example from the tutorial page. At the third step,
# Start TensorFlow Serving container and open the REST API port
docker run -t --rm -p 8501:8501 \
-v "$TESTDATA/saved_model_half_plus_two_cpu:/models/half_plus_two" \
-e MODEL_NAME=half_plus_two \
tensorflow/serving &
I get the following error message
2020-07-19 11:54:52.858203: E tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:362] FileSystemStoragePathSource encountered a filesystem access error: /models/half_plus_two; Permission denied
This is continuously repeated. I have installed the demo model as mentioned in the tutorial.
git clone https://github.com/tensorflow/serving
TESTDATA="$(pwd)/serving/tensorflow_serving/servables/tensorflow/testdata"
Can someone please help with what I am missing? I am just starting off on the serving part.
Thanks,
Krishnan
The problem could be with your -v parameter where you are binding the path.
Try this instead (change the source parameter; --mount requires an absolute path):
docker run -p 8501:8501 --mount type=bind,\
source=/path/to/yourmodels/,\
target=/models/half_plus_two/1 \
-e MODEL_NAME=half_plus_two -t tensorflow/serving
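Once the container is up, a quick way to verify the model is being served is the REST call from the same tutorial (half_plus_two returns x/2 + 2 for each input):
curl -d '{"instances": [1.0, 2.0, 5.0]}' \
  -X POST http://localhost:8501/v1/models/half_plus_two:predict
# expected response: { "predictions": [2.5, 3.0, 4.5] }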

Extracting ZAP report running in container on Jenkins agent (docker based)

My setup is as follows:
A Jenkins pipeline script triggers a Jenkins job, which runs inside a Docker container.
ZAP runs in containerized mode.
Commands used:
echo DEBUG - mkdir -p $PWD/out
mkdir -p $PWD/out
echo DEBUG - chmod 777 $PWD/out
chmod 777 $PWD/out
test -d ${PWD}/out \
&& docker run -v $(pwd)/out:/zap/wrk/:rw -t owasp/zap2docker-live zap-api-scan.py -t $TARGET_URL -f openapi -d -r zap_scan_report.html
Also tried:
docker run --user $(id -u):$(id -g) -v $(pwd)/out:/zap/wrk/:rw -t owasp/zap2docker-live zap-api-scan.py -t $TARGET_URL -f openapi -d -r zap_scan_report.html
The scan works fine, but the report is not in the "out" directory.
This works fine in a VM environment.
Any suggestions? I guess the mount is not working when run from inside a Docker container.
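A note on that guess: if the Jenkins agent itself runs in a container and talks to the host's Docker daemon, the daemon resolves -v $(pwd)/out on the host, not inside the agent, so the bind mount can point at a directory that does not exist on the host. A possible workaround sketch that avoids the bind mount entirely (the container name zap is an assumption, and it assumes the wrapper writes reports under /zap/wrk, its default working directory):
docker run --name zap -t owasp/zap2docker-live zap-api-scan.py \
  -t "$TARGET_URL" -f openapi -d -r zap_scan_report.html
docker cp zap:/zap/wrk/zap_scan_report.html "$PWD/out/"
docker rm zap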

Running apache and cron in docker

I understand there should be only one process running in the foreground in a Docker container. Is there any chance of running both Apache and cron together in the foreground? A quick search says there is something called supervisord to achieve this. But is there any other method using an ENTRYPOINT script or CMD?
Here is my Dockerfile
FROM alpine:edge
RUN apk update && apk upgrade
RUN echo "http://dl-cdn.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories
RUN apk add \
bash \
apache2 \
php7-apache2 \
php7 \
curl \
php7-mysqli \
php7-pdo \
php7-pdo_mysql
RUN cp /usr/bin/php7 /usr/bin/php
RUN mkdir /startup
COPY script.sh /startup
RUN chmod 755 /startup/script.sh
ENTRYPOINT ["/startup/script.sh"]
The content of script.sh is pasted below
#!/bin/bash
# start cron
/usr/sbin/crond -f -l 8
# start apache
httpd -D FOREGROUND
When a container is run from this image, only crond is running, and most interestingly, when I kill crond, Apache starts and runs in the foreground.
I am using AWS ECS on EC2 to run the Docker container, using a task definition and a service.
A Docker container runs only as long as its main process is running. So if you want to run two services inside a Docker container, one of them has to run in the background.
I suggest getting rid of script.sh altogether and replacing it with a single CMD layer:
CMD ( crond -f -l 8 & ) && httpd -D FOREGROUND
The final Dockerfile is:
FROM alpine:edge
RUN apk update && apk upgrade
RUN echo "http://dl-cdn.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories
RUN apk add \
bash \
apache2 \
php7-apache2 \
php7 \
curl \
php7-mysqli \
php7-pdo \
php7-pdo_mysql
RUN cp /usr/bin/php7 /usr/bin/php
CMD ( crond -f -l 8 & ) && httpd -D FOREGROUND
The problem is that you're running crond -f without telling bash to run it in the background, which keeps bash waiting for crond to exit before continuing the script. There are two solutions for this:
Remove the -f flag (that flag causes crond to run in the foreground).
Add & at the end of the crond line, after -l 8 (I wouldn't recommend this).
Also, I'd start apache with exec:
exec httpd -D FOREGROUND
Otherwise /startup/script.sh keeps running even though it is no longer doing anything useful. exec tells bash to replace the current process with the command to execute.
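Putting both suggestions together, a minimal revised script.sh might look like this (using the first solution, since busybox crond backgrounds itself when -f is dropped):
#!/bin/bash
# start cron; without -f it daemonizes into the background
/usr/sbin/crond -l 8
# replace the shell with apache so it becomes the container's main process
exec httpd -D FOREGROUND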

openshift/node docker container fails with HOST_ETC: unbound variable

After pulling the openshift/node Docker image, the container fails to run:
$ docker logs 64e3eeb60cbc
/usr/local/bin/origin-node-run.sh: line 15: HOST_ETC: unbound variable
This is on Windows 7 with Docker Quickstart Terminal. I ran it with
docker run -d openshift/node
Probably I need to set HOST_ETC on the command line or elsewhere, but I can find no documentation on using this Docker image, so I would like some guidance on what to fix here, and on any other settings that might be required but undocumented.
Thanks for any expert advice here.
The official documentation says to start the container this way:
$ sudo docker run -d --name "origin" \
--privileged --pid=host --net=host \
-v /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys -v /var/lib/docker:/var/lib/docker:rw \
-v /var/lib/origin/openshift.local.volumes:/var/lib/origin/openshift.local.volumes \
openshift/origin start

Docker HTTPS access - ONLYOFFICE

I'm following the ONLYOFFICE Docker documentation (GitHub: ONLYOFFICE docker HTTPS access) to get the ONLYOFFICE documentserver and communityserver running with HTTPS.
What I've tried:
1. I've created the cert files (.crt, .key, .pem) as mentioned in the documentation. After that I created a file named env.list in my home directory /home/jw/data/ with the following content:
SSL_CERTIFICATE_PATH=/opt/onlyoffice/Data/certs/onlyoffice.crt
SSL_KEY_PATH=/opt/onlyoffice/Data/certs/onlyoffice.key
SSL_DHPARAM_PATH=/opt/onlyoffice/Data/certs/dhparam.pem
SSL_VERIFY_CLIENT=true
2. After that I added the directory /home/jw/data/ to my $PATH environment variable:
PATH=$PATH:/home/jw/data/; export PATH
3. In the same shell I started the Docker container like this:
sudo docker run -i -t -d --name onlyoffice-document-server -p 443:443 -v /opt/onlyoffice/Data:/var/www/onlyoffice/Data --env-file /home/jw/data/env.list onlyoffice/documentserver
4. The documentserver is running fine. After that I started the communityserver with:
sudo docker run -i -t -d --link onlyoffice-document-server:document_server --env-file /home/jw/data/env.list onlyoffice/communityserver
5. With the command docker ps -a I see both Docker containers running fine:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4f573111f2e5 onlyoffice/communityserver "/bin/sh -c 'bash -C " 29 seconds ago Up 28 seconds 80/tcp, 443/tcp, 5222/tcp lonely_mcnulty
23543300fa51 onlyoffice/documentserver "/bin/sh -c 'bash -C " 42 seconds ago Up 41 seconds 80/tcp, 0.0.0.0:443->443/tcp onlyoffice-document-server
But when I try to access https://localhost, Firefox shows the error "Secure Connection Failed".
Did I miss something?
Okay got it:
I've changed the environment variables in env.list to:
SSL_CERTIFICATE_PATH=/var/www/onlyoffice/Data/certs/onlyoffice.crt
SSL_KEY_PATH=/var/www/onlyoffice/Data/certs/onlyoffice.key
SSL_DHPARAM_PATH=/var/www/onlyoffice/Data/certs/dhparam.pem
The paths must be container-side paths: the host directory /opt/onlyoffice/Data is mounted at /var/www/onlyoffice/Data inside the container, so that is where the certificates are visible. After that I used the following command to run ONLY the documentserver:
sudo docker run -i -t -d --name onlyoffice-document-server -p 443:443 -v /opt/onlyoffice/Data:/var/www/onlyoffice/Data --env-file /home/jw/data/env.list onlyoffice/documentserver
The ONLYOFFICE OnlineEditor API is now available over HTTPS:
https://localhost/OfficeWeb/apps/api/documents/api.js
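A quick check from the host (the -k flag is needed if the certificate is self-signed, which is an assumption here):
curl -k https://localhost/OfficeWeb/apps/api/documents/api.js | head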
If you want to use CommunityServer with HTTPS just change the run command above to:
sudo docker run -i -t -d --name onlyoffice-community-server -p 443:443 -v /opt/onlyoffice/Data:/var/www/onlyoffice/Data --env-file /home/<username>/env.list onlyoffice/communityserver
Thank you anyway!