How to see error logs when docker-compose fails - error-handling

I have a Docker image that starts an entrypoint.sh script.
This script checks whether the project is configured correctly.
If everything is correct, the container starts;
otherwise I receive this error:
echo "Danger! bla bla bla"
exit 1000
Now if I start the container in this mode:
docker-compose up
I see the error correctly:
Danger! bla bla bla
but I need to launch the container in daemon mode:
docker-compose up -d
How can I show the log only in case of error?

The -d flag in docker-compose up -d stands for detached mode, not daemon mode.
In detached mode, your service(s) (i.e. container(s)) run in the background of your terminal, so you don't see their logs there.
To see the logs of all your services, run this command:
docker-compose logs -f
The -f flag stands for "Follow log output".
This will output all the logs for each running service you have in your docker-compose.yml.
From my understanding, you want to fire up your service(s) with:
docker-compose up -d
so that the service(s) run in the background and your console output stays clean,
and you want to print only the errors from the logs. To do so, pipe the logs to grep and search for error:
docker-compose logs | grep error
This will output all the errors logged by your Docker service(s).
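If you only care about one service, you can also pass its name; a sketch, assuming the failing service is called web in your docker-compose.yml (the name here is just an example):
docker-compose logs web | grep -i error
The -i flag makes the grep match case-insensitive, so both "error" and "ERROR" are caught.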
You'll find more details in the official documentation for the docker-compose up and docker-compose logs commands.

Related

Gitlab CI job fails even if the script/command is successful

I have a CI stage with the following command, which is executed remotely; it checks whether the mentioned file exists and, if so, creates a backup of it.
script: |
  ssh ${USER}@${HOST} '([ -f "${PATH}/test_1.txt" ] && cp -v "${PATH}/test_1.txt" ${PATH}/test_1_$CI_COMMIT_TIMESTAMP.txt)'
The issue is, this job always fails whether the file exists or not with the following output:
ssh user@hostname '([ -f /etc/file/path/test_1.txt ] && cp -v /etc/file/path/test_1.txt /etc/file/path/test_1_$CI_COMMIT_TIMESTAMP.txt)'
Cleaning up project directory and file based variables
ERROR: Job failed: exit status 1
Running the same command manually just works fine. So,
how can I make sure that this job succeeds as long as the command logic executes successfully, and only fails in case of genuine failures?
There is no way for the job to know if the command you ran remotely worked or not. It can only know if the ssh instruction worked or not. You can force it to always succeed by appending || true to any instruction.
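For example, a minimal sketch of the script from the question with || true appended inside the remote command, so that a missing file no longer fails the job (variables kept as in the question):
script: |
  ssh ${USER}@${HOST} '([ -f "${PATH}/test_1.txt" ] && cp -v "${PATH}/test_1.txt" ${PATH}/test_1_$CI_COMMIT_TIMESTAMP.txt) || true'
Placing it inside the quotes keeps genuine ssh failures visible; appending it after the closing quote instead would hide those as well.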
However, if you want to see and save the output of your remote instruction, you can do something like this:
ssh user@host command 2>&1 | tee ssh-session.log

docker run -v bindmount failing

I am rather new to docker images and am trying to set up a selenium/standalone-firefox image linked to a local folder.
I'm running Docker version 19.03.2, build 6a30dfc on Windows 10 and have unsuccessfully tried to figure out the correct use of the docker run -v syntax, because the documentation is either unspecific (i.e. too little context for me to make sense of it) or aimed at the wrong platform.
Running docker as admin in the cmd, I used docker run -d -v LOCAL_PATH:C:\Users\Public.
This throws docker: Error response from daemon: invalid mode: \Users\Public as an error message.
I want to bind the running container to the folder C:\Users\Public (or another folder on the host machine - this is for illustration purposes).
Can someone point me to the (I fear obvious) mistake I'm making? I essentially want to achieve the container's output data (for later scraping) being stored in the host machine's folder C:\Users\Public. The container's output folder should be named myfolder.
** EDIT **
Digging around, I found this (see Volume Mapping).
I have thus tried the following code:
>docker run -d -p 4444:4444 --name selenium-hub selenium/hub
>docker run -d --link selenium-hub:hub -v C:/Users/Public:/home/seluser/Downloads selenium/node-chrome
While the former works fine (it only runs the container), the latter throws the error:
docker: Error response from daemon: Drive has not been shared.
Docker for Windows (and Mac) requires you to share drives to be able to volume mount - https://docs.docker.com/docker-for-windows/ (under "Shared drives").
You should be able to find it under your Docker Settings > Shared Drives. Ensure your C:\ is selected and restart the daemon. After that, you can run:
docker run -d --link selenium-hub:hub -v C:/Users/Public:/home/seluser/Downloads selenium/node-chrome
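Once the drive is shared, the same pattern should work for the standalone-firefox image from the question. A sketch, assuming you want the host folder C:/Users/Public/myfolder mapped to the container's download directory (/home/seluser/Downloads, as in the node-chrome example above):
docker run -d -p 4444:4444 -v C:/Users/Public/myfolder:/home/seluser/Downloads selenium/standalone-firefox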
Based on the documentation:
https://github.com/SeleniumHQ/docker-selenium
this path does not exist in the container, and it is a Linux container:
"C:\Users\Public\Documents\TMP_DOCKERS\firefox selenium/standalone-firefox"

openthread/environment docker rsyslogd: imklog: cannot open kernel log (/proc/kmsg): Operation not permitted

I am running openthread/environment:latest docker image (as of 2019-06-15)
When starting on a fresh ubuntu 18.04 with docker 18.09 using the command
ubuntu#ip-172-31-37-198:~$ docker run -it --rm openthread/environment bash
I get the following output
Stopping system message bus dbus [ OK ]
Starting system message bus dbus [ OK ]
Starting enhanced syslogd rsyslogd
rsyslogd: imklog: cannot open kernel log (/proc/kmsg): Operation not permitted
rsyslogd: activation of module imklog failed [v8.32.0 try http://www.rsyslog.com/e/2145 ]
Does anyone know whether this is related to the Ubuntu setup or to the Docker container, and how to fix it?
@Reto's answer will work, but you will be editing that file every time you build your container. Put this in your Dockerfile and you're all set. The edit will be performed automatically while the container is being built.
RUN sed -i '/imklog/s/^/#/' /etc/rsyslog.conf
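For instance, a minimal Dockerfile sketch using the image from the question:
FROM openthread/environment:latest
# Comment out the imklog module so rsyslogd no longer tries to open /proc/kmsg
RUN sed -i '/imklog/s/^/#/' /etc/rsyslog.conf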
You will also get rid of this warning if you just comment out the line
module(load="imklog")
inside your Docker container (edit /etc/rsyslog.conf).
I doubt you want to read the kernel messages inside a container ;-)
Try adding the --privileged option; it gives the container extended access to the host, so rsyslogd is allowed to open /proc/kmsg.
For example:
docker run -it --rm --privileged openthread/environment bash

How to view detailed error message in failed build

So this is the only thing I see on a failed build. When running npm scripts on a CLI, you usually see more than the exit status. Is there some option to view the entire CLI output instead of this pseudo log?
I contacted support and was told to cat the debug log in order to see the output.
#!/bin/bash
set -ex
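# print every npm debug log from the failed run, so its full output shows up in the build log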
cat $(find $HOME/.npm/_logs -name '*-debug.log')

Apache page retains after exiting docker container

I'm new to Docker. I tried executing the following docker command to create a container:
docker run -d -p 9999:80 httpd
After this, when I visited the URL http://127.0.0.1:9999/, it loaded with the "It works!" message. So, to change the message, I went inside the httpd container and changed the value of /usr/local/apache2/htdocs/index.html (hope that is the correct location) to <html><body><h1>It works at port 9999!</h1></body></html>.
But it is still showing the same old message, and the weird part is that it still shows up after removing the container. Am I doing anything wrong, or is this coming from some cache or something?
Please help.
Edit: I found it is due to browser cache and nothing else.
You made a modification to a running container, but if you want to keep it, you have to commit your container to a new image:
docker commit <your_container> my_apache_modified
and then launch your new image with:
docker run -d -p 9999:80 my_apache_modified
see the doc
https://docs.docker.com/reference/commandline/commit/
Keep in mind that the preferred way is to modify your Dockerfile, and build a new image
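A sketch of that preferred approach, assuming you keep a custom index.html next to the Dockerfile:
FROM httpd
# Replace the default "It works!" page with your own
COPY index.html /usr/local/apache2/htdocs/index.html
Then build and run it:
docker build -t my_apache_modified .
docker run -d -p 9999:80 my_apache_modified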
I can't reproduce your error here.
Try that:
docker run -d --name httpdtest -p 9999:80 httpd
docker exec -it httpdtest bash
echo "test" >> htdocs/index.html
Then check it in your browser (http://127.0.0.1:9999).