Testing chaincode using dev mode: network issue

I am running “dev mode” by leveraging pre-generated orderer and channel artifacts for a sample dev network.
Here the cli service requires the image hyperledger/fabric-tools. By default it tries to pull the latest tag, which does not exist, and it throws this error:
Error response from daemon: manifest for hyperledger/fabric-tools:latest not found
So I pulled the image hyperledger/fabric-tools:x86_64-1.0.0 and retagged it as hyperledger/fabric-tools:latest (not sure whether this is the proper way or not):
docker pull hyperledger/fabric-tools:x86_64-1.0.0
docker tag hyperledger/fabric-tools:x86_64-1.0.0 hyperledger/fabric-tools
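To double-check the retag, listing the local images should now show both tags pointing at the same image ID:
docker images hyperledger/fabric-tools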
My network is running successfully, but unfortunately the cli container has stopped running.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d10d170cd2fa hyperledger/fabric-tools:x86_64-1.0.0 "/bin/bash -c ./sc..." 29 seconds ago Exited (1) 27 seconds ago cli
163f494bb85f hyperledger/fabric-ccenv "/bin/bash -c 'sle..." 59 minutes ago Up About a minute chaincode
e96e86930d94 hyperledger/fabric-peer "peer node start -..." 59 minutes ago Up About a minute 0.0.0.0:7051->7051/tcp, 0.0.0.0:7053->7053/tcp peer
c568480e30d2 hyperledger/fabric-orderer "orderer" 59 minutes ago Up About a minute 0.0.0.0:7050->7050/tcp

You can use the tools container as the cli container.
docker exec -it d10d170cd2fa /bin/bash
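Note that docker exec only works against a running container; if the cli container has exited, as in the docker ps output above, it may need to be started first, for example:
docker start cli
docker exec -it cli /bin/bash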

Can you post the logs of the cli container by issuing docker logs <containerId>? The cli container exiting doesn't necessarily mean there is anything wrong with the e2e test.
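For example, using the container name from the docker ps output above, the exit code and the last log lines can be checked with:
docker logs cli
docker inspect --format '{{.State.ExitCode}}' cli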

If you started the services using docker-compose, you can run either docker-compose -f docker-compose-simple.yaml restart cli or docker-compose -f docker-compose-simple.yaml up cli.
However, if you started your network AFTER having tagged the fabric-tools image as above, you should examine the logs of your exited container with docker logs cli, to determine why it exited.
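If the logs point to a stale or partially started network, a clean restart of the whole compose project may also help; a sketch, assuming the same compose file:
docker-compose -f docker-compose-simple.yaml down
docker-compose -f docker-compose-simple.yaml up -d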

It can be caused by previously created Docker containers. In my case it worked correctly the first time but gave an error the second time. Killing and removing the created containers with
docker rm container_name
and starting the containers again solved the problem.
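For example, to remove every stopped container in one go (note: this removes all exited containers on the host, not just the ones from this network):
docker rm $(docker ps -aq -f status=exited)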

Related

openthread/environment docker rsyslogd: imklog: cannot open kernel log (/proc/kmsg): Operation not permitted

I am running openthread/environment:latest docker image (as of 2019-06-15)
When starting on a fresh ubuntu 18.04 with docker 18.09 using the command
ubuntu@ip-172-31-37-198:~$ docker run -it --rm openthread/environment bash
I get the following output
Stopping system message bus dbus [ OK ]
Starting system message bus dbus [ OK ]
Starting enhanced syslogd rsyslogd
rsyslogd: imklog: cannot open kernel log (/proc/kmsg): Operation not permitted
rsyslogd: activation of module imklog failed [v8.32.0 try http://www.rsyslog.com/e/2145 ]
Does anyone know whether this is related to the Ubuntu setup or to the Docker container, and how can it be fixed?
@Reto's answer will work, but you will be editing that file every time you build your container. Put this in your Dockerfile and you're all set. The edit will be performed automatically while the container is being built.
RUN sed -i '/imklog/s/^/#/' /etc/rsyslog.conf
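A minimal Dockerfile sketch, assuming you build your own image on top of openthread/environment (the base tag is an assumption):
FROM openthread/environment:latest
# comment out the imklog module so rsyslogd does not try to read /proc/kmsg
RUN sed -i '/imklog/s/^/#/' /etc/rsyslog.conf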
You will also get rid of this warning if you just comment out the line
module(load="imklog")
inside your Docker container (edit /etc/rsyslog.conf).
I doubt you want to read the kernel messages inside a container ;-)
Try adding the --privileged option.
For example:
docker run -it --rm --privileged openthread/environment bash

Docker container immediately exits when started after system reboot

I'm starting my custom docker container (OpenSuse, PHP, Apache, some add-ons) this way:
docker build --build-arg http_proxy=http://user:pwd@ip:port -t prefix/myapp myapp
docker create --name=myapp --hostname=myapp -p 80:80 -v ${PWD}/myapp:/srv/www/myapp prefix/myapp
docker start myapp
This works perfectly. I can stop and later start the container. However, if I reboot my host system (Windows 10), I'm not able to start the container again. When I try to, the container immediately exits.
How can this be? As stated above, I use the -p and -v flags to map ports and mount a directory.
This is the output of...
docker logs myapp
-> httpd (pid 1) already running
May or may not be your problem (the logs will be telling), but I ran into an issue with docker on windows where the container tries to start before the file system is ready, which causes an error with the volume mounts. I never found a great solution aside from running a task that verifies the volume mount and restarts the container if it failed.
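A rough sketch of such a check-and-restart task, using the container name and mount path from the question as placeholders:
# restart the container if the mounted directory is not visible inside it
docker exec myapp test -d /srv/www/myapp || docker restart myapp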

Getting Docker Windows Containers to start automatically on reboot

My OS is Windows 10 and I am running Docker version 17.06.0-ce-win19. I am trying to set up a container so that it will restart automatically on reboot.
When I use the command:
docker run -it microsoft/nanoserver --restart=always
I’m getting the following error:
docker: Error response from daemon: container 35046c88d2564523464ecabc4d48eb0550115e33acb25b0555224e7c43d21e74 encountered an error during CreateProcess: failure in a Windows system call: The system cannot find the file specified. (0x2) extra info: {"ApplicationName":"","CommandLine":"--restart=always","User":"","WorkingDirectory":"C:\","Environment":{},"EmulateConsole":true,"CreateStdInPipe":true,"CreateStdOutPipe":true,"CreateStdErrPipe":false,"ConsoleSize":[30,120]}.
whereas if I leave out the
--restart=always
everything works fine.
Is there something else I need to do to get --restart options working on Windows?
Parameters must come before image:tag on the CLI; see the example below.
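For example, the same command with the flag moved before the image name should work:
docker run -it --restart=always microsoft/nanoserver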

Docker: Unable to view running container despite successful demo example

When I run the example from the Docker doc in the "Viewing our web application container" section, i.e.,
docker run -d -P training/webapp python app.py
...I'm able to view the "Hello World" output in a browser. Success. This seems to indicate that the network I'm on may not be the problem.
Now I'm trying to view a container that runs a webdriver suite (test automation of a browser). Based on the output in docker logs -f, the webdriver suite runs to completion. But when I try to point a browser at the webdriver container (which is running the browser), I get an error saying:
ERR_CONNECTION_REFUSED
Here are the steps I'm following:
Start webdriver container with this command
docker run -d -p 8080:5000 "/bin/bash" "-c" "/dir1/dir2/filename.sh $PARAMETER1 $PARAMETER2"
point a browser to:
http://subdomain.mydomain.com:5000
Docker output:
user@server$ docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2fa83fc0401a 65525ab9ad78 "/bin/bash -c '/opt/y" 55 minutes ago Up 55 minutes 2222/tcp, 0.0.0.0:8080->5000/tcp
user@server$ docker inspect --format '{{ .NetworkSettings.IPAddress }}' 2fa83fc0401a
111.22.33.4444
Other info:
Server config: Ubuntu 14.04
Docker version: 1.8.1, build d12ea79
I've reviewed the following questions but I'm not running on a VM and I'm not running NodeJS.
Unable to view rails app running in docker container from browser
Docker: Unable to specify port for a running container
Does anyone have suggestions on how I might troubleshoot this problem? Any assistance gratefully accepted.
:) jay
Update 1:
Based on the NodeJS question noted above, I'm thinking that I'm not setting a port correctly in the Dockerfile. Maybe this is as simple as setting the correct port for Selenium?
Update 2: as @hunter noted, I had the ports in the wrong order, but switching the ports does not resolve the problem. I think the bigger problem is that I was assigning the wrong port. So, I changed docker run -d -p 8080:5000 to docker run -d -P. When I did that, I got the following output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS
f375251b61d7 65525ab9ad78 "/bin/bash -c '/opt/y" About an hour ago Up About an hour 0.0.0.0:33073->2222/tcp
I then pointed the browser to that port: http://subdomain.mydomain.com:33073
But I still get the same error: ERR_CONNECTION_REFUSED
I think you're using the wrong port - the external port is 8080 not 5000.
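One way to check is to ask Docker which host port it actually published and to hit that port from outside; a sketch using the container ID from the question:
docker port 2fa83fc0401a
# the service inside the container must also be listening on the container-side port (5000 here)
curl -v http://subdomain.mydomain.com:8080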

Redis server fails to start in docker

I have a docker image 'redis_image' with Redis installed in it. After I run a container as:
docker run --name test_redis -it redis_image bash
the redis server can start normally in the container using '/etc/init.d/redis start'.
But if I run the container with the --net=host option, the redis server fails to start in the container; it says "Starting redis-server: could not open session [Failed]". Is the problem related to the --net=host configuration when I run the container? Thanks.
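As a possible workaround, starting redis-server directly instead of going through the init script may avoid the session setup that fails under --net=host (the config path is an assumption):
# inside the container
redis-server /etc/redis/redis.conf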