Bluemix: service bound to container does not appear in VCAP_SERVICES - ssh

I'm trying to use IBM Containers for Bluemix to deploy a container and bind it to a Bluemix service.
I start with an existing Bluemix app, which is bound to the MongoDB service I want. I verify that its VCAP_SERVICES environment variable is correctly populated:
$ cf env mamacdon-app
Getting env variables for app mamacdon-app in org mamacdon@ca.ibm.com / space dev as mamacdon@ca.ibm.com...
OK
System-Provided:
{
  "VCAP_SERVICES": {
    "mongodb-2.4": [
      {
        "credentials": { /*private data hidden*/ },
        "label": "mongodb-2.4",
        "name": "mongodb-1a",
        "plan": "100",
        "tags": [ "nosql", "document", "mongodb" ]
      }
    ]
  }
...
Then I run my image in Bluemix using the ice command, with the --bind mamacdon-app argument to bind it to my CF app (the --ssh and --publish 22 flags are there to enable SSH access):
$ ice run --name sshparty \
    --bind mamacdon-app \
    --ssh "$(cat ~/.ssh/id_rsa.pub)" \
    --publish 22 \
    registry-ice.ng.bluemix.net/ibmliberty:latest
As the name suggests, the image is a trivial example based on the IBM WebSphere Liberty Docker image -- just enough to let me SSH in and poke around.
At this point, the Containers dashboard tells me that the service has been bound to my container.
But when I finally ssh into the container, the environment does not contain the VCAP_SERVICES variable:
$ ssh -i ~/.ssh/id_rsa root@129.41.232.212
root@instance-000123e2:~# env
TERM=xterm
SHELL=/bin/bash
SSH_CLIENT=[private data hidden]
SSH_TTY=/dev/pts/0
USER=root
LS_COLORS=[omitted]
MAIL=/var/mail/root
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/root
LANG=en_CA.UTF-8
SHLVL=1
HOME=/root
LOGNAME=root
SSH_CONNECTION=[private data hidden]
LESSOPEN=| /usr/bin/lesspipe %s
LESSCLOSE=/usr/bin/lesspipe %s %s
_=/usr/bin/env
root@instance-000123e2:~#
I expected the VCAP_SERVICES variable to be injected. What am I doing wrong?

I think there is an issue with how the ssh daemon is launched: it does not have visibility into the VCAP_SERVICES environment variable.
However, you can confirm that the container's command does see the variable with the following test:
ice run registry-ice.ng.bluemix.net/ibmliberty --bind mamacdon-app --name vcap_services_party sh -c 'printenv; sleep 60'
Then confirm it in the printenv output with ice logs vcap_services_party
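If you need the variable inside interactive SSH sessions anyway, one possible workaround (a sketch only, assuming your image starts sshd from its own start script; the file name vcap.sh is arbitrary) is to persist the container-level environment to a profile file before launching the daemon:
#!/bin/sh
# Write VCAP_SERVICES where login shells will pick it up.
# Caveat: this simple quoting assumes the JSON contains no single quotes.
echo "export VCAP_SERVICES='$VCAP_SERVICES'" > /etc/profile.d/vcap.sh
# Start sshd in the foreground as the container's main process
exec /usr/sbin/sshd -D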

Could you give the following a try:
ice run registry-ice.ng.bluemix.net/lintest/tradelite --bind yourappname --name yournewcontainername
Once the container comes up, run the following:
# echo $VCAP_SERVICES
For more info check out the Containers Docs.

Related

Issue connecting to Selenoid GGR (Go Grid Router)

I am trying to run a nightwatchjs test using Selenoid GGR (Go Grid Router), but I am getting the error below:
- Connecting to un:pwd@ip_add on port 4444...
POST http://un:pwd@ip_add:4444 /wd/hub/session - EAI_FAIL
Error: getaddrinfo EAI_FAIL un:pwd@ip_add
‼ Error connecting to un:pwd@ip_add on port 4444.
Below are the various containers running on my Linux machine (ip_add).
Could you please help identify the issue?
Also, I am unable to navigate to ggr-ui.
I was now able to run the test using GGR.
The following commands were run to start Selenoid, GGR, GGR UI, and Selenoid UI:
./cm selenoid start --vnc --port 4445
docker run -d --name ggr -v /etc/grid-router/:/etc/grid-router:ro --net host aerokube/ggr:latest-release
docker run -d --name ggr-ui -p 8888:8888 -v /etc/grid-router/quota/:/etc/grid-router/quota:ro aerokube/ggr-ui:latest-release
docker run -d --name selenoid-ui --link ggr-ui -p 8080:8080 aerokube/selenoid-ui --selenoid-uri=http://ggr-ui:8888
To navigate to the GGR dashboard:
http://'ip address':8080/
In the nightwatch config, use the following selenium settings:
host: '192.168.5.241',
port: 4444,
username: 'test',
access_key: 'test-password',
start_process: false,
This way I was able to run the test.
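For reference, here is a minimal nightwatch.conf.js sketch built from the settings above (a sketch only; the host, credentials, and browserName are this answer's assumed values, not anything prescribed by GGR):
module.exports = {
  test_settings: {
    default: {
      // GGR endpoint and the credentials from the quota file
      selenium_host: '192.168.5.241',
      selenium_port: 4444,
      username: 'test',
      access_key: 'test-password',
      // GGR is already running, so Nightwatch must not start its own Selenium
      selenium: { start_process: false },
      desiredCapabilities: { browserName: 'chrome' }
    }
  }
};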
For me, the official documentation was enough to run tests on GGR:
https://aerokube.com/ggr/latest/
I configured two machines.
On machine №1, GGR + GGR UI were configured; you can see my list of configured containers:
[screenshot: list of containers]
On machine №2, Selenoid and Selenoid UI were configured.
Also, do not forget to configure the quota file (see the official documentation).
[screenshot: common configuration]
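For reference, a GGR quota file is an XML file named after the user (e.g. /etc/grid-router/quota/test.xml for the test/test-password credentials used above). A minimal sketch, with the Selenoid host, port, and browser version as placeholder values:
<qa:browsers xmlns:qa="urn:config.gridrouter.qatools.ru">
    <browser name="chrome" defaultVersion="78.0">
        <version number="78.0">
            <region name="1">
                <!-- host running Selenoid (machine №2 in this setup) -->
                <host name="selenoid-host.example.com" port="4444" count="1"/>
            </region>
        </version>
    </browser>
</qa:browsers>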

docker run -v bindmount failing

I am rather new to Docker images and am trying to set up a selenium/standalone-firefox image linked to a local folder.
I'm running Docker version 19.03.2, build 6a30dfc on Windows 10 and have unsuccessfully tried to figure out the correct usage of the docker run -v syntax, because the documentation is either unspecific (i.e. too little context for me to make sense of it) or for the wrong platform.
Running Docker as admin in the cmd, I used docker run -d -v LOCAL_PATH:C:\Users\Public.
This throws docker: Error response from daemon: invalid mode: \Users\Public as an error message.
I want to bind the running container to the folder C:\Users\Public (or another folder on the host machine - this is for illustration purposes).
Can someone point me to the (I fear obvious) mistake I'm making? I essentially want the container's output data (for later scraping) to be stored in the host machine's folder C:\Users\Public. The container's output folder should be named myfolder.
** EDIT **
Digging around, I found this (see Volume Mapping).
I have thus tried the following code:
>docker run -d -p 4444:4444 --name selenium-hub selenium/hub
>docker run -d --link selenium-hub:hub -v C:/Users/Public:/home/seluser/Downloads selenium/node-chrome
While the former works fine (it only runs the container), the latter throws the error:
docker: Error response from daemon: Drive has not been shared.
Docker for Windows (and Mac) requires you to share drives to be able to volume mount - https://docs.docker.com/docker-for-windows/ (under Shared Drives).
You should be able to find it under Docker Settings > Shared Drives. Ensure your C:\ drive is selected and restart the daemon. After that, you can run:
docker run -d --link selenium-hub:hub -v C:/Users/Public:/home/seluser/Downloads selenium/node-chrome
Based on the documentation:
https://github.com/SeleniumHQ/docker-selenium
this path does not exist in the container, and it is a Linux container:
"C:\Users\Public\Documents\TMP_DOCKERS\firefox selenium/standalone-firefox"

openthread/environment docker rsyslogd: imklog: cannot open kernel log (/proc/kmsg): Operation not permitted

I am running the openthread/environment:latest Docker image (as of 2019-06-15).
When starting it on a fresh Ubuntu 18.04 with Docker 18.09, using the command
ubuntu@ip-172-31-37-198:~$ docker run -it --rm openthread/environment bash
I get the following output
Stopping system message bus dbus [ OK ]
Starting system message bus dbus [ OK ]
Starting enhanced syslogd rsyslogd
rsyslogd: imklog: cannot open kernel log (/proc/kmsg): Operation not permitted
rsyslogd: activation of module imklog failed [v8.32.0 try http://www.rsyslog.com/e/2145 ]
Does anyone know whether this is related to the Ubuntu setup or the Docker container, and how to fix it?
@Reto's answer will work, but you will be editing that file every time you build your container. Put this in your Dockerfile instead and you're all set: the edit will be performed automatically while the image is being built.
RUN sed -i '/imklog/s/^/#/' /etc/rsyslog.conf
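For context, a minimal Dockerfile sketch (assuming you build your own image on top of openthread/environment):
FROM openthread/environment:latest
# Comment out the imklog module so rsyslogd does not try to read /proc/kmsg
RUN sed -i '/imklog/s/^/#/' /etc/rsyslog.conf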
You will also get rid of this warning if you just comment out the line
module(load="imklog")
inside your Docker container (edit /etc/rsyslog.conf).
I doubt you want to read the kernel messages inside a container ;-)
Try adding the --privileged option.
For example:
docker run -it --rm --privileged openthread/environment bash

Error: Unable to invoke action: The server is currently unavailable

I am doing an on-prem setup of OpenWhisk using a local CouchDB installation on Ubuntu 16.04, for which I downloaded the code from GitHub. I have followed all the steps of the setup; after the build, I have to run various playbooks.
When I run the playbook below with this command
ansible-playbook -i environments/local openwhisk.yml
I get the error
"error": "The server is currently unavailable (because it is overloaded or down for maintenance).",
"code": 4
When I checked, I found it occurs while executing installRouteMgmt.sh from /openwhisk/ansible/roles/routemgmt/files.
The lines in the script which throw the error are:
echo Installing routemgmt package.
$WSK_CLI -i -v --apihost "$APIHOST" package update --auth "$AUTH" --shared no "$NAMESPACE/routemgmt" \
-a description "This experimental package manages the gateway API configuration." \
-p gwUser "$GW_USER" \
-p gwPwd "$GW_PWD" \
-p gwUrl "$GW_HOST" \
-p gwUrlV2 "$GW_HOST_V2"
where
APIHOST=172.17.0.1
AUTH=path to auth.whisk.system
WSK_CLI= wsk path
NAMESPACE= whisk.system
This error comes when the DB host value is not resolvable from the controller container, or when the DB which the controller is trying to connect to has not been created in CouchDB. Mine was the second case: once the __subjects DB was there, it was able to run.
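One way to verify the second case (a hedged suggestion; the host, port, and credentials are placeholders for whatever your Ansible DB settings contain) is to list the databases directly from CouchDB and check that the subjects database exists:
# Lists all databases; the subjects DB should appear in the output
curl -s http://DB_USER:DB_PASSWORD@172.17.0.1:5984/_all_dbs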

create a Docker Swarm v1.12.3 service and mount an NFS volume

I'm unable to get an NFS volume mounted for a Docker Swarm, and the lack of proper official documentation regarding the --mount syntax (https://docs.docker.com/engine/reference/commandline/service_create/) doesn't help.
I have tried basically this command line to create a simple nginx service with a /kkk directory mounted to an NFS volume:
docker service create --mount type=volume,src=vol_name,volume-driver=local,dst=/kkk,volume-opt=type=nfs,volume-opt=device=192.168.1.1:/your/nfs/path --name test nginx
The command line is accepted and the service is scheduled by Swarm, but the container never reaches the "running" state, and Swarm tries to start a new instance every few seconds. I set the daemon to debug, but no error regarding the volume shows up...
What is the right syntax to create a service with an NFS volume?
Thanks a lot
I found an article that shows how to mount an NFS share (and that works for me): http://collabnix.com/docker-1-12-swarm-mode-persistent-storage-using-nfs/
sudo docker service create \
--mount type=volume,volume-opt=o=addr=192.168.x.x,volume-opt=device=:/data/nfs,volume-opt=type=nfs,source=vol_collab,target=/mount \
--replicas 3 --name testnfs \
alpine /bin/sh -c "while true; do echo 'OK'; sleep 2; done"
Update:
In case you want to use it with docker-compose, you can do it as follows:
version: '3'
services:
  alpine:
    image: alpine
    volumes:
      - vol_collab:/mount
    deploy:
      mode: replicated
      replicas: 2
    command: /bin/sh -c "while true; do echo 'OK'; sleep 2; done"
volumes:
  vol_collab:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.xx.xx
      device: ":/data/nfs"
and then run it with
docker stack deploy -c docker-compose.yml test
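If the tasks still never reach the running state, as described in the question, the underlying volume error is usually visible in the task list (a hedged suggestion; test_alpine is the stack-qualified service name produced by the stack file above):
docker service ps --no-trunc test_alpine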
You could also use this in Docker Compose to create an NFS volume:
data:
  driver: local
  driver_opts:
    type: "nfs"
    o: addr=<nfs-Host-domain-name>,rw,sync,nfsvers=4.1
    device: ":<path to directory in nfs server>"