AMC for the Aerospike server running inside Docker

I am running the Aerospike server inside a Docker container, started with the command below.
$ docker run -d -p 3000:3000 -p 3001:3001 -p 3002:3002 -p 3003:3003 -p 8081:8081 --name aerospike aerospike/aerospike-server
89b29f48c6bce29045ea0d9b033cd152956af6d7d76a9f8ec650067350cbc906
It is running successfully; I verified it with the command below.
$ docker ps
CONTAINER ID   IMAGE                        COMMAND                CREATED              STATUS              PORTS                                                      NAMES
89b29f48c6bc   aerospike/aerospike-server   "/entrypoint.sh asd"   About a minute ago   Up About a minute   0.0.0.0:3000-3003->3000-3003/tcp, 0.0.0.0:8081->8081/tcp   aerospike
I'm able to successfully connect to it with aql.
$ aql
Aerospike Query Client
Version 3.13.0.1
C Client Version 4.1.6
Copyright 2012-2016 Aerospike. All rights reserved.
aql>
But when I launch AMC for the Aerospike server in Docker, it hangs and does not display any data. I've attached a screenshot.
Did I miss any configuration? Why is it not loading any data?

You can try the following:
version: "3.9"
services:
  aerospike:
    image: "aerospike:ce-6.0.0.1"
    environment:
      NAMESPACE: testns
    ports:
      - "3000:3000"
      - "3001:3001"
      - "3002:3002"
  amc:
    image: "aerospike/amc"
    links:
      - "aerospike:aerospike"
    ports:
      - "8081:8081"
Then go to http://localhost:8081 and, in the connect window, enter "aerospike:3000".

Related

Recv failure when I use docker-compose for set up redisDB

Sorry, but I'm new to Redis and Docker and I'm getting stuck.
I want to connect Redis to my localhost with docker-compose. When I use docker-compose, my web and redis services both show that they are up, but when I run curl -L http://localhost:8081/ping to test it, I get the message "curl: (56) Recv failure:".
I tried to change my docker-compose.yaml, but it is not working.
docker-compose:
version: '3'
services:
  redis:
    image: "redis:latest"
    ports:
      - "6379:6379"
  web:
    build: .
    ports:
      - "8081:6379"
    environment:
      REDIS_HOST: 0.0.0.0
      REDIS_PORT: 6379
      REDIS_PASSWORD: ""
    depends_on:
      - redis
Dockerfile
FROM python:3-onbuild
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
CMD ["python", "main.py"]
My expected results are:
curl -L http://localhost:8081/ping
pong
curl -L http://localhost:8081/redis-status
{"redis_connectivity": "OK"}
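One likely cause, sketched below without being able to verify against the app itself: inside the Compose network, the Redis host is the service name redis (resolved by Compose's internal DNS), not 0.0.0.0, and the web port mapping should target the port the application actually listens on (8081 is an assumption here; check what main.py binds to):

```yaml
version: '3'
services:
  redis:
    image: "redis:latest"
  web:
    build: .
    ports:
      # host:container -- the container-side port must be the one the
      # app listens on (8081 is an assumption; check main.py)
      - "8081:8081"
    environment:
      REDIS_HOST: redis   # the service name, reachable via Compose DNS
      REDIS_PORT: 6379
      REDIS_PASSWORD: ""
    depends_on:
      - redis
```

With the original mapping "8081:6379", curl on the host hits container port 6379 of the web container, where nothing is listening, which would explain the "Recv failure".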

Docker stack: Apache service will not start

Apache service in docker stack never starts (or, to be more accurate, keeps restarting). Any idea what's going on?
The containers are the ones in: https://github.com/adrianharabula/lampstack.git
My docker-compose.yml is:
version: '3'
services:
  db:
    image: mysql:5.7
    volumes:
      - ../db_files:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: toor
      #MYSQL_DATABASE: testdb
      #MYSQL_USER: docky
      #MYSQL_PASSWORD: docky
    ports:
      - 3306:3306
  p71:
    # depends_on:
    #   - db
    image: php:7.1
    build:
      context: .
      dockerfile: Dockerfile71
    links:
      - db
    volumes:
      - ../www:/var/www/html
      - ./php.ini:/usr/local/etc/php/conf.d/php.ini
      - ./virtualhost-php71.conf:/etc/apache2/sites-available/001-virtualhost-php71.conf
      - ../logs/71_error.log:/var/www/71_error.log
      - ../logs/71_access.log:/var/www/71_access.log
    environment:
      DB_HOST: db:3306
      DB_PASSWORD: toor
    ports:
      - "81:80"
  pma:
    depends_on:
      - db
    image: phpmyadmin/phpmyadmin
And I start it with:
docker stack deploy -c docker-compose.yml webstack
db and pma services start correctly, but p71 service keeps restarting
docker service inspect webstack_p71
indicates:
"UpdateStatus": {
    "State": "paused",
    "StartedAt": "2018-01-19T16:28:17.090936496Z",
    "CompletedAt": "1970-01-01T00:00:00Z",
    "Message": "update paused due to failure or early termination of task 45ek431ssghuq2tnfpduk1jzp"
}
As you can see in the docker-compose.yml, I already commented out the service dependency to avoid failures if dependencies are not met on first run.
$ docker service logs -f webstack_p71
$ docker service ps --no-trunc webstack_p71
What should I do to get that Apache/PHP (p71) service running?
All the containers work when run independently:
$ docker build -f Dockerfile71 -t php71 .
$ docker run -d -p 81:80 php71:latest
First of all, the depends_on option doesn't work in swarm mode with version 3 (see this issue).
In short...
depends_on is a no-op when used with docker stack deploy. Swarm mode
services are restarted when they fail, so there's no reason to delay
their startup. Even if they fail a few times, they will eventually
recover.
Of course, even though depends_on doesn't work, p71 should still come up eventually, since it would restart after each failure.
Thus, I think there is some error inside the p71 service itself; that would be why the service keeps restarting. However, I can't tell what is happening inside the service from the information given.
You can check the trace in the logs:
$ docker service logs -f webstack_p71
and error message
$ docker service ps --no-trunc webstack_p71 # check ERROR column
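One possible culprit worth checking (an assumption, not confirmed by the logs above): the stock php:7.1 image does not include Apache, and its default command is the interactive php -a shell, which exits immediately when run without a TTY, so Swarm restarts the task indefinitely. Note also that docker stack deploy ignores the build: key, so image: php:7.1 is what actually runs. Switching to the Apache variant would look like:

```yaml
services:
  p71:
    # php:7.1-apache runs Apache in the foreground, so the task stays up
    image: php:7.1-apache
    ports:
      - "81:80"
```

If your Dockerfile71 is already based on an Apache image, comparing its final CMD against what the stack actually runs would be the next thing to check.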

create a Docker Swarm v1.12.3 service and mount a NFS volume

I'm unable to get an NFS volume mounted for a Docker Swarm, and the lack of proper official documentation regarding the --mount syntax (https://docs.docker.com/engine/reference/commandline/service_create/) doesn't help.
I have tried basically this command line to create a simple nginx service with a /kkk directory mounted to an NFS volume:
docker service create --mount type=volume,src=vol_name,volume-driver=local,dst=/kkk,volume-opt=type=nfs,volume-opt=device=192.168.1.1:/your/nfs/path --name test nginx
The command line is accepted and the service is scheduled by Swarm, but the container never reaches the "running" state, and Swarm tries to start a new instance every few seconds. I set the daemon to debug mode, but no error regarding the volume shows up...
Which is the right syntax to create a service with a NFS volume?
Thanks a lot
I found an article that shows how to mount an NFS share (and it works for me): http://collabnix.com/docker-1-12-swarm-mode-persistent-storage-using-nfs/
sudo docker service create \
--mount type=volume,volume-opt=o=addr=192.168.x.x,volume-opt=device=:/data/nfs,volume-opt=type=nfs,source=vol_collab,target=/mount \
--replicas 3 --name testnfs \
alpine /bin/sh -c "while true; do echo 'OK'; sleep 2; done"
Update:
In case you want to use it with docker-compose, you can do the following:
version: '3'
services:
  alpine:
    image: alpine
    volumes:
      - vol_collab:/mount
    deploy:
      mode: replicated
      replicas: 2
    command: /bin/sh -c "while true; do echo 'OK'; sleep 2; done"
volumes:
  vol_collab:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.xx.xx
      device: ":/data/nfs"
and then run it with
docker stack deploy -c docker-compose.yml test
You could also do this in Docker Compose to create an NFS volume:
data:
  driver: local
  driver_opts:
    type: "nfs"
    o: addr=<nfs-Host-domain-name>,rw,sync,nfsvers=4.1
    device: ":<path to directory in nfs server>"

How do you set up selenium grid using docker on windows?

Steps I have taken already:
1. Downloaded and installed Docker Toolbox for Windows
2. Opened the Docker Quickstart terminal
3. Entered the commands below to pull the Docker images from Docker Hub and run them
docker pull selenium/hub
docker pull selenium/node-chrome
docker pull selenium/node-firefox
docker run -d -P --name hub selenium/hub
docker run -d --link hub:hub -P --name chrome selenium/node-chrome
docker run -d --link hub:hub -P --name firefox selenium/node-firefox
It appears to be running when I type docker logs hub, but I am unable to route my tests to the hub's address on the VirtualBox VM using seleniumAddress in my conf.js file, or to see it at http://ipAddress:4444/grid/console.
Ideally I would like to use this set up to expand the amount of parallel test instances I can run.
Unfortunately, the Selenium Docker image might have been broken for the last 4 days, but you can try my alternative one:
Pull the image and run as many containers as you need
docker pull elgalu/selenium
docker run -d --name=grid4 -p 4444:24444 -p 5904:25900 \
-v /dev/shm:/dev/shm -e VNC_PASSWORD=hola elgalu/selenium
docker run -d --name=grid5 -p 4445:24444 -p 5905:25900 \
-v /dev/shm:/dev/shm -e VNC_PASSWORD=hola elgalu/selenium
docker run -d --name=grid6 -p 4446:24444 -p 5906:25900 \
-v /dev/shm:/dev/shm -e VNC_PASSWORD=hola elgalu/selenium
Wait until all the grids have started properly before starting the tests (optional but recommended):
docker exec grid4 wait_all_done 30s
docker exec grid5 wait_all_done 30s
docker exec grid6 wait_all_done 30s
After this, Selenium should be up and running at http://localhost:4444/wd/hub. Open the url in your browser to confirm it is running.
If you are using Mac (OS X) or Microsoft Windows, localhost won't work! Find out the correct IP through boot2docker ip or docker-machine ip default.
So set the Selenium port accordingly for each of your tests:
1st test should connect to http://ipAddress:4444/wd/hub
2nd test to http://ipAddress:4445/wd/hub
3rd test to http://ipAddress:4446/wd/hub
You can run as many as your hardware can take.
Take a look at the Protractor Cookbook w/ Docker. The instructions are listed step-by-step using selenium-grid and docker compose. Docker-selenium issue #208 has been fixed.
So you'll need to pull down the latest images*:
docker pull selenium/hub:latest
docker pull selenium/node-chrome-debug:latest
Start the selenium grid:
docker run -d -p 4444:4444 --name selenium-hub selenium/hub:latest
Then add selenium nodes. I like to use the chrome-debug and firefox-debug versions so I can VNC in to watch the tests.
docker run -d -p <port>:5900 --link selenium-hub:hub selenium/node-chrome-debug:latest
After linking your selenium grid, this should be enough to run your Protractor test using the seleniumAddress: 'http://localhost:4444/wd/hub'.
For debugging, find the VNC port for the container with:
docker port <container-name or container-id> 5900
and access it via VNC Viewer.
Note:
At the time of this writing, 'latest' appears to be tied to a ~2.53.1 version of Selenium Server. As of Protractor 4.0.11 (the latest version of Protractor), this is the supported version that should be used. Note that the instructions on the Selenium-docker GitHub appear to be tailored for Selenium Server 3.0.1.
You can use the compose file below to set up the grid and access it via VNC.
# To execute this docker-compose.yml file, use: docker-compose -f docker-compose.yml up
# Add the "-d" flag at the end for detached execution
version: '2'
services:
  firefoxnode:
    image: selenium/node-firefox-debug
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - hub
    environment:
      HUB_HOST: hub
    ports:
      - "32772:5900"
  chromenode:
    image: selenium/node-chrome-debug
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - hub
    environment:
      HUB_HOST: hub
    ports:
      - "32773:5900"
  hub:
    image: selenium/hub
    ports:
      - "4444:4444"
The command I use:
docker-compose -f .\docker-compose.yml up -d
Source: https://github.com/SeleniumHQ/docker-selenium

CoreOS install stops on "Fetching user-data from datasource"

I am trying to install CoreOS on Hyper-V on Windows Server 2008 R2.
I set up a virtual machine, gave it a coreos.iso, then wget my cloud-config.yaml.
Then I run sudo coreos-install -d /dev/sda -c cloud-config.yaml and it says
Checking availability of "local-file"
Fetching user-data from datasource of type "local-file"
And... that's all; it does nothing more.
Here's my cloud-config.yaml
#cloud-config
hostname: dockerhost
coreos:
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
users:
  - name: core
    ssh-authorized-keys:
      - ssh-rsa somesshkey
    groups:
      - sudo
      - docker
FYI, I'm using this tutorial.
Figured it out.
It was our proxy server, which I found out when I ran the command through bash -x, which gave me the full output.
The command was proposed by #BrianReadbeard.