Docker Redis CLI Timeout - redis

I have a Redis service running inside a Docker container, but when connecting to it the cursor is not returned.
When using redis-cli the terminal just hangs when issuing commands. I hope someone can point out where I'm going wrong.
Instead of seeing regular redis-cli output like:
% redis-cli
redis 127.0.0.1:6379> set docker awesome
OK
redis 127.0.0.1:6379> get docker
"awesome"
redis 127.0.0.1:6379>
This is what I am seeing:
% redis-cli -p 49156
redis 127.0.0.1:49156> set docker awesome
There's no "OK" and the terminal just hangs until I Ctrl-C it.
I'm following the docker.io instructions from http://docs.docker.io/en/latest/examples/running_redis_service/
Here's my Dockerfile:
FROM ubuntu:12.10
RUN apt-get update
RUN apt-get -y install redis-server
EXPOSE 6379
ENTRYPOINT ["/usr/bin/redis-server"]
I build the image with:
sudo docker build -t rudijs/redis .
I run an instance of the image with:
sudo docker run -d -p 6379 -name redis rudijs/redis
% sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3888fa49b605 rudijs/redis:latest /usr/bin/redis-serve 5 seconds ago Up 4 seconds 0.0.0.0:49156->6379/tcp redis
The exposed container redis port is at:
% sudo docker port redis 6379
0.0.0.0:49156
% redis-cli -p 49156
redis 127.0.0.1:49156> set docker awesome
I've tried tinkering with different port bindings from the container to the host, but the result is always the same: the CLI hangs.
Issuing commands like "help" seems to work fine:
% redis-cli -p 49156
redis 127.0.0.1:49156> help
redis-cli 2.2.12
Type: "help #<group>" to get a list of commands in <group>
"help <command>" for help on <command>
"help <tab>" to get a list of possible help topics
"quit" to exit
redis 127.0.0.1:49156>
If I just let it sit I get a timeout:
% redis-cli -p 49156
redis 127.0.0.1:49156> set docker awesome
Error: Connection reset by peer
(248.52s)
redis 127.0.0.1:49156>
Any advice or tips with this problem much appreciated.
Thanks!
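To separate a firewall problem from a Redis problem, it helps to test the raw TCP path first; a minimal diagnostic sketch, assuming the mapped port 49156 from above:
nc -vz 127.0.0.1 49156    # does a plain TCP connection open at all?
redis-cli -p 49156 ping   # does a full Redis round trip return PONG?
If nc connects but the PING never returns, replies from the container are most likely being dropped by the host firewall, which is what turned out to be the case here.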

The fix for this was that FireHOL (iptables) rules were needed:
interface docker0 interface1 src "172.17.0.0/16" dst 172.17.42.1
    server all accept
    client all accept
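For readers not using FireHOL, a rough raw-iptables equivalent of the rules above, as an illustrative sketch (exact chains depend on your firewall layout; 172.17.42.1 was the default docker0 address at the time):
iptables -A INPUT  -i docker0 -s 172.17.0.0/16 -d 172.17.42.1 -j ACCEPT
iptables -A OUTPUT -o docker0 -s 172.17.42.1 -d 172.17.0.0/16 -j ACCEPT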

Related

Enabling the Redis API in ScyllaDB

This is my first time asking here.
Can anyone tell me how to enable the Redis API in ScyllaDB?
I can't find anything about enabling the Redis API.
Also, where/how should I set the redis_port? Is it in scylla.yaml?
Thank you in advance :)
Add
redis_port: 6379
somewhere in scylla.yaml.
More here:
http://scylla.docs.scylladb.com/master/design-notes/protocols.html#redis-client-protocol
The config option code:
https://github.com/scylladb/scylla/blob/master/db/config.cc#L789
Adding info on how to use the Redis API with Scylla Docker:
run Scylla Docker with the Redis port mapped:
docker run -p 6379:6379 --name some-scylla -d scylladb/scylla --smp 1 --memory 750M --overprovisioned 1
update scylla.yaml:
docker exec -it some-scylla bash
vi /etc/scylla/scylla.yaml (add redis_port: 6379)
supervisorctl restart scylla
from the host server you can now use
redis-cli
127.0.0.1:6379> ping
PONG
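The interactive vi step can also be scripted; a sketch, reusing the container name and file path from above:
docker exec some-scylla bash -c 'echo "redis_port: 6379" >> /etc/scylla/scylla.yaml'
docker exec some-scylla supervisorctl restart scylla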

redis - kill redis-server in Google Cloud Platform

I am playing around with Google Cloud Platform and Redis, but it is way more complicated than I expected.
I want to shut down redis-server; in my local version I can just:
redis-cli shutdown
redis-cli ping // Could not connect to Redis at 127.0.0.1:6379: Connection refused
which means the redis-server is no longer running.
But I cannot do that in GCP. I still get PONG after the redis-cli shutdown.
I googled around and somebody suggest kill.
I googled around and somebody suggested kill.
First find out the PID of the redis-server:
ps -f -u redis
The output shows that 1637 is the PID, so I do:
sudo kill 1637
and try redis-cli ping again. I still get PONG.
I tried ps -f -u redis again, and the server is back under a new PID.
It seems like for every kill, it respawns itself with another PID.
How can I resolve this?
The redis-cli shutdown works on Mac OS. If you are using Debian or Ubuntu, the easiest way to shut down the server is to go into the server and type sudo service redis-server stop, and service redis-server start to start it again.
Example
test-user@my-server:~$ sudo service redis-server stop
test-user@my-server:~$ ps -f -u redis
UID PID PPID C STIME TTY TIME CMD
test-user@my-server:~$
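On newer Debian/Ubuntu images that boot with systemd instead of SysV init, the equivalent would be, as a sketch (the unit name can differ between packagings):
sudo systemctl stop redis-server     # stop the service
sudo systemctl status redis-server   # confirm it is no longer running
sudo systemctl start redis-server    # start it again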
The question was answered in this community post. You may also see the community tutorial "How to Set Up Redis on Google Compute Engine".

How to Run YCSB for a Redis Cluster on Ubuntu

I am new to YCSB and I want to benchmark Redis using more than one cluster. I have tried with only one Redis on my localhost with the following command:
./bin/ycsb load redis -p redis.host=localhost -p redis.port=6379 -P workloads/workloada -p recordcount=200000 -s > d.dat
I am getting the correct ops/sec and other data.
Now I need to know how I can run YCSB for more than one cluster.
Can anybody give an answer (steps to run this)?
It would also be helpful if anyone can help with running YCSB against Couchbase.
Thanks!
The following steps need to be done to perform YCSB benchmarking with a Redis cluster:
1) configure the Redis cluster with different nodes by referring to the document
http://redis.io/topics/cluster-tutorial
2) open the YCSB terminal, set the Redis host IP address and port, and specify the required parameters:
./bin/ycsb load redis -p redis.host="ip address" -p redis.port="port" -P workloads/workloada -p recordcount=200000 -s > d.dat
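After the load phase, the actual measurement uses run instead of load. The Redis binding also takes a redis.cluster flag for cluster mode; a sketch, assuming a recent YCSB release (older releases may lack the flag, so check the redis binding's README):
./bin/ycsb run redis -p redis.host="ip address" -p redis.port="port" -p redis.cluster=true -P workloads/workloada -p recordcount=200000 -s > run.dat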

Connect from one Docker container to another

I want to run rabbitmq-server in one docker container and connect to it from another container using celery (http://celeryproject.org/)
I have rabbitmq running using the below command...
sudo docker run -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server
and running the celery via
sudo docker run -i -t markellul/celery /bin/bash
When I am trying to do the very basic tutorial to validate the connection on http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html
I am getting a connection refused error:
consumer: Cannot connect to amqp://guest@127.0.0.1:5672//: [Errno 111]
Connection refused.
When I install rabbitmq on the same container as celery it works fine.
What do I need to do to have the containers interact with each other?
[edit 2016]
Direct links are deprecated now. The new way to link containers is docker network connect. It works quite similarly to virtual networks and has a wider feature set than the old way of linking.
First you create your named containers:
docker run --name rabbitmq -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server
docker run --name celery -it markellul/celery /bin/bash
Then you create a network (the last parameter is your network name):
docker network create -d bridge --subnet 172.25.0.0/16 mynetwork
Connect the containers to your newly created network:
docker network connect mynetwork rabbitmq
docker network connect mynetwork celery
Now, both containers are in the same network and can communicate with each other.
A very detailed user guide can be found at Work with networks: Connect containers.
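On current Docker versions you can also attach containers to the network directly at run time with --network and rely on the embedded DNS, so the broker is reachable by name; a sketch reusing the names above:
docker run -d --network mynetwork --name rabbitmq markellul/rabbitmq /usr/sbin/rabbitmq-server
docker run -it --network mynetwork --name celery markellul/celery /bin/bash
# inside the celery container, amqp://guest@rabbitmq:5672// should now resolve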
[old answer]
There is a new feature in Docker 0.6.5 called linking, which is meant to help the communication between docker containers.
First, create your rabbitmq container as usual. Note that I also used the new "name" feature, which makes life a little bit easier:
docker run --name rabbitmq -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server
You can use the link parameter to map a container (we use the name here; the id would be OK too):
docker run --link rabbitmq:amq -i -t markellul/celery /bin/bash
Now you have access to the IP and port of the rabbitmq container, because docker automatically added some environment variables:
$AMQ_PORT_5672_TCP_ADDR
$AMQ_PORT_5672_TCP_PORT
In addition Docker adds a host entry for the source container to the /etc/hosts file. In this example amq will be a defined host in the container.
From Docker documentation:
Unlike host entries in the /etc/hosts file, IP addresses stored in the environment variables are not automatically updated if the source container is restarted. We recommend using the host entries in /etc/hosts to resolve the IP address of linked containers.
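From inside the celery container you can verify the host entry and point celery at the broker by name; a sketch (guest/guest are RabbitMQ's default credentials):
cat /etc/hosts                                  # should contain an "amq" entry
celery worker -b amqp://guest:guest@amq:5672//  # use the linked host as broker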
Just get your container IP, and connect to it from another container:
CONTAINER_IP=$(sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' $CONTAINER_ID)
echo $CONTAINER_IP
When you specify -p 5672, what docker does is open up a new port, such as 49xxx, on the host and forward it to port 5672 of the container.
You should be able to see which port is being forwarded to the container by running:
sudo docker ps -a
From there, you can connect directly to the host IP address like so:
amqp://guest@HOST_IP:49xxx
You can't use localhost, because each container is basically its own localhost.
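docker port prints just the mapping, which is easier than picking it out of the docker ps listing; a sketch, assuming the container was started with --name rabbitmq as shown earlier:
sudo docker port rabbitmq 5672    # prints something like 0.0.0.0:49xxx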
Create Image:
docker build -t "imagename1" .
docker build -t "imagename2" .
Run Docker image:
docker run -it -p 8000:8000 --name=imagename1 imagename1
docker run -it -p 8080:8080 --name=imagename2 imagename2
Create Network:
docker network create -d bridge "networkname"
Connect the containers created above to the network:
docker network connect "networkname" "imagename1"
docker network connect "networkname" "imagename2"
We can add any number of containers to the network. To see what is attached, inspect it:
docker network inspect "networkname"
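To confirm that the two containers can actually resolve each other by name, a quick sketch (getent is present in most glibc-based images):
docker exec imagename1 getent hosts imagename2
docker exec imagename2 getent hosts imagename1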
I think you can't connect to another container directly by design - that would be the responsibility of the host. An example of sharing data between containers using volumes is given here: http://docs.docker.io/en/latest/examples/couchdb_data_volumes/, but I don't think that is what you're looking for.
I recently found out about https://github.com/toscanini/maestro - that might suit your needs. Let us know if it does :), I haven't tried it myself yet.
Edit. Note that you can read here that native "Container wiring and service discovery" is on the roadmap. I guess 0.7 or 0.8 at the latest.
You can get the docker instance IP with...
CID=$(sudo docker run -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server); sudo docker inspect $CID | grep IPAddress
But that's not very useful.
You can use pipework to create a private network between docker containers.
This is currently on the 0.8 roadmap:
https://github.com/dotcloud/docker/issues/1143

Setting up redis with docker

I have setup a basic redis image based on the following instructions: http://docs.docker.io/en/latest/examples/running_redis_service/
With my snapshot I have also edited the redis.conf file with requirepass.
My server runs fine and I am able to access it remotely using redis-cli, but the authentication isn't working. I am wondering if the config file isn't being used, but when I try starting the container with:
docker run -d -p 6379:6379 jwarzech/redis /usr/bin/redis-server /etc/redis/redis.conf
the container immediately crashes.
The default config of redis is set to run as a daemon. You can't run a daemon within a docker container; otherwise, lxc will lose track of it and will destroy the namespace.
I just tried doing this within the container:
$>redis-server - << EOF
requirepass foobared
EOF
Now, I can connect to it and I will get an 'ERR operation not permitted' error. When I connect with redis-cli -a foobared, it works fine.
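Both problems can be sidestepped by keeping Redis in the foreground and passing the password on the command line; a sketch (Redis 2.6+ accepts config directives as --options):
docker run -d -p 6379:6379 jwarzech/redis /usr/bin/redis-server --requirepass foobared
# alternatively, keep /etc/redis/redis.conf but make sure it contains "daemonize no"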