Unable to connect to Redis cluster from outside the Docker container - redis

I have set up a dockerized Redis cluster on Windows 10. I can connect to the cluster from within a container, but when I try to fetch cluster node info from my web application, which is not running in Docker, it gives me this error:
org.springframework.data.redis.ClusterStateFailureException: Could not retrieve cluster information. CLUSTER NODES returned with error.
I have created the Redis cluster in several different ways, but none of them worked for me:
Created the Redis cluster in bridge network mode and published the respective ports, but got the same error.
Tried different port mappings:
a. Mapped only the port and skipped the IP; docker inspect shows the port mapping as (0.0.0.0:7000->6379/tcp).
b. Provided both IP and port, trying the machine's host IP, 127.0.0.1, and the Docker NAT IP; docker inspect shows the port mapping as (127.0.0.1:7000->6379/tcp), (hostip:7000->6379/tcp), (dockerNATIP:7000->6379/tcp).
Created the Redis cluster in --network=host mode:
a. Did not publish ports, as it is not ideal to do that in host mode.
b. Also published ports in the same manner as mentioned above.
But I was still getting the same error with every approach (a sketch of the kind of command used is shown below).
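For reference, a minimal sketch of the kind of bridge-mode setup described above; the image, container name, and redis-server flag are assumptions, and only the 7000->6379 mapping is taken from the question:

    # Hypothetical reconstruction of one attempted node (not the exact command used):
    docker run -d --name redis-node-1 -p 7000:6379 redis redis-server --cluster-enabled yes
    # Even with the port published, CLUSTER NODES advertises the container-internal
    # IP of each node, which an application outside Docker cannot reach.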
Finally, I set it up without Docker on Windows, and I was able to connect to the Redis cluster and fetch node info.
Is it not possible to connect to a dockerized Redis cluster from an application running outside Docker?
I have already tried the steps from these resources:
Unable to connect to dockerized redis instance from outside docker
https://simplydistributed.wordpress.com/2018/08/31/redis-5-bootstrapping-a-redis-cluster-with-docker/
https://get-reddie.com/blog/redis4-cluster-docker-compose/
https://www.ionos.com/community/hosting/redis/using-redis-in-docker-containers/
https://medium.com/commencis/creating-redis-cluster-using-docker-67f65545796d

Related

How can I ssh into a container running inside an OpenShift/Kubernetes cluster?

I want to be able to ssh into a container within an OpenShift pod.
I know that I can simply do so using oc rsh, but this assumes that I have the OpenShift CLI installed on the node from which I want to ssh into the container.
What I actually want to achieve is to ssh into a container from a node that does not have the OpenShift CLI installed. The node is on the same network as the OpenShift cluster, and it does have access to web applications hosted in a container (just for the sake of example). But instead of web access, I would like to have ssh access.
Is there any way that this can be achieved?
Unlike a server, which is running an entire operating system on real or virtualized hardware, a container is nothing more than a single Linux process encapsulated by a few kernel features: CGroups, Namespacing, and SELinux. A "fancy" process if you will.
Opening a shell session into a container is not quite the same as opening an ssh connection to a server. Opening a shell into a container requires starting a shell process and assigning it to the same cgroups and namespaces on the same host as the container process and then presenting that session to you, which is not something ssh is designed for.
Using the oc exec, kubectl exec, podman exec, or docker exec CLI commands to open a shell session inside a running container is the method that should be used to connect to running containers.
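For reference, a minimal sketch of what that looks like from any machine that does have the CLI installed and credentials for the cluster (the pod and container names are placeholders):

    # open an interactive shell in a running pod
    oc rsh my-pod
    oc exec -it my-pod -c my-container -- /bin/sh
    # the equivalent with plain Kubernetes tooling
    kubectl exec -it my-pod -c my-container -- /bin/sh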

Jenkins selenium docker and application files

I have a Selenium Grid hub and a node up and running in Docker. I also have a Docker container that includes my application, with the same setup as my PC. I get the following error:
[ConnectionException] Can't connect to Webdriver at http://ip:4444/wd/hub. Please make sure that Selenium Server or PhantomJS is running.
The IP is correct, since I can see the Selenium Grid console there as expected. What might be the problem? When I get inside the container that I have in Jenkins, it runs my tests as well.
Have you explicitly instructed the hub Docker container to expose its internal port 4444 as 4444 externally?
Instructing a container to expose ports does not enforce the same port numbers to be used. So in your case, while internally it is running on 4444, externally it could be whatever port Docker thought was the best choice when it started.
How did you start your container? If via the docker command line, did you use -P or -p 4444:4444? (Note the difference in case.) -P simply publishes the exposed ports with no guarantee of which host port is used, whereas -p lets you map them as you wish.
There are many ways to orchestrate Docker which may allow you to control this in a different way.
For example, Docker Compose can allow your containers to communicate with each other on 4444 even if those are not the ports actually published to the host. It achieves this through some clever networking but is very simple to set up and use.
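As a minimal sketch of the difference (the image name assumes the standard Selenium Grid hub image; adjust to whatever you actually run):

    # explicit mapping: host port 4444 -> container port 4444
    docker run -d --name selenium-hub -p 4444:4444 selenium/hub
    # -P publishes the exposed ports on random host ports instead
    docker run -d --name selenium-hub -P selenium/hub
    # check which host port was actually assigned to the hub's 4444
    docker port selenium-hub 4444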

Glassfish 3.1 remote instance cant connect to database

In GlassFish 3.1 I have two instances on two SSH nodes, and they work fine in a cluster. I created a third SSH node and added an instance to the cluster, so the cluster now has three instances on three remote SSH nodes.
The web service runs on the third node, but it cannot connect to the database. I believe the new instance has the same connector, configuration, and resources as the other two, since it was added to the cluster and all instances share the same cluster config.
I am new to GlassFish, please help me out.
Thanks

Cannot connect to VM in bluemix UK Area via SSH

Has anybody tried out virtual machines in the UK area of Bluemix?
I am able to start a VM but get a timeout when I try to connect to the VM via SSH.
I used the standard Debian image that can be chosen at setup time and injected an SSH key for connecting. The security group I used was allow_all.
When trying to ping, or to connect via SSH directly or through the OpenStack CLI, the connection times out.
regards
Johannes
There was a bug in the setup of the VM, so I actually had no chance to access it.

Pod to Pod connection with using multiple port

I have a Google Cloud Container Engine cluster with two Pods, master and slave. Each of them runs a RabbitMQ instance, and the instances are supposed to be joined into one cluster.
Ports exposed from the containers aren't available from other machines and can only be accessed through a Service. That's not a problem: I could establish a Service for each instance (one-to-one, Service-to-Pod) and point each Pod at the opposite Service IP.
The problem is that RabbitMQ uses more than one port for communication. That means the Service IP should expose all of these ports from the underlying Pod. But I cannot specify a list of ports for a Service, and if I create a new Service for each port, each of them will have its own IP.
Is there any way to expose a list of ports from the same container/Pod on the same internal IP address using a Container Engine cluster? Maybe some special routing configuration?
Your question is similar to this question and unfortunately has the same response: Kubernetes / Google Container Engine does not currently have a way to expose a range of ports for a Service. There is an open issue on GitHub to address this use case.
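In more recent Kubernetes versions, individual ports can at least be listed explicitly in a single Service, which covers RabbitMQ's fixed ports but not any dynamic port ranges. A minimal sketch, assuming hypothetical names and labels and the standard RabbitMQ ports:

    # Hedged sketch for one of the Pods; names and labels are illustrative.
    # Ranges still cannot be expressed, only individually listed ports.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: rabbitmq-master
    spec:
      selector:
        app: rabbitmq-master
      ports:
      - name: amqp
        port: 5672
      - name: epmd
        port: 4369
      - name: clustering
        port: 25672
    EOF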