I have 1 VPC containing 1 EC2 instance (Amazon Linux AMI) and 1 Redis (cluster mode enabled) cluster with AUTH (password), and the security group is open to all IPs/ports (only for testing's sake) - so a very simple setup.
telnet works on port 6379 from my EC2 instance to both:
- the Configuration Endpoint
- each Node Endpoint within a shard
I am not able to connect to the Redis server using redis-cli (v5.0.4) - it doesn't matter whether I use the Configuration Endpoint or a Node Endpoint.
Please note: an AWS ElastiCache Redis cluster with cluster mode disabled, or a single-node server, provides a Primary Endpoint, which works fine. The problem only occurs when cluster mode is enabled and I get a Configuration Endpoint and Node Endpoints.
Config EndPoint:
[root@ip-xx-xx-xx-xx src]# ./redis-cli -h clustercfg.xxxx.xxxxx.use1.cache.amazonaws.com -p 6379
Node EndPoint:
[root@ip-xx-xx-xx-xx src]# ./redis-cli -h xxxx-0001-0-01.xxxx.xxxxx.use1.cache.amazonaws.com -p 6379
Any help is appreciated!
thanks
After spending a few days on this issue, I was able to find the solution - we need stunnel (or any equivalent tool that creates an SSL tunnel), because redis-cli doesn't support SSL or TLS.
To access data from ElastiCache for Redis nodes with in-transit encryption enabled, you use clients that work with Secure Sockets Layer (SSL). However, redis-cli doesn't support SSL or Transport Layer Security (TLS).
To work around this, you can use stunnel to create an SSL tunnel to the Redis nodes. You then use redis-cli to connect to the tunnel to access data from the encrypted Redis nodes.
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/in-transit-encryption.html
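A minimal sketch of that workaround on the EC2 instance, following the AWS doc above (the endpoint is the placeholder from this question; <password> stands for the cluster's AUTH token):
sudo yum install -y stunnel
# tunnel a local plaintext port to the TLS-enabled configuration endpoint
sudo tee /etc/stunnel/redis-cli.conf <<'EOF'
fips = no
[redis-cli]
client = yes
accept = 127.0.0.1:6379
connect = clustercfg.xxxx.xxxxx.use1.cache.amazonaws.com:6379
EOF
sudo stunnel /etc/stunnel/redis-cli.conf         # start the tunnel
./redis-cli -h 127.0.0.1 -p 6379 -a <password>   # connect through it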
Related
I have created an EKS cluster with API server endpoint access set to "Private". The cluster is configured in a private subnet. I'd like to allow kubectl access from my local PC. I have created a Client VPN; it has access to the private network (verified by SSHing to an EC2 instance running in the same private subnet). But kubectl gets "unable to connect to the server: dial x.x.x.x:443 i/o timeout". "aws eks update-kubeconfig" can see that cluster and updates the local context properly. What could be the problem?
Found out what was missing: port 443 had to be enabled in the Client VPN authorization rules.
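For reference, a hedged sketch of adding such a rule with the AWS CLI; Client VPN authorization rules are CIDR-based, so the rule must cover the subnet hosting the API server endpoint (the endpoint ID and CIDR below are placeholders):
aws ec2 authorize-client-vpn-ingress \
    --client-vpn-endpoint-id cvpn-endpoint-0123456789abcdef0 \
    --target-network-cidr 10.0.0.0/16 \
    --authorize-all-groups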
I have implemented Kafka two-way SSL authentication on a 17-node cluster. I have tested it by running console consumer/producer commands from a few nodes of the cluster. But when I try to do that from the local network (my laptop) it doesn't work; I get an SSL handshake error. I suspect an advertised-listener issue, as there is no advertised listener defined in server.properties. We are using private IPs/private DNS in all our configurations. From the local network the command below works (the IP address is the private IP of one of the brokers):
openssl s_client -connect 10.97.33.111:9093
My server.properties file has below entries
listeners=EXTERNAL://:9092,INTERNAL://:9091,CLIENT://:9093
listener.security.protocol.map=EXTERNAL:SSL,INTERNAL:SSL,CLIENT:SSL
## Inter Broker Listener Configuration
inter.broker.listener.name=INTERNAL
Please suggest what is required to fix this issue.
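If the advertised-listener suspicion above is right, the missing entry would look something like this sketch; the hostnames are hypothetical and must be addresses the laptop can resolve and reach:
# server.properties, per broker (hostnames below are placeholders)
advertised.listeners=EXTERNAL://broker1.example.com:9092,INTERNAL://broker1.internal:9091,CLIENT://broker1.internal:9093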
So I recently installed the stable/redis-ha chart (https://github.com/helm/charts/tree/master/stable/redis-ha) on my Google Cloud-based Kubernetes cluster. The chart was installed as a "headless service" without a ClusterIP. There are 3 pods that make up this cluster, one of which is elected master.
The chart installed with no issues, and the cluster can be accessed via redis-cli from my local PC (after port-forwarding with kubectl).
The output from the chart install provided me with a DNS name for the cluster. Because the service is headless, I am using the following DNS name:
port_name.port_protocol.svc.namespace.svc.cluster.local (as specified by the documentation)
When attempting to connect I get the following error:
redis.exceptions.ConnectionError: Error -2 connecting to port_name.port_protocol.svc.namespace.svc.cluster.local:6379. Name does not resolve.
Not sure what to do here. Any help would be greatly appreciated.
The DNS name appears to be incorrect; it should be in the format below:
<redis-service-name>.<namespace>.svc.cluster.local:6379
Say the Redis service name is redis and the namespace is default; then it would be
redis.default.svc.cluster.local:6379
You can also use the pod DNS name, like below:
<redis-pod-name>.<redis-service-name>.<namespace>.svc.cluster.local:6379
Say the Redis pod name is redis-0, the service name is redis, and the namespace is default; then it would be
redis-0.redis.default.svc.cluster.local:6379
This assumes the service port is the same as the container port, namely 6379.
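To sanity-check the name from inside the cluster, a quick sketch (the pod, service, and namespace assume the defaults above):
kubectl run redis-test --rm -it --image=redis -- redis-cli -h redis-0.redis.default.svc.cluster.local -p 6379 ping
# expected reply: PONG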
Not sure if this is still relevant, but you could enhance the chart to support NodePort, similar to other charts (e.g. rabbitmq-ha), so that you can use any node IP and the configured node port if you want to access Redis from outside the cluster.
I set up a Redis Cluster (ver 3.2.0), not Sentinel, with 4 masters (each with a slave) and a virtual IP randomly pointing to one of the 4 master servers' IPs.
VIP: 10.0.0.10:6379, connecting to M1, M2, M3, M4:
M1: 10.0.0.1:6379 - S1: 10.0.0.5:6378
M2: 10.0.0.2:6379 - S2: 10.0.0.6:6378
M3: 10.0.0.3:6379 - S3: 10.0.0.7:6378
M4: 10.0.0.4:6379 - S4: 10.0.0.8:6378
My client uses ServiceStack to connect to my cluster via the VIP 10.0.0.10:6379, but I get the error:
An exception of type 'ServiceStack.Redis.RedisResponseException' occurred in ServiceStack.Redis.dll but was not handled in user code
Additional information: MOVED 2872 10.0.0.3:6379
My current connection string:
<add key="REDIS_MANAGER" value="redsAuthEnt@10.0.0.10:6379?connectTimeout=10000" />
I think this happens because my ServiceStack connection string connects to Redis as a standalone server, not as a Redis Cluster.
It's the same as when we have to use -c with the redis-cli command line.
Please help me craft a connection string to my Redis Cluster using the ServiceStack client, or suggest any other solution for using Redis Cluster.
ServiceStack.Redis does not support Redis Cluster; you can vote for this feature request on UserVoice.
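For comparison, redis-cli's cluster mode (the -c flag mentioned in the question) follows MOVED redirects automatically; a quick sketch against the VIP above:
redis-cli -c -h 10.0.0.10 -p 6379
# a key owned by another shard is followed transparently, e.g.:
# 10.0.0.10:6379> get somekey
# -> Redirected to slot [2872] located at 10.0.0.3:6379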
I am trying to run Kubernetes on EC2, and I used a CoreOS alpha channel AMI. I configured a kubectl SSH tunnel for communication between the kubectl client and the Kubernetes API.
But when I try the kubectl api-versions command, I get the following error.
Couldn't get available api versions from server: Get http://MyIP:8080/api: dial tcp MyIP:8080: connection refused
MyIP - this has been set accordingly.
What could be the reason for this?
The reason for this issue was that I hadn't set the KUBERNETES_MASTER environment variable properly. As there is an SSH tunnel between the kubectl client and the API server, the KUBERNETES_MASTER environment variable should point to localhost.
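A minimal sketch of that setup, assuming the tunnel forwards local port 8080 to the API server (the master IP and user are placeholders):
ssh -f -N -L 8080:127.0.0.1:8080 core@<master-ip>   # background SSH tunnel to the master
export KUBERNETES_MASTER=http://localhost:8080      # point kubectl at the tunnel
kubectl api-versions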