Connectivity to AWS EKS control plane via Client VPN

I have created an EKS cluster with API server endpoint access set to "Private". The cluster is configured in a private subnet. I'd like to allow kubectl access from my local PC. I have created a Client VPN, and it has access to the private network (verified by SSHing to an EC2 instance running in the same private subnet). But kubectl gets "unable to connect to the server: dial x.x.x.x:443: i/o timeout". "aws eks update-kubeconfig" can see the cluster and updates the local context properly. What could be the problem?

Found out what was missing: traffic on port 443 had to be permitted in the Client VPN authorization rules.
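A rough sketch of the kind of rule involved, using the AWS CLI (the endpoint ID and CIDRs below are placeholders, not values from this question). Note that Client VPN authorization rules authorize destination CIDRs; the security group attached to the EKS endpoint ENIs must separately allow inbound 443 from the VPN clients:

```shell
# Authorize VPN clients to reach the VPC CIDR hosting the EKS
# endpoint ENIs (placeholder endpoint ID and CIDR).
aws ec2 authorize-client-vpn-ingress \
  --client-vpn-endpoint-id cvpn-endpoint-0123456789abcdef0 \
  --target-network-cidr 10.0.0.0/16 \
  --authorize-all-groups

# The cluster security group must also allow inbound TCP 443
# from the Client VPN client CIDR range (placeholder values).
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 443 \
  --cidr 172.16.0.0/22
```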


Cannot access the application via node ip and node port

I have to deploy an application via Helm by supplying a VM IP address and node port. It's a bare-metal Kubernetes cluster. The cluster has an ingress controller installed (as NodePort; this value is supplied in the Helm command). The problem is that I receive a 404 Not Found error when I access the application as:
curl http://{NODE_IP}:{nodeport}/path
There is no firewall, and I have an "allow all ingress traffic" policy. I have now tried everything I can think of but cannot find the root cause.
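A 404 from the ingress controller (rather than a connection timeout) usually means the request reached the controller but no Ingress rule matched the request's host and path. A minimal sketch of an Ingress that would match the curl above; the name, ingress class, service name, and port are placeholders and must match what the Helm chart actually deploys:

```yaml
# Hypothetical Ingress; no host rule, so it matches requests made
# directly to the node IP. pathType: Prefix matches /path and below.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /path
            pathType: Prefix
            backend:
              service:
                name: app-service   # placeholder service name
                port:
                  number: 80       # placeholder service port
```

Checking `kubectl get ingress -A` and the controller's logs will show whether any rule matched the request.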

AWS CLI S3 list with a default endpoint

I'm using the following command on some EC2 instances in order to get some configuration files from an S3 bucket. The instances have an instance role attached with full S3 permissions:
aws s3 cp s3://bucket-name/file ./ --region eu-west-1
This works as expected on some instances provisioned by me with a default AMI, but on some existing instances in the same region and AZ, with the same instance role, I'm facing the following error:
Connect timeout on endpoint URL: "https://bucket-name.eu-west-1.amazonaws.com/?list-type=2&delimiter=2%F&prefix=&encoding-type=url"
failed to run commands: exit status 255
My question is: why is the S3 URI not prefixed with s3://, and why does the error show an https:// URL? It's clear that this AWS CLI version tries to reach S3 over https rather than the s3:// endpoint I provided in the command. Is there any way to override this?
Behind the scenes the AWS CLI calls AWS services over HTTPS, which is why on timeout you see https://bucket-name.eu-west-1... rather than s3:// .
By default, the AWS CLI sends requests to AWS services by using HTTPS on TCP port 443. To use the AWS CLI successfully, you must be able to make outbound connections on TCP port 443.
The timeout on some instances is likely because they are in a private subnet without a NAT gateway. You can quickly check this with ping google.com; if there is no response, the instance is in a private subnet without NAT, or outbound traffic is not allowed.
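One caveat with ping: it uses ICMP, which can be blocked even when outbound TCP 443 works (and vice versa). A small sketch that tests TCP reachability directly, using bash's built-in /dev/tcp so no extra tools are needed:

```shell
# check_tcp HOST PORT [TIMEOUT]: exit 0 if a TCP connection can be
# opened to HOST:PORT within TIMEOUT seconds (default 5).
check_tcp() {
  timeout "${3:-5}" bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Example: can this instance reach the regional S3 endpoint on 443?
# (endpoint name taken from the error message above)
check_tcp s3.eu-west-1.amazonaws.com 443 5 && echo reachable || echo blocked
```

If this prints "blocked" on the failing instances but "reachable" on the working ones, the problem is the subnet's outbound routing or rules, not the CLI.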

Not able to connect to Kafka cluster ( on AWS ) from local network after SSL implementation

I have implemented Kafka two-way SSL authentication on a 17-node cluster. I have tested it by running console consumer/producer commands from a few nodes of the cluster. But when I try to do that from the local network (laptop) it doesn't work; I get an SSL handshake error. I suspect it is an advertised-listener issue, as there is no advertised listener defined in server.properties. We are using private IPs/private DNS in all our configurations. From the local network the command below works (the IP address is the private IP of one of the brokers):
openssl s_client -connect 10.97.33.111:9093
My server.properties file has below entries
listeners=EXTERNAL://:9092,INTERNAL://:9091,CLIENT://:9093
listener.security.protocol.map=EXTERNAL:SSL,INTERNAL:SSL,CLIENT:SSL
## Inter Broker Listener Configuration
inter.broker.listener.name=INTERNAL
Please suggest what is required to fix this issue.
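When advertised.listeners is not set, each broker advertises the address derived from its listeners config (effectively the machine's own resolved hostname), and clients outside the network then try to bootstrap against addresses they cannot resolve or route to; with SSL, the handshake also fails if the certificate does not match the hostname the client connects to. A sketch of the kind of per-broker entry typically needed (the hostnames below are placeholders and must be resolvable from the client side and match each broker's certificate):

```properties
# server.properties on broker 1 (placeholder hostnames)
listeners=EXTERNAL://:9092,INTERNAL://:9091,CLIENT://:9093
listener.security.protocol.map=EXTERNAL:SSL,INTERNAL:SSL,CLIENT:SSL
inter.broker.listener.name=INTERNAL
advertised.listeners=EXTERNAL://broker1.example.internal:9092,INTERNAL://broker1.example.internal:9091,CLIENT://broker1.example.internal:9093
```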

Run Kubernetes on EC2

I am trying to run Kubernetes on EC2, using a CoreOS alpha-channel AMI. I configured an SSH tunnel for communication between the kubectl client and the Kubernetes API.
But when I try kubectl api-versions command, I am getting following error.
Couldn't get available api versions from server: Get http://MyIP:8080/api: dial tcp MyIP:8080: connection refused
MyIP - this has been set accordingly.
What could be the reason for this?
The reason for this issue was that I hadn't set the kubernetes_master environment variable properly. Since there is an SSH tunnel between the kubectl client and the API, the kubernetes master environment variable should be set to localhost.
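A sketch of the setup this describes, with the host and key as placeholders: the tunnel forwards a local port to the API server on the master, and kubectl is then pointed at the local end of the tunnel instead of the instance's public IP:

```shell
# Forward local 8080 to the API server on the master node
# (placeholder key path and host).
ssh -i ~/.ssh/mykey.pem -N -L 8080:127.0.0.1:8080 core@MASTER_PUBLIC_IP &

# Point kubectl at the local end of the tunnel, then retry.
export KUBERNETES_MASTER=http://localhost:8080
kubectl api-versions
```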

SSH connection failed to EC2 instance after Port change

I am working on an Amazon EC2 web server. I changed the default SSH port to 8083. After restarting the sshd service, I cannot access the server using either the new port or the old port. How can I connect to my server again?
You need to allow access to port 8083 in the EC2 instance's security groups.
Check in the Amazon EC2 Management Console under Network & Security → Security Groups.
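The same rule can be added from the AWS CLI; the security group ID and source CIDR below are placeholders for your instance's group and your client network:

```shell
# Allow inbound TCP 8083 (the new SSH port) in the instance's
# security group (placeholder group ID and CIDR).
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 8083 \
  --cidr 203.0.113.0/24
```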