Istio load balancer not working while application is running, pods are ok - amazon-eks

I have an application with multiple microservices which is exposed through a Service of type LoadBalancer on port 443. When I deploy it on EKS it generates a load balancer URL, and if I hit that URL the application works.
Now I am trying to do a blue-green deployment with Istio.
I installed Istio. It created a load balancer in the istio-system namespace. I did all the setup of my application, along with a Gateway and a VirtualService, in a different namespace.
I used HTTPS on port 443 in the Gateway manifest. All EKS instances are 'in-service', yet the Istio load balancer does not work. I am clueless how to debug this.
Seeking help.
I am attaching the code as an image, sorry for that.
If I run the application with a LoadBalancer Service without involving Istio, it works fine, but Istio's load balancer is somehow not working. I am a bit lost here.
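For reference, the setup described above usually looks roughly like the following Gateway and VirtualService. This is only a sketch for illustration; the host, secret, service names, and ports are assumptions, not the manifests from the attached image:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-app-gateway            # hypothetical name
  namespace: my-app               # the application namespace, not istio-system
spec:
  selector:
    istio: ingressgateway         # selects the default istio-ingressgateway pods
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: my-app-cert # TLS secret; with the default SDS setup it must live in istio-system
    hosts:
    - "myapp.example.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-app
  namespace: my-app
spec:
  hosts:
  - "myapp.example.com"
  gateways:
  - my-app-gateway
  http:
  - route:
    - destination:
        host: my-app-svc          # the app's ClusterIP service
        port:
          number: 8080

Typical things to check when the Istio load balancer does not answer: the hosts in the Gateway and VirtualService must match the Host/SNI you are sending, the TLS secret referenced by credentialName must exist in the istio-system namespace (for the default setup), the Gateway selector must match the labels on the istio-ingressgateway pods, and the ingress gateway Service must actually expose port 443.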

Related

Communication between AWS Fargate Services not working even with route53 setup

I have 2 services running in a single private VPC subnet (same availability zone). Each service is based on a container from here: https://github.com/spring-petclinic/spring-petclinic-microservices .
I've set up route53 service endpoints for both services.
When I run my tasks (each within their own service) service A times out calling service B over service B's route53 endpoint. Using localhost doesn't work because these containers are in separate services.
When I create a container for my task definition, I assign the port that my container is using (using port mapping field). However I notice in the console there is this note: "Host port mappings are not valid when the network mode for a task definition is host or awsvpc. To specify different host and container port mappings, choose the Bridge network mode."
Since I'm using Fargate, I am using awsvpc mode. So is this telling me that my port mapping setting isn't doing anything? Is that why my services are timing out?
Then when I google bridge mode, this seems to tell me that the awsvpc networking mode supports service discovery: https://aws.amazon.com/about-aws/whats-new/2018/05/amazon-ecs-service-discovery-supports-bridge-and-host-container-/
So how does "bridge mode" work here? Why does the port mapping field not work for awsvpc?
Edit:
I read this How to communicate between Fargate services on AWS ECS? and he just says "I created a new service and things started working." That's a bit disheartening.
Edit2:
Yes, my VPC has DNS resolution enabled.
As it turns out, the security group on my service was only allowing HTTP on port 80. Those are the inbound rules of the default SG that the service wizard gives you. I updated it to allow traffic on my container ports and the services seem to be talking to each other now.
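For illustration only, the fix boils down to an ingress rule on the service's security group for the container ports. A CloudFormation-style sketch (the ports, CIDR, and VPC id are assumptions; the actual change was made in the console):

AWSTemplateFormatVersion: "2010-09-09"
Resources:
  FargateServiceSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow traffic between the Fargate services on their container ports
      VpcId: vpc-0123456789abcdef0          # hypothetical VPC id
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 8080                     # hypothetical container port
          ToPort: 8080
          CidrIp: 10.0.0.0/16                # the private subnet range, instead of the default HTTP-80-only rule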

Endpoint Paths for APIs inside Docker and Kubernetes

I am a newbie with Docker and Kubernetes, and I am now developing RESTful APIs which will later be deployed in Docker containers in a Kubernetes cluster.
How will the path of the endpoints change? I have heard that Docker Swarm and Kubernetes add something to the endpoint URLs.
The "path" part of the endpoint URLs themselves (for this SO question, the /questions/53008947/... part) won't change. But the rest of the URL might.
Docker publishes services at a TCP-port level (docker run -p option, Docker Compose ports: section) and doesn't look at what traffic is going over a port. If you have something like an Apache or nginx proxy as part of your stack that might change the HTTP-level path mappings, but you'd probably be aware of that in your environment.
Kubernetes works similarly, but there are more layers. A container runs in a Pod, and can publish some port out of the Pod. That's not used directly; instead, a Service refers to the Pod (by its labels) and republishes its ports, possibly on different port numbers. The Service has a DNS name service-name.namespace.svc.cluster.local that can be used within the cluster; you can also configure the Service to be reachable on a fixed TCP port on every node in the cluster (NodePort) or, if your Kubernetes is running on a public-cloud provider, to create a load balancer there (LoadBalancer). Again, all of this is strictly at the TCP level and doesn't affect HTTP paths.
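A sketch of what that Service layer looks like (the names and ports are illustrative, not from the question):

apiVersion: v1
kind: Service
metadata:
  name: other-service          # becomes other-service.my-ns.svc.cluster.local in DNS
  namespace: my-ns
spec:
  selector:
    app: other-service         # must match the labels on the Pod
  ports:
  - port: 80                   # port the Service publishes inside the cluster
    targetPort: 8080           # port the container actually listens on
  # type: NodePort or type: LoadBalancer would add external reachability,
  # but none of this rewrites HTTP paths.

From inside the cluster this would be reached as http://other-service.my-ns.svc.cluster.local/path (or just http://other-service/path from the same namespace); the path part is untouched.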
There is one other Kubernetes piece, an Ingress controller, which acts as a declarative wrapper around the nginx proxy (or something else with similar functionality). That does operate at the HTTP level and could change paths.
The other corollary to this is that the URL to reach a service might be different in different environments: http://localhost:12345/path in a local development setup, http://other_service:8080/path in Docker Compose, http://other-service/path in Kubernetes, https://api.example.com/other/path in production. You need some way to make that configurable (often an environment variable).
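One common way to make that configurable is an environment variable in the Deployment, for example (a sketch with made-up names):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
      - name: my-api
        image: registry.example.com/my-api:1.0    # hypothetical image
        env:
        - name: OTHER_SERVICE_URL                 # read by the application at startup
          value: "http://other-service/path"      # would be http://localhost:12345/path in local dev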

Asp.net core application is not accessible from an external load balanced Azure VM

I have created a VM behind an external load balancer in Azure and I am using IIS as the reverse proxy webserver to host the asp.net core application.
I am able to access the application inside the VM using localhost, but I am not able to access it from my client machine through the public IP configured for the load balancer.
I have configured load-balancing rules for incoming traffic on port 80 and port 443 on the load balancer and specified appropriate NSGs for those ports.
Before deploying the asp.net core application I was able to access the Default Web Site from my client machine, so I assume that the inbound rules are taken into account and working fine.
This is a self-contained application, and since I am able to access it inside the VM through localhost I assume that the ASP.NET Core hosting module and the other required configuration are correct.
Please let me know if there is anything else I can be missing.
I guess I have figured out what the issue is.
The load balancer probe for the application is configured as HTTP (since it is a web server) and is instructed to check the default path "/". Since the application I have created does not serve anything on "/", the probe considers the node unhealthy and the load balancer does not respond or serve anything.
I changed the probe to TCP and it works just fine.
Thanks,
Teja

GCE LoadBalancer networking breaks when a Kubernetes pod is restarted

Using Google Container Engine (hosted k8s) with the following version info:
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.7", GitCommit:"92b4f971662de9d8770f8dcd2ee01ec226a6f6c0", GitTreeState:"clean", BuildDate:"2016-12-10T04:43:42Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
I've encountered a serious issue and I am not sure whether I set up the configuration properly or whether this is a Kubernetes bug. We have a replication controller that is responsible for our external REST API. The pods are exposed via a k8s service, specifically an external LoadBalancer. When k8s restarts the container, the service fails to redirect traffic to the restarted container. If all pods for an r.c. fail, the API is no longer externally accessible.
I'd expect the service should automatically redirect traffic to a restarted pod and the process should be seamless. I can confirm the pod restarts without hassle and the application pod is alive and well. However, the service completely fails to direct traffic, and instead returns a 502 error.
Here's the configuration for the service:
Should something be different in the service config to force the LoadBalancer to register a restarted container as active?

How can you publish a Kubernetes Service without using the type LoadBalancer (on GCP)

I would like to avoid using type: "LoadBalancer" for a certain Kubernetes Service, but still be able to publish it on the Internet. I am using Google Cloud Platform (GCP) to run a Kubernetes cluster, currently on a single node.
I tried to use the externalIPs Service configuration, giving it, in turn, the IPs of:
the instance hosting the Kubernetes cluster (its external IP, which also coincides with the IP address of the Kubernetes node as reported by kubectl describe node)
the Kubernetes cluster endpoint (as reported by the Google Cloud Console in the details of the cluster)
the public/external IP of another Kubernetes Service of type LoadBalancer running on the same node.
None of the above helped me reach my application using the Kubernetes Service with an externalIPs configuration.
So, how can I publish a service on the Internet without using a LoadBalancer-type Kubernetes Service?
If you don't want to use a LoadBalancer service, other options for exposing your service publicly are:
Type NodePort
Create your service with type set to NodePort, and Kubernetes will allocate a port on all of your node VMs on which your service will be exposed (docs). E.g. if you have 2 nodes, w/ public IPs 12.34.56.78 and 23.45.67.89, and Kubernetes assigns your service port 31234, then the service will be available publicly on both 12.34.56.78:31234 & 23.45.67.89:31234
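A minimal NodePort Service looks like this (names and ports are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80          # cluster-internal port
    targetPort: 8080  # container port
    nodePort: 31234   # optional; if omitted, Kubernetes picks one from 30000-32767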
Specify externalIPs
If you have the ability to route public IPs to your nodes, you can specify externalIPs in your service to tell Kubernetes "If you see something come in destined for that IP w/ my service port, route it to me." (docs)
The cluster endpoint won't work for this because that is only the IP of your Kubernetes master. The public IP of another LoadBalancer service won't work because the LoadBalancer is only configured to route the port of that original service. I'd expect the node IP to work, but it may conflict if your service port is a privileged port.
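A sketch of the externalIPs variant (the IP here is a placeholder for an address that actually routes to one of your nodes):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
  externalIPs:
  - 12.34.56.78       # traffic arriving on this IP and port 80 is routed to the service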
Use the /proxy/ endpoint
The Kubernetes API includes a /proxy/ endpoint that allows you to access services on the cluster endpoint IP. E.g. if your cluster endpoint is 1.2.3.4, you could reach my-service in namespace my-ns by accessing https://1.2.3.4/api/v1/proxy/namespaces/my-ns/services/my-service with your cluster credentials. This should really only be used for testing/debugging, as it takes all traffic through your Kubernetes master on the way to the service (extra hops, SPOF, etc.).
There's another option: set the hostNetwork flag on your pod.
For example, you can use Helm 3 to install the nginx ingress controller this way:
helm install --set controller.hostNetwork=true nginx-ingress nginx-stable/nginx-ingress
The nginx is then available on ports 80 and 443 on the IP address of the node that runs the pod. You can use node selectors, affinity, or other tools to influence this choice.
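What controller.hostNetwork=true boils down to at the pod level is roughly the following (a sketch, not the chart's actual rendered template):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-ingress-controller      # illustrative name
spec:
  hostNetwork: true                   # the pod shares the node's network namespace,
                                      # so 80/443 bind directly on the node's IP
  containers:
  - name: nginx-ingress
    image: nginx/nginx-ingress        # image name is an assumption
    ports:
    - containerPort: 80
    - containerPort: 443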
There are a few idiomatic ways to expose a service externally in Kubernetes (see note#1):
Service.Type=LoadBalancer, as OP pointed out.
Service.Type=NodePort, which exposes the service on a port on each node's IP.
Service.Type=ExternalName, which maps the Service to the contents of the externalName field by returning a CNAME record (you need kube-dns version 1.7 or CoreDNS version 0.0.8 or later to use the ExternalName type).
Ingress. This is a newer concept that exposes external HTTP and/or HTTPS routes to services within the Kubernetes cluster; you can even map a route to multiple services. However, it only handles HTTP and/or HTTPS routes. (See note#2) A minimal example is sketched below.
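For the Ingress option, a minimal sketch (the host, service name, and port are assumptions, and an Ingress controller such as ingress-nginx or the GCE controller must be running for it to take effect):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service      # an ordinary ClusterIP Service
            port:
              number: 80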