GCE LoadBalancer networking breaks when a Kubernetes pod is restarted

Using Google Container Engine (hosted k8s) with the following version info:
Server Version: version.Info{
Major:"1", Minor:"4",
GitVersion:"v1.4.7",
GitCommit:"92b4f971662de9d8770f8dcd2ee01ec226a6f6c0",
GitTreeState:"clean",
BuildDate:"2016-12-10T04:43:42Z",
GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"
}
I've encountered a serious issue and I'm not sure if I set up the configuration properly or if this is a Kubernetes bug. We have a replication controller that is responsible for our external REST API. The pods are exposed via a k8s service, specifically an external LoadBalancer. When k8s restarts the container, the service fails to redirect traffic to the restarted container. If all pods for a replication controller fail, the API is no longer externally accessible.
I'd expect the service to automatically redirect traffic to a restarted pod, and the process should be seamless. I can confirm the pod restarts without hassle and the application pod is alive and well. However, the service completely fails to direct traffic to it and instead returns a 502 error.
Here's the configuration for the service:
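Roughly, the manifest has this shape (the identifiers here are anonymized placeholders, not the literal file):

# Shape of the Service (placeholder names): a type: LoadBalancer
# Service that selects the REST API pods by label.
apiVersion: v1
kind: Service
metadata:
  name: rest-api
spec:
  type: LoadBalancer
  selector:
    app: rest-api        # must match the labels on the replication controller's pod template
  ports:
  - port: 80             # port exposed on the external load balancer
    targetPort: 8080     # container port the API actually listens on
    protocol: TCP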
Should something be different in the service config to force the LoadBalancer to register a restarted container as active?

Related

Istio load balancer not working while application is running, pods are ok

I have one application with multiple microservices, exposed via a Service of type LoadBalancer on port 443. On deployment to EKS it generates a load balancer URL, which runs successfully if I hit it.
Now I am trying to do a blue-green deployment with Istio.
I installed Istio. It created a load balancer in the istio-system namespace. I did all the setup of my application, along with the Gateway and VirtualService, in a different namespace.
I used HTTPS port 443 in the Gateway manifest. All EKS instances are 'in-service', yet the Istio load balancer does not work, and I am not sure how to debug it.
Seeking help.
I am attaching the code as an image, sorry for that; a rough sketch of the manifests' shape, with placeholder names, follows below.
If I run the application with a LoadBalancer without involving Istio, it runs successfully, but Istio's load balancer is somehow not working. I am lost here.
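Roughly (hostnames, resource names, and the TLS secret below are placeholders, not my real values):

# Gateway terminating HTTPS on 443 at the Istio ingress gateway.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway
  namespace: my-namespace
spec:
  selector:
    istio: ingressgateway           # binds to the ingress gateway in istio-system
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: my-tls-secret # TLS secret; must live where the ingress gateway runs
    hosts:
    - "myapp.example.com"
---
# VirtualService routing traffic from the Gateway to the app's Service.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-virtualservice
  namespace: my-namespace
spec:
  hosts:
  - "myapp.example.com"
  gateways:
  - my-gateway
  http:
  - route:
    - destination:
        host: my-service            # Kubernetes Service name of the app
        port:
          number: 443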

Endpoint Paths for APIs inside Docker and Kubernetes

I am a newbie with Docker and Kubernetes, and I am now developing RESTful APIs which will later be deployed to Docker containers in a Kubernetes cluster.
How will the path of the endpoints be changed? I have heard that Docker Swarm and Kubernetes add some words to the endpoints.
The "path" part of the endpoint URLs themselves (for this SO question, the /questions/53008947/... part) won't change. But the rest of the URL might.
Docker publishes services at a TCP-port level (docker run -p option, Docker Compose ports: section) and doesn't look at what traffic is going over a port. If you have something like an Apache or nginx proxy as part of your stack, that might change the HTTP-level path mappings, but you'd probably be aware of that in your environment.
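For example, a Compose fragment like this (image name and port numbers are made up for illustration) remaps only the TCP port; the HTTP path inside the traffic is untouched:

# docker-compose.yml fragment: publishes container port 8080 as host
# port 12345; Docker never inspects or rewrites the HTTP paths.
version: "3"
services:
  api:
    image: my-api:latest      # placeholder image
    ports:
      - "12345:8080"          # hostPort:containerPort, TCP-level only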
Kubernetes works similarly, but there are more layers. A container runs in a Pod, and can publish some port out of the Pod. That's not used directly; instead, a Service refers to the Pod (by its labels) and republishes its ports, possibly on different port numbers. The Service has a DNS name service-name.namespace.svc.cluster.local that can be used within the cluster; you can also configure the Service to be reachable on a fixed TCP port on every node in the cluster (NodePort) or, if your Kubernetes is running on a public-cloud provider, to create a load balancer there (LoadBalancer). Again, all of this is strictly at the TCP level and doesn't affect HTTP paths.
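As a sketch (all names and port numbers here are hypothetical), the republishing looks like:

# Service selecting pods labeled app: my-api and republishing their
# container port 8080 as port 80; reachable in-cluster as
# my-api.my-namespace.svc.cluster.local and, because of the NodePort,
# on port 30080 of every node in the cluster.
apiVersion: v1
kind: Service
metadata:
  name: my-api
  namespace: my-namespace
spec:
  type: NodePort
  selector:
    app: my-api          # matches the Pod's labels
  ports:
  - port: 80             # Service port (cluster-internal)
    targetPort: 8080     # container port inside the Pod
    nodePort: 30080      # fixed TCP port on every node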
There is one other Kubernetes piece, an Ingress controller, which acts as a declarative wrapper around the nginx proxy (or something else with similar functionality). That does operate at the HTTP level and could change paths.
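For instance (hypothetical names; this assumes an Ingress controller is installed in the cluster), this is the one layer that matches on HTTP paths:

# Ingress routing requests whose path starts with /api to the my-api
# Service; a controller-specific annotation could additionally rewrite
# the path the backend sees.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-api
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: my-api
            port:
              number: 80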
The other corollary to this is that the URL to reach a service might be different in different environments: http://localhost:12345/path in a local development setup, http://other_service:8080/path in Docker Compose, http://other-service/path in Kubernetes, https://api.example.com/other/path in production. You need some way to make that configurable (often an environment variable).
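For example (variable name and values invented for illustration), a Compose file can inject the base URL per environment:

# docker-compose.yml fragment: the app reads OTHER_SERVICE_URL instead
# of hardcoding a host; Kubernetes or production would supply a
# different value through the same variable.
version: "3"
services:
  api:
    image: my-api:latest
    environment:
      OTHER_SERVICE_URL: "http://other_service:8080"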

HTTPS enabled Spring Boot App does not work when deployed on Beanstalk

I enabled HTTPS for my APIs using a locally generated SSL key, following the instructions in Enable HTTPS in Spring Boot. The SSL settings are included in application.properties:
server.port=9443
server.ssl.key-store=classpath:server-keystore.jks
server.ssl.key-store-password=123456
server.ssl.keyAlias=server-keypair
server.ssl.key-store-type=JKS
It works with HTTPS when tested locally. Then I packaged it as a JAR and deployed it to an Amazon Beanstalk environment. When I hit the endpoint https://eb-env-url:9443/endpoint/, it timed out without any specific error. The Beanstalk log does not show that any request made it through to the server at all.
I read somewhere that a personal key may not work when deployed to the cloud, but it should at least give me some security error that points in that direction. I suspect this may have to do with the environment configuration. I previously used HTTP only for the environment and did not make any changes to the config after switching to HTTPS. One of the environment variables is SERVER_PORT, which is set to 5000. I am not sure if some changes need to be made in the Beanstalk environment in order to make HTTPS work.
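For reference, here is the same configuration as application.yml; if I understand Spring Boot's relaxed binding correctly, an OS-level SERVER_PORT environment variable overrides server.port, so with SERVER_PORT=5000 the app would actually listen on 5000 rather than 9443:

# application.yml equivalent of the properties above; note that a
# SERVER_PORT environment variable takes precedence over server.port,
# so SERVER_PORT=5000 would win over the 9443 configured here.
server:
  port: 9443
  ssl:
    key-store: classpath:server-keystore.jks
    key-store-password: "123456"
    key-alias: server-keypair
    key-store-type: JKS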

Asp.net core application is not accessible from an external load balanced Azure VM

I have created a VM behind an external load balancer in Azure, and I am using IIS as the reverse-proxy web server to host the ASP.NET Core application.
I am able to access the application inside the VM using localhost, but not able to access it from my client machine through the public IP configured for the load balancer.
I have configured load-balancing rules for incoming traffic on ports 80 and 443 for the load balancer and specified appropriate NSGs for those ports.
Before deploying the ASP.NET Core application I was able to access the Default Web Site from my client machine, so I assume that the inbound rules are taken into account and working fine.
This is a self-contained application, and since I am able to access it inside the VM through localhost, I assume that the ASP.NET Core hosting module and the other required configuration are proper.
Please let me know if there is anything else I could be missing.
I guess I have figured out what the issue is.
The load balancer probe for the application is configured as HTTP, since it's a web server, and is instructed to check the default path "/". Since the application I have created does not serve anything on "/", the probe considers the node unhealthy and the load balancer does not respond or serve anything.
I changed the probe to TCP and it works just fine.
Thanks,
Teja

AWS ELB Configuration for a multi-master

I have a multi-master Origin setup in AWS, with an ELB in front that uses an SSL certificate configuration.
I'm having difficulty configuring access to the web console, as it seems that the web sockets are being interrupted. I can tell this because of the message below and the inability to access the logs or terminal for a pod in the web console.
Server connection interrupted
What is the proper configuration in AWS to allow the web console to function correctly?
I resolved my issue. I figured out the ELB configuration by following the CloudFormation template in the reference architecture here:
https://github.com/openshift/openshift-ansible-contrib/reference-architecture/aws-ansible/playbooks/roles/cloudformation-infra/files/greenfield.json
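As I understand that template (the resource names below are placeholders, not its literal contents), the key detail is that the console/master ELB uses plain TCP passthrough listeners rather than HTTP/HTTPS ones, so the web-socket upgrade is never terminated at the ELB:

# CloudFormation fragment (classic ELB): TCP passthrough on 443 keeps
# the web sockets used by the web console intact.
MasterElb:
  Type: AWS::ElasticLoadBalancing::LoadBalancer
  Properties:
    LoadBalancerName: openshift-master    # placeholder
    Subnets:
      - subnet-00000000                   # placeholder
    Listeners:
      - LoadBalancerPort: "443"
        InstancePort: "443"
        Protocol: TCP                     # not HTTPS: passthrough, no termination at the ELB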
I also had an issue with my version of Chrome (50) and had to upgrade to version 55. Basically I was getting 'ERR_DISALLOWED_URL_SCHEME'. This post pointed me towards upgrading Chrome:
https://productforums.google.com/forum/#!topic/chrome/leVmLPNVISI