Azure Container Service Port Load Balancer

While porting my application, which runs on Docker Swarm locally, to Azure Container Service, I am stuck on the Azure load balancer part.
Locally I have an HAProxy container running on the Swarm master and multiple web containers.
The web containers only expose their ports; the ports are not mapped to the machines they run on.
The HAProxy container has its port mapped to the master and internally talks to my web containers to load-balance across them.
This lets me run any number of containers on a limited number of workers in Docker Swarm.
In Azure Container Service, I see that the Azure load balancer only talks to ports that are mapped. That means I can either run only one container per agent, or run an internal load balancer among my containers, which implies users go through two load balancers before hitting my application.
That is not an ideal scenario when my application uses sticky sessions.
So apparently Microsoft's statement that "everything works the same in Azure containers" goes for a toss?
What solutions are available, or am I doing something wrong here?

The solution in ACS is almost identical. Use HAProxy and have the Azure LB talk to that. The only difference is that you will not be running the proxy on the master, you will have Swarm deploy it to an agent for you.
You shouldn't really be running workloads on your masters. What would you do if, for example, a DDoS attack left you unable to reach your masters? Having Swarm deploy the proxy for you also means Swarm can monitor the proxy's health.
You could, if you really wanted to, run the proxy on the master as you do now. The solution would be the same, have the Azure LB provide a public connection to the proxy just as you currently do.
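A minimal sketch of that layout, assuming Docker swarm mode, a stock haproxy image, and an haproxy.cfg you supply whose backend points at the web service name (all names below are illustrative, not from the question):

    # docker-stack.yml -- deploy with: docker stack deploy -c docker-stack.yml mystack
    # haproxy.cfg (not shown) points its backend at web:8080; Swarm's built-in
    # DNS/VIP spreads those connections across the replicas.
    version: "3.3"
    services:
      web:
        image: myorg/myweb:latest   # hypothetical app image, listening on 8080 internally
        deploy:
          replicas: 6               # more containers than agents is fine
        networks: [webnet]
      proxy:
        image: haproxy:2.4
        ports:
          - "80:80"                 # the only published port; point the Azure LB here
        configs:
          - source: haproxy_cfg
            target: /usr/local/etc/haproxy/haproxy.cfg
        networks: [webnet]
    configs:
      haproxy_cfg:
        file: ./haproxy.cfg
    networks:
      webnet:

Swarm schedules the proxy onto an agent for you, and the Azure LB only needs a rule for port 80.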

Related

Endpoint Paths for APIs inside Docker and Kubernetes

I am a newbie with Docker and Kubernetes, and I am now developing RESTful APIs which will later be deployed to Docker containers in a Kubernetes cluster.
How will the paths of the endpoints change? I have heard that Docker Swarm and Kubernetes add some words to the endpoints.
The "path" part of the endpoint URLs themselves (for this SO question, the /questions/53008947/... part) won't change. But the rest of the URL might.
Docker publishes services at a TCP-port level (docker run -p option, Docker Compose ports: section) and doesn't look at what traffic is going over a port. If you have something like an Apache or nginx proxy as part of your stack that might change the HTTP-level path mappings, but you'd probably be aware of that in your environment.
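For example, a minimal Compose snippet (service and image names are made up) publishing a container's port 8080 on host port 12345; Compose neither knows nor cares what protocol or paths flow over it:

    # docker-compose.yml (illustrative)
    version: "3"
    services:
      api:
        image: myorg/api:latest   # hypothetical image
        ports:
          - "12345:8080"          # host port 12345 -> container port 8080; HTTP paths untouched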
Kubernetes works similarly, but there are more layers. A container runs in a Pod, and can publish some port out of the Pod. That's not used directly; instead, a Service refers to the Pod (by its labels) and republishes its ports, possibly on different port numbers. The Service has a DNS name service-name.namespace.svc.cluster.local that can be used within the cluster; you can also configure the Service to be reachable on a fixed TCP port on every node in the cluster (NodePort) or, if your Kubernetes is running on a public-cloud provider, to create a load balancer there (LoadBalancer). Again, all of this is strictly at the TCP level and doesn't affect HTTP paths.
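A sketch of such a Service (all names are placeholders), republishing the pods' port 8080 as port 80 inside the cluster:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-api              # yields my-api.<namespace>.svc.cluster.local
    spec:
      type: ClusterIP           # or NodePort / LoadBalancer for outside access
      selector:
        app: my-api             # matches the pods' labels
      ports:
        - port: 80              # port callers connect to
          targetPort: 8080      # port the container actually listens on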
There is one other Kubernetes piece, an Ingress controller, which acts as a declarative wrapper around the nginx proxy (or something else with similar functionality). That does operate at the HTTP level and could change paths.
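For instance, a minimal Ingress (using the current networking.k8s.io/v1 API; host, path, and service names are invented) that does operate on HTTP paths:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-api
    spec:
      rules:
        - host: api.example.com
          http:
            paths:
              - path: /v1            # HTTP-level routing: here paths can change
                pathType: Prefix
                backend:
                  service:
                    name: my-api
                    port:
                      number: 80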
The other corollary to this is that the URL to reach a service might be different in different environments: http://localhost:12345/path in a local development setup, http://other_service:8080/path in Docker Compose, http://other-service/path in Kubernetes, https://api.example.com/other/path in production. You need some way to make that configurable (often an environment variable).
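One common way to make it configurable, sketched as a Compose environment variable (the variable name is made up; your app would read it at startup):

    services:
      frontend:
        image: myorg/frontend:latest                     # hypothetical image
        environment:
          - OTHER_SERVICE_URL=http://other_service:8080  # swap per environment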

AKS in a private VNET behind a corporate proxy

We have our AKS cluster running in a private VNet, behind a corporate proxy. The proxy is not a "transparent" proxy and needs to be configured manually on all nodes. Is this a supported scenario? Is it possible to configure the worker nodes and all system containers to work via the proxy?
AKS itself is managed by Azure, and it can run in a private VNet that you create yourself or that Azure creates for you. You can use the Azure Load Balancer to carry the traffic, or use an Ingress. However, you can only select one size and type for the nodes when you create the cluster; multiple node sizes do not appear to be supported currently. Maybe that will be supported on Azure in the future.
For more details about AKS, see Azure Kubernetes Service.
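The proxy question itself is broader than the answer above covers; for application containers at least, one generic (not AKS-specific) pattern is to inject the conventional proxy variables into the pod spec, sketched here with a placeholder proxy address:

    # Illustrative only: standard proxy variables for an application container.
    # Node-level components (kubelet, container runtime) need separate,
    # host-level proxy configuration.
    apiVersion: v1
    kind: Pod
    metadata:
      name: proxied-app
    spec:
      containers:
        - name: app
          image: myorg/app:latest                        # hypothetical image
          env:
            - name: HTTP_PROXY
              value: "http://proxy.corp.local:3128"      # placeholder address
            - name: HTTPS_PROXY
              value: "http://proxy.corp.local:3128"
            - name: NO_PROXY
              value: "10.0.0.0/8,.svc,.cluster.local"    # keep cluster traffic direct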

Does EC2 Elastic Load Balancer remove the need for apache/nginx?

I am striving for a very simple cloud-based architecture on Amazon AWS. I would like to have an app layer of several "elastic" EC2 instances where my application (and application servers) run, but I'm wondering what the load balancing will look like.
If I choose to use ELB, does it remove the need for Apache or Nginx?
No. All the load balancer does is just that: distribute load across instances. Whatever stack you run on each instance still needs nginx, Apache, or some other service to respond to the requests routed through the load balancer.
I'm assuming you're running a web stack that needs some kind of server, such as nginx or Apache, or Java needing Tomcat or the like.
However, if you want AWS to take care of nginx and/or Apache for you, look into running it as an Elastic Beanstalk application: https://aws.amazon.com/elasticbeanstalk/

Kubernetes API for provisioning pods-as-a-service?

Currently I have an app (myapp) that deploys as a Java web app running on top of a "raw" (Ubuntu) VM. In production there are essentially 5 - 10 VMs running at any given time, all load balanced behind an nginx load balancer. Each VM is managed by Chef, which injects the correct env vars and provides the app with runtime arguments that make sense for production. So again: load balancing via nginx and configuration via Chef.
I am now interested in containerizing my future workloads, and porting this app over to Docker/Kubernetes. I'm trying to see what features Kubernetes offers that could replace my app's dependency on nginx and Chef.
So my concerns:
Does kube-proxy (or any other Kubernetes tool) provide subdomains or otherwise load-balanced URLs that could balance across any number of pod replicas? In other words, if I "push" my newly containerized app/image to the Kubernetes API, is there a way for Kubernetes to make the image available as, say, 10 pod replicas all load balanced behind myapp.example.com? If not, what integration is available between Kubernetes and networking software (DNS/DHCP)?
Does Kubernetes (say, perhaps via etcd?) offer any sort of key-value based configuration? It would be nice to send a command to the Kubernetes API and give it labels like myapp:nonprod or myapp:prod and have Kubernetes "inject" the correct KV pairs into the running containers. For instance, perhaps in the "nonprod" environment the app connects to a MySQL database named mydb-nonprod.example.com, but in prod it connects to an RDS cluster. Or something.
Does Kubernetes offer service registry like features that could replace Consul/ZooKeeper?
Answers:
1) DNS subdomains in Kubernetes:
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns
Additionally, each Service loadbalancer gets a static IP address, so you can also program other DNS names if you want to target that IP address.
2) Key/Value pairs
At creation time you can inject arbitrary key/value environment variables and then use those in your scripts/config. e.g. you could connect to ${DB_HOST}
Though for your concrete example, we suggest using Namespaces (http://kubernetes.io/v1.0/docs/admin/namespaces/README.html): you can have a "prod" namespace and a "dev" namespace, and the DNS names of services resolve within those namespaces (e.g. mysql.prod.cluster.internal and mysql.dev.cluster.internal). A combined sketch follows after item 3 below.
3) Yes, this is what the DNS and Service object provide (http://kubernetes.io/v1.0/docs/user-guide/walkthrough/k8s201.html#services)
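A minimal sketch tying these together (every name below is made up): the Service covers points 1 and 3 by giving the replicas one stable IP and DNS name, and the env block covers point 2:

    # Service: stable IP/DNS in front of all pods labeled app: myapp (points 1 and 3)
    apiVersion: v1
    kind: Service
    metadata:
      name: myapp
      namespace: prod                  # DNS resolves per namespace
    spec:
      type: LoadBalancer               # external IP you can point myapp.example.com at
      selector:
        app: myapp
      ports:
        - port: 80
          targetPort: 8080
    ---
    # One pod of the app; in practice a replication controller / Deployment
    # would manage the 10 replicas. The env block is point 2.
    apiVersion: v1
    kind: Pod
    metadata:
      name: myapp-1
      namespace: prod
      labels:
        app: myapp                     # matched by the Service selector
    spec:
      containers:
        - name: myapp
          image: myorg/myapp:latest    # hypothetical image
          env:
            - name: DB_HOST            # the app reads ${DB_HOST}
              value: mydb-prod.example.com   # swap per environment/namespace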

Coherence web and load balancing

We had 2 managed servers sitting behind a Citrix NetScaler load balancer with sticky sessions enabled, so a request is always forwarded to the same managed server.
Now we have configured a Coherence*Web cluster with the 2 managed servers and the Citrix NetScaler load balancer sitting in front. How do we call the Coherence cluster from the load balancer without calling the managed servers directly? Is there an IP address for the Coherence cluster that we should call from the NetScaler, or how do we reach the cluster without addressing individual servers?
You keep the same config that you had before for load-balancing to the web or application servers.
Coherence*Web makes sure that the data in the sessions will be shared between those servers (even if you add and remove servers dynamically!), and will not be lost if one server dies.