Designing scalable services with highly available load-balancing

When running self-healing, scalable, stateless services in a framework such as Marathon, the accepted pattern is to have a service-discovery tool (e.g., Bamboo) that feeds a load balancer (e.g., HAProxy), preferably with some automatic configuration, so that users are proxied to the services when they hit the load balancer.
I can't find much material about how to make the load balancer itself highly available, though.
If the host that runs the load balancer dies, I would like the services to remain reachable at the same URIs without downtime.
What I want can be achieved with Pacemaker/Corosync, but the fact that this specific point is often omitted from tutorials and blog posts makes me think that maybe there is a simpler pattern, or that I am overlooking the problem.
Do you have any suggestions?

One common approach is to run multiple machines with Bamboo/HAProxy on them pulling container locations from the Marathon/Mesos masters. In an AWS/Cloud-du-jour environment, these proxy machines can go behind an ELB (or even in an Autoscaling Group if your automation is good enough) to give you true HA functionality.
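Outside a cloud environment, a common lighter-weight alternative to Pacemaker/Corosync is keepalived, which moves a floating virtual IP between two HAProxy nodes via VRRP. A minimal sketch, assuming two HAProxy hosts; the interface name, router ID, and addresses below are placeholders:

```
# /etc/keepalived/keepalived.conf on the primary HAProxy node
vrrp_script chk_haproxy {
    script "pidof haproxy"     # node is healthy only while HAProxy is running
    interval 2
}

vrrp_instance VI_1 {
    state MASTER               # use BACKUP with a lower priority on the second node
    interface eth0             # placeholder network interface
    virtual_router_id 51
    priority 101
    virtual_ipaddress {
        192.0.2.100            # the floating IP that clients/DNS point at
    }
    track_script {
        chk_haproxy
    }
}
```

The backup node runs the same configuration with `state BACKUP` and a lower `priority`; if the master host or its HAProxy process dies, the VIP moves to the backup and clients keep using the same address.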

Related

Service Fabric - Local Cluster - Queuing

I am in a situation where I can use Service Fabric (locally) but cannot leverage Azure Service Bus (or anything "cloud"). What would be the equivalent for queuing/pub-sub? Service Fabric is allowed since it is able to run locally and is "free". Other third-party messaging infrastructure, like RabbitMQ, is also off the table (at the moment).
I've built systems using a home-grown bus built on MSMQ and WCF, but I don't see how to accomplish the same thing in SF. I suspect I can have SF services use a custom ICommunicationListener that exposes MSMQ, but that would only be available inside the cluster (the way I understand it). I could build an HTTP bridge (in SF) in front of those to make them available outside the cluster, but then I'd lose the lifetime decoupling (a client being able to call a service, via queues, even if that service isn't online at the time), since the bridge itself wouldn't benefit from any of the aspects of queuing.
I have a few possibilities in mind, but all of them suffer from some malady that only exists because of SF running locally. Also, the same code needs to deploy easily to full Azure SF (where I can use ASB and this issue disappears), so I don't want to build two separate systems just because of where it happens to be hosted.
Thanks for any tips.
You can build this yourself, for example with a BrokerService that distributes message data to subscribed services and actors.
You can also run a containerized queuing platform like RabbitMQ with volumes for persistence.
By running the queue system inside the cluster, you avoid introducing an external dependency.
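As a sketch of the containerized option (the service name, ports, and volume name below are placeholders, and the same image could be hosted by Service Fabric's container support instead of plain Docker Compose):

```yaml
# docker-compose.yml: a minimal RabbitMQ broker with a persistent volume
version: "3"
services:
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "5672:5672"     # AMQP
      - "15672:15672"   # management UI
    volumes:
      - rabbitmq-data:/var/lib/rabbitmq   # queue data survives container restarts
volumes:
  rabbitmq-data: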
The problem is not SF. The main issue with your design is that you are coupling architectural requirements to implementations. SF runs on top of virtual machines; in the end, the only difference is that SF puts the services on those machines, whereas with another solution you would have an agent deploying the services there, or you would deploy them manually. The challenges are the same.
It is clear from the description that the requirement in your design is a message queue, and the concept of a queue is the same whether it is Service Bus, RabbitMQ or MSMQ. Each of them provides the basic foundations of queuing plus implementation-specific features: some add transactions, some implement multiple patterns, and so on.
If you design based on a specific implementation, you couple your solution to that implementation, make your solution hard to maintain, and face challenges like the ones you described.
Solutions like NServiceBus and MassTransit remove much of this coupling from your code, and if you think they are not enough, you can create your own abstraction. You then use configuration to tie your business logic to an implementation.
Despite the above advice, I would not recommend using different solutions per environment because, as said previously, each solution has its own implementation details and they may not behave the same way. For example, you might face issues in production because you developed against MSMQ in the DEV and TEST environments and then deployed against Service Bus in production; they have different limitations, such as message size, retention period, and so on.
If you are willing to use MSMQ, you can add MSMQ to the VMs running your cluster and connect to it from your services without any issue. Take a look at this SO question first: How can I use MSMQ in Azure Service Fabric

Do companies that provide APIs use a shim or proxy in front of their APIs?

I'm researching how large companies manage their public APIs. I'm thinking of companies with mature established APIs such as Google, Facebook, Twitter, and Amazon.
These companies expose a number of different APIs to the public. Google, for example, has publicly consumable APIs for Plus, AdSense, AdWords, and so on. I'd like to understand whether they use a cluster of reverse-proxy servers in front of those APIs to provide common functionality, so that their specialist API servers don't need to implement it.
For example: Throttling and Authentication could be handled at this layer instead of implementing it in each API cluster.
The questions: Does anyone use a shim or reverse proxy in front of their APIs to handle common tasks? What are the use cases that make a reverse-proxy a good or bad idea for a cluster of API servers?
Most large companies use a variety of techniques to handle the traffic and load on their servers. Roughly speaking:
A load balancer sits between the client and the actual servers.
A reverse proxy often sits in between as well, to handle static files, pre-computed/rendered views, and other largely static assets.
Anycast is used for DNS, so that you are routed to the nearest server that handles that URL.
Back pressure is employed to limit the number of requests feeding through a single pipeline, so that services don't tip over.
Memcached, Redis and the like are used as short-term caches: if a result is going to be roughly the same for the next five seconds, it can be cached in memory for faster delivery. Some proxies can be configured to read from these.
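As an illustration of that last point, here is a hedged sketch of a short-term cache configured at the proxy layer itself, using nginx's built-in response cache (the directives go in the http context; the cache path, zone name, and upstream addresses are placeholders):

```nginx
# Cache identical upstream responses for ~5 seconds at the proxy
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=api_cache:10m max_size=1g;

upstream api_backend {
    server 10.0.0.11:8080;   # placeholder API servers
    server 10.0.0.12:8080;
}

server {
    listen 80;

    location /api/ {
        proxy_cache       api_cache;
        proxy_cache_valid 200 5s;          # serve cached 200 responses for 5 seconds
        proxy_pass        http://api_backend;
    }
}
```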
If you're really interested, start reading the Netflix tech blog. Take a look at some of the open-source tools they've released, like Hystrix or Zuul. You can also take a look at some of their videos. They make heavy use of proxies and have built in some very advanced distributed behavior.
As far as a reverse proxy being a good idea, think in terms of failure. If your service calls another API by a direct route and that service fails, then your service fails too and the failure cascades up to the end user. If, on the other hand, the call goes through a reverse proxy, that proxy can be configured to detect failures (or even auto-detect them) and divert traffic to backup servers.
As far as a reverse proxy being a good idea, also think in terms of load. Sometimes servers can individually handle only a fraction of the traffic, so the load must be shared across many servers. This is true not just of CPU-capped but also of IO-capped resources (even if the return traffic itself is not the cause of the IO capping).
Daisy-chaining like this presents its own special little hell, but it's sometimes unavoidable. The main downside, and what makes it a really bad choice if you can avoid it at all, is the loss of deterministic behavior. Sometimes the stupidest things will bring your servers down. And by stupid, I mean really, really dumb stuff that you never thought in a million years might bite you (think server clocks out of sync). You have to start using rolling deploys of code, take down servers manually or forcefully if they stop responding, and keep those proxy configs in good order.
HTTP/1.1 support can also be an issue. Not all reverse proxies adhere to the spec; in fact, some of them cover only around 50% of it. HAProxy (at least at the time this was written) did not do SSL termination. If you have only limited hardware, then a thread-based proxy can unexpectedly swamp the system with threads.
Finally, adding in a proxy is one more thing that will break (not can, will). You have to monitor proxies just like any other piece of the platform, aggregate their logs, and run mock drills on them too.

Web App: High Availability / How to prevent a single point of failure?

Can someone explain to me how high availability ("HA") works for a web application? I assume HA means that there is no single point of failure.
However, even if a load balancer is used, isn't the load balancer itself a single point of failure?
I have found this article on the subject:
http://www.tenereillo.com/GSLBPageOfShame.htm
Basically, if you do not require long-lasting sticky sessions, you can configure your DNS servers to return multiple A records (IP addresses) for your website.
Web browsers are smart enough to try all the addresses until they find one that works.
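A hedged sketch of what that looks like in a BIND-style zone file (the hostname and the addresses, taken from the documentation IP range, are placeholders):

```
; Return multiple A records for the same name; clients pick one and
; fall back to the others if it does not respond.
www    300    IN    A    192.0.2.10
www    300    IN    A    192.0.2.11
www    300    IN    A    192.0.2.12
```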
In simple words, high availability can be defined as running a system 24x7 without downtime even when there are hardware and software failures; in other words, a fault-tolerant application. This helps ensure uninterrupted use of the application for its intended users.
Read more on High Availability Deployment Architecture
It works as follows: you set up two HAProxy servers with heartbeat, so that when one fails (stops responding to queries), it is removed from the cluster.
Requests from HAProxy can be forwarded to the web servers in round-robin fashion, and if a web server fails, the HAProxy servers stop contacting it until it is alive again.
The web servers store all dynamic information in a database, which is replicated across two MySQL instances.
As you can see, HAProxy and clustered MySQL (or simply MySQL replication), together with IP clustering, are the key here.
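A hedged sketch of the HAProxy piece of that setup (the server names, addresses, and health-check path are placeholders, and the usual defaults/timeout sections are omitted for brevity):

```
# haproxy.cfg: round-robin across web servers, skipping ones that fail health checks
frontend http_in
    bind *:80
    default_backend web_servers

backend web_servers
    balance roundrobin
    option httpchk GET /health      # placeholder health-check endpoint
    server web1 10.0.0.11:80 check
    server web2 10.0.0.12:80 check
```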
Sure it is, when operated alone. The usual highly available setup includes two or more load balancers running as a cluster in either active/active or active/passive configuration. To further increase availability, you can have two different Internet Service Providers (or geo-distributed datacenters), each running a pair of clustered load balancers. You then configure the DNS A record to resolve to two distinct public IP addresses, which gives you round-robin resolution that splits DNS requests evenly (CloudFlare is very fast and reliable at this). It is also possible to return the IP address of the datacenter closest to the client's geographic location by using something like PowerDNS dnsdist.
This is what big players do to make their services highly available.
Please read https://docs.oracle.com/cd/E23824_01/html/821-1453/gkkky.html for more clarity. Note that both load balancers use the same VIP (virtual IP address: https://techterms.com/definition/vip).
HA architecture is an entire field, and multiple books have been written on it, so it is hard to answer in a short paragraph.
To sum up the ideal situation: you would use multiple servers, interconnected through a layer of multiple load balancers. The nodes and load balancers would be located in a few different data centers and connected to different network backbones, and ideally the data centers would be spread all over the world.
In short, every component would have redundancy, including the load balancers.
For a starting point, see Wikipedia for High Availability Cluster

WebLogic load balancing

I'm currently developing a project supported on a WebLogic clustered environment. I've successfully set up the cluster, but now I want a load-balancing solution (currently, only for testing purposes, I'm using WebLogic's HttpClusterServlet with round-robin load-balancing).
Is there any documentation that gives a clear comparison (with pros and cons) of the various ways of providing load-balancing for WebLogic?
These are the main topics I want to cover:
Performance (normal and on failover);
What failures can be detected and how fast is the failover recovery;
Transparency to failure (e.g., ability to automatically retry an idempotent request);
How well is each load-balancing solution adapted to various topologies (N-tier, clustering)
Thanks in advance for your help.
Is there any documentation that gives a clear comparison (with pros and cons) of the various ways of providing load-balancing for WebLogic?
It's not clear what kind of application you are building and what kind of technologies are involved. But...
You will find useful information in Failover and Replication in a Cluster and Load Balancing in a Cluster (also look at Cluster Implementation Procedures), but no real comparison between the different options, at least not to my knowledge. Still, the choice isn't that complex: 1. hardware load balancers will perform better than software load balancers, and 2. if you go for a software load balancer, the WebLogic plug-in for Apache is the choice recommended by BEA for production. Actually, for web apps it's pretty usual to put the static files on a web server and thus to use the Apache mod_wl plug-in. See the Installing and Configuring the Apache HTTP Server Plug-In chapter.
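A hedged sketch of what that Apache plug-in configuration typically looks like (the module filename, context path, and host:port pairs are placeholders; check the plug-in documentation for the module matching your Apache version):

```
# httpd.conf: proxy requests for /myapp to the WebLogic cluster
LoadModule weblogic_module modules/mod_wl_24.so

<Location /myapp>
    SetHandler weblogic-handler
    WebLogicCluster node1.example.com:7001,node2.example.com:7001
</Location>
```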
These are the main topics I want to cover:
Performance (normal and on failover): If this question is about session persistence, WebLogic uses in-memory replication by default, and this works pretty well with relatively low overhead.
What failures can be detected and how fast is the failover recovery: It is unclear which protocols you're using. But see Connection Errors and Clustering Failover.
Transparency to failure (e.g., ability to automatically retry an idempotent request): Clarifying the protocols you are using would make answering easier. If this question is about HTTP requests, then see Figure 3-1 Connection Failover.
How well is each load-balancing solution adapted to various topologies (N-tier, clustering): The question is unclear and too vague (for me). But maybe have a look at Cluster Architectures.
Oh, by the way, another nice chapter that you must read is Clustering Best Practices.

What would be the best approach to designing a highly available pool of web services?

I've heard a lot of people touting success using Linux based proxies to handle routing for high availability of web applications, but what are others doing with web services? I have a bank of WCF services that need to be moved to a high availability (failover) model, meaning that if a particular server hosting the WCF services goes down, the request is routed to another of the servers in the bank. I would rather stay away from implementing a Linux based solution, since there are no Linux knowledgeable people in the environment.
If you don't need durability, you can load balance WCF service requests just like normal web requests without doing anything special. If you need durability and want requests to survive being cut off mid-process, use the netMsmqBinding.
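A hedged sketch of what a queued endpoint might look like in the service's configuration file (the service type, contract, binding, and queue names are placeholders):

```xml
<!-- app.config / web.config: expose a WCF service over a durable MSMQ queue -->
<system.serviceModel>
  <bindings>
    <netMsmqBinding>
      <binding name="DurableQueueBinding" durable="true" exactlyOnce="true" />
    </netMsmqBinding>
  </bindings>
  <services>
    <service name="MyCompany.OrderService">
      <endpoint address="net.msmq://localhost/private/orders"
                binding="netMsmqBinding"
                bindingConfiguration="DurableQueueBinding"
                contract="MyCompany.IOrderService" />
    </service>
  </services>
</system.serviceModel>
```

With a durable queued binding like this, requests sent while a host in the bank is down simply wait in the queue until a service instance is available to process them.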
I would rather stay away from implementing a Linux based solution, since there are no Linux knowledgeable people in the environment.
This is probably a strong enough reason to not use a Linux-based solution. Doing what you describe well requires reasonable expertise beyond a simple recipe approach, and substantial maintenance.