Multi-region anycast load balancing for TCP and UDP

Is there an option or workaround to get multi-region anycast load balancing for UDP and TCP? The cloud console suggests that these are only single-region, while HTTP load balancing is multi-region.

Related

Behaviour of L4 load balancers on server addition to the pool

As I understand it, L4 load balancers, e.g. Azure Load Balancer, are almost always stateless, i.e. they do not keep per-flow state about which server handles which TCP connection.
What is the behaviour of such load balancers when servers are added to the DIP pool? Do they lose some connections because the corresponding packets get sent to the new server?
As I understand it, L4 load balancers, e.g. Azure Load Balancer, are almost always stateless, i.e. they do not keep per-flow state about which server handles which TCP connection.
That is not true.
By default, Azure Load Balancer distributes network traffic equally among multiple VM instances in a 5-tuple hash distribution mode (the source IP, source port, destination IP, destination port, and protocol type). You can also configure session affinity. For more information, see Load Balancer distribution mode. For session affinity, the mode uses a 2-tuple (source IP and destination IP) or 3-tuple (source IP, destination IP, and protocol type) hash to map traffic to the available servers. By using source IP affinity, connections that are initiated from the same client computer go to the same DIP endpoint.
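To make the distribution modes concrete, here is a minimal Python sketch of how a 5-tuple hash versus a 2-tuple source-IP-affinity hash maps flows onto backends. The backend addresses and the hashing scheme are illustrative assumptions, not Azure's actual algorithm:

    import hashlib

    backends = ["10.0.0.4", "10.0.0.5", "10.0.0.6"]  # hypothetical DIP pool

    def pick_backend(fields):
        # Stable hash of the key fields, mapped onto the backend pool.
        digest = hashlib.md5("|".join(fields).encode()).digest()
        return backends[int.from_bytes(digest[:4], "big") % len(backends)]

    def five_tuple(src_ip, src_port, dst_ip, dst_port, proto):
        # Default mode: each distinct flow may land on a different backend.
        return pick_backend([src_ip, str(src_port), dst_ip, str(dst_port), proto])

    def source_ip_affinity(src_ip, dst_ip):
        # 2-tuple mode: all flows from one client map to the same DIP.
        return pick_backend([src_ip, dst_ip])

    # Two connections from the same client, different source ports:
    print(five_tuple("203.0.113.7", 50001, "198.51.100.1", 443, "tcp"))
    print(five_tuple("203.0.113.7", 50002, "198.51.100.1", 443, "tcp"))  # may differ
    print(source_ip_affinity("203.0.113.7", "198.51.100.1"))             # always the same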
What is the behaviour of such load balancers when servers are added to the DIP pool? Do they lose some connections because the corresponding packets get sent to the new server?
They do not lose connections.
Load balancing rules rely on health probes to detect the failure of an application on a backend instance (refer to probe down behavior). If a backend instance's health probe fails, established TCP connections to that instance continue, while new TCP connections go to the remaining healthy instances. Load Balancer does not terminate or originate flows: it is a pass-through service (it does not terminate TCP connections), and the flow is always between the client and the VM's guest OS and application.
Found the Ananta: Cloud Scale Load Balancing SIGCOMM paper, which says that per-flow state is stored on a single MUX machine (not replicated) that receives the associated traffic from the router. Hence a server addition doesn't affect existing TCP connections as long as the MUX machines stay as they are.
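A toy model of that MUX behaviour (the names and hashing are mine, purely to illustrate why a pool change leaves established flows alone): only flows that are not yet in the table consult the current pool; pinned flows keep their server.

    flow_table = {}                  # per-flow state held on one MUX, not replicated
    pool = ["dip-1", "dip-2"]

    def route(flow):                 # flow = (src_ip, src_port, dst_ip, dst_port, proto)
        if flow not in flow_table:
            # Only new flows are hashed over the (possibly updated) pool.
            flow_table[flow] = pool[hash(flow) % len(pool)]
        return flow_table[flow]

    f = ("203.0.113.7", 50001, "198.51.100.1", 80, "tcp")
    before = route(f)
    pool.append("dip-3")             # a server is added to the DIP pool
    assert route(f) == before        # the existing connection is unaffected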

ping command with UDP client-server

I am confused about the usage of the ping command in Mininet. When I implement a UDP server and client and run them in Mininet, do I have to use the ping command to measure packet loss, delay, etc.? Or is the ping command not used for measuring statistics of a UDP server and client?
Are you asking how to implement your own ping?
Ping is simply a tool that uses ICMP echo request/reply datagrams to measure point-to-point network latencies and other such things. It measures the ICMP path, not your UDP application's traffic, so it will not report loss or delay for your UDP server and client.
https://www.ietf.org/rfc/rfc792.txt
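If you want loss and delay numbers for the UDP traffic itself, measure them in your own client by timing request/response pairs. A minimal sketch; the host/port and the assumption that a UDP echo server is listening there are mine:

    import socket
    import time

    HOST, PORT, COUNT = "10.0.0.2", 9999, 10    # assumed UDP echo server
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)                        # count no reply within 1 s as lost

    lost, rtts = 0, []
    for seq in range(COUNT):
        start = time.monotonic()
        sock.sendto(str(seq).encode(), (HOST, PORT))
        try:
            sock.recvfrom(1024)
            rtts.append((time.monotonic() - start) * 1000.0)
        except socket.timeout:
            lost += 1

    print(f"loss: {lost}/{COUNT}")
    if rtts:
        print(f"avg RTT: {sum(rtts) / len(rtts):.2f} ms")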

Mule ESB as a message router for TCP based messaging protocols ( multiple persistent connections )

Would Mule ESB be suitable for building an application with server and client endpoints that offer persistent TCP connections on both sides, with a direct mapping between each connection on the server side and its counterpart on the client side? It should also be able to support multiple concurrent TCP connections on both sides. Is this about building a Transport and/or Connectors?
Yes, Mule can certainly do that. We built an in-house app that acts as a server accepting incoming persistent TCP connections.
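Mule specifics aside, the behaviour being asked for (each persistent inbound connection mapped 1:1 to a persistent outbound connection) is essentially a TCP relay. A minimal Python sketch of that mapping, just to illustrate the architecture a custom Transport/Connector would implement; the upstream host and port are placeholders:

    import asyncio

    UPSTREAM = ("backend.example.com", 7000)   # placeholder client-side endpoint

    async def handle(reader, writer):
        # One dedicated outbound connection per inbound connection (1:1 mapping).
        up_reader, up_writer = await asyncio.open_connection(*UPSTREAM)

        async def pump(src, dst):
            # Relay bytes until the source side closes.
            while data := await src.read(4096):
                dst.write(data)
                await dst.drain()
            dst.close()

        await asyncio.gather(pump(reader, up_writer), pump(up_reader, writer))

    async def main():
        server = await asyncio.start_server(handle, "0.0.0.0", 6000)
        async with server:
            await server.serve_forever()

    asyncio.run(main())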

How does Redis achieve inter-process communication?

We can simply use Redis for remote communication, e.g.:
redis.StrictRedis(host=REDIS_IP, port=PORT)
Does Redis handle remote and local communication in the same way? I mainly want to know how Redis implements network communication and local inter-process communication, and whether the two differ.
If there is something wrong, please point it out. Thanks.
Redis can handle classical TCP sockets, but also stream-oriented unix domain sockets.
TCP sockets can be used to perform both network and local inter-process communication. Unix domain sockets can only support local inter-process communication.
Both kind of sockets are materialized by file descriptors. Redis is based on an event-loop working at the file descriptor level, so it processes TCP and unix domain sockets in the same exact way (using the standard network API). You will find most of the related source code in ae.c (event loop) and anet.c (networking).
You can use unix domain sockets to improve the performance of Redis round trips when the client and server are hosted on the same box. It depends on your workload, but the performance benefit (in terms of throughput) of unix domain sockets over TCP sockets is typically around 1.5x (i.e. you can process 50% more operations when using unix domain sockets).
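With redis-py the only client-side difference is the connection parameter. A small sketch, assuming the server was started with a "unixsocket /tmp/redis.sock" directive in redis.conf (the path is an assumption):

    import redis

    # Over TCP: works both locally and remotely.
    r_tcp = redis.StrictRedis(host="127.0.0.1", port=6379)

    # Over a unix domain socket: local only; the path must match redis.conf.
    r_uds = redis.StrictRedis(unix_socket_path="/tmp/redis.sock")

    r_tcp.set("k", "v")
    print(r_uds.get("k"))   # same server, same data: b'v'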

Why use Unicast versus Multicast in Weblogic Clusters

It's unclear from the documentation why you should use Unicast rather than Multicast in a WebLogic cluster. Anyone have experience using either and the benefits of moving to Unicast?
The main difference between Unicast and Multicast is as follows:
Unicast:
Say you have three servers (MS-1, MS-2, MS-3) in a cluster. If they have to communicate with each other, they ping the cluster master (i.e. send heartbeats) to inform it that they are alive.
If MS-1 is the master, then MS-2 and MS-3 send their pings to MS-1.
Multicast:
In Multicast, there is no cluster master. Instead, each server pings every other server to inform everyone that it is alive.
So MS-1 sends its ping to MS-2 & MS-3, while at the same time MS-2 pings MS-1 & MS-3 and MS-3 pings MS-1 & MS-2.
Thus, in multicast more pings are sent, which makes the congestion from the pings much heavier compared to unicast. Because of this, WLS recommends using Unicast for less network congestion: Oracle Docs: Communications In a Cluster.
The principle behind Multicast is that any message is received by all subscribers to the Multicast address. So MS-1 only needs to send 1 network packet to alert all other cluster members of its status. This means a status or JNDI update requires only 1 packet per cluster for Multicast vs. 1 packet per server (approximately) for Unicast. Multicast also requires no "master" election. Multicast is thus far simpler to code and creates less network traffic.
So, Multicast is great? Not necessarily. It uses UDP datagrams, which are inherently unreliable and unacknowledged, so given the unreliable bearer protocol (Ethernet), your message might never turn up (read: you fall out of the cluster). The whole concept of Multicast is based on subscription; it is not a 'routable' protocol in the normal sense, so by default routers must discard Multicast packets or risk a network storm. Hence the historical requirement for all cluster members to reside on the same network segment.
These shortcomings of Multicast mean Unicast is the way to go if your cluster spans networks or you're losing too many multicast packets.
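The packet-count difference is easy to see at the socket level. A minimal Python sketch, illustrative only (not WebLogic's actual cluster protocol; the peers and group address are made up, and receivers would additionally have to join the group with IP_ADD_MEMBERSHIP):

    import socket

    MEMBERS = [("10.0.0.2", 7001), ("10.0.0.3", 7001)]   # hypothetical cluster peers
    GROUP = ("239.0.0.1", 7001)                          # assumed multicast group

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)

    # Unicast: one datagram per peer, i.e. N-1 sends per heartbeat.
    for member in MEMBERS:
        sock.sendto(b"MS-1 alive", member)

    # Multicast: a single datagram; every subscribed peer receives it.
    sock.sendto(b"MS-1 alive", GROUP)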
The main benefit of Unicast over Multicast is ease of configuration. Unicast uses TCP communications, which usually requires no additional network configuration. Multicast uses UDP communication and Multicast addresses, which may require some network configuration and additional effort in selecting the address to be used.
There is a great article from the Oracle A-Team with in-depth explanation: WebLogic Server Cluster Messaging Protocols.
In the documentation for WLS 12.1.2, Oracle added Considerations for Choosing Unicast or Multicast, in which they suggest using Multicast in clusters with more than 10 Managed Servers.
In my personal experience, I found that Unicast may cause some problems in large clusters, mainly because it is a newer protocol, introduced in WLS 10.0, and it still suffers from some minor issues.
The answer here appears to conflict with the recommendation from the Oracle A-Team. Their recommendation is:
In general, the A-Team rule of thumb is to always recommend customers use multicast unless there is good reason why it isn’t possible or practical (e.g., spanning multiple subnets where routers are not allowed to propagate UDP multicast messages). The primary reasons for this are simply efficiency and resiliency to resource shortages.
The full article can be found here.
UPDATE
WebLogic defaults to unicast, and the documentation for 12c implies that multicast is only supported to ensure backward compatibility:
It is important to note that although parts of the WebLogic Server documentation suggest that multicast is only supported for backward compatibility—this is not correct. The multicast cluster messaging protocol is fully supported by Oracle. The A-team is working with WebLogic Server product management to address these documentation issues in the Weblogic Server 12c documentation.
Both multicast and unicast configurations have cluster masters. In addition to a cluster master, unicast has one or more group leaders; a group leader may or may not be the cluster master.
Multicast is a broadcast; servers do not ping each other with point-to-point messages the way unicast does. In both the unicast and multicast cases the traffic is usually trivial. But multicast is almost always the best choice if your network supports it.
Unicast presents a simpler configuration than multicast in that you don't need multicast support. All routers/switches support TCP, but not all routers/switches support multicast or have it enabled. But unicast generates more network traffic than multicast.