Imagine the scenario below:
Several hosts form a cluster, and there is a load-balancing layer above the cluster. In a distributed system, host crashes are bound to happen. My question is: how does the cluster perceive the change (a host's IP going away) so that the load balancer routes requests to working hosts and not to crashed ones?
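In practice the load-balancing layer usually answers this itself with periodic health checks: it probes every backend every few seconds and takes members out of rotation as soon as a probe fails. A minimal sketch of such a probe (the backend addresses are hypothetical, and a plain TCP connect stands in for whatever check a real balancer runs):

```python
import socket

# Hypothetical backend pool behind the load balancer.
BACKENDS = [("10.0.0.1", 80), ("10.0.0.2", 80), ("10.0.0.3", 80)]

def is_healthy(host: str, port: int, timeout: float = 1.0) -> bool:
    """A member counts as 'up' if a TCP connect succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# The balancer re-runs this every few seconds and only routes
# requests to members that currently pass the probe.
live_members = [b for b in BACKENDS if is_healthy(*b)]
```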
Related
We have two VMs behind a Load Balancer. We would like to make one of the VMs publicly inaccessible when we do a new deployment, so we can test the new version of the application before it becomes publicly accessible. The current plan is to block one of the two VMs by changing a Network Security Group rule via the Service Tag for the Load Balancer.
It works: when we change the NSG rule for VM1 from Allow to Deny, only VM2 stays publicly accessible. Once we verify that the new release works as expected, we then switch the NSG rule for VM2 and switch the NSG rule for VM1 back, so only the VM with the newest version of the application is accessible while we update the application on the other VM.
The problem is that NSG rules don't take effect immediately; they can take 1-3 minutes to make a VM inaccessible/accessible.
Moreover, if we switch the NSGs for both VMs at the same time, we can end up in a situation where both VMs, running different versions of the software, are publicly available (which can lead to data corruption or data loss or both), or where neither VM is accessible. So the only way around this is to change the NSG rule for VM2, then for VM1, accepting a downtime of 2-6 minutes. Is there a better way to do this?
Blocking ports 80 and 443 with Windows Defender Firewall via PowerShell Remoting brought the downtime to 40 seconds total.
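A minimal sketch of that approach from Python using the pywinrm package (the host name, credentials, and rule name here are hypothetical; the same cmdlets can equally be run in a plain PowerShell remoting session):

```python
import winrm  # pip install pywinrm

# Hypothetical connection details for the VM being taken out of rotation.
session = winrm.Session("vm1.example.com", auth=("deploy_user", "secret"))

# Block inbound HTTP/HTTPS at the guest firewall; unlike an NSG change,
# this takes effect immediately.
block = (
    'New-NetFirewallRule -DisplayName "DeployBlockWeb" '
    "-Direction Inbound -Protocol TCP -LocalPort 80,443 -Action Block"
)
result = session.run_ps(block)
print(result.status_code, result.std_out.decode(), result.std_err.decode())

# After the new release is verified, remove the rule to restore traffic.
session.run_ps('Remove-NetFirewallRule -DisplayName "DeployBlockWeb"')
```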
Is it possible to have multiple Apache server instances running when using "host" networking, just as is possible with "bridged" networking and port mapping?
Or do the other instances, alongside the "host"-networking instance, have to be "bridged" in order to map a port other than 80, which may already be in use?
Anything that runs using host networking, well, uses the host networking. There is no isolation between your container, other host-networked containers, and processes running directly on the host. If you are running Apache on your host, and two --net host Apache containers, and they all try to bind to 0.0.0.0 port 80, they will conflict. You need to resolve this using application-specific configuration; there is no concept of port mapping in host networking mode.
Particularly for straightforward HTTP/TCP services, host networking is almost never necessary. If you use standard bridged networking then applications in containers won’t conflict with each other or host processes. You can remap the ports to whatever is convenient for you, without worrying about reconfiguring the application.
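To illustrate with the Docker SDK for Python (the image and host ports are arbitrary choices): on the default bridge network each Apache container can map container port 80 to its own host port, while a host-networked container gets no mapping at all and simply binds port 80 on the host:

```python
import docker  # pip install docker

client = docker.from_env()

# Two bridged Apache containers: container port 80 mapped to distinct host ports.
web1 = client.containers.run("httpd:2.4", detach=True, name="web1",
                             ports={"80/tcp": 8080})
web2 = client.containers.run("httpd:2.4", detach=True, name="web2",
                             ports={"80/tcp": 8081})

# A host-networked container: no port mapping is possible; it binds port 80
# directly on the host and will conflict with anything else on that port.
web_host = client.containers.run("httpd:2.4", detach=True, name="web-host",
                                 network_mode="host")
```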
I run a 2GB RAM Linode (Ubuntu) that hosts a few WordPress websites. Recently my server has been OOMing and crashing, and I have been up all night trying to find out what's causing it. I have discovered that I get an enormous influx of traffic (a tiny DoS) that brings the whole thing down.
I have access logs set up across all of the virtual hosts and I am using tcptrack to monitor activity on the server.
The traffic appearing in my access logs does not account for the traffic I am seeing in tcptrack; i.e. there are a dozen IP addresses that are constantly opening and closing connections to the server, but are nowhere to be seen in the access logs for any virtual host.
Clearly that's because these IPs are not hitting the virtual hosts, but I have tried to set up access logs to monitor server-wide traffic so that I can see what requests they're making, and I'm really struggling.
Can anyone please point me in the right direction? Perhaps tcptrack is just too simplistic to provide any meaningful insight.
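One way to get more detail than tcptrack gives you is to count live connections per remote address; a sketch in Python, assuming the standard ss utility is installed (the field parsing matches ss's default numeric TCP output):

```python
import subprocess
from collections import Counter

# List established TCP connections; `ss -ntH` prints one numeric
# line per socket with the peer address:port in the last column.
out = subprocess.run(["ss", "-ntH", "state", "established"],
                     capture_output=True, text=True, check=True).stdout

counts = Counter()
for line in out.splitlines():
    fields = line.split()
    if len(fields) >= 4:
        peer_ip = fields[-1].rsplit(":", 1)[0]  # strip the port from addr:port
        counts[peer_ip] += 1

# The connection-churning addresses float straight to the top,
# even when they never show up in any virtual host's access log.
for ip, n in counts.most_common(10):
    print(f"{n:5d}  {ip}")
```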
Start using mod_security
https://github.com/SpiderLabs/ModSecurity/wiki/Reference-Manual#Installation_for_Apache
Debian has it, which means Ubuntu likely does as well. You should also make sure the kernel is set up properly; search Google for SYN cookies. Look into iptables, Shorewall, etc. Shorewall is a package that wraps iptables. iptables can be configured to detect floods and start dropping packets.
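For example, SYN cookies are a single sysctl, and iptables' hashlimit match can drop new connections from any single source above a rate; a sketch (the threshold and rule name are arbitrary, and both commands require root):

```python
import subprocess

# Enable SYN cookies so half-open floods can't exhaust the listen backlog.
subprocess.run(["sysctl", "-w", "net.ipv4.tcp_syncookies=1"], check=True)

# Drop new HTTP connections from any one source IP above ~20 per minute.
subprocess.run([
    "iptables", "-A", "INPUT", "-p", "tcp", "--dport", "80",
    "-m", "conntrack", "--ctstate", "NEW",
    "-m", "hashlimit", "--hashlimit-above", "20/minute",
    "--hashlimit-mode", "srcip", "--hashlimit-name", "http-flood",
    "-j", "DROP",
], check=True)
```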
Is there a way to make a physical F5 BIG-IP load balancer route traffic to both EC2 instances (in an Auto Scaling group) and physical machines? I came across this article https://devcentral.f5.com/articles/using-big-ip-gtm-to-integrate-with-amazon-web-services but it seems to route traffic to an entire AWS zone, not to a couple of EC2 instances behind an ELB.
Yes, you can route traffic to any resources from BIG-IP, whether they are locally defined on the same L3 network or remote; you just need to make sure you have routes defined on BIG-IP pointing in the right direction. If you are trying to cloudburst, you can define priority levels in the pool so that your physical servers get the traffic unless the minimum threshold is crossed, at which point the remote servers (another datacenter or cloud servers, it doesn't matter) are automatically engaged.
You can also add orchestration so that your cloud servers aren't up and active unless you are getting close to a threshold, at which point the BIG-IP can trigger an action to spin up those servers and then add them to the pool dynamically, as sketched below. There are many options available to you with BIG-IP.
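As a sketch of the dynamic part (the management address, credentials, pool name, and member address are all hypothetical), a newly spun-up cloud server can be added to an existing pool through BIG-IP's iControl REST API, with a lower priority group than the physical members so it only takes traffic when the pool falls below its minimum-active-members threshold:

```python
import requests

BIGIP = "https://bigip.example.com"  # hypothetical management address
AUTH = ("admin", "secret")           # hypothetical credentials

# Add the cloud server as a member of the existing pool. Giving it a lower
# priorityGroup than the physical members means it only receives traffic
# once the pool drops below its configured min-active-members count.
resp = requests.post(
    f"{BIGIP}/mgmt/tm/ltm/pool/~Common~app_pool/members",
    auth=AUTH,
    verify=False,  # lab sketch only; verify TLS properly in production
    json={"name": "203.0.113.10:80", "priorityGroup": 1},
)
resp.raise_for_status()
print(resp.json())
```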
We had two managed servers sitting behind a Citrix NetScaler load balancer with sticky sessions enabled, so a request is always forwarded to the same managed server.
Now we have configured a Coherence*Web cluster with the two managed servers and a Citrix NetScaler as the load balancer sitting in front. How do we call the Coherence cluster from the load balancer without calling the managed servers? Is there an IP address for the Coherence cluster that we need to call from the NetScaler, or how do we call the cluster without calling the individual servers?
Thanks a lot.
You keep the same config that you had before for load-balancing to the web or application servers.
Coherence*Web makes sure that the data in the sessions will be shared between those servers (even if you add and remove servers dynamically!), and will not be lost if one server dies.