Continuous Delivery with Azure Load Balancer

We have two VMs behind a Load Balancer. We would like to make one of the VMs publicly inaccessible when we do a new deployment, so we can test the new version of the application before it becomes publicly accessible. The current plan is to block one of the two VMs by changing a Network Security Group rule that uses a service tag for the Load Balancer.
It works: when we change the NSG rule for VM1 from Allow to Deny, only VM2 stays publicly accessible. Once we verify that the new release works as expected, we switch the NSG rule for VM2 and the NSG rule for VM1, so that only the VM with the newest version of the application is accessible while we update the application on the other VM.
The problem is that NSG rule changes don't take effect immediately; it can take 1-3 minutes for a VM to become inaccessible or accessible.
Moreover, if we switch the NSG rules for both VMs at the same time, we can end up in a situation where both VMs, running different versions of the software, are publicly available at once (which can lead to data corruption or data loss) or where neither VM is accessible. So the only way around this is to change the NSG rule for VM2, then the one for VM1, and accept 2-6 minutes of downtime. Is there a better way to do this?
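For reference, the NSG flip described above can be scripted rather than done in the portal. This is a minimal sketch using the Az PowerShell module; the resource group, NSG, and rule names are placeholders, and the rule shown uses the Internet service tag as its source, so substitute whichever service tag, ports, and priority your rule actually uses. All of the rule's properties are restated here to be safe, since the cmdlet builds a fresh rule configuration:

    # Flip the public web rule on VM1's NSG from Allow to Deny (or back again).
    $nsg = Get-AzNetworkSecurityGroup -ResourceGroupName 'rg-web' -Name 'nsg-vm1'

    $rule = @{
        NetworkSecurityGroup     = $nsg
        Name                     = 'allow-web-from-internet'   # placeholder rule name
        Access                   = 'Deny'                      # set to 'Allow' to re-expose the VM
        Direction                = 'Inbound'
        Priority                 = 100
        Protocol                 = 'Tcp'
        SourceAddressPrefix      = 'Internet'
        SourcePortRange          = '*'
        DestinationAddressPrefix = '*'
        DestinationPortRange     = '80', '443'
    }
    Set-AzNetworkSecurityRuleConfig @rule | Out-Null

    # Nothing changes until the modified NSG is written back to Azure, and even then
    # (as noted above) the rule can take a few minutes to take effect on the data path.
    $nsg | Set-AzNetworkSecurityGroup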

Blocking ports 80 and 443 with Windows Defender Firewall via PowerShell Remoting brought the downtime to 40 seconds total.
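A rough sketch of that approach, assuming PowerShell Remoting (WinRM) is already enabled on the VMs and that vm1 is a placeholder hostname:

    # Block the web ports in the guest firewall on VM1; this takes effect almost immediately.
    Invoke-Command -ComputerName vm1 -ScriptBlock {
        New-NetFirewallRule -DisplayName 'Block-Web-During-Deploy' `
            -Direction Inbound -Protocol TCP -LocalPort 80,443 -Action Block
    }

    # Once the new release has been verified, remove the rule to put VM1 back in rotation.
    Invoke-Command -ComputerName vm1 -ScriptBlock {
        Remove-NetFirewallRule -DisplayName 'Block-Web-During-Deploy'
    }

If the Load Balancer's health probe targets port 80 or 443, blocking those ports also fails the probe, so the Load Balancer stops sending new traffic to that VM shortly afterwards, which is usually exactly what you want during the deployment.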

Related

IIS 10 ARR LoadBalancer Working More Like Redundant Web Servers

We have configured a new web farm using IIS 10, with 3 hosts serving the web traffic and a load-balancing IIS ARR 3.0 server sitting in front to balance incoming requests between the nodes. During initial testing (basic HTML pages) the round-robin setup (33.33% distribution between each node) was working well, but we had to enable server/client affinity so that our applications kept a consistent connection between the client session and the application. Since then, we are finding that all traffic going to these applications, originating from different machines on different networks, is being forwarded to the same application server. If you take that server offline, the application seamlessly starts running on the next server in the list (the client obviously must sign in again). While one server is fine for the two applications we are running at the moment, when we ramp up our migration and have all 140 of our applications running, I don't think one server will be too happy with the load.
ADDITIONAL INFORMATION
Load balancers / ARR servers: LB-01 (LB-02 is a duplicated server for redundancy). Default ARR URL Rewrite rule with a Route to Server Farm action. [Image of the LB/ARR URL Rewrite rule.] Server affinity enabled, client affinity enabled with "use hostname" selected, no advanced settings, no routing rules. Default ARR proxy settings. [Image of the proxy settings.]
Web/application servers: WEB-01, WEB-02, WEB-03. File system shared using DFS; all running on shared configurations.
The Applications would be as follows
https://www.domainname.com/application-name1
https://www.domainname.com/application-name2
...
Where the application launch page changes but the domain name stays the same.
[Image of the IIS Monitoring and Management window showing the distribution.]
If there is a setting you wish to verify, please ask for it. I know people aren't psychic, but huge paragraphs of information never really help.
My hunch is that it is something to do with the URL rewrite; I have tried the settings in the post below to no avail.
IIS ARR & load balancing
Uncheck 'Host Name Affinity' to dispatch to all your hosts
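If you want to verify the affinity settings outside the IIS Manager UI, a hedged starting point is to dump the ARR server-farm configuration with appcmd on the ARR server; the webFarms section is standard, but check the attribute names in your own output before changing anything with appcmd set config:

    # Run on LB-01 / LB-02: list the server-farm configuration, including the
    # applicationRequestRouting affinity settings that client and host name
    # affinity are stored under.
    & "$env:windir\System32\inetsrv\appcmd.exe" list config -section:webFarms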

apache2 server on Azure VM gets error: "This site can’t be reached *.*.*.*"

I get this error with an installation script that worked perfectly on an EC2 VM, but now it seems I can't reach the site. Should I add an inbound rule or something to enable the apache2 server? The error in Chrome is:
This site can’t be reached *.*.*.*.com’s server IP address could not be found.
Try running Windows Network Diagnostics.
DNS_PROBE_FINISHED_NXDOMAIN
Network Security Group
Azure VMs do not have any firewall ports open by default unless you open them when you provision your VM. When you created your Azure VM in the Azure Portal, you likely created a Network Security Group for the VM. If you didn't specify any ports to open during the VM's creation, you'll need to open up the VM's firewall.
To Open Ports
To open up the ports on the firewall, head to the Azure Portal (where you set up the VM) and find the VM in the list of resources. Its page shows the name, status, location, size, IP address, and so on. In the vertical menu on the left, select Networking. From there, you'll be able to see the currently active firewall rules for the VM. Since you're likely missing HTTP (80) and HTTPS (443), select Add inbound port rule. From the Service dropdown, select HTTP and assign a name and priority. Repeat the same steps, this time selecting HTTPS (443). Press Save and test. You should be able to access Apache running on the VM.
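If you prefer to script this instead of clicking through the portal, here is a minimal sketch using the Az PowerShell module; the resource group and NSG names are placeholders and the priorities are arbitrary:

    # Look up the NSG the portal created for the VM, add HTTP/HTTPS rules, and save.
    $nsg = Get-AzNetworkSecurityGroup -ResourceGroupName 'my-rg' -Name 'my-vm-nsg'

    Add-AzNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name 'Allow-HTTP' `
        -Direction Inbound -Access Allow -Protocol Tcp -Priority 300 `
        -SourceAddressPrefix '*' -SourcePortRange '*' `
        -DestinationAddressPrefix '*' -DestinationPortRange 80 | Out-Null

    Add-AzNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name 'Allow-HTTPS' `
        -Direction Inbound -Access Allow -Protocol Tcp -Priority 310 `
        -SourceAddressPrefix '*' -SourcePortRange '*' `
        -DestinationAddressPrefix '*' -DestinationPortRange 443 | Out-Null

    # The new rules are not applied until the NSG is written back.
    $nsg | Set-AzNetworkSecurityGroup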
Additional Troubleshooting
The script you used may have inadvertently set up iptables rules on the VM. You can view the Linux firewall with sudo iptables -L to verify that no firewall rules have been enabled. Since Azure handles the firewall, you shouldn't need any iptables rules, but they can always be added for additional security.
This answer assumes that you do not have an Azure load balancer in front of the VM.

Azure Container Services Port Load Balancer

While trying to port my application, which runs on Docker Swarm locally, to Azure Container Service, I am stuck on the load balancer part of Azure.
Locally I have an HAProxy container instance running on the Swarm master and multiple web containers.
The web containers just expose their ports; the ports are not mapped to the machines they are running on.
The HAProxy container has its port mapped on the master and internally talks to my web containers for load balancing.
This gives me the leverage to run any number of containers with a limited number of workers in Docker Swarm.
In Azure Container Service I see that the Azure load balancer will only talk to ports that are mapped, which means I can either run only 1 container per agent or keep an internal load balancer in my containers, which implies that users will go through 2 load balancers before hitting my application.
Not an ideal scenario when my application uses sticky sessions.
So apparently Microsoft's statement that "everything works the same in Azure containers" goes for a toss?
What are the solutions available, or am I doing something wrong here?
Regards,
Harneet
The solution in ACS is almost identical: use HAProxy and have the Azure LB talk to it. The only difference is that you will not be running the proxy on the master; you will have Swarm deploy it to an agent for you.
You shouldn't really be running workloads on your masters. What would you do if you had a DDoS attack and couldn't reach your masters, for example? Having Swarm deploy the proxy for you also means that Swarm can monitor the health of the proxy.
You could, if you really wanted to, run the proxy on the master as you do now. The solution would be the same: have the Azure LB provide a public connection to the proxy, just as you currently do.
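As a sketch of what "have Swarm deploy it to an agent" can look like, here is the Swarm-mode version; the classic standalone Swarm that ACS originally provisioned would instead use a plain docker run against the Swarm manager endpoint and let it schedule the container. The image name is a placeholder for an HAProxy image with your haproxy.cfg baked in:

    # Swarm schedules the proxy on an agent/worker (not the master) and publishes
    # port 80 through the routing mesh, which is what the Azure LB in front of the
    # agent pool forwards to.
    docker service create --name haproxy --replicas 1 `
        --constraint "node.role==worker" `
        --publish 80:80 `
        mycompany/haproxy-lb:latest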

Setting up ubuntu VM on Azure with apache

In Azure, I created a virtual network and then associated an Ubuntu Server virtual machine, created with the Azure Resource Manager deployment method, with the network. I then updated the associated Network Security Group and added an inbound security rule for port 80 (Source: Any, Destination: Any, Service: TCP/80). After installing Apache on the VM, I tried to access the server from my browser but have run into a wall. I can SSH into the VM just fine, but the web is a no-go, and I cannot figure out why. Any help would be appreciated.
It sometimes happens to me too, because I forgot to RESTART the VM; yes, just restart it. At least that works for me. Also, don't forget to add an outbound rule.
It worked for me with this inbound rule.
Note that when a VM is created from the portal (in the ARM model), it gets automatically associated with a virtual network (VNet), a specific subnet within that VNet, and a network security group.
When creating the inbound security rule, make sure to:
identify the correct network security group associated with the VM (see the sketch after this list)
use a priority number lower than 65500
set the source port range to *
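A small sketch for the first point, assuming the Az PowerShell module and placeholder resource names; the effective rules can come from an NSG on the NIC or on the subnet, so it is worth checking both:

    # Show which NSG the VM's NIC references directly.
    (Get-AzNetworkInterface -ResourceGroupName 'my-rg' -Name 'my-vm-nic').NetworkSecurityGroup.Id

    # List the effective security rules (NIC-level and subnet-level combined);
    # the VM must be running for this call to succeed.
    Get-AzEffectiveNetworkSecurityGroup -ResourceGroupName 'my-rg' -NetworkInterfaceName 'my-vm-nic'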
You also need to open port 80 on the VM itself to allow web access.
I don't think that creating your Network Security Group opens the desired port on the VM automatically.
By default in Azure Resource Manager (ARM), all ports are open; there is no need to create Network Security Groups (NSGs) to open ports, only to close them. Here is an example of an ARM template that deploys an Ubuntu VM with Apache:
https://github.com/Azure/azure-quickstart-templates/tree/master/apache2-on-ubuntu-vm
Alternatively, if you want an auto-scaling LAP stack using VM Scale Sets (in public preview), you can find the ARM template for that here:
https://github.com/Azure/azure-quickstart-templates/tree/master/201-vmss-lapstack-autoscale
Hope this helps! :)

Routing traffic from F5 BigIP LB to both EC2 instances and Physical servers together

Is there a way to make a physical F5 BIG-IP LB route traffic to both EC2 instances (in an Auto Scaling group) and physical machines? I came across this article https://devcentral.f5.com/articles/using-big-ip-gtm-to-integrate-with-amazon-web-services but it seems to route traffic to an entire AWS zone, not to a couple of EC2 instances behind an ELB.
Yes, you can route traffic to any resources from a BIG-IP, whether they are locally defined on the same L3 network or remote; you just need to make sure you have routes defined on the BIG-IP pointing in the right direction. If you are trying to cloudburst, you can define priority levels in the pool so that your physical servers get the traffic unless the minimum threshold is crossed, at which point the remote servers (another data center or cloud servers, it doesn't matter) will be automatically engaged.
You can also add orchestration so that your cloud servers aren't up and active unless you are getting close to a threshold, at which point the BIG-IP can trigger an action to spin up those servers and then add them to the pool dynamically. There are many options available to you with BIG-IP.
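For the priority-based cloudbursting described above, here is a rough tmsh sketch; the pool name, addresses, and thresholds are placeholders, and the exact syntax should be checked against your TMOS version. Members with the higher priority-group number receive traffic first, and min-active-members controls when the lower group is activated:

    # Physical servers sit in priority group 10; the EC2/ELB members in group 5 only
    # receive traffic when fewer than 2 group-10 members are available.
    create ltm pool hybrid_pool min-active-members 2 members add { 10.1.10.11:80 { priority-group 10 } 10.1.10.12:80 { priority-group 10 } 203.0.113.20:80 { priority-group 5 } }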