Need to set up Azure Web App with load balancer and fault tolerance - load-balancing

I have a website built with ASP.NET, C#, and Azure SQL, and it is hosted on an Azure Web App.
I have a requirement to set up a load balancer for the website with fault tolerance.
I have set up a Traffic Manager profile with two replicas of my site as endpoints (mywebsitea.azure.com, mywebsiteb.azure.com).
It is using the 'Performance' algorithm of the internal Azure load balancer; one site is hosted in the Asia region and the other in the West Europe region.
This all works well. (mywebsite.trafficmanager.com)
Now I also wish to set up a fault-tolerance mechanism. Can both load balancing and fault tolerance be configured using Traffic Manager,
or is there another way to achieve this?
Thanks in advance.

Your question seems to mix terminology: you say it is using the 'Performance' algorithm of the internal Azure load balancer. However, 'Performance' is a traffic-routing method of Azure Traffic Manager, not of Azure Load Balancer.
To answer your question: all Traffic Manager traffic-routing methods (including 'Performance') include endpoint health checks and automatic failover.
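For context, Traffic Manager decides failover from periodic HTTP(S) probes against a monitor path you configure on each endpoint. Below is a minimal sketch of a health endpoint an ASP.NET Web API site could expose for those probes; the controller name and route are assumptions, not something from your setup, and attribute routing must be enabled in your Web API configuration:

using System.Net;
using System.Net.Http;
using System.Web.Http;

// Hypothetical health-check endpoint for Traffic Manager probes.
// Point the profile's monitor path at /api/health; any non-200
// response marks this endpoint degraded and DNS fails over to
// the healthy replica.
public class HealthController : ApiController
{
    [HttpGet]
    [Route("api/health")]
    public HttpResponseMessage Get()
    {
        // Optionally check dependencies (e.g. the Azure SQL database)
        // here before reporting healthy.
        return Request.CreateResponse(HttpStatusCode.OK, "healthy");
    }
}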
Regards,
Jonathan Tuliani
Program Manager
Azure Networking - DNS and Traffic Manager

Related

Which channels should use SSL in a Kubernetes cluster?

I have the following Kubernetes setup (forgive the poor ASCII art):
Azure SQL DB_1 > deployment_1 > service_1 \
Azure SQL DB_2 > deployment_2 > service_2 > -> nginx_ingress
Azure SQL DB_N > deployment_N > service_N /
The DBs are outside the Kubernetes cluster. They are exposed through a Private Endpoint to the VNet the Kubernetes cluster is on. They obtain a private IP address inside that VNet, and are otherwise unreachable.
Every deployment is a different microservice. Each one has a service in front of it to handle communication. In turn, all these services can be reached through the NGINX ingress. All services are configured as ClusterIPs, so they cannot be reached from outside the cluster. The only entrypoint from outside the VNet is through the ingress.
My question is, which of these channels should be secured with SSL, and where is it not worth it (for example, because of impact on performance)?
The ingress, of course, will have SSL in front of it. This is a given.
Should there be SSL between the ingress and the services?
Should there be SSL between the services and the microservices behind them?
The DB itself seems to already do encrypted connections automatically. Is there any reason why this would be unnecessary, or conversely, can/should it be made more secure somehow?
Of course, I understand that more encryption is usually A Good Thing. But for example, is it worth generating and keeping track of certificates for comms between the microservices and the services, since these are internal to the cluster and cannot be reached in any other way?
Thank you for any information / examples / experiences you can provide!
The simple approach is to terminate TLS at the ingress layer only. The cluster is inside AKS (I am assuming) and the AKS VNet is secure, so there is no direct exposure to the external world; only the NGINX ingress controller is exposed.
As for the DB communication: if you are using SQL Server, it already encrypts connections with TLS under the hood.
Apart from this, you can also define CORS wherever required.
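If you want to be explicit about the database leg from application code, you can require encryption in the client connection string. A minimal C# sketch, where the server, database, and credential values are placeholders:

using System;
using System.Data.SqlClient;

class Program
{
    static void Main()
    {
        // Encrypt=True forces TLS on the wire; TrustServerCertificate=False
        // makes the client validate the server's certificate chain.
        var connectionString =
            "Server=tcp:myserver.database.windows.net,1433;" +
            "Database=mydb;User ID=appuser;Password=...;" +
            "Encrypt=True;TrustServerCertificate=False;";

        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            Console.WriteLine("Connection state: " + conn.State);
        }
    }
}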

AWS - NLB Performance Issue

I am using a network load balancer in front of a private VPC in API Gateway. Basically, for the APIs in the gateway, the endpoint is the network load balancer's DNS name.
The issue is that performance is terrible (5+ seconds). If I use the IP address of the EC2 instance instead of the NLB DNS name, the response is very good (less than 100 ms).
Can somebody point me to the issue? Did I screw up any configuration while creating the NLB?
I have been researching for the past two days and couldn't find any solution.
Appreciate your response.
I had a similar issue that was due to failing health checks. When all health checks fail, the targets are tried randomly (typically a target in each AZ); however, at that stage I had only configured an EC2 instance in one of the AZs. The solution was to fix the health checks: they require the security group (on the EC2 instances) to allow the entire VPC CIDR range (or at least the port the health checks are using).
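As a hedged illustration of that fix using the AWS SDK for .NET (the security group ID, port, and VPC CIDR below are placeholders for your own values):

using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.EC2;
using Amazon.EC2.Model;

class AllowHealthChecks
{
    static async Task Main()
    {
        var ec2 = new AmazonEC2Client();

        // Allow the whole VPC CIDR to reach the health-check port, so the
        // NLB's probes (which originate inside the VPC) are not dropped
        // by the instance security group.
        await ec2.AuthorizeSecurityGroupIngressAsync(new AuthorizeSecurityGroupIngressRequest
        {
            GroupId = "sg-0123456789abcdef0", // placeholder
            IpPermissions = new List<IpPermission>
            {
                new IpPermission
                {
                    IpProtocol = "tcp",
                    FromPort = 80,  // health-check port (assumed)
                    ToPort = 80,
                    Ipv4Ranges = new List<IpRange>
                    {
                        new IpRange { CidrIp = "10.0.0.0/16" } // VPC CIDR (placeholder)
                    }
                }
            }
        });
    }
}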

Azure Container Services Port Load Balancer

While trying to port my application, which runs on Docker Swarm locally, to Azure Container Service, I am stuck on the load balancer part of Azure.
Locally I have a container instance of HAProxy running on the Swarm master and multiple web containers running.
The web containers have only exposed their ports; the ports are not mapped to the machines on which they are running.
The HAProxy container has its port mapped to the master and internally talks to my web containers for load balancing.
This gives me the leverage to run any number of containers with a limited number of workers in Docker Swarm.
In Azure Container Service I see that the Azure load balancer will talk only to ports that are mapped, which means I can either run only one container per agent or keep an internal load balancer in my containers, implying that users would go through two load balancers before hitting my application.
Not an ideal scenario when my application uses sticky sessions.
So apparently Microsoft's statement "Everything works the same in Azure containers" goes for a toss?
What are the solutions available, or am I doing something wrong here?
Regards,
Harneet
The solution in ACS is almost identical: use HAProxy and have the Azure LB talk to that. The only difference is that you will not be running the proxy on the master; you will have Swarm deploy it to an agent for you.
You shouldn't really be running workloads on your masters. What would you do if you had a DDoS attack and couldn't reach your masters, for example? Having Swarm deploy the proxy for you also means that Swarm can monitor the health of the proxy.
You could, if you really wanted to, run the proxy on the master as you do now. The solution would be the same: have the Azure LB provide a public connection to the proxy, just as you currently do.

Can I open ports on Azure Websites?

If I want to self-host WCF in a Windows Azure Website by spinning up my own ServiceHost, can I host endpoints on 8080 or any other port I want? Is there a specific usable range of ports I have access to, or is port access entirely blocked?
Edit: for absolute clarity, this question is NOT about web or worker roles; it is only about Azure Websites.
This blog post is slightly outdated now, as Windows Azure Websites has more features these days (like staging and production slots, WebJobs, etc.), but the part regarding ports is still true for Azure Websites:
When to use Cloud Services [...] Windows Azure Websites is all IIS, the web server provides the entire platform, there is no room for long running processes or threads that can sit and wait for communication on another port outside of IIS
http://blogs.msdn.com/b/cdndevs/archive/2013/11/21/windows-azure-websites-vs-cloud-services.aspx
Note that you can now have a long-running process doing back-end work via WebJobs, but you can't listen on anything other than port 80.
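For context, this is the kind of self-hosting the question is about. A minimal sketch (contract name and port are illustrative) that works where you control the machine's ports, such as a worker role or a VM, but not inside Azure Websites:

using System;
using System.ServiceModel;

[ServiceContract]
public interface IEchoService
{
    [OperationContract]
    string Echo(string text);
}

public class EchoService : IEchoService
{
    public string Echo(string text) => text;
}

class Program
{
    static void Main()
    {
        // Spinning up your own ServiceHost on an arbitrary port (8080 here).
        // Azure Websites never routes traffic to such a port, so this pattern
        // only works where you control the ports (worker role, VM, etc.).
        using (var host = new ServiceHost(typeof(EchoService),
                                          new Uri("http://localhost:8080/echo")))
        {
            host.AddServiceEndpoint(typeof(IEchoService), new BasicHttpBinding(), "");
            host.Open();
            Console.WriteLine("Listening on port 8080; press Enter to stop.");
            Console.ReadLine();
        }
    }
}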
No, WAMS won't let you open ports. If you need that, you should host in a Web Role (Cloud Services); then you can configure your endpoints through the Windows Azure management portal.

Hosting custom WCF in IIS using NLB

We're in the process of trying to lay out a new network topology for our system. We currently host our WCF service as a Windows service, which exposes HTTP, HTTPS, Net.TCP, and now AJAX service endpoints.
Does anyone know whether it would be possible to move our WCF service into IIS while still exposing those same endpoints AND taking advantage of IIS clustering and NLB? Can those exposed endpoints be part of the NLB? I'm not sure how it works; I've been doing some research but can't find anything that addresses these concerns.
I'm a little new to WCF and IIS, and we're currently in the research phase of this project, so any opinions or suggestions would be welcomed and greatly appreciated.
You can move your service hosting from a Windows service to IIS as long as you have WAS (Windows Process Activation Service) turned on, which is required for TCP-bound requests.
You will have to reconfigure your services to support load balancing, so take a look at the articles below as a helpful starting point on load balancing:
Things to Consider When Implementing a Load Balancer with WCF
Load Balancing with the Basic HTTP Binding
Questions to consider:
Do you use a session-enabled contract? Does the service behavior use PerSession? Do you have reliable messaging turned on? Sessions and reliable sessions are local to a particular server, so failover requires a new session to be created; the client has to initiate this by creating a new channel (proxy).
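To make that concrete, here is a minimal sketch (service and contract names are illustrative) of contract and behavior settings that load-balance cleanly because no session pins a client to one server:

using System.ServiceModel;

// Disallowing sessions keeps each call independent, so any node
// behind the NLB can serve it and failover needs no session rebuild.
[ServiceContract(SessionMode = SessionMode.NotAllowed)]
public interface IOrderService
{
    [OperationContract]
    string GetStatus(int orderId);
}

// PerCall instancing avoids per-client state pinned to one server.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class OrderService : IOrderService
{
    public string GetStatus(int orderId)
    {
        return "OK"; // placeholder implementation
    }
}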
Other helpful articles:
Unable to connect to Windows Server 2008 NLB Virtual IP Address from hosts in different subnets when NLB is in Multicast Mode