We have configured a new web farm using IIS 10, with three hosts serving the web traffic and a load-balancing IIS ARR 3.0 server sitting in front to balance incoming requests between the nodes. During initial testing (basic HTML pages) the round-robin setup (33.33% distribution between the nodes) was working well, but we had to enable server/client affinity so that our applications kept a consistent connection between the client session and the application. Since then, we are finding that all traffic to these applications, originating from different machines on different networks, is being forwarded to the same application server. If we take that server offline, the application seamlessly starts running on the next server in the list (the client obviously has to sign in again). While one server is fine for now to run the two applications we have, when we ramp up our migration and have all 140 of our applications running, I don't think one server will be too happy with the load.
ADDITIONAL INFORMATION
Load balancers / ARR servers: LB-01 (LB-02 is a duplicated server for redundancy). Default ARR URL Rewrite rule with a Route to Server Farm action. [Image: LB/ARR URL Rewrite rule] Server affinity: client affinity enabled, "use hostname" selected; no advanced settings, no routing rules. Default ARR proxy settings. [Image: proxy settings]
Web/application servers: WEB-01, WEB-02, WEB-03. File system shared using DFS; all running on shared configuration.
The applications are as follows:
https://www.domainname.com/application-name1
https://www.domainname.com/application-name2
...
where the application launch page changes but the domain name stays the same.
[Image: IIS Monitoring and Management window showing the request distribution]
If there is a setting you wish to verify, please ask. I know people aren't psychic, but huge paragraphs of information never really help.
My hunch is that it is something to do with the URL rewrite; I have tried the settings in the post below, to no avail.
IIS ARR & load balancing
Uncheck 'Host Name Affinity' to dispatch to all your hosts
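As a quick sanity check of how ARR is actually spreading brand-new clients across WEB-01..WEB-03, something like the sketch below can be run from a couple of different machines. It assumes the default ARR client-affinity cookie name (ARRAffinity) and uses one of the application URLs above as a placeholder; adjust both to match the farm.

```python
from collections import Counter
import requests

URL = "https://www.domainname.com/application-name1"  # placeholder URL from the question
ATTEMPTS = 30

seen = Counter()
for _ in range(ATTEMPTS):
    with requests.Session() as s:              # a fresh session carries no affinity cookie yet
        s.get(URL)
        token = s.cookies.get("ARRAffinity", "<no affinity cookie>")
        seen[token] += 1

# With three healthy nodes and plain round robin you would expect roughly three
# distinct affinity tokens; one token for every attempt means every new client
# is being pinned to the same node.
for token, count in seen.most_common():
    print(f"{count:3d} x {token}")
```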
Related
Suppose we have two servers serving requests through a load balancer. Is it necessary to have a web server on both of our servers to process the requests? Can the load balancer itself act as a web server? Suppose we are using the Apache web server and HAProxy. Does that mean the web server (Apache) should be installed on both servers and the load balancer on any one of them? Why can't we have the load balancer on both of our server machines, receiving the requests and talking to each other to process them?
At the most basic level, you want web servers to fulfil requests for static content, while application servers handle business logic, i.e. requests for dynamic content.
But web servers can do many other things as well, such as authenticating and validating requests and logging metrics. An important part of the web server's job is also combining the content it gets from the application servers with a view to present to the client.
You want a load balancer sitting in front of both web and application servers if you have more than one server. Also, there's nothing preventing you from putting both the web and the app server on one machine.
The load balancer sits in front of your web server(s) to direct requests according to the number of sessions, a hash of the source and destination IP, the requested URL or other criteria. Additionally, it checks the availability of the backend servers to ensure requests get answered even if one server fails.
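For illustration, here is a minimal sketch (not tied to any particular product) of the kind of decision a balancer makes: hash the source/destination addresses to pick a backend, skipping backends that fail a health check. The backend addresses and the health-check stub are invented for the example.

```python
import hashlib

BACKENDS = ["10.0.0.11:80", "10.0.0.12:80"]   # invented backend addresses

def healthy(backend: str) -> bool:
    # A real balancer would probe the backend here (TCP connect, HTTP GET, ...).
    return True

def pick_backend(src_ip: str, dst_ip: str) -> str:
    # Only consider backends that pass the health check; fall back to all of them.
    candidates = [b for b in BACKENDS if healthy(b)] or BACKENDS
    digest = hashlib.sha1(f"{src_ip}->{dst_ip}".encode()).hexdigest()
    return candidates[int(digest, 16) % len(candidates)]

# The same source/destination pair always lands on the same backend.
print(pick_backend("203.0.113.7", "198.51.100.10"))
```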
It's not installed on every web server - you only need one instance. It could be a hardware appliance, or software (like HAProxy) which may or may not be installed on one of the web servers. The latter would not be prudent, though, as that web server could fail and then the proxy would not be able to redirect traffic to the remaining server.
There are several different scenarios for this. One is load balancing requests to two web servers which serve the same HTML content, to provide redundancy.
Another would be providing multiple websites using just one public address, i.e. applying destination NAT according to the requested URL. For this, the software has to determine the URL in the HTTP request and redirect traffic to the backend web server servicing that site. This is sometimes called a 'reverse proxy', as it hides the internal server addresses from the outside.
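A toy sketch of that 'reverse proxy' idea, routing on the requested host name to a hidden internal server; the host names and internal addresses are invented for the example.

```python
# Map of public site names to hidden internal servers; all values are invented.
SITE_TO_BACKEND = {
    "www.example-shop.com": "192.168.1.21:8080",
    "www.example-blog.com": "192.168.1.22:8080",
}

def route(host_header: str) -> str:
    # The proxy looks at the Host header of the HTTP request and picks the
    # internal server for that site; the client only ever sees the public address.
    backend = SITE_TO_BACKEND.get(host_header.lower())
    if backend is None:
        raise LookupError(f"no backend configured for {host_header}")
    return backend

print(route("www.example-shop.com"))   # -> 192.168.1.21:8080
```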
I've been trying all day to set up my instance of TFS2017 to work with HTTPS.
I've read the official setup guide, but it didn't help much.
My instance is attached to a domain and the configuration was made with a user from the Administrators group. The domain account is properly referenced as an administration console user.
The setup was made with the default port 8080, and the domain account user can access the website as expected (hosted at http://machine-name:8080/tfs).
Now, when I change the IIS website binding to use HTTPS on port 443 with a valid wildcard certificate, set the host name to tfs.mydomain.com and enable Require SSL, my user can no longer authenticate.
I point the TFS Public URL to https://tfs.mydomain.com/tfs.
I get prompted with the authentication box, but after many attempts the site just fails with a 401.
The tests are performed from within the server environment to avoid firewall confusion.
My instance has two network cards on two separate networks. The first resolves to a public IP, the second to a private IP. I noticed the configuration works with the machine name, while it fails when the DNS name resolves to the public IP. Could this be the reason?
Thanks for your help
To perform these procedures, you must first meet some prerequisites, such as the required permissions, so please double-check those first. Also make sure you have set up the corresponding ports, as noted below.
Important:
The default port number for SSL connections is 443, but you must assign a unique port number for each of the following sites: Default Website, Team Foundation Server, Microsoft Team Foundation Server Proxy (if your deployment uses it), and SharePoint Central Administration (if your deployment uses SharePoint). You should record the SSL port number for each website that you configure. You will need to specify these numbers in the administration console for Team Foundation.
There is a very detailed tutorial about configuring HTTPS with SSL; please refer to Setting up HTTPS with Secure Sockets Layer (SSL) for Team Foundation Server.
To narrow down the issue with the IP, you could disable one of your two network cards and test while using only one network card at a time.
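If it helps to narrow this down, a small check like the following (a sketch, using the host name and port from your question) resolves tfs.mydomain.com, attempts a TLS handshake against each resolved address and prints the certificate subject; running it both on the server and from a client machine should show whether the public IP answers on 443 at all and which certificate it presents.

```python
import socket
import ssl

HOST, PORT = "tfs.mydomain.com", 443   # host name and port from the question

# Every address the name resolves to (each NIC may publish one).
addresses = sorted({info[4][0] for info in socket.getaddrinfo(HOST, PORT, proto=socket.IPPROTO_TCP)})
context = ssl.create_default_context()

for addr in addresses:
    try:
        with socket.create_connection((addr, PORT), timeout=5) as raw:
            with context.wrap_socket(raw, server_hostname=HOST) as tls:
                subject = dict(item[0] for item in tls.getpeercert()["subject"])
                print(f"{addr}: handshake OK, certificate subject = {subject}")
    except OSError as exc:             # covers refused/timed-out connections and TLS errors
        print(f"{addr}: FAILED ({exc})")
```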
I have created a WCF service that acts as a backend for a DB. For now it does basic operations such as INSERT, SELECT etc. I have run it locally, and now it is time to expose it to the internet and enter 'production'. Is there a best practice for doing so? Bear in mind this service will be hosted on a PC as a Windows service (not in IIS). This is the first time I am putting a Windows service into production, so I am hazy on the details, but I think this is the main idea:
On the service: check for 'rookie' errors such as SQL injection (see the parameterised-query sketch below). Set maximum message sizes marginally higher than the largest message that should be transmitted by my service. Also upgrade the self-signed X.509 certificate to one issued by a CA. (Where does one store this certificate? Locally on the PC?)
On the PC: fully patched software (OS etc.) and Windows Firewall with a specific set of rules that allows traffic only on the ports being used (I suppose the safest way to do this is to use the Windows tool 'Allow a program or feature through Windows Firewall'?). Furthermore, an updated antivirus running.
On the network: on the router, port-forward the respective ports being used (the base address is declared as http://localhost:8080, so I guess port 80 for HTTP and 443 for HTTPS? I am using message-level security).
General precautions: full message logging on the service to analyse traffic and potential attackers. Also run a network intrusion detection system such as Snort so that I can sleep a bit better at night.
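Regarding the SQL injection point above: the main defence is parameterised queries. The service itself is WCF/C# (where the equivalent is SqlCommand with SqlParameter), but the idea is sketched below in Python with sqlite3, purely as a compact, runnable illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")

user_supplied_name = "bob'; DROP TABLE users; --"   # hostile input

# BAD: building the SQL by string concatenation lets the input rewrite the query.
# conn.execute("INSERT INTO users VALUES ('" + user_supplied_name + "', 'user')")

# GOOD: placeholders keep the input as data, never as SQL.
conn.execute("INSERT INTO users (name, role) VALUES (?, ?)", (user_supplied_name, "user"))
print(conn.execute("SELECT name, role FROM users").fetchall())
```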
Am I missing anything obvious? Also, should I be hosting in IIS? On Security Exchange someone said that I would be vulnerable to HTTP attacks if I did not put the code behind a web server, but I have not read this anywhere else.
I've created literally dozens and dozens of web servers in my day, but this is my first attempt with Windows Azure and I'm running into some problems. I just started migrating from AWS recently.
First of all, I'm running Ubuntu 13.04. Firewall disabled (for debugging), Apache2 installed correctly (using apt). SSH works fine as do many other services with both the DNS hostname and public IP. Virtual host is set up correctly and validated. However, I cannot access the HTTP website either through the Azure provided subdomain or the virtual IP. It just times out.
This is also my first time using Ubuntu 13.04. So, through the powers of deduction, I'm assuming there is something I'm missing, either with this new version of Ubuntu or some quirk in Azure. Does anyone have any suggestions?
SOLUTION
These steps to create an "endpoint" work fine for any VPS:
open "virtual machine > endpoint > add endpoint"
choose "next"
set "name:http, protocol:tcp, public port:80, private port:80"
choose "complete"
then wait for activation, which can take some time.
If you are using Azure Resource Groups with your VMs (which is what the new portal provides), you cannot use endpoints because they are not available there, so follow these steps to open up the HTTP port, or any other port:
1- Select the VM that you want to manage ports on.
2- In settings, click on Network Interfaces and select your network.
3- Go to Network Security Group and select your group.
4- Add Inbound or Outbound security rules depending on what you need.
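Either way, a quick reachability check helps separate "Azure is blocking port 80" from "Apache is not answering". The sketch below can be run once on the VM itself (against localhost) and once from an outside machine (against the public DNS name or VIP); the public host name shown is a placeholder.

```python
import socket

def port_open(host: str, port: int = 80, timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# "localhost" works but the public name fails -> the endpoint/NSG rule is missing;
# both fail -> Apache (or the VM firewall) is the problem.
for target in ("localhost", "myvm.cloudapp.net"):     # public name is a placeholder
    print(f"{target}:80 reachable -> {port_open(target)}")
```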
If two web servers are configured between a load balancer and a WebLogic cluster, will the two Apache servers maintain session stickiness?
Say, for example, the load balancer forwards the first request to the first Apache, which in turn forwards it to the first WebLogic managed instance. If the second request from the same user is forwarded by the load balancer to the second Apache, will that Apache be able to forward it to the first managed instance, which served the first request, rather than to the second managed instance, which is not aware of the session information at all?
What should the behaviour of the WebLogic Apache plug-in ideally be? The catch is that I don't want to enable session replication on the WebLogic cluster.
According to the section "Failover, Cookies, and HTTP Sessions" of the Apache HTTP Server Plug-In:
When a request contains session information stored in a cookie or in the POST data, or encoded in a URL, the session ID contains a reference to the specific server instance in which the session was originally established (called the primary server) and a reference to an additional server where the original session is replicated (called the secondary server). A request containing a cookie attempts to connect to the primary server. If that attempt fails, the request is routed to the secondary server. If both the primary and secondary servers fail, the session is lost and the plug-in attempts to make a fresh connection to another server in the dynamic cluster list. See Figure 3-1 Connection Failover.
Note: If the POST data is larger than 64K, the plug-in will not parse the POST data to obtain the session ID. Therefore, if you store the session ID in the POST data, the plug-in cannot route the request to the correct primary or secondary server, resulting in possible loss of session data.
Figure 3-1 Connection Failover
In other words, yes, both Apache servers will be able to forward an incoming request to the "right" WebLogic instance, as the session ID contains all the required information for that. Note that there is no real need to confirm this with testing, but it would be very easy to do so.
UPDATE: Answering the following comment from the OP
I think this document holds good for only one Apache server. In my case I have two, and the load balancer forwards the requests to both servers in a 50:50 manner. I did test this, and the WebLogic plug-in is not maintaining the stickiness.
I understood that you are using two Apache front ends, and I'm not sure this document applies only to configurations with a single Apache server. As explained, the session ID contains a reference to the primary server (and to the secondary server as well), so both Apaches should be able to deal with it. At least, that is my understanding. I've actually worked with a similar configuration in the past, but I can't remember whether things worked as I think they should, or whether the load balancer was also configured to handle stickiness (i.e. always forward a given client to the same Apache server). I have a little doubt now...
Could you post your plug-in configuration (for both Apache servers, if they differ)? Could you also confirm that things work as expected when only one Apache server is up (and test this with each Apache if their configurations differ, which shouldn't be the case)?
When you have two Apache instances with a TCP load balancer in front, the state-flow diagram is not applicable any more, because the Apache instances do not share their state.
I guess that the WebLogic plug-in maintains state with a directional mapping [IPAddress+Port -> JVMID]. If it receives a cookie with a JVMID it does not know yet (for instance, because it has never sent a request to that server), it has no way to know which IPAddress+Port the JVMID refers to, so it will not be able to reuse it and will reassign new primary/secondary servers; these will be identical for two instances (maybe swapped), and might be different if there are strictly more than two instances.
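To make that guess concrete, here is a purely illustrative sketch of the failure mode I am describing; the mapping table, JVMIDs and addresses are invented, not the real plug-in data structure.

```python
# Each Apache only learns the JVMIDs of servers it has talked to itself.
known_jvmids_apache1 = {"-1038337120": "10.0.0.31:7001"}   # invented JVMID/address
known_jvmids_apache2 = {"977918734": "10.0.0.32:7001"}     # invented JVMID/address

def route(primary_jvmid_from_cookie: str, known: dict) -> str:
    backend = known.get(primary_jvmid_from_cookie)
    if backend is None:
        # Unknown JVMID: the plug-in cannot honour the sticky routing and has to
        # pick (and re-assign) a primary/secondary pair itself.
        return "reassign primary/secondary"
    return backend

# A cookie minted via Apache 1 arrives at Apache 2: stickiness is lost.
print(route("-1038337120", known_jvmids_apache2))
```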
I did not confirm it by running specific tests, but on paper it seems not to work in all cases.
The answer is yes. We've got a write-up of this on our blog, http://blog.c2b2.co.uk/2012/10/basic-clustering-with-weblogic-12c-and.html, which provides step-by-step instructions on setting up web session failover in a cluster.
Essentially, the JSESSIONID cookie encodes the primary and secondary WebLogic servers. mod_wl parses the cookie and routes the request to the primary server, in your case Managed Server 1. If it is down, it will automatically route the request to the backup server, Managed Server 2.
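As a rough illustration of what "the cookie encodes the primary and secondary servers" means: WebLogic appends '!'-separated JVMID values to the session ID. The exact layout can vary by version, and the cookie value below is made up.

```python
# Invented cookie value, assuming the usual "!"-separated layout.
jsessionid = "Gxy4T9xQ2ZfLk0sessiondata!-1038337120!977918734"

parts = jsessionid.split("!")
session_id = parts[0]
primary_jvmid = parts[1] if len(parts) > 1 else None
secondary_jvmid = parts[2] if len(parts) > 2 else None

# mod_wl resolves the primary JVMID to a managed server and sends the request
# there, falling back to the secondary if the primary is unreachable.
print("primary:", primary_jvmid, "secondary:", secondary_jvmid)
```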
The diagram above holds true for two Apache servers connected to the same WebLogic cluster. The session cookie contains details of which WebLogic server to connect to, and the plug-in will respect that. If the primary WebLogic server (the one it originally connected to) isn't available, the request is sent to the secondary server (designated as such at the time of the first request, based on the rules defined when selecting a "preferred replication group"). This secondary server maintains the same session state as the primary server and should be able to handle the request.
If session replication isn't set up (I think it is off by default), no session is copied to another server, and if the original/primary WebLogic server goes down, you lose the session.
The answer is NO. As you have two Apache web servers, you need to implement stickiness at both the hardware and the software load-balancer level in order to achieve your requirement.
That means you already have sticky sessions implemented at the WebLogic plug-in for Apache level, but you also need source-IP-based stickiness at the hardware load-balancer level. This will allow your hardware load balancer to send subsequent requests from the same user to the same Apache web server.