How an HTTPS request is handled by an IIS server - SQL

I am trying to understand how HTTP, IIS and SQL Server work together.
I have an IIS 7 server in my environment that talks to a SQL Server database.
The architecture is
Apache ----> IIS ----> SQL Server
Apache is a reverse proxy that forwards HTTP requests to the IIS server, and from there IIS talks to the SQL database, connecting through different application pools for different applications.
My question is: suppose a request has been forwarded by the Apache server and has reached IIS, and after that the network between Apache and IIS starts dropping packets.
Will that have any effect on the performance of IIS and the database server?
Will there be any long-running queries in the IIS worker process? My concern is what happens to queries that have already executed successfully in the database server: since the network between IIS and Apache is broken, how can their results be forwarded on to Apache and then to the end user?
Will these queries keep holding resources until their responses are forwarded from IIS to Apache, given that they have completed their work but cannot be delivered because of the network issue? Or does IIS queue such requests up somewhere to free resources for incoming requests?

Once a request has reached IIS, it will carry out the required actions and format the reply. It will then try to send the reply to the requester, but if the link has gone it will be unable to, and it will abandon the request. Any resources it is holding for that request will be freed.
To get the data that was produced for you, Apache has to repeat the request.
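For what it's worth, on the Apache side you can at least tune how quickly the proxy gives up on a flaky connection to IIS and how soon it retries the backend. A minimal mod_proxy sketch, assuming a placeholder backend name iis-backend.internal and an /app/ path (the host, path and timeout values are just examples, not your actual setup):

    # Hypothetical host and path; adjust to your environment.
    # connectiontimeout/timeout bound how long Apache waits on the IIS link;
    # retry is how long a failed backend stays marked "in error" before reuse.
    ProxyPass        /app/ http://iis-backend.internal/app/ connectiontimeout=5 timeout=30 retry=10
    ProxyPassReverse /app/ http://iis-backend.internal/app/

None of this makes IIS keep a finished response around: if the connection is gone when IIS tries to send it, the reply is discarded and Apache has to issue the request again.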

Related

Show a maintenance "Custom Message" to the end user if the server stops/is down in Payara

My web application, developed using a GWT/J2EE stack, is deployed on a Payara (GlassFish) server in production. We frequently stop the server for maintenance, such as fixing bugs, enhancements, or upgrades. If an end user tries to access the web application while the server is down, the browser shows
"This site can’t be reached"
Is it possible to show a custom message like "Server is down for maintenance" to the end user? Any thoughts/suggestions on this?
This isn't related to Payara Server. If any server is down, it can't serve requests. The browser doesn't receive any reply and shows "This site can't be reached".
If you want to have control over what browsers display, you need to set up a proxy server that serves the requests and forwards them to Payara Server (e.g. Apache HTTP Server or Nginx). And you have to configure it to return a response with a custom message if Payara Server is down (a rough Apache example is sketched below).
To see how to set up a proxy server with Payara Server, have a look at these guides:
Apache server with Payara Server
Nginx with Payara Server
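As a rough illustration of the Apache variant, here is a minimal sketch assuming mod_proxy in front of Payara on localhost:8080 and a static maintenance page kept at /var/www/maintenance.html (the port and paths are placeholders):

    # Serve the maintenance page locally; never proxy it to Payara.
    Alias /maintenance.html /var/www/maintenance.html
    ProxyPass /maintenance.html !

    # Everything else goes to Payara.
    ProxyPass        / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/

    # When Payara is unreachable, mod_proxy answers 503; show the custom page instead.
    ErrorDocument 503 /maintenance.html

Nginx can achieve the same effect with proxy_pass plus an error_page directive pointing at a local file.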

Why aren't HTTP Headers from Oracle Access Manager passing through to WebSphere from IHS?

I have an IBM HTTP Server set up as a reverse proxy for a WebSphere Application Server. We use Oracle Access Manager for user authentication. There is also an Oracle WebGate running on the IHS server to intercept requests and check them against the Oracle policy.
I can see the authentication going through and Oracle passes back the value needed in an HTTP Header, OAM_REMOTE_USER. The problem is, at some point in the process, that header is not passed on to the WebSphere Application Server.
The Oracle WebGate is monitoring port 8443, but I do not understand whether that refers to the web server or the app server, since both are on the same physical machine and have the same server name. If I just create a virtual host on the web server for 8443 and do not create the port on the app server, the headers go through correctly. The problem with this is that I have to use PreserveProxyHeader for the request to go through the WebGate 8443 port, so after authentication it comes back looking for my application on port 8443, which does not exist on the web server. The only way it can find my application on port 8443 is if I also add that port on the app server, which hosts the application.
I guess the main thing I am struggling to understand is whether I need to define the port WebGate monitors on both the HTTP server and the app server, or only on the HTTP server side. It seems like no matter what I do, at some point the request gets redirected from the HTTP server to the app server and any OAM HTTP headers that were there are stripped out. I've managed to keep them from being dropped by removing the 8443 port from the app server, but then my application cannot be mapped to that port.
This is WebSphere App Server 8.0 and IBM HTTP Server 8.0.0.5.
In the administrative console, click Servers > Server Types > Web servers > web_server_name > Plug-in properties > Request routing. Disable "Remove special headers". Regenerate your plugin configuration XML, and redistribute it.

Apache HTTP server starts blocking requests after some time

My hosting uses an Apache server. I am connecting to the same URL from the same client application, which makes 10-20 requests per minute, and after some time the server starts blocking the client app, which then shows errors such as "address unreachable" and "connection refused".
I am sure this isn't a connectivity issue as it is consistently reproducible.
Any ideas how to set up the server so it won't block the client app's IP address?

Load balancing server

I would like to know about load balancing servers.
I have an application that runs behind a load-balanced server.
When I make some changes to the data in my application, how does it take effect?
Also, when we restart the application, what are the steps that happen on a load-balanced server?
Well, the load balancer is separate from the application code. Basically, it just routes each request to one of a number of configured servers (a.k.a. downstream servers, for instance web application servers, Apache/Nginx+PHP, etc.) that handles the actual request. So to update the application (i.e. a Java servlet, JSP, PHP page, static HTML page, image, etc.), all the downstream servers have to be updated. As for data (e.g. articles, the user database, etc.), this is usually stored in a database that all the downstream servers connect to.
As for restarting the application: while you restart it on one of the downstream servers, that server is temporarily unable to service requests. The load balancer gets an "unable to connect" error when it tries to send requests to the server being restarted, and then sends the request to the next server in its list of downstream servers. Depending on how the load balancer is set up, it will automatically retry the restarted server, and once that downstream server is back up it will receive requests again. So to update the application you basically update one downstream server at a time; the other servers take over the load while it is restarted, so there is no downtime and the clients are none the wiser.
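As a concrete software example, here is a minimal Apache httpd sketch using mod_proxy_balancer with two hypothetical downstream application servers, app1.internal and app2.internal (the names, ports and retry interval are placeholders):

    <Proxy balancer://myapp>
        # A member that is down (e.g. being restarted) is marked "in error" and
        # skipped; after `retry` seconds the balancer tries it again.
        BalancerMember http://app1.internal:8080 retry=10
        BalancerMember http://app2.internal:8080 retry=10
    </Proxy>
    ProxyPass        / balancer://myapp/
    ProxyPassReverse / balancer://myapp/

Restarting app1 and app2 one at a time against a setup like this is exactly the rolling update described above: while one member is down the other takes the traffic, and clients never notice.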
Is this a hardware appliance or a server running HAProxy/Nginx/other?

NServiceBus and one way communication

I would like to use NServiceBus in my application. Unfortunately there are some security limitations in my infrastructure. I have the following scenario:
I have a web server, a proxy server and an application server. The main problem is that communication from the web server to the proxy server and from the proxy server to the application server is blocked by a firewall. Communication in the reverse direction is allowed, so my application server can contact the proxy server and my proxy server can contact the web server. Is there any way to support this scenario with NServiceBus (e.g. with a Gateway that periodically checks the proxy queue and the web server queue), or do I have to write my own solution?