I'm investigating a problem a user is having with a web application that is built using Yii.
The user is not seeing the Yii 'flash' session-based user-feedback messages. These messages are shown once to a user and then destroyed (so they're not shown on subsequent page loads).
I took a look at the server access logs and I noticed something weird.
When this user requests a page there is a second, identical request from a different IP and with a different User-Agent string. The second request usually arrives at the same time, or at most a couple of minutes later. A bit of googling leads me to the conclusion that the user is browsing the web through an HTTP proxy.
So, is this likely to be an HTTP proxy? Or could it be something more suspicious? And if it is an HTTP proxy, does that explain why they're not seeing the flash session messages? Could it be that the messages are being 'shown' to the proxy and then destroyed?
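A toy sketch of the suspected sequence (illustrative only, not Yii's actual API): the flash message lives in the session and is destroyed by whichever request reads it first, so a proxy that fetches the page with the user's session cookie would consume it before the user's browser does.

    # Toy model of flash-message semantics (not Yii's API): the message is
    # stored in the session, returned on the first read, and then deleted.
    session = {"flash": "Your changes were saved."}

    def render_page(session):
        message = session.pop("flash", None)   # read once, then destroy
        return f"page rendered with message: {message!r}"

    print(render_page(session))  # first request (e.g. the proxy's duplicate fetch) sees the message
    print(render_page(session))  # second request (the real user) sees None

Note this only plays out if both requests carry the same session cookie, which a forwarding proxy normally would.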
We use a custom HTTP module in IIS as a reverse proxy for web applications. Generally this works well and has done for some time, but we've come across an issue with Windows Authentication (WA). We're using IE 11, IIS 10 and Server 2016.
When accessing the target site directly, WA works fine - we get a browser login dialog when the initial HTML page is requested and the subsequent requests (CSS, JS, etc) go through fine.
When accessing via our proxy, the same (correct) behaviour happens for the initial HTML page, and the first CSS/JS request authenticates OK too, but the subsequent ones cause a browser login prompt to pop up.
What seems to happen on the 'bad' requests (i.e. those that cause the login dialog) is:
1) Browser decides it needs to authenticate, so sends an Authorization header (Negotiate, with an NTLM token)
2) Server responds (401) with a WWW-Authenticate: Negotiate response with a full NTLM token
3) Browser re-requests with an Authorization header (Negotiate, with a full NTLM token)
4) Server responds (401) with a WWW-Authenticate: Negotiate (with no token), which causes the browser to show the login dialog
5) With login credentials entered, Browser sends the same request as in (1) - identical NTLM token, server responds as in (2), Browser re-requests as in (3), but this time it works!
We've set up a test web site with one html page, requesting 3 JS and 2 CSS files to replicate this. On our test server we've got two sites, one using our reverse proxy and one using ARR. The ARR site works fine. Also, since step (5) above works, we believe that the proxy pass-through is fundamentally working, i.e. NTLM tokens are not being messed up by dodgy encoding, etc.
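To take the browser out of the equation, the exchange in steps 1-5 can also be driven serially from a script. A rough sketch using Python with the requests_ntlm package (the package choice, URL, file names and credentials are all assumptions; the point is that each resource is fetched one at a time over a single kept-alive connection):

    # Diagnostic sketch: fetch the page and its sub-resources through the
    # reverse proxy with NTLM auth, one at a time over one reused connection,
    # and print the status plus any WWW-Authenticate challenge returned.
    # Requires: pip install requests requests_ntlm
    import requests
    from requests_ntlm import HttpNtlmAuth

    BASE = "http://proxy.example.local"                      # placeholder proxy site
    resources = ["/", "/a.css", "/b.css", "/1.js", "/2.js", "/3.js"]

    with requests.Session() as s:                            # Session reuses the TCP connection
        s.auth = HttpNtlmAuth("DOMAIN\\user", "password")    # placeholder credentials
        for path in resources:
            r = s.get(BASE + path)
            print(path, r.status_code, r.headers.get("WWW-Authenticate"))

If these serial requests succeed where the browser's parallel ones fail, that points towards the suspected interleaving/concurrency issue rather than token corruption.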
One thing that does work: if we use Fiddler and put breakpoints on each request, we're able to hold back the 5 sub-requests (JS & CSS files), letting one through at a time. If we let each sequence complete (i.e. the NTLM token exchange for each URL/file, through to the 200 response) before releasing the next, then it works. This made us think there is some interleaving effect (e.g. shared memory corruption) in our proxy; that is still a possibility.
So, we put code at the start of BeginRequest and the end of EndRequest, with a SyncLock and a shared variable storing the path (AppRelativeCurrentExecutionFilePath), so that our code would 'single thread' each of these request/auth exchanges. This does what we expected, i.e. it only allows one auth exchange to happen and reach a 200 before allowing the next. However, we still have the same problem of the server rejecting the first exchange. So, does this indicate something happening in/before BeginRequest, such that holding the requests back in Fiddler works, but doing it in our HTTP module does not?
Or is there some sort of timing issue where the manual breakpoints in Fiddler also mean we’re doing it at ‘human’ speed and therefore allowing things to work better?
One difference we can see is the 'Connection: Keep-Alive' header. It is present in the request from the browser to our proxy site, but it is not passed on from our proxy to the base site, whereas the ARR site does pass it through. It's all HTTP/1.1, and we can't find a way to set Keep-Alive on our outgoing request - could this be it?
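One way to see exactly what each proxy forwards is to point both sites (the custom module and ARR) at a throwaway backend that just logs incoming headers. A minimal sketch using Python's standard library (port and response are arbitrary); it is also worth noting that NTLM authenticates the underlying connection rather than individual requests, so whether the upstream connection is reused between steps 1-3 can matter as much as the header itself:

    # Minimal logging backend: point each proxy at it and compare the headers
    # (Connection, Authorization, etc.) that arrive via the custom module vs. ARR.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class EchoHandler(BaseHTTPRequestHandler):
        protocol_version = "HTTP/1.1"        # advertise persistent connections

        def do_GET(self):
            print(self.path, dict(self.headers))     # log every inbound header
            body = b"ok"
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("0.0.0.0", 8080), EchoHandler).serve_forever()

(This backend does no authentication; it is only for comparing what the two proxies forward.)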
Regarding 'things to try', we think we've eliminated things like needing the site in IE's Intranet Zone, because the ARR site works OK with the same IE settings. Clearly, something is not right, so we could have missed something here!
In short, we've been working on this for days, and have tried most of what we can find on SO and elsewhere, but can't figure out what the heck is going on.
Any suggestions - let me know if you want any further info. All help will be very gratefully received!
I created an instance group from an instance template and attached it to a backend service that is used by an HTTP load balancer.
Now when I open a URL to an instance VM from the instance group I created, I can do GET, POST, and DELETE requests; all of the requests are fast, and everything works as expected.
When I open up the URL to the static IP for the load balancer, I can do GET and POST requests, but DELETE requests throw a 400 BAD REQUEST with a response page saying:
That's an error.
Your client has issued a malformed or illegal request. That's all we know.
Other load balancer issues:
The site is quite slow through the load balancer. Perhaps there is a setting I'm missing; I'm pretty sure I set everything to us-central1-b.
Sometimes the site doesn't even show up. It will work for HTTP but then not for HTTPS, and vice versa. The load balancer behaves very strangely.
My VM's API access is set to "This instance has full API access to all Google Cloud services".
I'm using Django as my API layer. I turned on debugging on this host and saw that the DELETE requests weren't even coming through when making requests via the load balancer's static IP. Is there a firewall setting I'm missing?
Please help me make this fast again and allow the DELETE requests to happen.
Thanks!
Are you sending anything in the body of the request?
The Google load balancer will respond with 400 BAD REQUEST if you try to send anything in the body of a DELETE request. An easy way to check whether this is the problem is to fire up Chrome Developer Tools and confirm that the Request Payload section is empty or doesn't exist.
The HTTP spec doesn't explicitly say whether you can pass anything in the body of a DELETE, so this isn't wrong, just undefined.
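A quick way to confirm is to send the same DELETE with and without a body, both directly to the instance and via the load balancer's static IP. A sketch using Python requests (the URLs are placeholders):

    # Compare DELETE with and without a body, directly vs. via the load balancer.
    import requests

    targets = {
        "direct": "http://10.240.0.2/api/items/1",    # placeholder instance VM address
        "via-lb": "http://203.0.113.10/api/items/1",  # placeholder LB static IP
    }

    for name, url in targets.items():
        no_body = requests.delete(url)
        with_body = requests.delete(url, json={"reason": "cleanup"})
        print(name, "no body:", no_body.status_code, "| with body:", with_body.status_code)

If only the body-carrying DELETE via the load balancer comes back as 400, the payload is the culprit and the data would need to be carried some other way (e.g. in the URL or query string).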
Is the load balancer slow for all requests, or just for pages with lots of elements on them?
I am trying to build a model for how frequently users make web requests. I am interested in the timing between each new page they visit. I want to build a load simulator which then uses this model.
To do this I've been analyzing Squid access logs and looking at the timing between HTTP requests by user IP. Squid captures all the requests associated with a web-site visit, and I am only interested in the top-level page requests. There are numerous possible starting pages for a visit (e.g. not just *.html), so it seems challenging to capture only the starting page for each session.
Is there a way to capture only the initial request for the top-level page - for example, when a user visits a page on Amazon and then jumps to another page, and so on?
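A sketch of the kind of filtering this involves (assumptions: Squid's native access.log field order, a guessed idle-gap threshold, and that a GET returning text/html marks a page start):

    # Heuristic: a "top level" page request is the first GET for text/html
    # from an IP after at least GAP seconds of silence from that IP.
    # Field order assumes Squid's native log format:
    #   time elapsed client status bytes method URL ident hierarchy type
    from collections import defaultdict

    GAP = 5.0                        # assumed idle seconds separating page visits
    last_seen = defaultdict(float)   # client IP -> timestamp of previous request
    page_starts = []

    with open("access.log") as log:
        for line in log:
            f = line.split()
            if len(f) < 10:
                continue
            ts, client, method, url, mime = float(f[0]), f[2], f[5], f[6], f[9]
            if method == "GET" and "text/html" in mime and ts - last_seen[client] >= GAP:
                page_starts.append((ts, client, url))
            last_seen[client] = ts   # every request counts toward the idle gap

    for ts, client, url in page_starts:
        print(ts, client, url)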
You can use the Squid Analysis Report Generator (SARG). It reads the log files and generates HTML reports with detailed information such as accessed and denied websites, plus daily and weekly reports.
We attempted a passbook program, but it never made it out of beta. There are a few passes out there that keep phoning home (and throwing errors because the passes are out of sync with existing data). My plan is to 404 any incoming requests, but I'm not sure if that is the best way to handle existing passes. Any other ideas, or is 404 the right solution?
There are a few options:
Return an updated pass that has a blank web service url
Return an appropriate error
Remove the DNS entry of the subdomain
Update the web service url
Any of the fields in the pass can be updated, including the web service url. Removing the url will prevent further requests for updates. This is potentially the most effective option, but it would require a bit of development to return the updated pass, and the endpoint would need to be maintained until all passes have been "disabled."
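For illustration, blanking the url is just an edit to pass.json before the bundle is re-signed (the key names come from the pass.json format; the file path and the re-signing/zipping step are left out):

    # Strip the web service keys from pass.json so the updated pass no longer
    # phones home; the .pkpass bundle must then be re-signed and served back
    # from the existing "latest version of a pass" endpoint.
    import json

    with open("pass.json") as f:
        pass_data = json.load(f)

    pass_data.pop("webServiceURL", None)
    pass_data.pop("authenticationToken", None)

    with open("pass.json", "w") as f:
        json.dump(pass_data, f, indent=2)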
Return an appropriate error code
It may be easier to simply return an error code. This could be done through the web server configuration, preventing the requests from ever being processed by your application (and presumably stopping the errors in the application). It would also allow you to remove the Passbook code from your application altogether.
The Passbook Web Service Reference indicates that Passbook will eventually give up when receiving persistent errors.
If a request fails—for example, due to a network connectivity issue—Passbook tries again several times after waiting a period of time. Each time it tries again, it waits longer. If the request continues to fail, it eventually gives up.
The documentation also indicates that standard HTTP status codes should be used in the response from the call to Getting the Latest Version of a Pass (and others).
Response
If request is authorized, return HTTP status 200 with a payload of the pass data.
If the request is not authorized, return HTTP status 401.
Otherwise, return the appropriate standard HTTP status.
Discussion
Support standard HTTP caching on this endpoint: check for the If-Modified-Since header and return HTTP status code 304 if the pass has not changed.
It sounds like the ending of the passbook program is permanent, in which case 410 Gone would be an appropriate error code (from RFC 2616).
410 Gone
The requested resource is no longer available at the server and no forwarding address is known. This condition is expected to be considered permanent. Clients with link editing capabilities SHOULD delete references to the Request-URI after user approval. If the server does not know, or has no facility to determine, whether or not the condition is permanent, the status code 404 (Not Found) SHOULD be used instead. This response is cacheable unless indicated otherwise.
The 410 response is primarily intended to assist the task of web maintenance by notifying the recipient that the resource is intentionally unavailable and that the server owners desire that remote links to that resource be removed. Such an event is common for limited-time, promotional services and for resources belonging to individuals no longer working at the server's site. It is not necessary to mark all permanently unavailable resources as "gone" or to keep the mark for any length of time -- that is left to the discretion of the server owner.
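If the stub ends up in application code rather than pure web server configuration, it can be as small as a catch-all route answering 410 for the standard /v1/... web service paths. A minimal sketch using Flask (the framework choice is purely illustrative; any server rule achieving the same is fine):

    # Catch-all stub: answer 410 Gone for every Passbook web service request
    # so devices eventually stop phoning home.
    from flask import Flask

    app = Flask(__name__)

    @app.route("/v1/<path:anything>", methods=["GET", "POST", "DELETE"])
    def gone(anything):
        return "", 410    # the passbook program has been permanently discontinued

    if __name__ == "__main__":
        app.run(port=8000)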
Remove subdomain DNS
If your web service url was set up on a separate subdomain (e.g. passbook.example.com), you can simply remove the DNS entry for that subdomain. The requests will never reach the server, and Passbook will eventually give up.
I have an Apache web server, and I need to run some processing outside the web server when a user requests a certain page.
I will try to be more clear: when a user requests page X, I have to start an external program, pass it some session parameters, wait for its response, and then send the requested page to the user.
Is it possible to do this?
I used Apache's mod_ext_filter: http://httpd.apache.org/docs/2.0/mod/mod_ext_filter.html
Performance is not that great, but for my purposes it is OK.
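For reference, mod_ext_filter pipes the response body through an external command defined with ExtFilterDefine and attached with SetOutputFilter. A hedged sketch of such a command in Python (the actual processing step is a placeholder):

    #!/usr/bin/env python3
    # External filter command for mod_ext_filter: Apache pipes the response
    # body to stdin and sends whatever is written to stdout on to the client.
    import sys

    body = sys.stdin.buffer.read()        # full response body from Apache

    # ... placeholder: trigger the external processing here; request details
    # such as session parameters could be passed in via environment variables,
    # depending on how the filter and application are set up ...

    sys.stdout.buffer.write(body)         # return the (possibly modified) body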