Ending a Passbook program - HTTP response to incoming Passbook requests?

We attempted a Passbook program, but it never made it out of beta. There are still a few passes out there that keep phoning home (and throwing errors, because the passes are out of sync with existing data). My plan is to return 404 for any incoming requests, but I'm not sure that is the best way to handle existing passes. Any other ideas, or is 404 the right solution?

There are a few options:
Return an updated pass that has a blank web service URL
Return an appropriate error code
Remove the DNS entry of the subdomain
Update the web service URL
Any of the fields in the pass can be updated, including the web service URL. Removing the URL will prevent further requests for updates. This is potentially the most effective option, but it would require a bit of development to return the updated pass, and the endpoint would need to be maintained until all passes have been "disabled."
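For illustration, a minimal sketch of such an updated pass.json: the webServiceURL and authenticationToken keys are simply omitted, so the pass stops phoning home. All identifiers below are hypothetical, not taken from the original question.
{
  "formatVersion": 1,
  "passTypeIdentifier": "pass.com.example.beta",
  "serialNumber": "0001",
  "teamIdentifier": "A1B2C3D4E5",
  "organizationName": "Example Corp",
  "description": "Retired beta pass"
}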
Return an appropriate error code
It may be easier to simply return an error code. This could be done through the web server configuration, preventing the requests from being processed by your application (and presumably stopping the errors in the application). This would allow you to remove the web service code from your application altogether.
The Passbook Web Service Reference indicates that Passbook will eventually give up when receiving persistent errors.
If a request fails—for example, due to a network connectivity issue—Passbook tries again several times after waiting a period of time. Each time it tries again, it waits longer. If the request continues to fail, it eventually gives up.
The documentation also indicates that standard HTTP status codes should be used in the response from the call to Getting the Latest Version of a Pass (and others).
Response
If the request is authorized, return HTTP status 200 with a payload of the pass data.
If the request is not authorized, return HTTP status 401.
Otherwise, return the appropriate standard HTTP status.
Discussion
Support standard HTTP caching on this endpoint: check for the If-Modified-Since header and return HTTP status code 304 if the pass has not changed.
It sounds like the ending of the Passbook program is permanent, in which case 410 Gone would be an appropriate error code (from RFC 2616):
410 Gone
The requested resource is no longer available at the server and no forwarding address is known. This condition is expected to be considered permanent. Clients with link editing capabilities SHOULD delete references to the Request-URI after user approval. If the server does not know, or has no facility to determine, whether or not the condition is permanent, the status code 404 (Not Found) SHOULD be used instead. This response is cacheable unless indicated otherwise.
The 410 response is primarily intended to assist the task of web maintenance by notifying the recipient that the resource is intentionally unavailable and that the server owners desire that remote links to that resource be removed. Such an event is common for limited-time, promotional services and for resources belonging to individuals no longer working at the server's site. It is not necessary to mark all permanently unavailable resources as "gone" or to keep the mark for any length of time -- that is left to the discretion of the server owner.
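As a sketch of how this might look in the web server configuration, assuming Apache with mod_alias and a webServiceURL pointing at the site root (so the Passbook endpoints live under the standard /v1/ path):
# Answer every Passbook web service request with 410 Gone
# before it reaches the application.
Redirect gone /v1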
Remove subdomain DNS
If your web service URL was set up on a separate subdomain (e.g. passbook.example.com), you can simply remove the DNS entry for that subdomain. The requests will never reach the server, and Passbook will eventually give up.

Related

Windows Authentication issue with .Net Reverse Proxy using IIS custom HTTP module

We use a custom HTTP module in IIS as a reverse proxy for web applications. Generally this works well and has done for some time, but we've come across an issue with Windows Authentication (WA). We're using IE 11, IIS 10 and Server 2016.
When accessing the target site directly, WA works fine - we get a browser login dialog when the initial HTML page is requested and the subsequent requests (CSS, JS, etc) go through fine.
When accessing via our proxy, the same (correct) behaviour happens for the initial HTML page, and the first CSS/JS request authenticates OK too, but the subsequent ones cause a browser login dialog to pop up.
What seems to happen on the 'bad' requests (i.e. those that cause the login dialog) is:
1) Browser decides it needs to authenticate, so sends an Authorization header (Negotiate, with an NTLM token)
2) Server responds (401) with a WWW-Authenticate: Negotiate response with a full NTLM token
3) Browser re-requests with an Authorization header (Negotiate, with a full NTLM token)
4) Server responds (401) with a WWW-Authenticate: Negotiate (with no token), which causes the browser to show the login dialog
5) With login credentials entered, Browser sends the same request as in (1) - identical NTLM token, server responds as in (2), Browser re-requests as in (3), but this time it works!
We've set up a test web site with one html page, requesting 3 JS and 2 CSS files to replicate this. On our test server we've got two sites, one using our reverse proxy and one using ARR. The ARR site works fine. Also, since step (5) above works, we believe that the proxy pass-through is fundamentally working, i.e. NTLM tokens are not being messed up by dodgy encoding, etc.
One thing that does work: if we use Fiddler and put breakpoints on each request, we're able to hold back the 5 sub-requests (JS & CSS files), letting one go through at a time. If we let each sequence complete (i.e. the NTLM token exchange for each URL/file, through to the 200 response), then it works. This made us think that there is some interleaving effect (e.g. shared memory corruption) in our proxy; this is still a possibility.
So, we put code at the start of BeginRequest and the end of EndRequest, with a SyncLock and a shared variable storing the path (AppRelativeCurrentExecutionFilePath), to 'single-thread' each of these request/exchange sequences in our code. This does what we expected, i.e. it only allows one auth exchange to happen and result in a 200 before allowing the next. However, we still have the same problem of the server rejecting the first exchange. So, does this indicate something happening in/before BeginRequest, such that if we hold the requests back in Fiddler they work, but not if we do it in our HTTP module?
Or is there some sort of timing issue where the manual breakpoints in Fiddler also mean we’re doing it at ‘human’ speed and therefore allowing things to work better?
One difference we can see is the 'Connection: Keep-Alive' header. It is in the request from the browser to our proxy site, but not passed from our proxy to the base site, yet the ARR site does pass it through. It's all HTTP 1.1, and we can't find a way to set Keep-Alive on our outgoing request - could this be it?
Regarding 'things to try', we think we've eliminated factors like the site being in IE's Intranet Zone, since the ARR site works OK with the same IE settings. Clearly, something is not right, so we could have missed something here!
In short, we've been working on this for days, and have tried most of what we can find on SO and elsewhere, but can't figure out what the heck is going on.
Any suggestions - let me know if you want any further info. All help will be very gratefully received!

Yii Flash Messages not showing - possible HTTP Proxy browsing?

I'm investigating a problem a user is having with a web application that is built using Yii.
The user is not seeing the Yii 'flash' session-based user-feedback messages. These messages are shown once to a user and then destroyed (so they're not shown on subsequent page loads).
I took a look at the server access logs and I noticed something weird.
When this user requests a page, there is a second, identical request from a different IP and with a different User-Agent string. The second request comes at the same time or sometimes (at most) a couple of minutes later. A bit of googling leads me to the conclusion that the user is browsing the web through an HTTP proxy.
So, is this likely to be an HTTP proxy? Or could it be something more suspicious? And if it is an HTTP proxy, does this explain why they're not seeing the flash session messages? Could it be that the messages are being 'shown' to the proxy and then destroyed?

POST Requests seen as GET by server

Got a really strange problem here. When sending POST requests to my PHP script,
$_SERVER['REQUEST_METHOD']
returns "GET" instead of "POST".
It works fine for every other REST method, so this is what I get:
GET -> GET
POST -> GET
PUT -> PUT
DELETE -> DELETE
It only happens on one of my servers, so I'm assuming it's an Apache problem, and I've managed to figure out that it only happens if I add "www" to my URL.
I.e.
www.something.com
causes the problem but
something.com
does not.
I have tested different sites on the same server and I get the same thing, so I'm assuming it's a global config issue.
Any thoughts?
Most likely www.something.com is being 301/302-redirected to the canonical something.com, and the client changes the method on the redirected request. As the HTTP spec says for response codes 301 and 302:
Note: For historic reasons, a user agent MAY change the request method
from POST to GET for the subsequent request. If this behavior is
undesired, the 307 (Temporary Redirect) status code can be used
instead.
A third (but unlikely) possibility is you're getting a 303 response to the initial URI. The solution is twofold:
Configure the clients which are under your control to POST to the canonical URI so they are not redirected at all.
Configure your server to redirect using 307 in this case instead of 301/302; a sketch follows below.
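A minimal sketch of that second step for Apache's mod_rewrite, assuming the existing behaviour is a www-to-bare-host redirect (the hostname is hypothetical); the only functional change from a typical canonical-host rule is the explicit R=307 flag, which tells clients to repeat the request with the same method and body:
RewriteEngine On
# Redirect www.example.com to example.com without demoting POST to GET.
RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
RewriteRule ^ http://%1%{REQUEST_URI} [R=307,L]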

Internet Explorer: SCRIPT7002: XMLHttpRequest: Network Error 0x2f7d, Could not complete the operation due to error 00002f7d

This problem is driving me nuts. Our web app uses HTTP POST to log in users, and now IE 10 is aborting the connection and saying:
SCRIPT7002: XMLHttpRequest: Network Error 0x2f7d, Could not complete the operation due to error 00002f7d.
Here are all the details I have
IE version 10.0.9200.16618, update version 10.0.6. I've also reproduced this on IE version 10.0.9200.16635, update version 10.0.7.
The domain is using HTTPS. The problem doesn't occur on HTTP connections.
I've read that for some reason IE needs to get a certificate before it can do an HTTP POST, so I have HTTP GETs running before my POST request, but now the GET is erroring out. See network flow screen shot. The GET is super simple, just a PING page that returns "I'm up."
Async is turned off: $.ajax({type: 'POST', url: url, async: false, ...}); I've read in other posts that this matters.
The certificate is good, see screen shot.
The problem goes away if the site is added as a "trusted site" but that's not really the user experience we're shooting for.
This just started about a month ago. Did Microsoft push some new updates recently?
I've already read: http://social.msdn.microsoft.com/Forums/windowsapps/en-US/dd5d2762-7643-420e-880a-9bf75554e383/intermittent-xmlhttprequest-network-error-0x2f7d-could-not-complete-the-operation-due-to-error. It doesn't help.
Screenshots omitted: network flow; certificate is valid.
Any help is greatly appreciated. I've spent a lot of hours on this with no luck. As you would expect this works fine in Chrome and Firefox. If you need any more detail about what's happening please let me know.
Thanks!
Certificate revocation checks may block the initial JSON POST, but allow subsequent requests after the GET callback
We recently determined that URLMon's code (Win8, Win7, and probably earlier) to ignore certificate revocation check failures is not applied for content uploads (e.g. HTTP POST). Hence, if a Certificate Revocation check fails, that is fatal to the upload (e.g. IE will show a Page Cannot Be Displayed error message; other clients would show a different error). However, this rarely matters in the real world because in most cases, the user first performs a download (HTTP GET) from the target HTTPS site, and as a result the server's certificate is cached with the "ignore revocation check failures" exemption for the lifetime of the process and thus a subsequent POST inherits that flag and succeeds. The upload fails if the very first request to the HTTPS site in the current process was for an upload (e.g. as in a cross-origin POST request).
Here is how it works:
A little background: When a web browser initiates a HTTPS handshake with a web server, the server immediately sends down a digital certificate. The hostname of the server is listed inside the digital certificate, and the browser compares it to the hostname it was attempting to reach. If these hostnames do not match, the browser raises an error.
The matching-hostnames requirement causes a problem if a single-IP is configured to host multiple sites (sometimes known as “virtual-hosting”). Ordinarily, a virtual-hosting server examines the HTTP Host request header to determine what HTTP content to return. However, in the HTTPS case, the server must provide a digital certificate before it receives the HTTP headers from the browser. SNI resolves this problem by listing the target server’s hostname in the SNI extension field of the initial client handshake with the secure server. A virtual-hosting server may examine the SNI extension to determine which digital certificate to send back to the client.
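As an illustrative sketch (hostnames and file paths are hypothetical, not from the original answer), an SNI-capable Apache server picks between per-name certificates on a single IP like this:
# Two HTTPS sites sharing one IP address; the certificate is chosen
# by matching the SNI hostname against ServerName.
<VirtualHost *:443>
    ServerName site-one.example.com
    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/site-one.crt
    SSLCertificateKeyFile /etc/ssl/private/site-one.key
</VirtualHost>
<VirtualHost *:443>
    ServerName site-two.example.com
    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/site-two.crt
    SSLCertificateKeyFile /etc/ssl/private/site-two.key
</VirtualHost>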
The GET may be a victim of the operation aborted scenario:
The HTML file is being parsed, and encounters a script block. The script block contains inline script which creates a new element and attempts to add it to the BODY element before the closing BODY tag has been encountered by the parser.
<body>
<div>
<script>
// newElem must exist before it is appended, e.g.:
var newElem = document.createElement("div");
document.body.appendChild(newElem);
</script>
</div>
</body>
Note that if I removed the <div> element, then this problem would not occur because the script block's immediate parent would be BODY, and the script block's immediate parent is immune to this problem.
References
Understanding Certificate Revocation Checks
Client Certificates vs Server Certificates
Understanding and Managing the Certificate Stores
Preventing Operation Aborted Scenarios
HTTPS Improvements in IE
Online Certificate Status Protocol - OCSP
[SOLVED]
I only observed this error today. For me the error code was different, though.
SCRIPT7002: XMLHttpRequest: Network Error 0x2efd, Could not complete
the operation due to error 00002efd.
It was occurring randomly and not all the time, but what I noticed is that when it comes, it comes for subsequent AJAX calls. So I put a delay of 5 seconds between the AJAX calls and that resolved it.
Also, CORS must be configured on your web server.
I had the exact same issue and I just finally resolved it. For some reason I got the same error that you were receiving in IE when connecting to the API using OWIN middleware that was used to receive login credentials. It seemed to work fine when connecting to any other sort of API, though. For some reason it didn't like cross-domain requests, even though I had CORS enabled server-side on the API.
Anyway, I was able to resolve the issue using the xdomain library. Make sure you load this script before loading any other JavaScript.
First, create a proxy.html page at the root of your API server and add this code, replacing the placeholder URL:
<!DOCTYPE HTML>
<script src="//cdn.rawgit.com/jpillora/xdomain/0.7.3/dist/xdomain.min.js" master="http://insert_client_url_here.com"></script>
Now simply add this to your client, replacing the placeholder URL so that it points to the proxy.html page on your API server:
<script src="//cdn.rawgit.com/jpillora/xdomain/0.7.3/dist/xdomain.min.js" slave="http://Insert_Api_Url_Here.com/proxy.html"></script>
Adding a delay is not a proper solution.
This can happen because IE treats the request as a network error when it is made with an empty body.
Try adding an empty class as the parameter on the server side, and IE should start working.

Mod_rewrite - How to tell Google to dynamically delete pages from their index after 7 days

Search engines like to crawl and index webpages or URLs, but what if your webpages/URLs have expired content and you do not want them to be indexed after so many days?
Can you put an expiration in the URL and have mod_rewrite 301 redirect pages after a given expiration date?
Or maybe a cron job to add a 301 redirect header to all expired pages?
Just have the 'expired' pages return a 404? I am pretty sure that when Google encounters a 404, it will remove the page.
Not 404 or 301, but 410 Gone. This is the appropriate HTTP response:
The requested resource is no longer available at the server and no forwarding address is known. This condition is expected to be considered permanent. Clients with link editing capabilities SHOULD delete references to the Request-URI after user approval. If the server does not know, or has no facility to determine, whether or not the condition is permanent, the status code 404 (Not Found) SHOULD be used instead. This response is cacheable unless indicated otherwise.
The 410 response is primarily intended to assist the task of web maintenance by notifying the recipient that the resource is intentionally unavailable and that the server owners desire that remote links to that resource be removed. Such an event is common for limited-time, promotional services and for resources belonging to individuals no longer working at the server's site. It is not necessary to mark all permanently unavailable resources as "gone" or to keep the mark for any length of time -- that is left to the discretion of the server owner.
How you provide this response is open to discussion, however; there are many ways. One is sketched below.
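For instance, combining the cron idea from the question with mod_rewrite: a nightly job regenerates a rules file (e.g. included from the vhost or placed in .htaccess) with one entry per newly expired page, and the [G] flag makes Apache answer 410 Gone for each. The paths below are hypothetical.
RewriteEngine On
# Regenerated nightly by cron for pages older than 7 days.
RewriteRule ^articles/2013/01/old-post\.html$ - [G]
RewriteRule ^articles/2013/02/another-expired-post\.html$ - [G]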