Internet Explorer: SCRIPT7002: XMLHttpRequest: Network Error 0x2f7d, Could not complete the operation due to error 00002f7d

This problem is driving me nuts. Our web app uses HTTP POST to login users and now IE 10 is aborting the connection and saying:
SCRIPT7002: XMLHttpRequest: Network Error 0x2f7d, Could not complete the operation due to error 00002f7d.
Here are all the details I have:
IE version 10.0.9200.16618, update version 10.0.6. I've also reproduced this on IE version 10.0.9200.16635, update version 10.0.7.
The domain is using HTTPS. The problem doesn't occur on HTTP connections.
I've read that for some reason IE needs to get a certificate before it can do an HTTP POST, so I have HTTP GETs running before my POST request, but now the GET is erroring out. See the network flow screenshot. The GET is super simple, just a PING page that returns "I'm up."
Async is turned off: $.ajax({type: 'POST', url: url, async: false, ...}); I've read in other posts that this matters.
The certificate is good, see screenshot.
The problem goes away if the site is added as a "trusted site", but that's not really the user experience we're shooting for.
This just started about a month ago. Did Microsoft push some new updates recently?
I've already read: http://social.msdn.microsoft.com/Forums/windowsapps/en-US/dd5d2762-7643-420e-880a-9bf75554e383/intermittent-xmlhttprequest-network-error-0x2f7d-could-not-complete-the-operation-due-to-error. It doesn't help.
Screenshots (images not included): network flow; certificate details showing the cert is good.
Any help is greatly appreciated. I've spent a lot of hours on this with no luck. As you would expect, this works fine in Chrome and Firefox. If you need any more detail about what's happening, please let me know.
Thanks,

Certificate revocation checks may block the initial JSON POST, but allow subsequent requests after the GET callback
We recently determined that URLMon's code (Win8, Win7, and probably earlier) to ignore certificate revocation check failures is not applied for content uploads (e.g. HTTP POST). Hence, if a Certificate Revocation check fails, that is fatal to the upload (e.g. IE will show a Page Cannot Be Displayed error message; other clients would show a different error). However, this rarely matters in the real world because in most cases, the user first performs a download (HTTP GET) from the target HTTPS site, and as a result the server's certificate is cached with the "ignore revocation check failures" exemption for the lifetime of the process and thus a subsequent POST inherits that flag and succeeds. The upload fails if the very first request to the HTTPS site in the current process was for an upload (e.g. as in a cross-origin POST request).
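In practice, this suggests the warm-up pattern the question already describes: perform a harmless GET against the HTTPS origin first, then POST. A minimal sketch of that pattern, assuming hypothetical /ping and /login endpoints (jQuery, to match the question's setup):

$.get('https://secure.example.com/ping')
  .always(function () {
    // By now the server's certificate is cached for this process with the
    // "ignore revocation check failures" exemption, so the POST inherits it.
    $.ajax({
      type: 'POST',
      url: 'https://secure.example.com/login',
      async: false, // as in the question
      data: { username: 'user', password: 'pass' }
    });
  });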
Here is how it works:
A little background: when a web browser initiates an HTTPS handshake with a web server, the server immediately sends down a digital certificate. The hostname of the server is listed inside the digital certificate, and the browser compares it to the hostname it was attempting to reach. If these hostnames do not match, the browser raises an error.
The matching-hostnames requirement causes a problem if a single IP address is configured to host multiple sites (sometimes known as "virtual hosting"). Ordinarily, a virtual-hosting server examines the HTTP Host request header to determine what HTTP content to return. However, in the HTTPS case, the server must provide a digital certificate before it receives the HTTP headers from the browser. SNI resolves this problem by listing the target server's hostname in the SNI extension field of the initial client handshake with the secure server. A virtual-hosting server may examine the SNI extension to determine which digital certificate to send back to the client.
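A hypothetical Node.js sketch of that mechanism: the client names the host it wants during the handshake, so a virtual-hosting server can pick the right certificate before any HTTP headers are sent (the IP address and hostname are placeholders):

const tls = require('tls');

const socket = tls.connect({
  host: '203.0.113.10',          // one IP address hosting several HTTPS sites
  port: 443,
  servername: 'www.example.com'  // SNI extension: which certificate we want
}, function () {
  console.log('Negotiated certificate for:',
    socket.getPeerCertificate().subject);
  socket.end();
});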
The GET may be a victim of the "Operation Aborted" scenario:
The HTML file is being parsed, and encounters a script block. The script block contains inline script which creates a new element and attempts to add it to the BODY element before the closing BODY tag has been encountered by the parser.
<body>
  <div>
    <script>
      // Create an element and append it to BODY while the parser has not
      // yet reached the closing BODY tag - this triggers Operation Aborted.
      var newElem = document.createElement('p');
      document.body.appendChild(newElem);
    </script>
  </div>
</body>
Note that if I removed the <div> element, this problem would not occur, because the script block's immediate parent would be BODY, and a script block whose immediate parent is BODY is immune to this problem.
References
Understanding Certificate Revocation Checks
Client Certificates vs Server Certificates
Understanding and Managing the Certificate Stores
Preventing Operation Aborted Scenarios
HTTPS Improvements in IE
Online Certificate Status Protocol - OCSP

[SOLVED]
I only observed this error today; for me the error code was different though:
SCRIPT7002: XMLHttpRequest: Network Error 0x2efd, Could not complete the operation due to error 00002efd.
It was occurring randomly, not every time, but what I noticed is that when it appears, it appears on subsequent AJAX calls. So I put a delay of five seconds between the AJAX calls and that resolved it.
Also, CORS must be configured on your web server.
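A minimal sketch of that delay workaround, with placeholder endpoints (note the next answer argues this is not a proper fix):

$.get('https://example.com/api/first', function () {
  // Wait five seconds before the next call to avoid the intermittent
  // 0x2efd error seen when the calls fire back-to-back.
  setTimeout(function () {
    $.get('https://example.com/api/second');
  }, 5000);
});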

I had the same exact issue and I just finally resolved it. For some reason I got the same error you were receiving on IE when connecting to an API that used OWIN middleware to receive login credentials. It seemed to work fine when connecting to any other sort of API, though. For some reason it didn't like cross-domain requests, even though I had CORS enabled server-side on the API.
Anyway, I was able to resolve the issue using the xdomain library. Make sure you load this script before loading any other JavaScript.
First, create a proxy.html page at the root of your API server and add this code, replacing the placeholder URL:
<!DOCTYPE HTML>
<script src="//cdn.rawgit.com/jpillora/xdomain/0.7.3/dist/xdomain.min.js" master="http://insert_client_url_here.com"></script>
Now add this to your client, replacing the placeholder URL so it points to the proxy.html page on your API server:
<script src="//cdn.rawgit.com/jpillora/xdomain/0.7.3/dist/xdomain.min.js" slave="http://Insert_Api_Url_Here.com/proxy.html"></script>

Adding a delay is not a proper solution.
This can happen because IE treats the request as a network error when it is made with an empty body.
Try adding an empty class as the parameter on the server side, and IE should start working.
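A client-side variant of the same idea, sketched with a placeholder URL: make sure the POST body is never empty.

$.ajax({
  type: 'POST',
  url: 'https://example.com/api/login',
  contentType: 'application/json',
  data: JSON.stringify({}) // sends "{}" rather than an empty body
});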

Related

Windows Authentication issue with .Net Reverse Proxy using IIS custom HTTP module

We use a custom HTTP module in IIS as a reverse proxy for web applications. Generally this works well and has done for some time, but we've come across an issue with Windows Authentication (WA). We're using IE 11, IIS 10 and Server 2016.
When accessing the target site directly, WA works fine - we get a browser login dialog when the initial HTML page is requested and the subsequent requests (CSS, JS, etc) go through fine.
When accessing via our proxy, the same (correct) behaviour happens for the initial HTML page, and the first CSS/JS request authenticates OK too, but the subsequent ones cause a browser login dialog to pop up.
What seems to happen on the 'bad' requests (i.e. those that cause the login dialog) is:
1) Browser decides it needs to authenticate, so sends an Authorization header (Negotiate, with an NTLM token)
2) Server responds (401) with a WWW-Authenticate: Negotiate response with a full NTLM token
3) Browser re-requests with an Authorization header (Negotiate, with a full NTLM token)
4) Server responds (401) with a WWW-Authenticate: Negotiate (with no token), which causes the browser to show the login dialog
5) With login credentials entered, Browser sends the same request as in (1) - identical NTLM token, server responds as in (2), Browser re-requests as in (3), but this time it works!
We've set up a test web site with one html page, requesting 3 JS and 2 CSS files to replicate this. On our test server we've got two sites, one using our reverse proxy and one using ARR. The ARR site works fine. Also, since step (5) above works, we believe that the proxy pass-through is fundamentally working, i.e. NTLM tokens are not being messed up by dodgy encoding, etc.
One thing that does work: if we use Fiddler and put breakpoints on each request, we're able to hold back the 5 sub-requests (JS & CSS files), letting one go through at a time. If we let each sequence complete (i.e. the NTLM token exchange for each URL/file, through to the 200 response), then it works. This made us think that there is some interleaving effect (e.g. shared memory corruption) in our proxy; this is still a possibility.
So, we put code at the start of BeginRequest and the end of EndRequest, with a SyncLock and a shared variable storing the path (AppRelativeCurrentExecutionFilePath), to 'single-thread' each of these request/exchanges. This does what we expected, i.e. it only allows one auth exchange to happen and reach a 200 before allowing the next. However, we still have the same problem of the server rejecting the first exchange. So, does this indicate something happening in/before BeginRequest, where if we hold the requests back in Fiddler they work, but not if we do it in our HTTP module?
Or is there some sort of timing issue where the manual breakpoints in Fiddler also mean we're doing it at 'human' speed and therefore allowing things to work better?
One difference we can see is the 'Connection: Keep-Alive' header. It is in the request from the browser to our proxy site, but not passed from our proxy to the base site, yet the ARR site does pass it through. It's all HTTP 1.1, and we can't find a way to set Keep-Alive on our outgoing request - could this be it?
Regarding 'things to try': we think we've eliminated things like needing the site in IE's Intranet Zone, since the ARR site works OK with the same IE settings. Clearly, something is not right, so we could have missed something here!
In short, we've been working on this for days, and have tried most of what we can find on SO and elsewhere, but can't figure out what the heck is going on.
Any suggestions - let me know if you want any further info. All help will be very gratefully received!

How long before a browser will automatically reset the SSL cache?

We run an e-commerce site. Last week we renewed our SSL certificate and our web hosting provider inexplicably put the wrong web address on the new certificate.
So when we visited the site, browsers were giving us security errors (as they presumably were for any customers during that time).
Once the SSL was fixed, we were able to access the site by either clearing the browser cache or using a different browser.
My question is: Will browsers automatically reset the cache after a period of time for our customers?
My concern is, unless customers manually clear their browser cache they will continue to think our site is unsafe.
There is no such thing as an SSL cache for failed attempts. If the browser connects to a site with HTTPS for the first time, it will get the certificate and validate it. If the validation was successful, the browser might cache the current TLS session for reconnects - but only if the server sends a session ID or session ticket for the TLS session. If the validation, and thus the connection, fails, the browser caches nothing. And even if the browser tries to resume a TLS session later, it depends on the server whether this resumption is accepted at all - otherwise a full handshake is done again, which involves getting and validating the certificate.
While you don't describe it this way, I rather suspect that there was a wrong HTTP redirect, i.e. something like redirecting from http://example.com to https://wrong.example.org instead of to https://www.example.com. Given the problems you describe, this was likely a 301 "permanent" redirect, which means the browser can cache the redirect forever. See "How long do browsers cache HTTP 301s?" for more on this.
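If you want to confirm what the server sends now, a small sketch using fetch with redirects disabled (Node 18+; the URL is a placeholder for your site):

fetch('http://example.com/', { redirect: 'manual' })
  .then(function (res) {
    console.log(res.status);                  // 301 means browsers may cache it
    console.log(res.headers.get('location')); // where the redirect points now
  });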

Can't fetch API Resource

I have a react-native application which I'm hooking up to an existing API which already has two clients (webapp, chrome extension). For some reason I just get the generic [TypeError: Network request failed] with nothing more.
The requests that are failing are just basic get requests such as
fetch('https://api.mydomain.com/pages/') or axios.get('https://api.mydomain.com/pages/') and they work fine from the webapp and chrome extension that are using this API as well - the requests only fail from the react-native application.
Everything I'm finding on google for this is in regards to localhost or SSL Certificate problems, although these are not my issues because I'm trying to pull from a deployed server and also that server has SSL correctly enabled and I'm using the https endpoint.
Some more notes:
when I go against localhost (using the IP address, not the localhost address) I get this same error.
I'm getting this error on Android; I haven't touched, nor am I concerned yet with, iOS.
I get this same error with fetch and axios
The request goes through OK with this endpoint https://jsonplaceholder.typicode.com/posts/1 and I get back a response
This last note is the most interesting, because I believe it means there is some issue with my server; however, my server is not receiving any request. I have opened up CORS for testing purposes but have the same issue, although if that were the issue the server would have received the request and responded with 403.
This has to do with Android not trusting my SSL certificate. Apparently Android has some additional trust requirements on top of what web browsers require.
I found this through error.request._response via the axios catch block, which showed me the error java.security.cert.CertPathValidatorException: Trust anchor for certification path not found.
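A hedged sketch of that debugging step (the URL is from the question; everything else is illustrative):

const axios = require('axios');

axios.get('https://api.mydomain.com/pages/')
  .then(function (res) { console.log(res.data); })
  .catch(function (err) {
    // On React Native Android this surfaced the underlying TLS failure:
    // java.security.cert.CertPathValidatorException:
    //   Trust anchor for certification path not found
    console.log(err.request && err.request._response);
  });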
After figuring that out, the root cause ended up being that the SSL certificate I uploaded to AWS didn't include the correct intermediate certificate, which was fine in Chrome but not in Android.

Safari 9 disallowed running of insecure content?

After upgrading to Safari 9 I'm getting this error in the browser:
[Warning] [blocked] The page at https://localhost:8443/login was not allowed to run insecure content from http://localhost:8080/assets/static/script.js.
Anyone knows how to enable the running of insecure content on the new Safari?
According to the Apple support forums Safari does not allow you to disable the block on mixed content.
Though this is frustrating for usability in legitimate cases like yours, it seems to be part of their effort to force secure content serving / content serving best practices.
As a solution, you can either upgrade the HTTP connection to HTTPS (which it seems you have done) or proxy your content through an HTTPS-enabled service (or, in your case, port).
You can fix the HTTPS problem by using HTTPS locally with a self-signed SSL certificate. Heroku has a great how-to article about generating one.
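As a minimal sketch, assuming you have generated key.pem and cert.pem as in that how-to, a local Node.js server can then serve your assets over HTTPS so Safari no longer sees mixed content (paths and port are placeholders):

const https = require('https');
const fs = require('fs');

https.createServer({
  key: fs.readFileSync('key.pem'),
  cert: fs.readFileSync('cert.pem')
}, function (req, res) {
  res.end('served over https');
}).listen(8443);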
After setting up SSL on all of your development servers, you will still get an error loading the resource in Safari, since an untrusted certificate is being used (self-signed SSL certificates are not trusted by browsers by default because they cannot be verified with a trusted authority). To fix this, load the problematic URL in a new tab in Safari and the browser will prompt you to allow access. If you click "Show Certificate" in the prompt, there will be a checkbox in the certificate details view to "Always allow content from localhost". Checking this before allowing access will store the setting in Safari for the future. After allowing access, just reload the page originally exhibiting the problem and you should be good to go.
This is a valid use case as a developer but please make sure you fully understand the security implications and risks you are adding to your system by making this change!
If like me you have:
frontend on port1
backend on port2
and want to load the script http://localhost:port1/app.js from http://localhost:port2/backendPage,
I have found an easy workaround: simply redirect, with an HTTP response, every http://localhost:port2/localFrontend/*path request to http://localhost:port1/*path in your backend server configuration (a sketch follows below).
Then you can load your script from http://localhost:port2/localFrontend/app.js instead of the direct frontend URL (or you can configure a base URL for all your resources).
This way, Safari will be able to load content from another domain/port without needing any HTTPS setup.
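A hypothetical Express sketch of that redirect trick (ports and paths are placeholders):

const express = require('express');
const app = express();

const FRONTEND_PORT = 3000; // "port1" above
const BACKEND_PORT = 8080;  // "port2" above

// Forward /localFrontend/* to the frontend so pages served by the backend
// can reference same-origin URLs.
app.get('/localFrontend/*', function (req, res) {
  res.redirect('http://localhost:' + FRONTEND_PORT + '/' + req.params[0]);
});

app.listen(BACKEND_PORT);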
For me, disabling website tracking, i.e. unchecking "Prevent cross-site tracking" in Safari's preferences, worked.

JMeter stops sending JSESSIONID cookie when switched to SSL

I have a test plan that runs fine under HTTP, and the Cookie Manager correctly keeps my sessions in place. It is also capable of talking to the same server when switched to SSL, and even thinks everything is working correctly, because it gets a 200 response with our custom message about not being logged in.
All I need to do to reproduce the behaviour is switch from HTTP to HTTPS. The test is still able to talk to the server, but I can see in the "View Results in Table" log that the cookies column has a JSESSIONID under HTTP and is empty under HTTPS. And each request under SSL is answered with a Set-Cookie for JSESSIONID.
Interesting scenario. Does the JMeter log file offer any clues?
Could it be that JMeter needs a copy of the certificate to properly store the SSL cookie? In that case the console would display a handshake problem, which can be resolved by adding the certificate to the key store:
http://www.java-samples.com/showtutorial.php?tutorialid=210
You might be able to do some further debugging by writing the cookie value out to a variable and logging its value:
Received cookies can be stored as JMeter thread variables (versions of JMeter after 2.3.2 no longer do this by default). To save cookies as variables, define the property "CookieManager.save.cookies=true". Also, cookie names are prefixed with "COOKIE_" before they are stored (this avoids accidental corruption of local variables). To revert to the original behaviour, define the property "CookieManager.name.prefix= " (one or more spaces). If enabled, the value of a cookie with the name TEST can be referred to as ${COOKIE_TEST}.
Source: http://jmeter.apache.org/usermanual/component_reference.html#HTTP_Cookie_Manager
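For example, a minimal user.properties sketch based on the documentation quoted above (JSESSIONID is the cookie from this question):

# user.properties
CookieManager.save.cookies=true
# With the default COOKIE_ prefix, the session cookie is then available
# as ${COOKIE_JSESSIONID}, e.g. in a Debug Sampler.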
Edit: somebody asked how my specific problem was solved. It turned out not to have anything to do with SSL specifically; other, unrelated headers changed very slightly in their format, so the regex we were using to match them started failing. So I'd start there: look at your headers and compare the differences between your HTTP and HTTPS posts.