We run an e-commerce site. Last week we renewed our SSL certificate and our web hosting provider inexplicably put the wrong web address on the new certificate.
When we visited the site, browsers showed security errors (and presumably did the same to any customers during that time).
Once the certificate was fixed, we were able to access the site again by either clearing the browser cache or using a different browser.
My question is: Will browsers automatically reset the cache after a period of time for our customers?
My concern is that unless customers manually clear their browser cache, they will continue to think our site is unsafe.
There is no such thing as an SSL cache for failed attempts. When the browser first connects to a site with HTTPS, it gets the certificate and validates it. If validation succeeds, the browser might cache the current TLS session for reconnects, but only if the server sends a session ID or session ticket for the TLS session. If validation (and thus the connection) fails, the browser caches nothing. And even if the browser later tries to resume a TLS session, it is up to the server whether that resumption is accepted at all; otherwise a full handshake is done again, which involves getting and validating the certificate.
While you don't describe it this way, I rather suspect that there was a wrong HTTP redirect, i.e. something like redirecting from http://example.com to https://wrong.example.org instead of to https://www.example.com. Given the problems you describe, this was likely a 301 "permanent" redirect, which means the browser may cache it indefinitely. See "How long do browsers cache HTTP 301s?" for more on this.
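One way to check for this kind of stale redirect is to fetch the page with redirects disabled and inspect the Location header yourself. The sketch below simulates a misconfigured 301 with a throwaway local server (the `wrong.example.org` target is a placeholder), since `urllib` follows redirects by default and has to be told not to:

```python
import http.server
import threading
import urllib.error
import urllib.request

# A tiny local server that simulates the misconfigured "permanent" redirect.
class WrongRedirect(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(301)
        self.send_header("Location", "https://wrong.example.org/")
        self.end_headers()

    def log_message(self, *args):  # keep the output quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), WrongRedirect)
threading.Thread(target=server.serve_forever, daemon=True).start()

# An opener that refuses to follow redirects, so we see the raw 301.
class NoRedirect(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None  # causes urlopen to raise HTTPError with the 301

status, location = None, None
opener = urllib.request.build_opener(NoRedirect)
try:
    opener.open(f"http://127.0.0.1:{server.server_port}/")
except urllib.error.HTTPError as err:
    status, location = err.code, err.headers["Location"]
server.shutdown()

print(status, location)  # 301 https://wrong.example.org/
```

Running the same no-redirect fetch against your real site (e.g. with `curl -sI http://example.com`) shows where the server is actually sending browsers, independent of anything cached locally.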
Related
I have an Nginx server with SSL already set up, but without HSTS.
However, a few of the backend services are not HTTPS.
Is there any potential risk in enabling HSTS?
I am worried that when the HSTS header is present it will break the internal route by forcing a redirect to HTTPS.
For example
current:
public user -> https://www.123.com --- internal service ---> http://internalA.123.com -> http://internalA.123.com
Will it become below?
public user -> "https://www.123.com" --- internal service ---> Https://internalA.123.com -> Https://internalA.123.com
If yes, then the service will definitely break with HSTS.
HSTS is solely about the connection between client (browser) and web server (nginx). It does not matter what the server then does with the received data, i.e. whether the data are processed locally or the server is just a reverse proxy to some other server. Similarly, HTTPS between client and server only protects the communication between these two; it neither protects any further connections from the reverse proxy nor secures local processing of data on the server side.
HSTS is an instruction from the server (nginx) to the client (browser) saying "hey, next time just assume I'm on HTTPS." So in the above scenario it will not be used for the backend connection, as Steffen says.
However there are definitely a few dangers to be aware of:
First of all, if you set this at the top-level domain (e.g. 123.com) and use includeSubDomains, then every domain under 123.com is suddenly potentially given HSTS protection. This will likely still not affect your backend connections. However, if you happen to visit https://123.com in your browser (maybe you just typed that into the URL bar) and pick up that header, and then try to visit http://intranet.123.com, http://dev.123.com, or even http://dev.123.com:8080, then all of them will redirect to HTTPS, and if they are not available over HTTPS they will fail and you just can't visit those sites anymore. This can be difficult to track down, as perhaps not many people visit the bare domain, so it "works fine for me".
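The matching behaviour described above can be sketched in Python. Note that an HSTS policy is keyed on the hostname only, which is why even http://dev.123.com:8080 gets caught (this is a rough sketch of RFC 6797 matching, not a full implementation):

```python
def hsts_covers(policy_host: str, include_subdomains: bool, request_host: str) -> bool:
    """Rough sketch of HSTS host matching: the port is ignored, and
    includeSubDomains extends the policy to every label below the host."""
    host = request_host.split(":")[0]  # strip any port, e.g. dev.123.com:8080
    if host == policy_host:
        return True
    return include_subdomains and host.endswith("." + policy_host)

# A policy picked up from https://123.com with includeSubDomains
# forces HTTPS on the internal hosts too:
print(hsts_covers("123.com", True, "intranet.123.com"))   # True
print(hsts_covers("123.com", True, "dev.123.com:8080"))   # True
print(hsts_covers("123.com", False, "dev.123.com"))       # False
```

So without includeSubDomains the policy stays pinned to the exact host that sent the header, which is what keeps the internal HTTP-only sites reachable.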
Of course if you're only using HTTPS on all your sites on that domain (including all your internal sites) then this is not an issue but if not...
As extra protection you can also submit your site to a preload list, which will then be hardcoded into browsers in their next release; some other clients also use this hardcoded list. This is extra protection, though it brings extra risk, since one of the requirements is that the top-level domain is included with includeSubDomains. I realise you haven't asked about this, but since you're asking about the risks I think it's well worth mentioning here. With HSTS preloading, all of the above risks suddenly come into play even without visiting https://123.com to pick up the header. And as it's months (or even years) between browser releases, this is basically irreversible. You could quite happily be running HSTS on your www domain, think it's all working fine, and decide to upgrade to the preload list as well because you've seen no issues; then, with the next browser release, all your HTTP-only internal sites suddenly stop working and you need to upgrade them all to HTTPS immediately or they can't be accessed.
In summary, take care with HSTS if you still have some sites on HTTP only at present. Consider returning it only on sites that need it (e.g. https://123.com without includeSubDomains and https://www.123.com with includeSubDomains), and be extra careful with preloading. Personally I think preload is overkill for most sites, but if you really want to do it, then best practice is to first load a resource from https://123.com on your home page with a small expiry and increase it slowly. That way everyone picks this up before you submit it to the preload list and there are no surprises.
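As a sketch of that gradual rollout, assuming nginx's add_header syntax (the values here are illustrative, not a recommendation for your specific site):

```nginx
# Step 1: start with a short max-age and watch for breakage
add_header Strict-Transport-Security "max-age=300" always;

# Step 2: once everything on the domain works over HTTPS, raise the expiry
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

# Step 3 (only if you really want preloading): add the preload token
# before submitting the domain at hstspreload.org
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
```

The short max-age in step 1 means any mistake expires in minutes rather than locking visitors out for a year.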
But HSTS is good and IMHO should be used on all public-facing websites, and I don't want this answer putting you off. Just understand how it works, and therefore the risks that come with it.
After upgrading to Safari 9 I'm getting this error in the browser:
[Warning] [blocked] The page at https://localhost:8443/login was not allowed to run insecure content from http://localhost:8080/assets/static/script.js.
Does anyone know how to enable the running of insecure content in the new Safari?
According to the Apple support forums Safari does not allow you to disable the block on mixed content.
Though this is frustrating for usability in legitimate cases like yours, it seems to be part of their effort to enforce best practices for serving content securely.
As a solution, you can either upgrade the HTTP connection to HTTPS (which it seems you have done) or proxy your content through an HTTPS-enabled service (or, in your case, port).
You can fix the HTTPS problem by using HTTPS locally with a self-signed SSL certificate. Heroku has a great how-to article about generating one.
After setting up SSL on all of your development servers, you will still get an error loading the resource in Safari, since an untrusted certificate is being used (self-signed SSL certificates are not trusted by browsers by default because they cannot be verified with a trusted authority). To fix this, load the problematic URL in a new tab in Safari and the browser will prompt you to allow access. If you click "Show Certificate" in the prompt, there is a checkbox in the certificate details view to "Always allow content from localhost". Checking this before allowing access stores the setting in Safari for the future. After allowing access, just reload the page that was originally exhibiting the problem and you should be good to go.
This is a valid use case as a developer but please make sure you fully understand the security implications and risks you are adding to your system by making this change!
If like me you have
frontend on port1
backend on port2
want to load script http://localhost:port1/app.js from http://localhost:port2/backendPage
I found an easy workaround: simply redirect, with an HTTP response, every http://localhost:port2/localFrontend/*path to http://localhost:port1/*path in your backend server configuration.
Then you can load your script from http://localhost:port2/localFrontend/app.js instead of the direct frontend URL. (Or you could configure a base URL for all your resources.)
This way, Safari will be able to load content from another domain/port without needing any https setup.
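A minimal sketch of that backend rule using Python's http.server (the frontend port and paths here are placeholders standing in for your port1/port2 setup):

```python
import http.server
import threading
import urllib.error
import urllib.request

FRONTEND_PORT = 3000  # hypothetical "port1" where the frontend lives

class Backend(http.server.BaseHTTPRequestHandler):
    """Backend ("port2") that forwards /localFrontend/* to the frontend."""
    def do_GET(self):
        prefix = "/localFrontend/"
        if self.path.startswith(prefix):
            self.send_response(302)
            self.send_header(
                "Location",
                f"http://localhost:{FRONTEND_PORT}/{self.path[len(prefix):]}")
            self.end_headers()
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the output quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Backend)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Ask the backend for the script without following the redirect,
# just to see where it would send the browser.
class NoFollow(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None

target = None
opener = urllib.request.build_opener(NoFollow)
try:
    opener.open(f"http://127.0.0.1:{server.server_port}/localFrontend/app.js")
except urllib.error.HTTPError as err:
    target = err.headers["Location"]
server.shutdown()

print(target)  # http://localhost:3000/app.js
```

From the browser's point of view the script request goes to the backend's own origin, which is why Safari no longer objects.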
For me, disabling website tracking (i.e. unchecking "Prevent cross-site tracking") worked.
This problem is driving me nuts. Our web app uses HTTP POST to log in users, and now IE 10 is aborting the connection and saying:
SCRIPT7002: XMLHttpRequest: Network Error 0x2f7d, Could not complete the operation due to error 00002f7d.
Here are all the details I have
IE version 10.0.9200.16618, update version 10.0.6. I've also reproduced this on IE version 10.0.9200.16635, update version 10.0.7.
The domain is using HTTPS. The problem doesn't occur on HTTP connections
I've read that for some reason IE needs to get a certificate before it can do an HTTP POST, so I have HTTP GETs running before my POST request, but now the GET is erroring out. See the network-flow screenshot. The GET is super simple, just a PING page that returns "I'm up."
Async is turned off: $.ajax({type: 'POST', url: url, async: false, ...}). I've read in other posts that this matters.
The certificate is good, see screen shot.
The problem goes away if the site is added as a "trusted site" but that's not really the user experience we're shooting for.
This just started about a month ago. Did Microsoft push some new updates recently?
I've already read: http://social.msdn.microsoft.com/Forums/windowsapps/en-US/dd5d2762-7643-420e-880a-9bf75554e383/intermittent-xmlhttprequest-network-error-0x2f7d-could-not-complete-the-operation-due-to-error. It doesn't help.
Screen shots:
Network flow:
Cert is good:
Any help is greatly appreciated. I've spent a lot of hours on this with no luck. As you would expect this works fine in Chrome and Firefox. If you need any more detail about what's happening please let me know.
Thanks,
Certificate revocation checks may block the initial JSON POST, but allow subsequent requests after the GET callback
We recently determined that URLMon's code (Win8, Win7, and probably earlier) to ignore certificate revocation check failures is not applied for content uploads (e.g. HTTP POST). Hence, if a Certificate Revocation check fails, that is fatal to the upload (e.g. IE will show a Page Cannot Be Displayed error message; other clients would show a different error). However, this rarely matters in the real world because in most cases, the user first performs a download (HTTP GET) from the target HTTPS site, and as a result the server's certificate is cached with the "ignore revocation check failures" exemption for the lifetime of the process and thus a subsequent POST inherits that flag and succeeds. The upload fails if the very first request to the HTTPS site in the current process was for an upload (e.g. as in a cross-origin POST request).
Here is how it works:
A little background: when a web browser initiates an HTTPS handshake with a web server, the server immediately sends down a digital certificate. The hostname of the server is listed inside the digital certificate, and the browser compares it to the hostname it was attempting to reach. If these hostnames do not match, the browser raises an error.
The matching-hostnames requirement causes a problem if a single-IP is configured to host multiple sites (sometimes known as “virtual-hosting”). Ordinarily, a virtual-hosting server examines the HTTP Host request header to determine what HTTP content to return. However, in the HTTPS case, the server must provide a digital certificate before it receives the HTTP headers from the browser. SNI resolves this problem by listing the target server’s hostname in the SNI extension field of the initial client handshake with the secure server. A virtual-hosting server may examine the SNI extension to determine which digital certificate to send back to the client.
The GET may be a victim of the operation-aborted scenario:
The HTML file is being parsed, and encounters a script block. The script block contains inline script which creates a new element and attempts to add it to the BODY element before the closing BODY tag has been encountered by the parser.
<body>
  <div>
    <script>document.body.appendChild(newElem)</script>
  </div>
</body>
Note that if I removed the <div> element, this problem would not occur, because the script block's immediate parent would then be BODY, and the script block's immediate parent is immune to this problem.
References
Understanding Certificate Revocation Checks
Client Certificates vs Server Certificates
Understanding and Managing the Certificate Stores
Preventing Operation Aborted Scenarios
HTTPS Improvements in IE
Online Certificate Status Protocol - OCSP
[SOLVED]
I only observed this error today; for me the error code was different though.
SCRIPT7002: XMLHttpRequest: Network Error 0x2efd, Could not complete
the operation due to error 00002efd.
It was occurring randomly, not all the time, but what I noticed is that when it comes, it comes on subsequent AJAX calls. So I put a delay of 5 seconds between the AJAX calls and that resolved it.
Also, CORS must be configured on your web server.
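For reference, a minimal sketch of the CORS headers a server needs to return, using Python's http.server (the allowed origin and the /api/login path are placeholders):

```python
import http.client
import http.server
import threading

class CORSHandler(http.server.BaseHTTPRequestHandler):
    """Answers the browser's OPTIONS preflight and adds CORS headers."""
    def _cors_headers(self):
        self.send_header("Access-Control-Allow-Origin", "https://app.example.com")
        self.send_header("Access-Control-Allow-Methods", "GET, POST, OPTIONS")
        self.send_header("Access-Control-Allow-Headers", "Content-Type")

    def do_OPTIONS(self):
        self.send_response(204)
        self._cors_headers()
        self.end_headers()

    def do_POST(self):
        body = b'{"ok": true}'
        self.send_response(200)
        self._cors_headers()
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the output quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), CORSHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Simulate the preflight the browser sends before a cross-origin POST.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("OPTIONS", "/api/login")
allowed_origin = conn.getresponse().getheader("Access-Control-Allow-Origin")
server.shutdown()

print(allowed_origin)  # https://app.example.com
```

If the preflight response lacks these headers, the browser treats the cross-origin POST as a network error, which matches the symptom described here.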
I had the exact same issue and I just finally resolved it. For some reason I got the same error you were receiving in IE when connecting to the API using OWIN middleware that was used to receive login credentials. It seemed to work fine when connecting to any other sort of API, though. For some reason it didn't like the cross-domain request, even though I had CORS enabled server-side on the API.
Anyway, I was able to resolve the issue using the xdomain library. Make sure you load this script before loading any other JavaScript.
First create a proxy.html page on the root of your API server and add this code. Replace placeholder URL.
<!DOCTYPE HTML>
<script src="//cdn.rawgit.com/jpillora/xdomain/0.7.3/dist/xdomain.min.js" master="http://insert_client_url_here.com"></script>
Now simply add this to your client replacing the placeholder URL pointing to the proxy.html page on your API server.
<script src="//cdn.rawgit.com/jpillora/xdomain/0.7.3/dist/xdomain.min.js" slave="http://Insert_Api_Url_Here.com/proxy.html"></script>
Adding a delay is not a proper solution.
This can happen because IE treats a request with an empty body as a network error.
Try adding an empty class as the parameter on the server, and IE should start working.
I've just installed SSL on my server (shared reseller package).
It's my first time using SSL, and I couldn't get it working properly in Chrome or Firefox. Obviously this was because many of the CSS/JS/etc. links were not "https" but "http".
I've modified the necessary files, and now Google Chrome displays a nice green padlock and confirms my pages (checkout/login/controls/accountools).php are all secure.
However, if I open up the page in Firefox, instantly I get:
This Connection is Untrusted
You have asked Firefox to connect securely to www.domain.co.uk, but we can't confirm that your connection is secure.
Normally, when you try to connect securely, sites will present trusted identification to prove that you are going to the right place. However, this site's identity can't be verified.
What Should I Do?
If you usually connect to this site without problems, this error could mean that someone is trying to impersonate the site, and you shouldn't continue.
Contrast this to Google:
Your connection to domain.com is encrypted with 256-bit encryption.
The connection uses TLS 1.0
The connection is encrypted using AES_256_CBC, with SHA1 for message authentication and DHE_RSA as the key exchange mechanism.
What's more, the other browsers (MSIE, Safari, Opera) didn't bat an eyelid, even when the pages were technically "unsecure" due to the CSS/JS/images, etc.
I know I can simply add my site to the trusted list in Firefox, but this doesn't look good for me when someone comes along and sees "UNTRUSTED WEB SITE" before they even get to the checkout/login pages, etc.
How can I fix this?
Complain to the ISP: the SSL certificate isn't properly installed. This is typically a missing intermediate certificate in the server's chain; some browsers fetch or cache the intermediate themselves, which is why Chrome shows the padlock while Firefox does not.
I have a test plan that runs fine under http, and the Cookie Manager is correctly keeping my sessions in place. It is also capable of talking to the same server when switched to ssl, and even thinks everything is working correctly because it gets a 200 response with our custom message about not being logged in.
All I need to do to reproduce the behavior is switch from HTTP to HTTPS. The test is still able to talk to the server, but I can see in the "View Results in Table" log that the cookies column shows a JSESSIONID under HTTP and is empty under HTTPS. And each request under SSL is answered with a Set-Cookie for JSESSIONID.
Interesting scenario. Does the JMeter log file offer any clues?
Could it be that JMeter needs a copy of the certificate to properly store the SSL cookie? The console would display a handshake problem, which can be resolved by adding the certificate to the keystore:
http://www.java-samples.com/showtutorial.php?tutorialid=210
You might be able to do some further debug by writing out the cookie value to a variable and logging its value:
Received cookies can be stored as JMeter thread variables (versions of JMeter after 2.3.2 no longer do this by default). To save cookies as variables, define the property "CookieManager.save.cookies=true". Also, cookie names are prefixed with "COOKIE_" before they are stored (this avoids accidental corruption of local variables). To revert to the original behaviour, define the property "CookieManager.name.prefix= " (one or more spaces). If enabled, the value of a cookie with the name TEST can be referred to as ${COOKIE_TEST}.
Source: http://jmeter.apache.org/usermanual/component_reference.html#HTTP_Cookie_Manager
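A sketch of the relevant lines in JMeter's user.properties (read from the bin directory; restart JMeter after editing):

```properties
# Store received cookies as JMeter thread variables, prefixed with COOKIE_,
# e.g. a JSESSIONID cookie becomes ${COOKIE_JSESSIONID}
CookieManager.save.cookies=true

# Optional: revert to unprefixed names (the value is one or more spaces)
# CookieManager.name.prefix= 
```

With this in place you can drop the variable into a Debug Sampler or log it from a listener to compare what the Cookie Manager actually holds under HTTP versus HTTPS.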
Edit: Somebody asked how my specific problem was solved. It turned out not to have anything to do with SSL specifically; other, unrelated headers changed very slightly in their format, so the regex we were using to match them started failing. So I'd start by looking at your headers and comparing the difference between an HTTP POST and an HTTPS POST.