I am developing a Rails application that uses an SSL connection. I am currently using third-party resources (JS and CSS files) to implement a map (OpenStreetMap). I have already tried to import these resources into my application, but the JavaScript code tries to access an external WMS via HTTP.
The problem is that Google Chrome blocks access to third-party HTTP resources when the application itself is served over HTTPS.
So I disabled SSL on certain pages of the application and tried to force HTTP or HTTPS as desired.
I followed this blog post, and it works: http://www.simonecarletti.com/blog/2011/05/configuring-rails-3-https-ssl/
But when I force HTTP on the page where these resources are used, Google Chrome forces an HTTPS connection, causing an infinite redirect loop.
If I clear Chrome's cache (having already accessed the same page over HTTPS) in order to access it via HTTP, it works. But once I have accessed an HTTPS page and then try to access it via HTTP, Chrome forces the HTTPS connection, resulting in an infinite loop.
The question is: Is there something I can set in the request that causes Chrome to accept the connection?
I've been doing some research on this, and it turns out that setting force_ssl = true in Rails 3 causes the app to send an HSTS header. There's a bit of information about it here: How to disable HTTP Strict Transport Security?
Essentially, the HSTS header tells Chrome (and Firefox) to access your site only through HTTPS for a specific amount of time.
So... the answer I have for you now is that you can clear your own HSTS setting by going to chrome://net-internals/#hsts within your Chrome browser and removing the HSTS state for your domain.
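For reference, this is roughly what the middleware behind force_ssl sends on HTTPS responses and what Chrome caches (the exact max-age depends on your configuration; one year is a common default). Sending the same header with max-age=0 over HTTPS tells browsers to expire the cached entry, which also fixes this for visitors who cannot clear it by hand:

# Sent while force_ssl is enabled (cached by the browser):
Strict-Transport-Security: max-age=31536000

# Send this over HTTPS to make browsers forget the cached policy:
Strict-Transport-Security: max-age=0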
I think the answers here can help you: Rails: activating SSL support gets Chrome confused
Related
I have a website (userbob.com) that normally serves all pages as HTTPS. However, I am trying to have one subdirectory (userbob.com/tools/) always serve content as HTTP. Currently, it seems like Chrome's HSTS feature (which I don't fully understand) is forcing my site's pages to load over HTTPS.

I can go to chrome://net-internals/#hsts and delete my domain from Chrome's HSTS set, and the next request will work as I want without redirecting to an HTTPS version. However, if I try to load the page a second time, it ends up redirecting again. The only way I can get it to work is to go to chrome://net-internals/#hsts and delete my domain from Chrome's HSTS set after each request.

How do I let browsers know that I want all my pages from userbob.com/tools/ to load as HTTP? My site uses an Apache/Tomcat web server.
(Just FYI, the reason I want the pages in the tools directory served over HTTP instead of HTTPS is that some of them are meant to iframe HTTP pages. If I try to iframe an HTTP page from an HTTPS page, I end up getting mixed-content errors.)
HTTP Strict Transport Security (or HSTS) is a setting your site can send to browsers which says "I only want to use HTTPS on my site - if someone tries to go to an HTTP link, automatically upgrade them to HTTPS before you send the request". It basically won't allow you to send any HTTP traffic, either accidentally or intentionally.
This is a security feature. HTTP traffic can be intercepted, read, altered and redirected to other domains. HTTPS-only websites should redirect HTTP traffic to HTTPS, but there are various security issues/attacks if any requests are still initially sent over HTTP, so HSTS prevents this.
The way HSTS works is that your website sends an HTTP header, Strict-Transport-Security, with a value of, for example, max-age=31536000; includeSubDomains on your HTTPS responses. The browser caches this and activates HSTS for 31536000 seconds (1 year), in this example. You can see this HTTP header in your browser's web developer tools or by using a site like https://securityheaders.io. On the chrome://net-internals/#hsts page you are able to clear that cache and allow HTTP traffic again. However, as soon as you visit the site over HTTPS it will send the header again, and the browser will revert back to HTTPS-only.
So to permanently remove this setting you need to stop sending that Strict-Transport-Security header. Find where it is set in your Apache/Tomcat configuration and turn it off. Or better yet, change it to max-age=0; includeSubDomains for a while first (this tells the browser to expire the cached policy immediately, and so turns HSTS off without a visit to chrome://net-internals/#hsts, as long as the site is visited over HTTPS to pick up the updated header), and then remove the header completely later.
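If the header is being set by Apache's mod_headers, the change might look like this (a sketch; the exact directive and value depend on how your server currently sets it):

# Current directive that enables HSTS (httpd.conf or a VirtualHost):
Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"

# Temporary replacement that makes returning browsers forget the policy:
Header always set Strict-Transport-Security "max-age=0; includeSubDomains"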
Once you turn off HSTS you can revert back to having some pages on HTTPS and some on HTTP with standard redirects.
However, it would be remiss of me not to warn you against going back to HTTP. HTTPS is the new standard, and there is a general push to encourage all sites to move to HTTPS and penalise those that do not. Read this post for more information:
https://www.troyhunt.com/life-is-about-to-get-harder-for-websites-without-https/
While you are correct that you cannot frame HTTP content on an HTTPS page, you should consider whether there is another way to address this problem. A single HTTP page on your site can cause security problems like leaking cookies (if they are not set up correctly). Plus frames are horrible and shouldn't be used anymore :-)
You can use rewrite rules to redirect HTTPS requests to HTTP inside a subdirectory. Create an .htaccess file inside the tools directory and add the following content:
RewriteEngine On
# Only act on requests that arrived over HTTPS
RewriteCond %{HTTPS} on
# Permanently redirect to the same host and path over plain HTTP
RewriteRule (.*) http://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
Make sure that Apache's mod_rewrite module is enabled, and that AllowOverride permits .htaccess files in that directory.
Basically, a 301 response to an HTTPS request that redirects to an HTTP target should never be honored at all by any browser; servers doing that are clearly violating basic security, or are severely compromised.
However, a 301 reply to an HTTPS request can still redirect to another HTTPS target (including on another domain, provided that other CORS requirements are met).
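As an illustration of the legitimate case, a permanent redirect that stays on HTTPS is perfectly valid; in Apache it could look like this (a sketch; newhost.example.org stands in for wherever the service moved):

# HTTPS-to-HTTPS redirect: allowed, since the connection stays secure
Redirect permanent /old-service https://newhost.example.org/new-service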
If you navigate to an HTTPS link (or trigger one from a JavaScript event handler) and the HTTPS target replies with a 301 redirect to HTTP, the browser should behave as if it had received a 500 server error or a connection failure (DNS name not resolved, server not responding, timeout).
Such server-side redirects are clearly invalid, and website admins should never do that! If they want to close a service and inform HTTPS users that the service is hosted elsewhere and is no longer secure, they MUST return a valid HTTPS response page with NO redirect at all, and this should really be a 4xx error page (most probably 404 PAGE NOT FOUND). They should not redirect to another HTTPS service (e.g. a third-party hosted search engine or parking page) which does not respect CORS requirements, or which sends false media types (though it is acceptable not to honor the requested language and to display that page in another language).
Browsers that implement HSTS are perfectly correct and going in the right direction. But I really think that the CORS specifications are a mess, tweaked just to still allow advertising networks to host and control themselves the ads they broadcast to other websites.
I strongly think that serious websites that still want to display ads (or any tracker for audience measurement) for valid reasons can host these ads/trackers themselves, on their own domain and over the same protocol: servers can still fetch the ad content they want to broadcast by downloading/refreshing it themselves and maintaining their own local cache. They can track their audience themselves, collecting the data they need and filtering it on their own server if they want it analysed by a third party: websites will have to seriously implement the privacy requirements themselves.
I now hate those too many websites that, when visited, are tracked by dozens of third parties, including very intrusive ones like Facebook and most advertising networks, plus many weak third-party services with very poor quality/security that send very bad content they never control (including fake ads, fake news, promotion of illegal activities and businesses, invalid age ratings...).
Let's return to the origin of the web: one site, one domain, one third party. This does not mean sites cannot link to other third-party sites, but this must be done only with an explicit user action (tapping or clicking), and visitors MUST be able to know where the link will take them, or which content will be displayed.
This is even possible for embedded videos (e.g. YouTube) in news articles: the news website can itself host a cache of static images for the frame and icons for the "play" button; when users click that icon, it activates the third-party video, and in that case the third party will interact directly with that user and can collect other data. But the unactivated content will be tracked only by the origin website, under its own published policy.
In my local development environment I use an Apache server. What worked for me was:
Open your config file in sites-available/yoursite.conf, then add the following line inside your VirtualHost (this requires mod_headers to be enabled, e.g. via a2enmod headers on Debian-based systems):

Header always set Strict-Transport-Security "max-age=0"

Then restart your server.
After upgrading to Safari 9 I'm getting this error in the browser:
[Warning] [blocked] The page at https://localhost:8443/login was not allowed to run insecure content from http://localhost:8080/assets/static/script.js.
Does anyone know how to enable the running of insecure content in the new Safari?
According to the Apple support forums, Safari does not allow you to disable the block on mixed content.
Though this is frustrating for usability in legitimate cases like yours, it seems to be part of their effort to enforce best practices for serving content securely.
As a solution, you can either upgrade the HTTP connection to HTTPS (which it seems you have done) or proxy your content through an HTTPS connection with an HTTPS-enabled service (or, in your case, port).
You can fix the HTTPS problem by using HTTPS locally with a self-signed SSL certificate. Heroku has a great how-to article about generating one.
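Once you have a certificate and key, serving the dev site over HTTPS is mostly a matter of pointing the server at them. For Apache the VirtualHost would look roughly like this (a sketch; the file paths are hypothetical, and your dev server may not be Apache at all):

# Requires a matching "Listen 8443" elsewhere in the config,
# plus mod_ssl enabled
<VirtualHost *:8443>
    ServerName localhost
    SSLEngine on
    SSLCertificateFile /etc/apache2/ssl/server.crt
    SSLCertificateKeyFile /etc/apache2/ssl/server.key
</VirtualHost>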
After setting up SSL on all of your development servers, you will still get an error loading the resource in Safari, since an untrusted certificate is being used (self-signed SSL certificates are not trusted by browsers by default because they cannot be verified with a trusted authority). To fix this, you can load the problematic URL in a new tab in Safari, and the browser will prompt you to allow access. If you click "Show Certificate" in the prompt, there is a checkbox in the certificate details view to "Always allow content from localhost". Checking this before allowing access will store the setting in Safari for the future. After allowing access, just reload the page originally exhibiting the problem and you should be good to go.
This is a valid use case as a developer but please make sure you fully understand the security implications and risks you are adding to your system by making this change!
If, like me, you have
frontend on port1
backend on port2
and want to load the script http://localhost:port1/app.js from http://localhost:port2/backendPage
I have found an easy workaround: simply redirect, with an HTTP response, every http://localhost:port2/localFrontend/*path request to http://localhost:port1/*path in your backend server configuration.
Then you can load your script from http://localhost:port2/localFrontend/app.js instead of the direct frontend URL (or you could configure a base URL for all your resources).
This way, Safari will be able to load content from another domain/port without needing any HTTPS setup.
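If the backend happens to be fronted by Apache, that redirect could be expressed with mod_rewrite roughly as follows (a sketch; localFrontend is the hypothetical prefix from above, and 3000 stands in for the real frontend port):

# Inside the backend's VirtualHost: forward /localFrontend/<path>
# to the frontend dev server with a plain HTTP redirect
RewriteEngine On
RewriteRule ^/localFrontend/(.*)$ http://localhost:3000/$1 [R=302,L]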
For me, disabling website tracking worked, i.e. unchecking "Prevent cross-site tracking" in Safari's preferences.
I am trying to optimize my site for all HTTPS. I know that Twitter is all HTTPS and I noticed that they don't redirect HTTP to HTTPS, but instead just initiate an HTTPS connection.
Here is a screenshot of Google Chrome's network activity; notice there is no redirect (301/302): the HTTP request (first line) just hangs as pending, and the second line is the HTTPS page. Note, I have cleared all my browser cache, so HTTP Strict Transport Security (HSTS) shouldn't matter.
Here is another screenshot of the request/response for the HTTPS page. Notice that Twitter seems to insert some fields into the request, such as :scheme.
How do they do this? I would assume it's faster: if a user types twitter.com into their browser, instead of a redirect (think of the extra network round trip), Twitter seems to seamlessly move to SSL (HTTPS).
A follow on question would be, does this work in all browsers?
They have been added to the list of preloaded HSTS sites that ships with Chrome and Mozilla Firefox.
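To be eligible for those preload lists, a site generally has to serve an HSTS header carrying the preload token with a long max-age and subdomain coverage, roughly like this (see hstspreload.org for the exact requirements):

Strict-Transport-Security: max-age=31536000; includeSubDomains; preload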
I run a secure website on Apache, but one part requires YouTube videos, which aren't showing because the browser blocks them as mixed content on the HTTPS pages.
I therefore need to use HTTP for this part of the site (/videos). If I delete the 's' off https, it jumps straight back in there, so I can't simply change the link.
Is there a mod_rewrite rule or something similar that might add an exception for this directory?
Switching from HTTPS to HTTP will always cause problems, especially if your users are authenticated and if you want to maintain security.
You could use YouTube via HTTPS instead, as described on the YouTube API Blog - i.e. point your embeds at https://www.youtube.com/embed/VIDEO_ID rather than the http:// version, and no exception is needed.
I have successfully configured my SWT Browser application to use a proxy by setting the VM arguments -Dnetwork.proxy_host and -Dnetwork.proxy_port to the appropriate values.
However, the proxy needs authentication, but the username/password prompt does not open. Furthermore, when registering an authentication listener, the listener is never triggered.
The problem occurred on a 64-bit Debian Linux distribution. When compiling the same application for Windows, everything works fine, i.e. the password prompt opens. The SWT Browser is configured to use MOZILLA, not WEBKIT. Unfortunately I cannot test with WEBKIT as I am limited to a given environment.
Temporary solution: when starting the Linux Mozilla browser, the prompt comes up. If I enter the correct values there and afterwards start the SWT Browser application, then no authentication is needed at all and internet access is possible. But this is not a good solution.
When I register a location listener with addLocationListener to see what is going on with the URL calls, I can see that the initial URL (for example www.google.de) results in a call to a certain HTTP page of the proxy server. That HTTP page is a redirect to an HTTPS page of the proxy, and the HTTPS page results in calling the HTTP redirect page again. This is an endless loop.
I would guess that somewhere in the Java code of the SWT Browser class there is a routine that calls setUrl with those pages (which results in the endless loop) and skips calling any authentication listener for some reason.
Maybe someone has an idea of what's going wrong in this authentication process?
I have no solution, but a hint: I'm not sure what you mean by "Linux Mozilla Browser" - I know Firefox and XULRunner. But your workaround suggests that profile information is shared somehow, and that shouldn't happen.
I tried to find some information on how to define the profile (where the web browser keeps its cache, config, SSL certificates, plugins, ...), but to no avail.
This entry in the FAQ shows how to set the proxy host: How do I set a proxy for the Browser to use?
Try to find a way to add the user/password information into the request sent to the proxy server. If that fails, create a local proxy which connects to the real proxy as upstream and which can authenticate itself.
Looking at the bug database, there is no support for Browser profiles: Flexible Mozilla profile support - new API request