I'm trying to get force_ssl working, i.e. I want the app itself to redirect anyone who connects via http to https.
The app is a dockerized release running behind Nginx. Nginx currently serves the SSL, and I know I could rely on it entirely to terminate the SSL connection and even handle the redirection of non-https requests. But I want to understand how to make this work in the app itself, so that if I ever drop the proxy I'll know how to do it.
My prod config looks like this:
config :my_app, MyApp.Endpoint,
  http: [port: "${PORT}"],
  url: [host: "${APP_URL}", port: "${APP_PORT}"],
  force_ssl: [hsts: true], # I tried different options, without success
  server: true,
  cache_static_manifest: "priv/static/manifest.json",
  root: ".",
  version: Mix.Project.config[:version]
When I access the server via https, everything runs as expected. But when I use the http address, the redirect URL looks like https://%24%7Bapp_url%7D/, which doesn't work.
Maybe this happens because I didn't provide a cert file and the whole process can't be done without one? I was under the impression that force_ssl just issues a plain redirect whenever the request isn't https.
Finally, I'm also trying to generate https URLs. For instance, in a mailer I've got something like this:
<%= password_url(MyApp.Endpoint, :reset_password_from_email, token: #token, email: #email) %>
But since my configuration doesn't seem right, it only generates http links, not https ones.
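(For reference, my understanding is that the *_url helpers build links from the endpoint's :url option, so declaring the scheme there should be enough to get https links. A rough sketch only; the host below is a placeholder for my real domain:)
# Sketch: the generated *_url helpers take scheme, host and port from :url,
# not from force_ssl.
config :my_app, MyApp.Endpoint,
  url: [scheme: "https", host: "myapp.example.com", port: 443]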
Maybe I should separate the two completely, handling this 100% on the Nginx side or 100% on the app side, rather than trying to mix them?
Any help/idea/comment is welcome!
EDIT:
Last test with this:
force_ssl: [rewrite_on: [:x_forwarded_proto], subdomains: true, hsts: true, host: "${APP_URL}"]
This still results in the bad URL https://%24%7Bapp_url%7D/. So I think the redirect itself is working, but the "${APP_URL}" variable isn't being converted to its real value. I'm digging into this.
Hardcoding the host value instead of using "${APP_URL}" resolves the problem, as suspected in my last edit.
Something is wrong here, because all the other env vars are replaced by their values, but not this one.
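(In case it helps anyone else, a sketch of the combination that works for me, with the host hardcoded since the ${APP_URL} placeholder is not being substituted; the hostname below is a placeholder:)
config :my_app, MyApp.Endpoint,
  force_ssl: [rewrite_on: [:x_forwarded_proto], hsts: true, host: "myapp.example.com"]
# The :x_forwarded_proto rewrite only helps if the proxy actually forwards the
# original scheme, e.g. with an Nginx directive like:
#   proxy_set_header X-Forwarded-Proto $scheme;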
I have a simple worker that just does a fetch against an HTTPS endpoint somewhere else.
The code is literally just:
return await fetch('https://something.com/someResource')
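(Put differently, the whole worker is just the standard fetch-handler wrapper around that one line — a rough sketch, with the handler name being my own choice:)
addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  // simply proxy through to the external API
  return await fetch('https://something.com/someResource')
}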
When I test locally (wrangler dev), and even when I publish to a workers.dev subdomain, this works fine. When I curl https://foo.bar.workers.dev/myEndpoint I get the same response as from https://something.com/someResource.
However, I want to run this from my own domain (managed through Cloudflare), so the worker also has a route of foo.mydomain.com/* and an AAAA record pointing foo to 100::, as per the Cloudflare docs. DNS works fine and the URL is reachable, but when I try to hit https://foo.mydomain.com/myEndpoint, Cloudflare's worker logs show that the behind-the-scenes fetch fails with a 525 error (SSL handshake failed).
Things I've tried, based on some Cloudflare forum posts:
Adding a page rule foo.mydomain.com/* -> SSL Mode: Full, since my overall SSL setting is Flexible.
Setting the Host header in the fetch to the origin domain: fetch(url, {headers: {'Host': 'something.com'}})
FYI, I don't control the origin server as it's an external API I work with.
How come the same request works from local and *.workers.dev but not my own domain?
Your page rule is not taking effect. The page rule is for foo.mydomain.com/*, but it has to match the subrequest URL, which in this case is https://something.com/someResource, which doesn't match. It doesn't matter that the original worker request matched -- what matters, in this case, is whether the subrequest URL matched.
Unfortunately, you cannot create a page rule that matches a domain other than your own.
Instead, what you'll need to do is reverse things. Set your SSL mode to "full" by default, but then use page rules to set it to "flexible" for your own domain.
(Note: The "SSL Handshake fail" error itself is actually a known bug in Workers, that happens when you try to talk to a host outside your zone using HTTPS but you have "flexbile" SSL set. We do not use flexible SSL when talking to domains other than your own, but there's a bug that causes the request to fail instead of just using full SSL as it should.)
Currently we are using the default WireCloud template. But since we enabled SSL and redirect every request to the SSL port, I would love to change the URLs of static resources to start with https, to avoid mixed-content warnings.
Is there a simple way to make the URLs always start with https instead of http?
That's done automatically, except when WireCloud is behind a proxy (so requests come in over HTTP instead of HTTPS). In that case you can force WireCloud to use https links by adding this line to the settings.py file:
FORCE_PROTO = "https"
See this link for more info.
Apache 2.2.15 on RHELS 6.1
Using mod_pagespeed on a server behind https (terminated by the network's reverse proxy).
All HTML URLs are written as "//server.example.com/path/to/file.css" (so, without the protocol specified).
Problem: using the default configuration, PageSpeed rewrites the URLs as "http://server.example.com/path/to/file.css".
I'm trying to figure out how to have it rewrite the URLs as https (or leave the protocol unspecified as //).
After reading the documentation, I tried using ModPagespeedMapOriginDomain like this:
ModPagespeedMapOriginDomain http://localhost https://server.example.com
Also tried
ModPagespeedMapOriginDomain http://localhost //server.example.com
ModPagespeedMapOriginDomain localhost server.example.com
... to no avail. URLs keep being rewritten with "http://".
Question: how can I get PageSpeed to use https instead of http in its rewritten URLs?
Full pagespeed config here, if needed
It turns out mod_pagespeed does not work with "protocol-relative" URLs.
Still, the issue can be bypassed by enabling the trim_urls filter:
ModPagespeedEnableFilters trim_urls
Be mindful of the potential risks (depending on your JavaScript codebase, Ajax calls could break or produce unexpected HTML).
Adding this to your configuration might work:
ModPagespeedRespectXForwardedProto on
That works if your reverse proxy forwards the X-Forwarded-Proto header in its requests.
That request header tells PageSpeed which protocol the original request used at the load balancer, and thereby gives it everything it needs to rewrite URLs correctly.
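(As an illustration only — the exact directive depends on which proxy sits in front, and the backend hostname below is a placeholder. On an Nginx reverse proxy the header can be forwarded from the HTTPS server block like this; on an Apache proxy, RequestHeader set X-Forwarded-Proto "https" with mod_headers does the same job:)
location / {
    proxy_pass http://pagespeed-backend.internal;
    proxy_set_header X-Forwarded-Proto $scheme;   # "https" when the client request came in over TLS
}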
I'm new to Rails. I want to create a secure controller.
Here is what I did:
I created a secure controller and changed routes.rb like this:
scope :constraints => {:protocol => 'https'} do
  get "secure/index"
end
But I'm getting this error:
[2012-10-08 12:07:07] ERROR bad URI \x12p\x00\x00H\x00��'.
[2012-10-08 12:07:07] ERROR bad URIpqn���|�լ%[�y���\x00\x00H\x00��'.
when I request https://localhost:3000/secure
Thanks.
I think you have a misunderstanding of secure HTTP communication.
HTTP and HTTPS are two different things; that's why they usually run on two different ports: HTTP on 80 and HTTPS usually on 443. The "bad URI" lines in your log are the raw bytes of a TLS handshake arriving at a server that only speaks plain HTTP on port 3000.
HTTPS needs a signed certificate, which is usually handled by your web server (Apache, Nginx, etc.). It's also possible to handle the HTTPS enforcement within Rails, and there are some nice gems for configuring SSL enforcement.
Have a look at this post to get started: http://www.simonecarletti.com/blog/2011/05/configuring-rails-3-https-ssl/
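(A minimal sketch of doing the enforcement inside Rails, assuming Rails 3.1 or newer where this option is built in — otherwise a gem such as rack-ssl-enforcer does a similar job. The application class name is a placeholder:)
# config/environments/production.rb
MyApp::Application.configure do
  # Redirect all http requests to https, set HSTS and mark cookies as secure.
  config.force_ssl = true
end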
I'm getting an infinite redirect loop after adding SSL support to my site. I'm using the "SslRequirement" plugin.
The symptoms I'm seeing: any action that has "ssl_required" enabled, and any URL I type in manually with https at the front, goes into an infinite loop, with the following in development.log over and over until the browser catches the redirect loop and stops loading the page ("/admins/index" is the action in this example, but it happens with any action):
Processing AdminsController#index (for 127.0.0.1 at 2010-08-13 13:50:16) [GET]
Parameters: {"action"=>"index", "controller"=>"admins"}
Redirected to https://localhost/admins
Filter chain halted as [:ensure_proper_protocol] rendered_or_redirected.
Completed in 0ms (DB: 0) | 302 Found [http://localhost/admins]
At first I thought there was some kind of problem where I had to make ALL of my actions "ssl_allowed" - so I tried that, but to no avail.
If I remove the use of SslRequirement and any "ssl_required/ssl_allowed" references, then https works fine, so it's the redirect from http to https within actions that seems to be the issue.
Any clues?
Answer found here:
http://www.hostingrails.com/SSL-Redirecting-not-working
Short version is, I added the following line to the SSL vhost in my nginx config:
proxy_set_header X_FORWARDED_PROTO https;
Detailed version is:
Basically, the issue came down to nginx not passing on to the Mongrel cluster the fact that the source request came in over HTTPS. This caused the call to "request.ssl?" inside the SslRequirement plugin to ALWAYS return false.
So, when this returned false, "ensure_proper_protocol" would re-issue the action over https, which would check "request.ssl?", which would return false, which would re-issue the action over https, which would check "request.ssl?", which would return false, which would re-issue the action over https...
...you get the idea. The Mongrel cluster NEVER thought the request was over HTTPS, so it redirected forever. A small change in the nginx config to correct this, and BAM-O: problem solved.
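(For completeness, a sketch of where that line lives in the SSL vhost — certificate paths and the upstream name are placeholders for my real setup:)
server {
    listen 443;
    ssl on;
    ssl_certificate     /etc/nginx/ssl/mysite.crt;
    ssl_certificate_key /etc/nginx/ssl/mysite.key;

    location / {
        proxy_set_header X_FORWARDED_PROTO https;   # the line that makes request.ssl? return true
        proxy_set_header Host $host;
        proxy_pass http://mongrel_cluster;          # upstream block defined elsewhere in the config
    }
}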