Passing cookies from origin through Cloudflare

Our organisation proxies traffic through Cloudflare as follows: the domain's nameservers are on Cloudflare, an A record points to our internal LB, and the LB forwards each request to one of two servers. So the scheme looks like this:
Client's request -> Domain on CloudFlare -> A -> Our LB -> Server
The backend servers set a cookie named after the serving host, which our LB passes through via an iRule. The problem is that when traffic goes through Cloudflare, this cookie never reaches the end user, so session persistence does not work.
Server --*cookie*-> Our LB --*cookie*-> CloudFlare --*cookies are lost*-> Client
I found a similar issue here, but disabling HTTP/2 didn't help. Perhaps someone has come across a similar situation and knows how to solve it?
I tried disabling HTTP/2 and setting up Page Rules, but didn't find any rules that applied.
Update:
Without Cloudflare:
$ curl https://domain -kv 1> /dev/null
...
< cookie: hostname
< Set-Cookie: cookie=hostname; Domain=domain;Path=/; HttpOnly
With Cloudflare (Set-Cookie is missing):
$ curl https://domain -kv 1> /dev/null
...
< cookie: hostname
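A practical way to narrow this down is to compare the Set-Cookie header the client sees when talking to the LB directly versus through the Cloudflare-proxied hostname. Below is a self-contained Python sketch of that check; the local server is just a stand-in for the origin (the cookie value is taken from the question), and in a real test you would point the URL at each hop in turn and diff the results:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class FakeOrigin(BaseHTTPRequestHandler):
    """Stand-in for the real origin: sets the cookie from the question."""
    def do_GET(self):
        self.send_response(200)
        self.send_header("Set-Cookie", "cookie=hostname; Path=/; HttpOnly")
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), FakeOrigin)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Point this URL at the LB directly, then at the Cloudflare-proxied domain,
# and diff the two results to see at which hop Set-Cookie disappears.
url = f"http://127.0.0.1:{server.server_port}/"
with urllib.request.urlopen(url) as resp:
    set_cookie = resp.headers.get("Set-Cookie")

print("Set-Cookie:", set_cookie)
server.shutdown()
```

Against real hosts, `curl -sD - -o /dev/null https://domain` gives the same header dump without any code.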

Related

Invalid certificate for "localhost" on Cloudflared

I am running a web server (Wordpress) locally using XAMPP (using Apache) and forwarding it to my Cloudflare-hosted domain using a Cloudflared tunnel. I am having an issue with the certificate when connecting over my domain.
I have a certificate I received from Cloudflare which is valid for my domain installed in XAMPP's location for its certificate, and I know that it is being sent with the HTTPS result. Also, my "SSL/TLS encryption mode" on Cloudflare is "Full (Strict)".
When connecting from the browser, I get a 502 Bad Gateway error, and Cloudflared prints this error: error="Unable to reach the origin service. The service may be down or it may not be responding to traffic from cloudflared: x509: certificate is valid for *.example.com, example.com, not localhost", where example.com is my domain.
If I go to
http://example.com or https://example.com, I get the above error.
http://localhost, the website loads but does not load any of the resources, since Wordpress loads the resources by querying the domain, https://example.com/path/to/resource.
https://localhost, the same as above happens, but Chrome also gives me a warning that the certificate is not valid.
Here are the ingress rules in Cloudflared's config.yml.
ingress:
  - hostname: ssh.example.com # I haven't gotten this one to work yet.
    service: ssh://localhost:22
  - hostname: example.com # This is the one having a problem.
    service: https://localhost
  - service: https://localhost
What I believe is happening is that Cloudflared receives the certificate which is valid for my domain (*.example.com, example.com) and then tries to execute the ingress rule by going to https://localhost, but the certificate is not valid for localhost. I don't think I should just get a certificate which is valid for localhost AND example.com. Do I need one certificate (valid for localhost) to be returned whenever http(s)://localhost is called and another (valid for example.com) that Cloudflared checks when it tries to execute an ingress rule involving example.com? If so, how do I do this?
I solved it by using the noTLSVerify option in Cloudflared's config.yml. When a client connects to my domain, it goes like this:
Client > Cloudflare > Cloudflared instance running on my machine > Origin (which also happens to be my machine: https://localhost)
The certificate sent back by the Origin was not valid for the address Cloudflared was accessing it from, localhost, but by adding these lines to config.yml,
originRequest:
  noTLSVerify: true
I think Cloudflared then skips verifying the certificate received from the origin, although it still returns a certificate to Cloudflare, which checks it against my domain.
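For reference, a hedged sketch of how that option can be scoped to a single ingress rule in config.yml (hostnames are the examples from the question), so certificate verification is only relaxed for the one origin that needs it:

```yaml
ingress:
  - hostname: example.com
    service: https://localhost
    originRequest:
      noTLSVerify: true   # skip verifying the origin cert for this rule only
  - service: https://localhost
```

The top-level `originRequest:` form shown above applies the setting to every rule instead.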

Cloudflare SSL Does Not Work With Elixir/Phoenix Backend

I am using Cloudflare's Flexible SSL to secure my website, and it works fine with my Vue.js frontend. Essentially it shares an SSL certificate among ~50 random customers and works by routing my DNS through Cloudflare's edge network. It did what I needed and was fine, but now that I am trying to tie it to a Phoenix/Elixir backend, it breaks.
The problem is that you can't make a plain HTTP request from inside a page served over SSL, because you'll get this error:
Blocked loading mixed active content
This makes sense: if the page loads over SSL, everything it loads needs to be SSL too. So now I need to add SSL to Elixir.
This site (https://elixirforum.com/t/run-phoenix-https-behind-cloudflare-without-key-keyfile/12660/2) seemed to have the solution! Their answer was:
configs =
  config
  |> Keyword.put(:http, [:inet6, port: "80"])
  |> Keyword.put(:url, [scheme: "https", host: hostname, port: "443"])
So I made my config like this:
config :albatross, AlbatrossWeb.Endpoint,
  http: [:inet6, port: "4000"],
  url: [scheme: "https", host: "my.website", port: "443"],
  secret_key_base: "SUPERSECRET",
  render_errors: [view: AlbatrossWeb.ErrorView, accepts: ~w(html json)],
  pubsub: [name: Albatross.PubSub, adapter: Phoenix.PubSub.PG2]
That only gets me as far as http://, though! So I've also tried this:
config :albatross, AlbatrossWeb.Endpoint,
  http: [:inet6, port: "4000"],
  https: [:inet6, port: "4443"],
  url: [scheme: "https", host: "my.website", port: "443"],
  secret_key_base: "SUPERSECRET",
  render_errors: [view: AlbatrossWeb.ErrorView, accepts: ~w(html json)],
  pubsub: [name: Albatross.PubSub, adapter: Phoenix.PubSub.PG2]
Which doesn't work, of course, because there are no PEM files. Since I'm only using Elixir as an API (with no domain of its own), I can't use solutions like this (http://51percent.tech/blog/uncategorized/serving-phoenix-apps-ssl-and-lets-encrypt/), because Let's Encrypt does not issue certificates for bare IP addresses (https://www.digitalocean.com/community/questions/ssl-for-ip-address).
So at this point I'm very confused. Does anyone have any advice?
EDIT:
Someone mentioned that you can go to Cloudflare and generate TLS certs under Crypto > Origin Certificates > Create Certificate. I did that, downloaded the files, saved them in my project and ran this:
config :albatross, AlbatrossWeb.Endpoint,
  http: [:inet6, port: "4000"],
  https: [
    port: "4443",
    keyfile: "priv/ssl/cloudflare/private.key",
    certfile: "priv/ssl/cloudflare/public.pem"
  ],
  url: [scheme: "https", host: "website.me", port: "443"],
  secret_key_base: "SUPERSECRET",
  render_errors: [view: AlbatrossWeb.ErrorView, accepts: ~w(html json)],
  pubsub: [name: Albatross.PubSub, adapter: Phoenix.PubSub.PG2]
So what are the results of all the possible ways of querying the backend?
Well, I'm running docker-compose, so https://backendservice:4443 is what I query from the frontend. That gives me:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://backendservice:4443/getComments?postnum=6. (Reason: CORS request did not succeed).[Learn More]
value of error : Error: "Network Error"
exports https://mywebsi.te/js/chunk-vendors.4345f11a.js:49:15423
onerror https://mywebsi.te/js/chunk-vendors.4345f11a.js:65:538540
actions.js:61:12
So that clearly doesn't work.
I can go to http://my.ip.address:4000, but I cannot go to https://my.ip.address:4443.
As far as I can tell cloudflare TLS certificates do not work.
Or, more likely, I am doing something stupid in writing the elixir config.
FURTHER CLARIFICATION:
Yes, there is a CORS header error above. However, please note that it only fires for https requests and NOT http requests. Why this happens is very confusing. I have a CORS plugin for Elixir in the entrypoint of my application that currently allows all (*) incoming origins. This is it; it should be pretty straightforward:
plug CORSPlug, origin: "*"
More information can be found here (https://github.com/mschae/cors_plug).
I seem to have been able to make my website work with the following configuration:
config :my_app, MyAppWeb.Endpoint,
  force_ssl: [hsts: true],
  url: [host: "my.website", port: 443],
  http: [:inet6, port: 4000],
  https: [
    :inet6,
    port: 8443,
    cipher_suite: :strong,
    keyfile: "private.key.pem",
    certfile: "public.cert.pem",
    cacertfile: "intermediate.cert.pem"
  ],
  cache_static_manifest: "priv/static/cache_manifest.json"
where private.key.pem and public.cert.pem are the origin certificate downloaded from Cloudflare.
(Note that the origin certificate you can download from Cloudflare is only useful for encrypting the connection between your website and Cloudflare.)
I also had to add routing rules via iptables.
iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 4000
iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 443 -j REDIRECT --to-port 8443
This configuration also worked before for a setup using letsencrypt certificates.
I'm not sure if the cacertfile: "intermediate.cert.pem" part is needed. I obtained it from "Step 4" of the Cloudflare documentation.
I have never used Cloudflare, but I know that Phoenix can pretend it is serving HTTPS content if x-forwarded-proto is set in the request headers. Your configuration is OK (without the https part; you don't need it).
Try adding the force_ssl option to the endpoint configuration. This instructs Plug.SSL to force SSL and redirect any http request to https.
config :albatross, AlbatrossWeb.Endpoint,
  http: [:inet6, port: "4000"],
  url: [scheme: "https", host: "my.website", port: "443"],
  force_ssl: [rewrite_on: [:x_forwarded_proto], host: nil],
  secret_key_base: "SUPERSECRET",
  render_errors: [view: AlbatrossWeb.ErrorView, accepts: ~w(html json)],
  pubsub: [name: Albatross.PubSub, adapter: Phoenix.PubSub.PG2]
In my experience with Cloudflare and web backends (and without seeing the exact request that is causing the issue), this is usually caused by hard-coding a CSS/JS dependency with http:// or by making an AJAX request with http:// hard-coded. If the back end you are requesting with http:// doesn't automatically redirect to https://, you will get the error you're seeing.
If you're using Cloudflare's "One-Click SSL", this is how requests are being made to your server:
[ Client Browser ] <-- HTTPS --> [ Cloudflare ] <-- HTTP --> [ Your Server ]
Since the communication between Cloudflare and your server is all over http://, you should not need to change any Phoenix configuration at all, but there are 2 potential sources of error:
Automatic redirection to https:// is not being done by Cloudflare (or Cloudflare is not serving your content)
You have some CSS/JS dependencies or Ajax requests that are hard-coded to http:// and the server responding to the requests is not automatically redirecting to https://.
Troubleshooting point 1: According to Cloudflare's docs for their "One-Click SSL", they should automatically redirect http:// requests to https:// (although this might be a setting that you need to change). If a request comes to your domain over http://, Cloudflare should automatically redirect to https://. You can check whether this is happening by requesting a page from your backend in your browser with http:// explicitly included at the front of the URL. If you are not redirected to https://, it could mean that a) Cloudflare is not actually in between your browser and the backend or b) that Cloudflare is not automatically redirecting to https://. In either case, you have some hints about your next step in solving the problem.
Troubleshooting point 2: You can check this by using the "Network" tab in the developer tools in your browser, or by manually going through your website code looking for CSS/JS dependencies or Ajax requests hard-coded with http://. In the developer tools, you can check for exactly which request(s) are causing the problem (the line should be red and have an error about mixed content). From there you can check where this request is coming from in your code. When searching through your code for hard-coded http:// requests, most of the time you can simply replace it with https:// directly, but you should double-check the URL in your browser to make sure that the URL is actually available over https://. If the CSS/JS dependency or Ajax endpoint is not actually available over https:// then you will need to remove/replace it.
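For troubleshooting point 2, a plain grep over the frontend source is often the fastest way to find hard-coded http:// references. A sketch (the directory and file are throwaway stand-ins for your asset tree):

```shell
# Build a throwaway example tree containing one offending reference.
mkdir -p /tmp/mixed_demo
printf '<script src="http://cdn.example.com/app.js"></script>\n' > /tmp/mixed_demo/index.html

# List every hard-coded http:// URL with file name and line number.
grep -rn 'http://' /tmp/mixed_demo
```

Run the same grep over your real JS/CSS/template directories; each hit is a candidate for the mixed-content error.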
Hope this helps.

Google Cloud Load Balancer - 502 - Unmanaged instance group failing health checks

I currently have an HTTPS Load Balancer setup operating with a 443 Frontend, Backend and Health Check that serves a single host nginx instance.
When navigating directly to the host via browser the page loads correctly with valid SSL certs.
When trying to access the site through the load balancer IP, I receive a 502 Server Error message. I checked the Google logs and noticed "failed_to_pick_backend" errors at the load balancer. I also noticed that it is failing health checks.
Some digging around leads me to these two links: https://cloudplatform.googleblog.com/2015/07/Debugging-Health-Checks-in-Load-Balancing-on-Google-Compute-Engine.html
https://github.com/coreos/bugs/issues/1195
Issue #1: Not sure if google-address-manager is running on the server (RHEL 7). I do not see an entry for the HTTPS load balancer IP in the routes. The Google SDK is installed. This is a Google-provided image, and if I update the IP address in the console, it also gets updated on the host. How do I check if google-address-manager is running on RHEL 7?
[root@server]# ip route ls table local type local scope host
10.212.2.40 dev eth0 proto kernel src 10.212.2.40
127.0.0.0/8 dev lo proto kernel src 127.0.0.1
127.0.0.1 dev lo proto kernel src 127.0.0.1
Output of all google services
[root@server]# systemctl list-unit-files
google-accounts-daemon.service enabled
google-clock-skew-daemon.service enabled
google-instance-setup.service enabled
google-ip-forwarding-daemon.service enabled
google-network-setup.service enabled
google-shutdown-scripts.service enabled
google-startup-scripts.service enabled
Issue #2: Not receiving a 200 OK response. The certificate is valid and the same on both the LB and the server. When running curl against the app server I receive this response.
[root@server]# curl -I https://app-server.com
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
Thoughts?
You should add firewall rules for the health check service (https://cloud.google.com/compute/docs/load-balancing/health-checks#health_check_source_ips_and_firewall_rules) and make sure that your backend service listens on the load balancer IP (easiest is to bind to 0.0.0.0). This is definitely true for an internal load balancer; I'm not sure about HTTPS with an external IP.
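The firewall rule from that link can be created with gcloud along these lines. The rule and network names below are placeholders; the source ranges are Google's documented health-check sources, and the port must match the backend:

```shell
# Allow Google Cloud health-check probes to reach the backend on port 443.
gcloud compute firewall-rules create allow-lb-health-checks \
    --network=default \
    --direction=INGRESS \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --allow=tcp:443
```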
A couple of updates and lessons learned:
I have found out that google-address-manager is now deprecated and replaced by google-ip-forwarding-daemon, which is running.
[root@server ~]# sudo service google-ip-forwarding-daemon status
Redirecting to /bin/systemctl status google-ip-forwarding-daemon.service
google-ip-forwarding-daemon.service - Google Compute Engine IP Forwarding Daemon
Loaded: loaded (/usr/lib/systemd/system/google-ip-forwarding-daemon.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2017-12-22 20:45:27 UTC; 17h ago
Main PID: 1150 (google_ip_forwa)
CGroup: /system.slice/google-ip-forwarding-daemon.service
└─1150 /usr/bin/python /usr/bin/google_ip_forwarding_daemon
There is an active firewall rule allowing IP ranges 130.211.0.0/22 and 35.191.0.0/16 for port 443. The target is also properly set.
Finally, the health check was using the default "/" path. The developers had put authentication in front of the site during development, so once I bypassed the SSL cert error, I received a 401 Unauthorized from curl. This was the root cause of the issue. To remedy it, we modified the nginx basic authentication configuration to disable auth for a new route (e.g. /health).
Once the nginx configuration was updated and the health check was pointed at the new /health path, we received valid 200 responses. This let the health check report healthy instances and allowed the LB to pass traffic through.
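The nginx change described above could look roughly like this sketch (the listen directive, realm, and file paths are hypothetical; the point is the `auth_basic off` on the health route):

```nginx
server {
    listen 443 ssl;

    # Site-wide basic auth during development.
    auth_basic           "restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;

    # Unauthenticated endpoint for the load balancer's health check.
    location = /health {
        auth_basic off;
        return 200 "ok\n";
    }
}
```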

1and1 HTTPS redirect does not work but HTTP does

I have a web app running on Heroku and domain managed by 1und1 (German version of domain registrar 1and1). To make the app available via "example.com" I did the following:
Created www.example.com subdomain in 1und1.
Attached it to www.example.com.herokudns.com as described in Heroku's guides (CNAME www.example.com.herokudns.com).
Ordered SSL certs from 1und1 and used them to setup HTTPS on Heroku side.
Set up HTTP redirect example.com -> https://www.example.com to make top level domain to point to Heroku.
This all worked fine until I tried to reach the app at https://example.com: Chrome shows me a "This site can’t provide a secure connection" page with ERR_SSL_PROTOCOL_ERROR.
cURL output:
#1.
curl https://example.com
curl: (35) Server aborted the SSL handshake
#2.
curl -vs example.de
Rebuilt URL to: example.de/
Trying <example.de 1und1 IP address here>...
TCP_NODELAY set
Connected to example.de (<example.de 1und1 IP address here>) port 80 (#0)
> GET / HTTP/1.1
> Host: example.de
> User-Agent: curl/7.51.0
> Accept: */*
< HTTP/1.1 302 Found
< Content-Type: text/html; charset=iso-8859-1
< Content-Length: 203
< Connection: keep-alive
< Keep-Alive: timeout=15
< Date: Tue, 11 Jul 2017 14:19:30 GMT
< Server: Apache
< Location: http://www.example.de/
...
#3.
curl -vs https://example.de
Rebuilt URL to: https://example.de/
Trying <example.de 1und1 IP address here>...
TCP_NODELAY set
Connected to example.de (<example.de 1und1 IP address here>) port 443 (#0)
Unknown SSL protocol error in connection to example.de:-9838
Curl_http_done: called premature == 1
Closing connection 0
So, the question is: how can I set up HTTPS redirect with 1und1 and Heroku?
Answering to my question.
After spending some time googling the issue I found this article: https://ubermotif.com/1and1-nightmare-bad-registrar-can-ruin-day. They faced the same issue. I decided to call 1und1 support (they only offer calls, no chat or email tickets). They said it is a problem on their side: the GUI screwed up, and they will enter the DNS settings into their database by hand.
The issue is not solved yet; I'm waiting for the DNS changes to be applied and propagated.
This type of error is caused by the server or the website. You can try the following to fix it:
Disable the QUIC protocol.
Remove or modify the hosts file, removing bad programs or the website you are trying to reach.
Clear the SSL state: Start Menu > Control Panel > Network and Internet > Network and Sharing Center, then click Internet Options on the left; when the Internet Properties dialog opens, go to the Content tab and select "Clear SSL state".
Check that the system time matches the current time.
Check the firewall to see whether your website's IP address has been blocked, and unblock it if so.

How to setup TLS for an NS record subdomain

I have an NS record set up for my.domain.com, which then resolves as http://my.domain.com >> https://thirdparty.domain.com
I need to set up a TLS certificate for my.domain.com so that it can be reached at https://my.domain.com >> https://thirdparty.domain.com
my.domain.com is managed in AWS, and as far as I know getting a certificate requires that certificate to live on a server, whereas the NS record seems to just point the domain at a server outside of my control. thirdparty.domain.com is a third-party service.
Am I understanding this correctly? How and where will I need to set up the TLS certificate for https://my.domain.com?
Example:
my.domain.com NS record:
some.thirdparty.server.
Results in 302: http://my.domain.com > RES: https://thirdparty.domain.com
I would like:
302: **https**://my.domain.com > RES: https://thirdparty.domain.com
In practice this is the flow:
main.domain.com POST >>
302: http://my.domain.com >>
RES: https://thirdparty.domain.com
The system that terminates the TLS connection (where decryption occurs) needs the certificate. That is, the system that accepts TCP connections on port 443 at the IP address which the domain you type in your browser ultimately resolves to must hold the TLS certificate.
my.domain.com NS record: some.thirdparty.server.
That means the third party controls which servers respond.
I would like: 302: **https**://my.domain.com > RES: https://thirdparty.domain.com
As far as I read this, this is about an HTTP redirect from the first to the second. What you would like requires receiving HTTPS connections on my.domain.com; to do that, the servers that currently receive HTTP there need to also accept HTTPS.
If you do not control these servers and they currently do not answer HTTPS, the only way is to handle DNS yourself (i.e. remove the NS record) and point the name at servers that do only this redirect. (As you are currently using AWS: that is something their S3 and CloudFront services can achieve together.)
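As a sketch of the S3 half of that approach: an S3 bucket configured for static website hosting can redirect every request to the third-party host over HTTPS (the hostname below is the question's placeholder), with CloudFront in front of the bucket to terminate TLS for my.domain.com:

```json
{
  "RedirectAllRequestsTo": {
    "HostName": "thirdparty.domain.com",
    "Protocol": "https"
  }
}
```

This JSON is the website configuration you would pass to `aws s3api put-bucket-website --website-configuration`.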