Google Cloud HTTP Balancer and gzip - load-balancing

When I was using the Google Cloud Network Load Balancer, all my HTTP gzip connections were left intact, but when using the HTTP(S) Load Balancer, end users don't get the gzipped content.
I'm using nginx on the VM. Using this curl example:
curl -H "Accept-Encoding: gzip" -H "Host: my.website.com" -I https://$IP_TO_TEST/login --insecure
I get Content-Encoding: gzip when connecting directly to the VM and no gzip when I connect through the HTTP load balancer.
I've searched the Google Cloud documentation and it doesn't mention whether gzipped content from backends is supported.

The Google Cloud HTTP(S) load balancer supports gzipped content from backends. However, requests proxied through the load balancer have a Via: 1.1 google header added, and the default nginx configuration does not trust proxies to handle gzipped responses. The solution is to enable gzip_proxied.
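For reference, a minimal nginx sketch of that fix (gzip, gzip_proxied, and gzip_types are real directives; the surrounding settings are assumptions about a typical config):
gzip on;
# Allow compression for proxied requests; the load balancer adds a Via header, which makes nginx treat the request as proxied
gzip_proxied any;
gzip_types text/plain text/css application/json application/javascript;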

The HTTP/S load balancer supports gzipped content from backends. Do you have an example of request and response headers you could share? What are you running on the VM (nginx, Apache)?
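A verbose curl, reusing the placeholders from the question above, will print both the outgoing request headers and the response headers for comparison:
curl -sv -o /dev/null -H "Accept-Encoding: gzip" -H "Host: my.website.com" https://$IP_TO_TEST/login --insecure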

Related

Passing cookies from origin through CloudFlare

Our organisation has a proxy configured through CloudFlare as follows: the domain is hosted on CloudFlare's name servers, and its A record points to our internal LB, which sends the request to one of two servers. So the scheme looks like this:
Client's request -> Domain on CloudFlare -> A -> Our LB -> Server
The end hosts send a cookie containing the hostname, which our LB proxies through an iRule. The problem is that when working through CloudFlare this cookie does not reach the end user, so session persistence does not work.
Server --*cookie*-> Our LB --*cookie*-> CloudFlare --*cookies are lost*-> Client
I found a similar issue here, but disabling HTTP/2 didn't help. Perhaps someone has come across a similar situation and knows how to solve it?
I tried disabling HTTP/2 and setting up page rules, but didn't find any suitable rules.
Update:
Without CloudFlare:
$ curl https://domain -kv 1> /dev/null
...
< cookie: hostname
< Set-Cookie: cookie=hostname; Domain=domain;Path=/; HttpOnly
With CloudFlare (the Set-Cookie header is missing):
$ curl https://domain -kv 1> /dev/null
...
< cookie: hostname

Google Cloud Load Balancer SSL handshake failure

I have set up a Google Cloud HTTP(S) Load Balancer with an HTTPS frontend and an HTTP backend. I am getting the following error through Postman for my service:
Error: write EPROTO 140566936757448:error:10000410:SSL routines:OPENSSL_internal:SSLV3_ALERT_HANDSHAKE_FAILURE:../../third_party/boringssl/src/ssl/tls_record.cc:594:SSL alert number 40 140566936757448:error:1000009a:SSL routines:OPENSSL_internal:HANDSHAKE_FAILURE_ON_CLIENT_HELLO:../../third_party/boringssl/src/ssl/handshake.cc:603:
The VM itself works if I call it directly over HTTP. Is this setup possible, or what am I missing?
SSLv3 is not supported by the HTTPS load balancer. Please use a newer (and more secure) TLS version to call your HTTPS load balancer.
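For example, you can verify from the command line that a modern handshake succeeds by pinning curl to TLS 1.2 or later (the IP placeholder is illustrative, and --insecure only skips certificate verification for the test):
curl -vI --tlsv1.2 https://$LB_IP/ --insecure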

How to set the Host header in JAX-RS / Apache CXF

I am trying to access a service over HTTPS, but due to restrictive network settings I have to make the request through an SSH tunnel.
I create the tunnel with a command like:
ssh -L 9443:my-service.com:443 sdt-jump-server
The service is only available via HTTPS, it's hosted with a self-signed certificate, and it is behind a load balancer that uses either the hostname or an explicit Host header to route incoming requests to the appropriate backend service.
I am able to invoke the endpoint from my local system using curl like
curl -k -H 'Host: my-service.com' https://localhost:9443/path
However, when I try to use the CXF 3.1.4 implementation of JAX-RS to make the very same request, I can't seem to make it work. I configured a hostnameVerifier to allow the connection, downloaded the server's certificate, and added it to my truststore. Now I can connect, but it seems the load balancer is not honoring the Host header I'm trying to set.
I was lost for a bit until I set -Djavax.net.debug and saw that the Host header being passed was actually localhost and not the value I set. How can I make CXF honor the Host header I'm setting instead of using the value from the URL of the WebTarget?
CXF uses HttpURLConnection under the hood, so you need to allow restricted headers by setting a system property, either programmatically:
System.setProperty("sun.net.http.allowRestrictedHeaders", "true");
or at startup:
-Dsun.net.http.allowRestrictedHeaders=true
See also How to overwrite http-header "Host" in a HttpURLConnection?
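Putting the pieces together, here is a minimal sketch using the standard JAX-RS 2.x client API that CXF provides; the URL and Host value are the ones from the question, and the truststore/hostnameVerifier setup described above is omitted for brevity:
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.Response;

public class HostHeaderExample {
    public static void main(String[] args) {
        // Allow HttpURLConnection to send restricted headers such as Host
        System.setProperty("sun.net.http.allowRestrictedHeaders", "true");

        Client client = ClientBuilder.newClient();
        Response response = client
                .target("https://localhost:9443/path")  // the SSH tunnel endpoint
                .request()
                .header("Host", "my-service.com")       // the Host header the load balancer routes on
                .get();
        System.out.println(response.getStatus());
        client.close();
    }
}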

Google Cloud Load Balancer - 502 - Unmanaged instance group failing health checks

I currently have an HTTPS load balancer set up with the frontend, backend, and health check all on port 443, serving a single-host nginx instance.
When navigating directly to the host via browser the page loads correctly with valid SSL certs.
When trying to access the site through the load balancer IP, I receive a 502 Server Error message. I checked the Google logs and noticed "failed_to_pick_backend" errors at the load balancer. I also noticed that it is failing health checks.
Some digging around leads me to these two links: https://cloudplatform.googleblog.com/2015/07/Debugging-Health-Checks-in-Load-Balancing-on-Google-Compute-Engine.html
https://github.com/coreos/bugs/issues/1195
Issue #1 - Not sure if google-address-manager is running on the server (RHEL 7). I do not see an entry for the HTTPS load balancer IP in the routes. The Google SDK is installed. This is a Google-provided image and if I update the IP address in the console, it also gets updated on the host. How do I check if google-address-manager is running on RHEL 7?
[root@server]# ip route ls table local type local scope host
10.212.2.40 dev eth0 proto kernel src 10.212.2.40
127.0.0.0/8 dev lo proto kernel src 127.0.0.1
127.0.0.1 dev lo proto kernel src 127.0.0.1
Output of all Google services:
[root@server]# systemctl list-unit-files
google-accounts-daemon.service enabled
google-clock-skew-daemon.service enabled
google-instance-setup.service enabled
google-ip-forwarding-daemon.service enabled
google-network-setup.service enabled
google-shutdown-scripts.service enabled
google-startup-scripts.service enabled
Issue #2: Not receiving a 200 OK response. The certificate is valid and the same on both the LB and server. When running curl against the app server I receive this response:
[root@server.com]# curl -I https://app-server.com
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
Thoughts?
You should add firewall rules for the health check service (see https://cloud.google.com/compute/docs/load-balancing/health-checks#health_check_source_ips_and_firewall_rules) and make sure that your backend service listens on the load balancer IP (easiest is to bind to 0.0.0.0). This is definitely true for an internal load balancer; I'm not sure about HTTPS with an external IP.
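For example, a gcloud rule along these lines opens the documented health-check source ranges on port 443 (the rule and network names below are placeholders):
gcloud compute firewall-rules create allow-lb-health-checks --network default --allow tcp:443 --source-ranges 130.211.0.0/22,35.191.0.0/16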
A couple of updates and lessons learned:
I have found out that "google-address-manager" is now deprecated and has been replaced by "google-ip-forwarding-daemon", which is running.
[root@server ~]# sudo service google-ip-forwarding-daemon status
Redirecting to /bin/systemctl status google-ip-forwarding-daemon.service
google-ip-forwarding-daemon.service - Google Compute Engine IP Forwarding Daemon
Loaded: loaded (/usr/lib/systemd/system/google-ip-forwarding-daemon.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2017-12-22 20:45:27 UTC; 17h ago
Main PID: 1150 (google_ip_forwa)
CGroup: /system.slice/google-ip-forwarding-daemon.service
└─1150 /usr/bin/python /usr/bin/google_ip_forwarding_daemon
There is an active firewall rule allowing IP ranges 130.211.0.0/22 and 35.191.0.0/16 for port 443. The target is also properly set.
Finally, the health check is currently using the default "/" path. The developers had put authentication in front of the site during the development process, so if I bypassed the SSL cert error I received a 401 Unauthorized when running curl. This was the root cause of the issue we were experiencing. To remedy it, we modified the nginx basic authentication configuration to disable authentication for a new route (e.g. /health).
Once the nginx configuration was updated and the health check path was changed to the new /health route, we received valid 200 responses. This allowed the health check to report healthy instances and the LB to pass traffic through.
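For reference, the nginx change amounted to something like the following sketch (the /health path comes from this setup; the rest of the server block is assumed):
location /health {
    auth_basic off;   # disable basic auth for the health check path only
    return 200;
}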

How to resume an HTTP download served by Apache and svn module?

I'm using CollabNet Subversion server 1.7.6 (on Windows); I store large files (30 MB+) in an SVN repository and serve them via the bundled Apache HTTP server (2.2.22).
I find that downloads of SVN-served files cannot be resumed. In HTTP protocol terms, the client sends a Range: xxx- request header, but the server does not respond with a Content-Range: xxx-yyy header, so the client has to download the file from the first byte.
I also tried resuming an HTTP download from the Apache website's SVN server:
wget -c http://svn.apache.org/repos/asf/subversion/branches/1.7.x/CHANGES
I pressed Ctrl+C in the middle of the download, then ran wget -c again, with the same result.
Is there any workaround for this problem?
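As a diagnostic sketch (not a fix), you can check whether a server honors byte-range requests by sending one explicitly and dumping the response headers; a server that supports resuming answers 206 Partial Content with a Content-Range header, while a plain 200 OK means the Range header was ignored:
curl -s -o /dev/null -D - -H "Range: bytes=100-" http://svn.apache.org/repos/asf/subversion/branches/1.7.x/CHANGES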