Enabling gzip compression for a JBoss cluster with mod_jk load balancer (Apache)

We have JBoss configured in a cluster with Apache HTTP Server + mod_jk as a load balancer.
Do we need to configure anything on the Apache side, in addition to configuring compression on the JBoss connector?

In standard JBoss, gzip compression can be enabled for the HTTP connector but not for the AJP connector, which is what sits between the Apache HTTP server and JBoss.
To enable gzip compression on the Apache HTTP server side, add the following line to mod_jk.conf before </VirtualHost>:
AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css text/javascript application/x-javascript application/javascript
This enables gzip compression for the listed MIME types by means of the mod_deflate output filter (http://httpd.apache.org/docs/2.2/mod/mod_deflate.html).
Also uncomment the following line in httpd.conf so that mod_deflate is loaded:
LoadModule deflate_module modules/mod_deflate.so
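Putting both pieces together, the Apache-side fragments might look like this (a sketch; the paths assume the stock httpd layout, so adjust them to your install):

```apache
# httpd.conf: load the deflate filter module
LoadModule deflate_module modules/mod_deflate.so

# mod_jk.conf: inside the VirtualHost, before </VirtualHost>
AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css text/javascript application/x-javascript application/javascript
```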

Related

Apache deflate javascript compress ratio 1.00x

If I run gzip from the CLI, I get a good compression ratio:
bundle.js: 75.3% -- replaced with bundle.js.gz
But in Apache, even though I enabled deflate, the response is "compressed" yet the file size stays the same. Below is my Apache config:
LoadModule deflate_module libexec/apache2/mod_deflate.so
<IfModule deflate_module>
DeflateCompressionLevel 9
AddOutputFilterByType DEFLATE application/javascript text/plain text/css
CustomLog /var/log/deflate_log DEFLATE
</IfModule>
Below is the response:
ETag: "8342b-53dc33d01d2c0-gzip"
Server: Apache/2.4.23 (Unix)
Content-Type: application/javascript
Last-Modified: Sat, 01 Oct 2016 01:00:35 GMT
Date: Sun, 02 Oct 2016 01:14:20 GMT
Connection: Keep-Alive
Vary: Accept-Encoding
Accept-Ranges: bytes
Keep-Alive: timeout=5, max=98
Content-Encoding: gzip
Transfer-Encoding: Identity
The network transfer size is the same as before, and the ratio is 1.00x. I narrowed it down: only the js file does not get compressed, while css gets a good compression ratio of 6.22x. Is there something wrong with the js file?
I got it!
I noticed there is no "Content-Length" header in the response, so I went back to check the Apache documentation. It says:
The DeflateBufferSize directive specifies the size in bytes of the
fragments that zlib should compress at one time. If the compressed
response size is bigger than the one specified by this directive then
httpd will switch to chunked encoding (HTTP header Transfer-Encoding
set to Chunked), with the side effect of not setting any
Content-Length HTTP header. This is particularly important when httpd
works behind reverse caching proxies or when httpd is configured with
mod_cache and mod_cache_disk because HTTP responses without any
Content-Length header might not be cached.
As my js file is 500k, far over the default 8k setting, I added the following to the conf file, and now everything is good:
<IfModule deflate_module>
SetOutputFilter DEFLATE
AddOutputFilterByType DEFLATE application/javascript text/plain text/css
DeflateBufferSize 8096000
</IfModule>

ASP.NET vNext, enable compression to IIS 8 on Azure?

How would one enable gzip compression of responses with Content-Type application/json when an ASP.NET 5 app is deployed to IIS 8 on Azure? Typically this would have been done using web.config, but that's gone now. What's the new approach?
You need to reverse-proxy your Kestrel application; then you can tell the reverse proxy to compress.
In nginx, this goes as follows:
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    server_name localhost;
    gzip on;
    gzip_min_length 1000;
    #gzip_proxied expired no-cache no-store private auth;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    gzip_vary on;
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";
    location / {
        proxy_pass http://127.0.0.1:5004;
    }
}
So here nginx catches incoming requests on port 80 and forwards them to Kestrel on the same machine, on port 5004. Kestrel then sends the response back to nginx; since gzip is on, nginx compresses the response and sends it to the user. All you need to ensure is that the application on Kestrel does not set conflicting HTTP headers, such as HTTP/1.1 chunked encoding, when outputting, for example, a file (e.g. when using what used to be Response.TransmitFile).
IIS 7.5+ supports reverse proxying.
See here for more information:
https://serverfault.com/questions/47537/can-iis-be-configure-to-forward-request-to-another-web-server

How do I use mod_deflate in Apache 2.4.1 with Haproxy?

I have a strange issue whereby including the following line in my Apache 2.4.1 httpd.conf causes "502 Bad Gateway" errors when retrieving swf files via HAProxy:
AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css application/x-javascript text/javascript
When I remove this config line the 502 Bad Gateway error goes away.
The server returns this status and these response headers on a successful request:
200 OK
Date: Wed, 11 Apr 2012 20:24:12 GMT
Server: Apache
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html; charset=utf-8
I fixed this by updating to Apache 2.4.2 (there was a mod_deflate seg fault bug in 2.4.1) and adding:
Header append Vary User-Agent
beneath the AddOutputFilterByType line.
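Combined, the fixed configuration for Apache 2.4.2+ might look like this (a sketch; adjust the MIME-type list to your content, and note that Header requires mod_headers):

```apache
<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css application/x-javascript text/javascript
    # Tell intermediary caches that the response varies by User-Agent,
    # so a gzip variant is not served to a client that cannot handle it
    Header append Vary User-Agent
</IfModule>
```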

mod_pagespeed outputs stats but does nothing

I can see output stats with mod_pagespeed, but it does not seem to be doing anything (all stats values stay at 0).
serf_fetch_request_count: 0
serf_fetch_bytes_count: 0
serf_fetch_time_duration_ms: 0
serf_fetch_cancel_count: 0
Does anyone know what could be going wrong?
OK, I was able to find the offending lines in my config:
# does NOT work with mod_pagespeed
<FilesMatch "\.(js|css|html|htm|php|xml)$">
SetOutputFilter DEFLATE
</FilesMatch>
So if you have some fancy DEFLATE options, disable them. The code below, on the other hand, works:
# does WORK with mod_pagespeed
AddOutputFilterByType DEFLATE text/html text/plain text/xml font/opentype font/truetype font/woff

Apache and mod_proxy not handling HTTP 100-continue from client HTTP 417

I'm building some webby magic and am using Apache to front our Tomcat server, forwarding requests to Tomcat on port 8080. I have an issue using Apache and mod_proxy to forward requests: it appears the client (a web application) sends an HTTP Expect: 100-continue, to which Apache responds with a 417 Expectation Failed.
When I take Apache out of the picture and send requests directly to Tomcat on port 8080, the request is successful and the client is sent a 200 OK.
My Apache config looks like:
ServerName abcproxy
DocumentRoot /apps/apache-content/default
AddOutputFilterByType DEFLATE text/html text/plain text/css application/javascript text/xml
BrowserMatch ^Mozilla/4 gzip-only-text/html
BrowserMatch ^Mozilla/4\.0[678] no-gzip
BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
ExpiresActive on
ExpiresDefault "access 0 seconds"
ProxyPass / http://127.0.0.1:8080/
ProxyPassReverse / http://127.0.0.1:8080/
ProxyPreserveHost On
CustomLog /apps/ocp-logs/apache/abcproxy.log combined
Can anyone see where I'm going wrong?
Apache has a known and unresolved issue with Expect headers, see bug 46709 and bug 47087.
The issue is that some clients set the Expect header and send only the request headers before a PUT or POST of data. This allows the server to respond to errors/redirects/security violations before the client sends the request body (the PUT or POST data). This is a laudable goal, but apparently the client does not wait until it gets a response and just pushes out the body of the request, which results in the 417 error.
If you have control over a .NET client you can use ServicePointManager.Expect100Continue Property set to false, to override this behavior.
If you only have control over the server, it looks like you can either force HTTP 1.0 for those clients (perhaps based on user agent string) or force unset the Expect header using mod_header early on in the request.
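For the first option, a hedged sketch using the documented downgrade-1.0 and force-response-1.0 environment variables; the user-agent pattern here is a placeholder for whatever identifies the affected client:

```apache
<IfModule mod_setenvif.c>
    # Treat requests from the problematic client as HTTP/1.0,
    # which has no Expect/100-continue mechanism
    BrowserMatch "^ProblemClientUA" downgrade-1.0 force-response-1.0
</IfModule>
```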
To remove the Expect header from the request early using mod_headers use this config directive:
<IfModule mod_headers.c>
RequestHeader unset Expect early
</IfModule>
This works because the client is not actually waiting for the "100 Continue" response, so removing the header lets the request proceed as if Expect were never set.
In our particular case, it was the proxy answering with 417.
Then again, the deploy seemed to have ignored the nonProxyHosts settings.
Effectively, we had run into this bug: https://github.com/mojohaus/jaxb2-maven-plugin/issues/60. The jaxb2-maven-plugin mangled our proxy settings, and the proxy answered with 417.
mvn clean deploy
failed.
While
mvn deploy
worked.
The best workaround I found (see the issue linked above) was to use a different wagon that does not get broken by jaxb2-maven-plugin (version 2.4 is still known to have this proxy bug):
<extensions>
  <extension>
    <groupId>org.apache.maven.wagon</groupId>
    <artifactId>wagon-http-lightweight</artifactId>
    <version>3.3.2</version>
  </extension>
</extensions>
I hit this 417 Expectation Failed error while configuring Ivanti cloud services (ITSM) with API integrations to Tufin SecureChange (firewall rule automation), running Apache on the frontend. This fix solved my issue.