How to enable gzip compression in libwebsockets on ESP32

I am running a webserver on an ESP32 chip using the libwebsockets library. The server files are stored in a ROMFS partition on the ESP32.
I am currently trying to improve the loading time by concatenating, minifying and compressing the JavaScript, HTML and CSS files.
The concatenation and minification worked properly; I now only have a concatenated.js and a concatenated.css file in my website. The issue came when I tried to get the compression working.
Initially, I thought my server would compress the files by itself before sending them. However, when I looked at the transfer in the Chrome developer tools, I found that the GET request for the JavaScript file was answered with "content-type: text/javascript" and no "content-encoding: gzip" header, i.e. uncompressed.
I tried several solutions I could think of, but none seem to work:
1. Gzip the file before creating the ROMFS (i.e. there is now only a concatenated.js.gz in my ROMFS file system): the server returns 404 when trying to access "concatenated.js".
2. Gzip the file before creating the ROMFS and keep it alongside the original file (I was thinking maybe libwebsockets would see both and pick the more efficient one): the server only ever returns the .js file, never the .gz file.
Does anybody know how to enable gzip compression in libwebsockets? I am guessing there must be some option I don't have enabled, but it has been hard to find resources on the web; most of them only discuss libwebsockets' ability to serve gzipped content from inside a zip file.
Regards,

The issue ended up coming directly from the libwebsockets code.
When opening a file on the ESP32, there was no logic in place to look for a file with the same name plus ".gz" appended. The logic to look for such a file when the browser accepts gzip needed to be added to the open function.
This change was made on an older version of libwebsockets, and as such may not apply to the latest version (for anybody looking at this modification). Also, I needed to include <string.h> to get access to the string manipulation functions:
libwebsockets/lib/plat/freertos/esp32/esp32-helpers.c -> function esp32_lws_fops_open
Replace
f->i = romfs_get_info(lws_esp32_romfs, filename, &len, &csum);
By
// if the browser accepts gzip, look for a precompressed "<name>.gz" first
f->i = NULL;
if ((*flags & LWS_FOP_FLAG_COMPR_ACCEPTABLE_GZIP) == LWS_FOP_FLAG_COMPR_ACCEPTABLE_GZIP) {
    char *filename_gz = malloc(strlen(filename) + 3 + 1); // ".gz" + null terminator

    if (filename_gz) {
        sprintf(filename_gz, "%s.gz", filename);
        f->i = romfs_get_info(lws_esp32_romfs, filename_gz, &len, &csum);
        free(filename_gz); // the name is only needed for the lookup (the original snippet leaked it)
    }
}
// if we haven't found a .gz file (not acceptable, or no such file), search for the regular file
if (!f->i) {
    f->i = romfs_get_info(lws_esp32_romfs, filename, &len, &csum);
} else {
    // otherwise, set the flag to let the library know the file being transferred is already gzipped
    *flags |= LWS_FOP_FLAG_COMPR_IS_GZIP;
}
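With this lookup order, keeping the .gz file alongside the original (the second layout tried above) is the safer choice: browsers that advertise gzip support are served concatenated.js.gz with LWS_FOP_FLAG_COMPR_IS_GZIP set, while everything else falls back to the plain concatenated.js. If only the .gz file is placed in the ROMFS, a client that does not accept gzip will still get a 404.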

Related

Serve gzipped content via a Web Resource

I have a Dynamics 365 instance that makes heavy use of custom front-end interfaces built with a modern Node.js-based pipeline involving the usual suspects such as webpack/babel/etc. I'm hosting these files as web resources in Dynamics (one HTML file and one bundle.js file per SPA).
As my team nears production, I'm trying to set up a nice production build for our front-end stuff to reduce load times. Unfortunately, I can't find a good way to serve our bundle.js files gzip-encoded, because Dynamics does not return the "Content-Encoding: gzip" header when a request is made, so the browser doesn't decompress the file and tries to read the compressed bytes as plain JavaScript.
Of course, we can serve the uncompressed file just fine, but we would like to provide the smaller, faster-loading file if possible, as it's generally about 1/3 the size.
Does anyone have any brilliant ideas for how to override the default response headers coming back from Dynamics when I request a web resource? Or any other clever solutions to this problem?
Thanks, and let me know if any clarification is needed.
I don't know of any way to serve gzipped content via a web resource.
If the download size is a huge concern, perhaps encode the gzipped code to base64 and store it as a string variable in JS.
Then during execution you could decode, unzip, and eval() the code.
You could also store base64 gzipped code as a file attachment via an annotation record or within an XML web resource, though those options would require an additional API call to get the code, so a string variable may be your best bet.
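To illustrate the encode half of that idea, here is a small build-time sketch (Java is used purely for illustration, and the file and variable names are placeholders) that gzips a bundle and wraps it in a base64 string variable; the browser side would then decode it with atob(), unzip it with a JS gzip library such as pako, and eval() the result:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Base64;
import java.util.zip.GZIPOutputStream;

public class GzipToBase64 {
    public static void main(String[] args) throws IOException {
        byte[] raw = Files.readAllBytes(Paths.get("bundle.js"));

        // gzip the bundle in memory
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
            gz.write(raw);
        }

        // emit a JS file holding the payload as a base64 string variable
        String b64 = Base64.getEncoder().encodeToString(buf.toByteArray());
        String out = "var BUNDLE_GZ_B64 = \"" + b64 + "\";\n";
        Files.write(Paths.get("bundle.b64.js"),
                out.getBytes(StandardCharsets.UTF_8));
    }
}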

Server side: detecting if a user is downloading (save as...) or visualizing a file in the browser

I'm writing an apache2 module
By default, when the file is viewed in a web browser, the module would only print the first lines of a large file and convert them to HTML.
If the user chooses 'Save as...', the whole raw file would be downloaded.
Is it possible to detect this choice on the server side? (For example, is there a specific HTTP header set?)
Note: I would like to avoid any parameter in the GET URL (e.g. "http://example.org/file?mode=raw").
Pierre
Added my own answer to close the question: as #alexeyten said, there is no difference. I ended up with JavaScript code that alters the index.html file generated by apache.

Tomcat 6: how does HttpServletResponse make the browser react immediately, without having called response.getOutputStream().close()?

This is the code in my servlet:
// 'in' is the file's InputStream; read it into the buffer chunk by chunk
byte[] bytes = new byte[8192];
int read;
while ((read = in.read(bytes)) != -1) {
    response.getOutputStream().write(bytes, 0, read);
    response.getOutputStream().flush();
    log4j.debug(response.isCommitted()); // prints true
}
If my file is 100MB, the server must read the whole 100MB into memory before the browser shows its download dialog.
The browser's waiting time becomes terrible when the file is greater than 2GB...
Browser compatibility problems, from Servlet Best Practices, Part 3 by The O'Reilly Java Authors:
The bad news is that although the HTTP specification provides a
mechanism for file downloads (see HTTP/1.1, Section 19.5.1), many
browsers second-guess the server's directives and do what they think
is best rather than what they're told.
The good news is that the right combination of headers will download
files well enough to be practical. With these special headers set, a
compliant browser will open a Save As dialog, while a noncompliant
browser will open the dialog for all content except HTML or image
files.
Set the Content-Type header to a nonstandard value such as application/x-download.
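A minimal sketch of that header combination inside a servlet (the filename is a placeholder; Content-Disposition is the download mechanism from HTTP/1.1 Section 19.5.1 that the excerpt refers to):

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class DownloadServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // a nonstandard type discourages browsers from rendering inline
        resp.setContentType("application/x-download");
        // ask compliant browsers to open a Save As dialog
        resp.setHeader("Content-Disposition",
                "attachment; filename=\"largefile.bin\"");
        // ...then stream the file bytes to resp.getOutputStream()
    }
}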

Pre-compress static files in IIS 6

I am implementing Gzip compression for CSS and JS files on my site and just need to double check something.
Is the file compressed on every request? Or is it compressed once and then served from the temporary compression folder when it already exists there? I just want to be sure that my files are not compressed on every request.
Also, is this the default behaviour or do I need some extra configuration?
And last, do I need to worry about or configure anything when using hashes in the path (to inform the browser that the file has changed) together with static file compression? Or should it work with no problem?
Edit: I am just using static compression
Many thanks
In order to get the most out of IIS compression you will need to add a few extra bits to the metabase file.
Back up your metabase file.
Enable direct editing of the metabase in IIS (otherwise you will need to restart IIS when you're done).
Find the IIsCompressionScheme elements and make the following edits to the metabase file:
<IIsCompressionScheme Location ="/LM/W3SVC/Filters/Compression/deflate"
HcCompressionDll="%windir%\system32\inetsrv\gzip.dll"
HcCreateFlags="0"
HcDoDynamicCompression="TRUE"
HcDoOnDemandCompression="TRUE"
HcDoStaticCompression="TRUE"
HcDynamicCompressionLevel="10"
HcFileExtensions="htm
html
css
js
txt
xml"
HcOnDemandCompLevel="10"
HcPriority="1"
HcScriptFileExtensions="asp
dll
aspx
axd
ashx
asbx
asmx
swf
exe"
>
</IIsCompressionScheme>
<IIsCompressionScheme Location ="/LM/W3SVC/Filters/Compression/gzip"
HcCompressionDll="%windir%\system32\inetsrv\gzip.dll"
HcCreateFlags="1"
HcDoDynamicCompression="TRUE"
HcDoOnDemandCompression="TRUE"
HcDoStaticCompression="TRUE"
HcDynamicCompressionLevel="10"
HcFileExtensions="htm
html
js
css
txt
xml"
HcOnDemandCompLevel="10"
HcPriority="1"
HcScriptFileExtensions="asp
dll
aspx
axd
ashx
asbx
asmx
swf
exe"
>
</IIsCompressionScheme>
Once done, test a page from your site using a Firefox plug-in like YSlow or Firebug; with Firebug you can inspect each element in the Net tab and check that the right compression is being applied to the right file types.
There is a great article with examples here: http://www.codinghorror.com/blog/2004/08/http-compression-and-iis-6-0.html
IIS 6 supports both dynamic and static compression.
Have a look at the relevant documentation and a decent blog entry on the subject.
"The newly compressed file is then stored in the compression directory, and subsequent requests for that file are serviced directly from the compression directory. In other words, an uncompressed version of the file is returned to the client unless a compressed version of the file already exists in the compression directory."*
Taken from this article.

HTTP compression - How to send precompressed files that exist in a EAR file?

Is it possible to send pre-compressed files that are contained within an EAR file? More specifically, the JSP and JS files within the WAR file. I am using Apache HTTP Server as the web server, and although it is simple to turn on the deflate module and set it up to use a pre-compressed version of the files, I would like to apply this to files contained within an EAR file that is deployed to JBoss. The reason is that the content is quite static, and compressing it on the fly on every request is quite costly in terms of CPU time.
Quite frankly, I am not entirely familiar with how JBoss deploys these EAR files and 'serves' them. The gist of what I want to do is pre-compress the files contained inside the war so that when they are requested they are sent back to the client with gzip for Content-Encoding.
In theory, you could compress them before packaging them in the EAR, and then serve them up with a custom controller which adds the HTTP header to the response that tells the client they're compressed, but that seems like a lot of effort to go to.
When you say that on-the-fly compression is quite costly, have you actually measured it? Have you tried requesting a large number of uncompressed pages, measured the CPU usage, then tried it again with compressed pages? I think you may be over-estimating the impact: mod_deflate uses quite low-intensity stream compression, designed to use little CPU.
You need to be very sure that you have a real performance problem before going to such lengths to mitigate it.
I don't frequent this site often and I seem to have left this thread hanging; sorry about that. I did succeed in getting compression for my JavaScript and CSS files. What I did was precompress them in the Ant build process using gzip, then spoof the name to get rid of the gzip extension: I had foo.js, compressed it into foo.js.gzip, and renamed foo.js.gzip back to foo.js, and this is the file that gets packaged into the WAR file. That handles the precompression part. To get the file served properly, we just have to tell the browser that it is compressed, via the Content-Encoding header of the HTTP response. This was done via an output filter applied to files matching the *.js extension (some Java/JBoss configuration in WEB-INF/web.xml, if that helps; I'm not too familiar with this, so sorry guys).
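A minimal sketch of such a filter (the class name is illustrative, not from the original post), assuming it is mapped to *.js in WEB-INF/web.xml; the bytes on disk are already gzip data, so the filter only adds the header and lets the container stream the file:

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletResponse;

public class GzipEncodingFilter implements Filter {
    public void init(FilterConfig cfg) {}

    public void doFilter(ServletRequest req, ServletResponse res,
                         FilterChain chain)
            throws IOException, ServletException {
        // mark the (already gzipped) static file so the browser inflates it
        ((HttpServletResponse) res).setHeader("Content-Encoding", "gzip");
        chain.doFilter(req, res);
    }

    public void destroy() {}
}

Note that this declares gzip unconditionally; a client that does not send "Accept-Encoding: gzip" would receive bytes it cannot decode, so checking the request's Accept-Encoding header before setting the response header would be safer.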