Tomcat 6: How can HttpServletResponse make the browser react immediately, before response.getOutputStream().close() is called? - file-io

This is the code in my servlet:
InputStream in = new FileInputStream(file); // the file being sent
OutputStream out = response.getOutputStream();
byte[] bytes = new byte[8192];
int read;
while ((read = in.read(bytes)) != -1) { // read the file into the buffer
    out.write(bytes, 0, read);
    out.flush();
    log4j.debug(response.isCommitted()); // logs true
}
If my file is 100 MB, the server reads the whole 100 MB into memory before the browser shows the download dialog.
The wait in the browser becomes terrible when the file is larger than 2 GB.

Browser compatibility problems, from Servlet Best Practices, Part 3 by The O'Reilly Java Authors:
The bad news is that although the HTTP specification provides a
mechanism for file downloads (see HTTP/1.1, Section 19.5.1), many
browsers second-guess the server's directives and do what they think
is best rather than what they're told.
The good news is that the right combination of headers will download
files well enough to be practical. With these special headers set, a
compliant browser will open a Save As dialog, while a noncompliant
browser will open the dialog for all content except HTML or image
files.
Set the Content-Type header to a nonstandard value such as application/x-download.
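Putting the question and that advice together, here is a minimal sketch of a download servlet. It is not the book's code, and the file path, buffer size and class name are illustrative. The point is that the headers go out first and the file is streamed in small, flushed buffers, so the browser can show its Save As dialog long before the whole file has been read:
import java.io.*;
import javax.servlet.http.*;

public class DownloadServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        File file = new File("/data/big-file.zip"); // illustrative path

        // Headers that trigger the download dialog in most browsers.
        resp.setContentType("application/x-download");
        resp.setHeader("Content-Disposition", "attachment; filename=\"" + file.getName() + "\"");
        resp.setHeader("Content-Length", String.valueOf(file.length()));

        InputStream in = new BufferedInputStream(new FileInputStream(file));
        OutputStream out = resp.getOutputStream();
        try {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read); // only 8 KB is held in memory at a time
            }
            out.flush();
        } finally {
            in.close();
        }
    }
}
Content-Length is set with setHeader rather than setContentLength(int) because the latter takes an int and cannot represent files larger than 2 GB, which matches the size mentioned in the question.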

Related

Serve gzipped content via a Web Resource

I have a Dynamics 365 instance that makes heavy use of custom front-end interfaces using a modern Nodejs-based build pipeline involving the usual suspects such as webpack/babel/etc. I'm hosting these files as webresources in Dynamics (one html file and one bundle.js file per SPA).
As my team nears production, I'm trying to set up a nice production build for our front-end stuff to reduce load times. Unfortunately, I can't find a good way to serve our bundle.js files encoded as gzip, because Dynamics does not return the Content-Encoding: gzip header when a request is made, and therefore the browser doesn't decompress the file and tries to read the compressed file as plain JavaScript.
Of course, we can serve the uncompressed file just fine but we would like to provide the smaller, faster loading file if possible as it's generally about 1/3 the size.
Does anyone have any brilliant ideas for how to override the default response headers coming back from dynamics when I request a web resource? Or any other clever solutions to this problem?
Thanks, and let me know if any clarification is needed.
I don't know of any way to serve gzipped content via a web resource.
If the download size is a huge concern, perhaps encode the gzipped code to base64 and store it as a string variable in JS.
Then during execution you could decode, unzip, and eval() the code.
You could also store base64 gzipped code as a file attachment via an annotation record or within an XML web resource, though those options would require an additional API call to get the code, so a string variable may be your best bet.

Does apache commons fileupload support chunked uploads?

We have moved to using PLupload for file uploads and found that it can support "chunked" file uploads. The problem is that our server sees one large file upload as multiple smaller files coming in multiple POST requests.
Does anybody know if Apache Commons FileUpload supports chunked uploads?
FWIW, looking at the PLupload web page, the "Chunking" they are talking about is not "HTTP chunking": http://www.plupload.com/index.php
Their marketing term "Chunking" refers to sending a large payload up in small, separate HTTP requests. The server is required to have logic to group, stitch together and verify all the small parts. You are better off getting help on their forum for this. There is no reason why this logic cannot be implemented by you on the server side, and maybe they have example Java code implementing it.
Useful info and a pointer to their upload.php example (which you could perhaps convert to Java on top of Apache Commons FileUpload):
http://www.plupload.com/punbb/viewtopic.php?id=1484
What you are observing, small segments of a file arriving as if they were separate files, is exactly how the "PLupload chunking" mechanism works. This technique is not defined in any standard, but it is also not an uncommon solution to the problems it addresses.
"HTTP chunking", by contrast, is a standard that defines how to transfer a single HTTP request (and/or HTTP response) between client and server using an HTTP transfer encoding. It is supported by all web servers and all browsers and has been around for a long time (since HTTP/1.1).

Pre-compress static files in IIS 6

I am implementing Gzip compression for CSS and JS files on my site and just need to double check something.
Is the file compressed on every request, or is it compressed once and then served from the temporary compression folder (if the compressed file already exists)? I just want to be sure that my files are not compressed on every request.
Also, is this the default behaviour, or do I need some extra configuration?
And last, do I need to worry about or configure anything when using hashes in the path (to inform the browser that the file has changed) together with static file compression, or should it work without problems?
Edit: I am just using static compression
Many thanks
In order to get the most out of IIS compression you will need to add a few extra bits to the metabase file.
Back up your metabase file.
Enable live editing of the metabase file in IIS (otherwise you will need to restart IIS when you're done).
Find the IIsCompressionScheme elements and make the following edits to the metabase file:
<IIsCompressionScheme Location ="/LM/W3SVC/Filters/Compression/deflate"
HcCompressionDll="%windir%\system32\inetsrv\gzip.dll"
HcCreateFlags="0"
HcDoDynamicCompression="TRUE"
HcDoOnDemandCompression="TRUE"
HcDoStaticCompression="TRUE"
HcDynamicCompressionLevel="10"
HcFileExtensions="htm
html
css
js
txt
xml"
HcOnDemandCompLevel="10"
HcPriority="1"
HcScriptFileExtensions="asp
dll
aspx
axd
ashx
asbx
asmx
swf
exe"
>
</IIsCompressionScheme>
<IIsCompressionScheme Location ="/LM/W3SVC/Filters/Compression/gzip"
HcCompressionDll="%windir%\system32\inetsrv\gzip.dll"
HcCreateFlags="1"
HcDoDynamicCompression="TRUE"
HcDoOnDemandCompression="TRUE"
HcDoStaticCompression="TRUE"
HcDynamicCompressionLevel="10"
HcFileExtensions="htm
html
js
css
txt
xml"
HcOnDemandCompLevel="10"
HcPriority="1"
HcScriptFileExtensions="asp
dll
aspx
axd
ashx
asbx
asmx
swf
exe"
>
</IIsCompressionScheme>
Once done, test a page from your site using a Firefox plug-in such as YSlow or Firebug; with Firebug you can inspect each element in the Net tab and check whether the right compression is being applied to the right file types.
There is a great article with examples here: http://www.codinghorror.com/blog/2004/08/http-compression-and-iis-6-0.html
IIS 6 supports both dynamic and static compression.
Have a look at the relevant documentation and a decent blog entry on the subject.
"The newly compressed file is then stored in the compression directory, and subsequent requests for that file are serviced directly from the compression directory. In other words, an uncompressed version of the file is returned to the client unless a compressed version of the file already exists in the compression directory."*
Taken from this article.

Accept-Encoding headers being sent by browser but not received by server

I have been trying to debug this for weeks. All of the browsers on all of the clients on my home network are sending 'Accept-Encoding: gzip,deflate'. However, that header is somehow, somewhere being dropped before the request makes it to a web server. For example, http://www.whatsmyip.org/http_compression/ says 'No, your browser is not requesting compressed content'.
I've used Fiddler to make sure that all of my browsers are indeed sending the header. I've swapped out my router. I've turned off all anti-virus software.
Brighthouse/Roadrunner (the local cable ISP) says they are not doing any filtering (and I can't see why they would in this case).
Any suggestions would be most welcome!
Try it with HTTPS.
If you are browsing a site via HTTPS, nothing between your browser and the web server can alter any HTTP-level aspect of the request or response, including whether compression is enabled, without you having immediate and clear knowledge of that fact (check the site's certificate in your browser address bar and see if it's legit).
I had the Accept-Xncoding issue and determined it was CA Internet Security Suite causing it. Disabling wasn't enough; you had to uninstall it and then clear the IE cache.
Check your antivirus software. It's probably intercepting your outbound traffic and modifying headers on the fly in order to get uncompressed content. Lazy programmers don't like to include decompression methods themselves, or deal with chunked encoding.
Norton Internet Security will overwrite accept encoding with this line:
---------------: ----- -------
McAfee overwrites with this:
X-McProxyFilter: *************
something I haven't identified yet overwrites with this:
Accept-Xncoding: gzip, deflate
You are probably in the same boat. I read Zone Alarm wipes out the encoding header entirely (which means recalculating the size of the packet, but why should they care how much load they introduce on your system?). If you're running Zone Alarm, turn off the 'internet privacy option' or whatever it is, and try again.
Every time I've seen this problem, it has been the result of shitty antivirus software. Completely disabling someone's ability to receive compressed content without letting them know is dirty.
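If you want to confirm exactly which headers survive the trip from the browser to your application (rather than trusting what Fiddler shows on the client side), a tiny header-echo servlet is enough. A minimal sketch, assuming a standard Java servlet container; the class name is illustrative:
import java.io.IOException;
import java.io.PrintWriter;
import java.util.Enumeration;
import javax.servlet.http.*;

public class HeaderEchoServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        resp.setContentType("text/plain");
        PrintWriter out = resp.getWriter();
        Enumeration<?> names = req.getHeaderNames();
        while (names.hasMoreElements()) {
            String name = (String) names.nextElement();
            // Print each header exactly as the server received it.
            out.println(name + ": " + req.getHeader(name));
        }
    }
}
If Accept-Encoding is missing or mangled in this output while Fiddler shows the browser sending it, something between the client and the server (antivirus, proxy, firewall) is rewriting the request.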

HTTP compression - How to send precompressed files that exist in a EAR file?

Is it possible to send pre-compressed files that are contained within an EAR file? More specifically, the JSP and JS files within the WAR file. I am using Apache HTTP Server as the web server, and although it is simple to turn on the deflate module and set it up to use a pre-compressed version of the files, I would like to apply this to files that are contained within an EAR file that is deployed to JBoss. The reason is that the content is quite static, and compressing it on the fly on each request is quite costly in terms of CPU time.
Quite frankly, I am not entirely familiar with how JBoss deploys these EAR files and 'serves' them. The gist of what I want to do is pre-compress the files contained inside the war so that when they are requested they are sent back to the client with gzip for Content-Encoding.
In theory, you could compress them before packaging them in the EAR, and then serve them up with a custom controller that adds the HTTP header to the response telling the client they're compressed, but that seems like a lot of effort to go to.
When you say that on-the-fly compression is quite costly, have you actually measured it? Have you tried requesting a large number of uncompressed pages, measured the CPU usage, then tried it again with compressed pages? I think you may be over-estimating the impact. It uses quite low-intensity stream compression, designed to use few CPU resources.
You need to be very sure that you have a real performance problem before going to such lengths to mitigate it.
I don't frequent this site often and I seem to have left this thread hanging. Sorry about that. I did succeed in getting compression for my JavaScript and CSS files. What I did was precompress them in the Ant build process using gzip. I then had to spoof the name to get rid of the gzip extension: I had foo.js, compressed it into foo.js.gzip, and renamed foo.js.gzip back to foo.js; that is the file that gets packaged into the WAR file. So that handles the precompression part. To get this file served up properly, we just have to tell the browser that the file is compressed, via the Content-Encoding header of the HTTP response. This was done via an output filter applied to files that match the *.js extension (some Java/JBoss, WEB-INF/web.xml if it helps; I'm not too familiar with this, so sorry guys).
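For anyone looking for the shape of that filter, here is a minimal sketch of the idea described above. The class name is illustrative; the real filter would be mapped to *.js (and *.css, if you pre-compress those too) in WEB-INF/web.xml.
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletResponse;

public class PrecompressedEncodingFilter implements Filter {
    public void init(FilterConfig config) throws ServletException { }

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        // The packaged .js files already contain gzip data, so all we have to do
        // is tell the client how to decode them.
        ((HttpServletResponse) response).setHeader("Content-Encoding", "gzip");
        chain.doFilter(request, response);
    }

    public void destroy() { }
}
A fuller version would also check that the request carries Accept-Encoding: gzip and fall back to an uncompressed copy of the file when it does not.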