I have a Dynamics 365 instance that makes heavy use of custom front-end interfaces built with a modern Node.js-based pipeline involving the usual suspects such as webpack, Babel, etc. I'm hosting these files as web resources in Dynamics (one HTML file and one bundle.js file per SPA).
As my team nears production, I'm trying to set up a proper production build for our front-end assets to reduce load times. Unfortunately, I can't find a good way to serve our bundle.js files gzip-encoded, because Dynamics does not return the Content-Encoding: gzip header when a web resource is requested; the browser therefore doesn't decompress the file and tries to read the compressed bytes as plain JavaScript.
Of course, we can serve the uncompressed file just fine, but we would like to provide the smaller, faster-loading file if possible, as it's generally about a third of the size.
Does anyone have any brilliant ideas for how to override the default response headers coming back from Dynamics when I request a web resource? Or any other clever solutions to this problem?
Thanks, and let me know if any clarification is needed.
I don't know of any way to serve gzipped content via a web resource.
If the download size is a huge concern, perhaps encode the gzipped code to base64 and store it as a string variable in a JS web resource.
Then at runtime you could decode, unzip, and eval() the code.
You could also store the base64-encoded gzipped code as a file attachment on an annotation record or within an XML web resource, though those options would require an additional API call to retrieve the code, so a string variable may be your best bet.
Is it possible to DiskCache non-image files without losing their type when rendered in the browser?
I followed the instructions on this page:
http://imageresizing.net/docs/v4/howto/cache-non-images
Per the instructions, I handle the PostAuthorizeRequestStart event and set cache = Always there. I also added the .unknown mimeType to the config.
However, when an XML file is requested, it is returned with Content-Type "application/octet-stream" instead of "text/xml".
Is there any way to preserve the original content type of non-image files?
I'm afraid not - at least not without modifying ImageResizer source code.
We made the decision to prioritize security and save all files with the ".unknown" extension to prevent them from accidentally being executed as scripts by IIS. IIS sends the content type based on the file extension, and (depending on your IIS configuration) the extension determines whether the file should be executed as code.
I see no harm in expanding the "whitelist" of extensions to include non-image file types, as long as we're reasonably confident that other users haven't allowed IIS to consider those file types executable.
The code that would need to be modified (in v4+) is HttpModuleRequestAssistant.EstimateResponseInfo. Instead of falling back to "unknown" immediately, a second whitelist could be consulted.
If you file an issue on GitHub about this, you can subscribe to notifications. We'd definitely accept a pull request addressing this feature request, particularly during the current v4 prerelease phase when changes to the pipeline are less risky.
We have moved to using PLupload for file uploads and found that it can support "chunked" file uploads. The problem is that our server sees one large file upload as multiple smaller files coming in multiple POST requests.
Does anybody know if Apache Commons FileUpload supports chunked uploads?
FWIW, looking at the PLupload web page, the "chunking" they are talking about is not HTTP chunking: http://www.plupload.com/index.php
Their marketing term "chunking" refers to their concept of sending a large payload up in small, separate HTTP requests. The server is required to have logic to group, stitch together, and verify all the small parts. You are better off getting help on their forum for this. There is no reason this logic can't be implemented by you on the server side, and maybe they have example Java code implementing it.
Useful info and a pointer to their upload.php example (which you could perhaps port to Java on top of Apache Commons FileUpload):
http://www.plupload.com/punbb/viewtopic.php?id=1484
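To give a feel for what that server-side logic might look like on top of Apache Commons FileUpload, here is a rough sketch of a servlet that reassembles the chunks. It assumes PLupload's default multipart field names (name, chunk, chunks); the /tmp/uploads directory, class name, and minimal error handling are placeholders for illustration, so adapt them to your setup:

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.List;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.commons.fileupload.FileItem;
import org.apache.commons.fileupload.disk.DiskFileItemFactory;
import org.apache.commons.fileupload.servlet.ServletFileUpload;

public class ChunkedUploadServlet extends HttpServlet {

    private static final File UPLOAD_DIR = new File("/tmp/uploads"); // placeholder path

    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        try {
            List<FileItem> items = new ServletFileUpload(new DiskFileItemFactory()).parseRequest(req);

            String fileName = null;
            int chunk = 0, chunks = 1;
            FileItem filePart = null;

            // PLupload sends the file part plus "name", "chunk" (0-based) and "chunks" fields.
            for (FileItem item : items) {
                if (item.isFormField()) {
                    String value = item.getString();
                    if ("name".equals(item.getFieldName())) fileName = value;
                    if ("chunk".equals(item.getFieldName())) chunk = Integer.parseInt(value);
                    if ("chunks".equals(item.getFieldName())) chunks = Integer.parseInt(value);
                } else {
                    filePart = item;
                }
            }
            if (fileName == null) fileName = filePart.getName();

            // Append this chunk to a partial file; chunk 0 starts a fresh file.
            UPLOAD_DIR.mkdirs();
            File partial = new File(UPLOAD_DIR, fileName + ".part");
            InputStream in = filePart.getInputStream();
            FileOutputStream out = new FileOutputStream(partial, chunk > 0);
            try {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) out.write(buf, 0, n);
            } finally {
                out.close();
                in.close();
            }

            // Last chunk received: the "stitching" is done, promote the partial file.
            if (chunk == chunks - 1) {
                partial.renameTo(new File(UPLOAD_DIR, fileName));
            }
        } catch (Exception e) {
            resp.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR, e.getMessage());
        }
    }
}

A real implementation would also want to verify the reassembled file (size or checksum) and guard against concurrent uploads that share a file name.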
What you are observing, the small segments of a file arriving as if they were separate files, is exactly how the PLupload chunking mechanism works. This technique is not defined in any standard, but it is also not an uncommon solution to the problems it addresses.
HTTP chunking, on the other hand, is a standard defining how to transfer a single HTTP request (and/or HTTP response) between client and server using an HTTP transfer encoding. It is supported by all web servers and all browsers and has been around for a long time (since HTTP/1.1).
I am implementing Gzip compression for CSS and JS files on my site and just need to double check something.
Is the file compressed on every request, or is it picked up from the temporary compression folder if it already exists there? I just want to be sure that my files are not being compressed on every request.
Also, is this the default behaviour, or do I need some extra configuration?
And lastly, do I need to worry about or configure anything when combining static file compression with hashes in the path (used to tell the browser that the file has changed), or should that work with no problem?
Edit: I am just using static compression
Many thanks
In order to get the most out of IIS compression you will need to add a few extra bits to the metabase file.
Back up your metabase file.
Enable direct (live) editing of the metabase in IIS (otherwise you will need to restart IIS when you're done).
Find the IIsCompressionScheme elements and make the following edits to the metabase file:
<IIsCompressionScheme Location="/LM/W3SVC/Filters/Compression/deflate"
    HcCompressionDll="%windir%\system32\inetsrv\gzip.dll"
    HcCreateFlags="0"
    HcDoDynamicCompression="TRUE"
    HcDoOnDemandCompression="TRUE"
    HcDoStaticCompression="TRUE"
    HcDynamicCompressionLevel="10"
    HcFileExtensions="htm
        html
        css
        js
        txt
        xml"
    HcOnDemandCompLevel="10"
    HcPriority="1"
    HcScriptFileExtensions="asp
        dll
        aspx
        axd
        ashx
        asbx
        asmx
        swf
        exe"
>
</IIsCompressionScheme>
<IIsCompressionScheme Location="/LM/W3SVC/Filters/Compression/gzip"
    HcCompressionDll="%windir%\system32\inetsrv\gzip.dll"
    HcCreateFlags="1"
    HcDoDynamicCompression="TRUE"
    HcDoOnDemandCompression="TRUE"
    HcDoStaticCompression="TRUE"
    HcDynamicCompressionLevel="10"
    HcFileExtensions="htm
        html
        js
        css
        txt
        xml"
    HcOnDemandCompLevel="10"
    HcPriority="1"
    HcScriptFileExtensions="asp
        dll
        aspx
        axd
        ashx
        asbx
        asmx
        swf
        exe"
>
</IIsCompressionScheme>
Once done, test a page from your site using a Firefox plug-in like YSlow or Firebug; with Firebug you can inspect each element in the Net tab and check whether the right compression is being applied to the right file types.
There is a great article with examples here: http://www.codinghorror.com/blog/2004/08/http-compression-and-iis-6-0.html
IIS 6 supports both dynamic and static compression.
Have a look at the relevant documentation and a decent blog entry on the subject.
"The newly compressed file is then stored in the compression directory, and subsequent requests for that file are serviced directly from the compression directory. In other words, an uncompressed version of the file is returned to the client unless a compressed version of the file already exists in the compression directory."*
Taken from this article.
One of the responsibilities of my Rails application is to create and serve signed XMLs. Any signed XML, once created, never changes, so I store every XML in the public folder and redirect the client appropriately to avoid unnecessary processing in the controller.
Now I want a new feature: every XML is associated with a date, and I'd like to implement the ability to serve a compressed file containing every XML whose date lies in a period specified by the client. However, for the feature to be useful the period cannot be limited to less than one month, which implies that some of the zip files being served will be as big as 50 MB.
My application is deployed under Apache with Passenger. Thus, it's totally unacceptable to serve the file with send_data, since the client would have to wait for the entire compressed file to be generated before the actual download begins. Although I have an idea of how to implement the feature in Rails so the compressed file is produced while being served, I fear my server will run short on resources once several long-running Ruby/Passenger processes are tied up serving big zip files.
I've read about a better solution to serve static files through Apache, but not dynamic ones.
So, what's the solution to the problem? Do I need something like a custom Apache handler? How do I inform Apache, from my application, how to handle the request, compressing the files and streaming the result simultaneously?
Check out my mod_zip module for Nginx:
http://wiki.nginx.org/NgxZip
You can have a backend script tell Nginx which URL locations to include in the archive, and Nginx will dynamically stream a ZIP file to the client containing those files. The module leverages Nginx's single-threaded proxy code and is extremely lightweight.
The module was first released in 2008 and is fairly mature at this point. From your description I think it will suit your needs.
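For reference, the backend response only needs an X-Archive-Files: zip header plus a plain-text body listing one file per line: CRC-32 (or "-" to skip it), size in bytes, the internal URL Nginx should fetch, and the name to give the entry inside the archive. Something like this (the /signed_xmls/ paths and sizes are made up for illustration):

X-Archive-Files: zip

- 52012 /signed_xmls/2011-01-03.xml 2011-01-03.xml
- 48976 /signed_xmls/2011-01-17.xml 2011-01-17.xml

Nginx then fetches each listed location itself and streams the resulting ZIP to the client, so the Rails process is done as soon as it has emitted the manifest.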
You simply need to use whatever API you have available to create a zip file and write it to the response, flushing the output periodically. If this will serve large zip files, or will be requested frequently, consider running it in a separate process with a high nice/ionice value (that is, low priority).
Worst case, you could run a command-line zip in a low priority process and pass the output along periodically.
It's tricky to do, but I've made a gem called zipline (http://github.com/fringd/zipline) that gets things working for me. I want to update it so that it can support plain file handles or paths; right now it assumes you're using CarrierWave...
Also, you probably can't stream the response with Passenger... I had to use Unicorn to make streaming work properly... and certain Rack middleware can even break it (calling response.to_s breaks it).
If anybody still needs this, ping me on the GitHub page.
Is it possible to send pre-compressed files that are contained within an EAR file? More specifically, the JSP and JS files within the WAR file. I am using Apache HTTP Server as the web server, and although it is simple to turn on the deflate module and set it up to use a pre-compressed version of the files, I would like to apply this to files that are contained within an EAR file deployed to JBoss. The reason is that the content is quite static, and compressing it on the fly on every request is quite costly in terms of CPU time.
Quite frankly, I am not entirely familiar with how JBoss deploys these EAR files and 'serves' them. The gist of what I want to do is pre-compress the files contained inside the WAR so that when they are requested they are sent back to the client with gzip as the Content-Encoding.
In theory, you could compress them before packaging them in the EAR, and then serve them up with a custom controller that adds the HTTP header to the response telling the client they're compressed, but that seems like a lot of effort to go to.
When you say that on-the-fly compression is quite costly, have you actually measured it? Have you tried requesting a large number of uncompressed pages, measured the CPU usage, and then tried it again with compressed pages? I think you may be overestimating the impact. It uses fairly low-intensity stream compression, designed to use few CPU resources.
You need to be very sure that you have a real performance problem before going to such lengths to mitigate it.
I don't frequent this site often and I seem to have left this thread hanging; sorry about that. I did succeed in getting compression for my JavaScript and CSS files. What I did was pre-compress them in the Ant build process using gzip. I then had to rename the files to hide the gzip extension: I had foo.js, compressed it into foo.js.gzip, and renamed foo.js.gzip back to foo.js, and that is the file that gets packaged into the WAR file. That handles the pre-compression part. To get the file served up properly, we just have to tell the browser that it is compressed, via the Content-Encoding header of the HTTP response. This was done via an output filter applied to files matching the *.js extension (some Java/JBoss, declared in WEB-INF/web.xml if that helps; I'm not too familiar with this, so sorry guys).
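For anyone trying to reproduce this, a minimal sketch of such an output filter might look like the following (class and package names are made up, and a real version should also check the request's Accept-Encoding header before claiming gzip):

import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Marks responses as gzip-encoded; the *.js files in the WAR are already
// compressed at build time, so no compression happens here.
public class PreCompressedJsFilter implements Filter {

    public void init(FilterConfig filterConfig) throws ServletException {
    }

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        // Tell the browser how to decode the pre-compressed bytes.
        ((HttpServletResponse) response).setHeader("Content-Encoding", "gzip");
        chain.doFilter(request, response);
    }

    public void destroy() {
    }
}

with the usual mapping in WEB-INF/web.xml so it only touches the JavaScript resources:

<filter>
    <filter-name>preCompressedJs</filter-name>
    <filter-class>com.example.PreCompressedJsFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>preCompressedJs</filter-name>
    <url-pattern>*.js</url-pattern>
</filter-mapping>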