When do I need to use tomcat-coyote.jar (the Coyote API)?

When do I need to load the tomcat-coyote API JAR in the web server, and for what reason?
I'm asking because a third-party product makes use of the Coyote API, I guess for some kind of connector, but I'm not sure what for.

It can be any one of a number of things. That JAR does contain the HTTP and AJP connector implementations, but it also has a number of utility classes: a package-renamed copy of Apache Commons BCEL used for annotation scanning, some optimized collection implementations, and various HTTP utilities (cookies, file upload, header parsing, parameter parsing, etc.), to name but a few.
The quick way to figure out what it is using is to remove the JAR and look for the ClassNotFoundExceptions.
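If you'd rather not remove the JAR, a quick check from code can also tell you which JAR a suspect class is loaded from; the class name below is just an example of one of the repackaged utilities, not necessarily what your product uses:
// Print the location (JAR) a given class was loaded from.
Class<?> clazz = Class.forName("org.apache.tomcat.util.http.fileupload.FileUpload");
System.out.println(clazz.getProtectionDomain().getCodeSource().getLocation());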

Related

Does apache commons fileupload support chunked uploads?

We have moved to using PLupload for file uploads and found that it can support "chunked" file uploads. The problem is that our server sees one large file upload as multiple smaller files coming in multiple POST requests.
Does anybody know if Apache Commons FileUpload supports chunked uploads?
FWIW, looking at the PLupload web page, the "Chunking" they are talking about is not "HTTP chunking": http://www.plupload.com/index.php
Their marketing term "Chunking" is their concept of sending a large payload up in small, separate HTTP requests. The server is required to have logic to group, stitch together and verify all the small parts. You are better off getting help on their forum for this. There is no reason why this logic cannot be written by you on the server side, and maybe they have example Java code implementing it.
Useful info and a pointer to their upload.php example (which you could perhaps port to Java on top of Apache Commons FileUpload; see the sketch below):
http://www.plupload.com/punbb/viewtopic.php?id=1484
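If you do port it to Java, a rough sketch of the stitching logic on top of Apache Commons FileUpload might look something like this (the field names follow PLupload's upload.php example; the target directory, error handling and verification are simplified and would need hardening):
import java.io.*;
import java.util.List;
import javax.servlet.http.*;
import org.apache.commons.fileupload.FileItem;
import org.apache.commons.fileupload.disk.DiskFileItemFactory;
import org.apache.commons.fileupload.servlet.ServletFileUpload;

public class ChunkedUploadServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        try {
            // PLupload posts each chunk as an ordinary multipart request with extra form fields.
            List<FileItem> items = new ServletFileUpload(new DiskFileItemFactory()).parseRequest(req);
            int chunk = 0, chunks = 1;
            String name = "upload.bin";
            FileItem filePart = null;
            for (FileItem item : items) {
                if (item.isFormField()) {
                    if ("chunk".equals(item.getFieldName()))  chunk  = Integer.parseInt(item.getString());
                    if ("chunks".equals(item.getFieldName())) chunks = Integer.parseInt(item.getString());
                    if ("name".equals(item.getFieldName()))   name   = item.getString();
                } else {
                    filePart = item;
                }
            }
            // Append this chunk to the target file; chunk 0 starts a fresh file.
            File target = new File("/tmp/uploads", name);
            try (OutputStream out = new FileOutputStream(target, chunk > 0);
                 InputStream in = filePart.getInputStream()) {
                byte[] buf = new byte[8192];
                for (int n; (n = in.read(buf)) != -1; ) out.write(buf, 0, n);
            }
            // When chunk == chunks - 1 the last part has arrived: verify size/checksum here.
            resp.setStatus(HttpServletResponse.SC_OK);
        } catch (Exception e) {
            resp.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR, e.getMessage());
        }
    }
}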
What you are observing (the small segments of a file arriving as if they were separate files) is exactly how the "PLupload chunking" mechanism works. This technique is not defined in any standard, but it is also not an uncommon solution to the problems it addresses.
"HTTP chunking", by contrast, is a standard that defines how to transfer a single HTTP request (and/or HTTP response) between client and server using an HTTP transfer encoding. It is supported by all web servers and all browsers and has been around for a long time (since HTTP/1.1).

JSF2 resources - compression, minification

I have two questions about resources in JSF2:
Is there any way to specify that all JSF2 resources (JS, CSS) should be compressed (gzipped) or at least minified? (Something à la wro4j.)
And the second one: is there any way to forcibly exclude some library? I am using OpenFaces in my admin system, but its JS dependency is included even in the user-facing frontend pages, even though I never use it (or import its namespace) there.
Thanks
Gzipping is more a servlet container configuration; consult its documentation for details. In Tomcat, for example, it's a matter of adding the compression="on" attribute to the <Connector> element in /conf/server.xml. See also Tomcat Configuration Reference - The HTTP Connector.
<Connector ... compression="on">
You can also configure the compressible MIME types there.
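For example (compressionMinSize and the MIME type list are illustrative; newer Tomcat versions spell the attribute compressibleMimeType):
<Connector ... compression="on" compressionMinSize="2048" compressableMimeType="text/html,text/xml,text/css,application/javascript">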
Minification is more a build process configuration. If you're using Ant as your build tool, you may find the YuiCompressorAntTask useful. We use it here and it works wonderfully.
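If you're not on Ant, YUI Compressor can also be run standalone from the command line (the jar version number is whatever you have installed):
java -jar yuicompressor-2.4.8.jar -o script.min.js script.js
java -jar yuicompressor-2.4.8.jar -o styles.min.css styles.css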
As to OpenFaces, that's a completely different question, and I also don't use it, so I don't have an answer for you. I'd suggest just asking that in a separate question; I don't see how it relates to performance improvements such as gzipping and minification.
For what concerns OpenFaces, I had the same problem and solved it by unpacking the JAR, minifying the huge JavaScript files manually and repacking the JAR. It allowed me to save about 70 KB per request, which was impacting performance under heavy load.

Serving dynamic zip files through Apache

One of the responsibilities of my Rails application is to create and serve signed xmls. Any signed xml, once created, never changes. So I store every xml in the public folder and redirect the client appropriately to avoid unnecessary processing from the controller.
Now I want a new feature: every xml is associated with a date, and I'd like to implement the ability to serve a compressed file containing every xml whose date lies in a period specified by the client. Nevertheless, the period cannot be limited to less than one month for the feature to be useful, which implies some of the zip files served will be as big as 50 MB.
My application is deployed as a Passenger module of Apache. Thus, it's totally unacceptable to serve the file with send_data, since the client would have to wait for the entire compressed file to be generated before the actual download begins. Although I have an idea of how to implement the feature in Rails so the compressed file is produced while being served, I fear my server will run short on resources once several long-running Ruby/Passenger processes are tied up serving big zip files.
I've read about a better solution to serve static files through Apache, but not dynamic ones.
So, what's the solution to the problem? Do I need something like a custom Apache handler? How do I inform Apache, from my application, how to handle the request, compressing the files and streaming the result simultaneously?
Check out my mod_zip module for Nginx:
http://wiki.nginx.org/NgxZip
You can have a backend script tell Nginx which URL locations to include in the archive, and Nginx will dynamically stream a ZIP file to the client containing those files. The module leverages Nginx's single-threaded proxy code and is extremely lightweight.
The module was first released in 2008 and is fairly mature at this point. From your description I think it will suit your needs.
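For reference, the usual flow is that your backend replies with an X-Archive-Files: zip header plus a plain-text manifest listing the files to include, one per line (CRC-32 or "-", size in bytes, URL-encoded internal location, name inside the archive); treat the lines below as an illustration and check the module's documentation for the exact format:
- 52100 /signed/2012-01/doc-0001.xml doc-0001.xml
- 48700 /signed/2012-01/doc-0002.xml doc-0002.xml
Nginx then fetches each location internally and streams the assembled ZIP to the client as it goes.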
You simply need to use whatever API is available to you to create a zip file and write it to the response, flushing the output periodically. If this is serving large zip files, or will be requested frequently, consider running it in a separate process with a high nice/ionice value (low priority).
Worst case, you could run a command-line zip in a low priority process and pass the output along periodically.
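The question is about Rails, but the streaming pattern described in the answers above is the same in any stack; here is a minimal Java sketch of the idea (the directory, filter and file names are made up):
import java.io.*;
import java.util.zip.*;
import javax.servlet.http.*;

public class ZipStreamServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        File dir = new File("/var/data/signed-xml");   // illustrative source directory
        File[] files = dir.listFiles((d, fn) -> fn.endsWith(".xml"));
        if (files == null) {
            resp.sendError(HttpServletResponse.SC_NOT_FOUND);
            return;
        }
        resp.setContentType("application/zip");
        resp.setHeader("Content-Disposition", "attachment; filename=\"export.zip\"");
        // Write the archive straight into the response; no temporary file is created.
        try (ZipOutputStream zip = new ZipOutputStream(resp.getOutputStream())) {
            byte[] buf = new byte[8192];
            for (File f : files) {
                zip.putNextEntry(new ZipEntry(f.getName()));
                try (InputStream in = new FileInputStream(f)) {
                    for (int n; (n = in.read(buf)) != -1; ) zip.write(buf, 0, n);
                }
                zip.closeEntry();
                zip.flush();   // push the finished entry to the client before starting the next
            }
        }
    }
}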
It's tricky to do, but I've made a gem called zipline ( http://github.com/fringd/zipline ) that gets things working for me. I want to update it so that it can support plain file handles or paths; right now it assumes you're using CarrierWave...
Also, you probably can't stream the response with Passenger... I had to use Unicorn to make streaming work properly, and certain Rack middleware can even break that (calling response.to_s breaks it).
If anybody still needs this, ping me on the GitHub page.

Programmatically configuration of endpoints vs. web/app.config

Has anyone put much thought into this? Personally, I think managing endpoints in configuration files is a pain. Are there any pros/cons to doing one over the other?
Only points in favour of configuration files from me.
Managing endpoints in configuration files means that you don't have to update your application if (or perhaps I should say when) the endpoints change.
You can also have several instances of the application running with different endpoints.
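For illustration, a single client endpoint in app.config looks roughly like this (address, binding and contract names are made up):
<system.serviceModel>
  <client>
    <endpoint name="MyServiceEndpoint"
              address="http://server.example.com/MyService.svc"
              binding="basicHttpBinding"
              contract="Contracts.IMyService" />
  </client>
</system.serviceModel>
The programmatic equivalent is a couple of lines constructing a binding and an EndpointAddress and passing them to the generated client's constructor.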
I tend to like the config approach myself too, other than that the config file can get pretty big.
The one thing I have noticed with WCF configuration is that there is a lot of stuff you can do from code that you can't do in XML config without adding your own custom extensions. In other words, doing config in code allows more flexibility; of course, you could also just code your own extensions and use those from configuration.
However, do note that there is what I would consider a 'bug' in Visual Studio: if you start making your own extensions and including them in XML, VS won't like your config file any more and will flag them as errors, and if you then try to add a new service through the wizards, it will fail to add the endpoint to the configuration.
This is sort of a followup to my own answer:
After months of having everything in XML configuration, I'm changing everything to construct the endpoints and bindings in code. I found a really good case for having it in code:
When you want to have a deployable / sharable .dll that contains WCF clients.
So for example if you have a CommonClients.dll that contains all your WCF interfaces and contracts to communicate with some remote server, then you don't want to also say "here is 100 lines of xml that you also have to drop into your app.config for every client to make it work". Having it all constructed in code works out much better in this case.
There is also a "feature" of .NET 3.5 where if you have some wcf extensions, you have to specify the fully qualified assembly name. This means that if your assembly containing the extensions changes the version nnumber, you have to go change the assembly name in the config file too. It is supposedly fixed in .NET 4 to use a short assembly name and not require the full name.
Offhand, with an endpoint in a config file, nothing needs to be recompiled when it changes. It also means you just need to update your config file when moving an application from Development to UAT to Production.
If you're just coding something for your own use at home, then there's no real difference. In a business environment, however, having the endpoint defined in your config file saves all sorts of headaches.
When using an app.config, your application does not need to be recompiled to adjust to a change. It can also be reused in multiple situations with the exact same code. Finally, hardcoding your endpoints (or anything subject to change) is poor coding practice. Don't fear the configuration file; it's declarative programming. You say, "I want to use this endpoint," and it does the work for you.
I generally do programmatic configuration, as I don't want to expose my application's internal structure to the user. The only thing I keep configurable is the service address, but even this I keep in the userSettings section, not system.serviceModel.
I prefer and recommend the configuration file approach. It offers a lot of flexibility by allowing you to make changes to your server without the need to recompile the application.
If you need security, you can encrypt the config file.
The biggest worry with plain config files is probably that they can be accidentally (or deliberately) modified by the end user, causing your app to crash. To overcome this you could add some checks in code to verify that the configuration in the config file is OK and, if not, initialize it programmatically to some defaults. I presented how you could do that in another answer to this question.
It's just a question of how much flexibility you need.
Usually I prefer the config file approach.
Check out the .NET StockTrader app. It uses a repository to store config data and has a separate app to manage the configuration. The setup and structure are pretty advanced, and there's a fair bit of head-scratching for anyone who, like me, only has the basics of WCF configuration so far, but I would say it's worth a look.

Accessing a resource file from a filesystem plugin on SymbianOS

I cannot use the Resource File API from within a file system plugin due to a PlatSec issue:
*PlatSec* ERROR - Capability check failed - Can't load filesystemplugin.PXT because it links to bafl.dll which has the following capabilities missing: TCB
My understanding of the issue is that:
File system plugins are dlls which are executed within the context of the file system process. Therefore all file system plugins must have the TCB PlatSec privilege which in turn means they cannot link against a dll that is not in the TCB.
Is there a way around this (without resorting to a text file or an intermediate server)? I suspect not - but it would be good to get a definitive answer.
The Symbian file server has the following capabilities:
TCB ProtServ DiskAdmin AllFiles PowerMgmt CommDD
So any DLL being loaded into the file server process must have at least these capabilities. There is no way around this, short of writing a new proxy process as you allude to.
However, there is a more fundamental reason why you shouldn't be using bafl.dll from within a file server plugin: this DLL provides utility functions which interface to the file server's client API. Attempting to use it from within the file server will not work; at best, it will lead to the file server deadlocking as it attempts to connect to itself.
I'd suggest rethinking what you're trying to do, and investigating an internal file-server API to achieve it instead.
Using RFs/RFile/RDir APIs from within a file server plugin is not safe and can potentially lead to deadlock if you're not very careful.
Symbian 9.5 will introduce new APIs (RFilePlugin, RFsPlugin and RDirPlugin) which should be used instead.
There's a proper mechanism for communicating with plugins: RPlugin.
Do not use RFile. I'm not even sure that it would work, as the path is checked in the Initialise step of the RFile functions, which is called before the plugin stack.
Tell us what kind of data you are storing in the resource file.
Things that usually go into resource files have no place in a file server plugin, even if that means hardcoding a few values.
Technically, you can send data to a file server plugin using RFile.Write() but that's not a great solution (intercept RFile.Open("invalid file name that only your plugin understands") in the plugin).
EDIT: Someone indicated that using an invalid file name will not let you send data to the plugin. Hey, I didn't like that solution either. For the sake of completeness, I should clarify: make up a filename that looks OK enough to get through to your plugin, such as using a drive letter that doesn't have a real drive attached to it (but will still be considered correct by filename-parsing code).
Writing code to parse the resource file binary in the plugin, while theoretically possible, isn't a great solution either.