How does Apache (the most popular version nowadays, I guess) handle a connection to a script when that script is already being executed for another connection?
My guess has always been that upon receipt of a request for a script, the script's contents are copied to memory, compiled, and executed, and if another request for the same script arrives during this process, the same things happen again (assuming Apache does not lock the script file and simply gives another share of memory/CPU to another compilation/storage/execution).
Or is there a queuing/waiting mechanism involved?
Assume the additional connection is afforded enough memory and CPU, and does not exceed the maximum-connections setting.
The quick (and easy) answer is that every request is processed by a new process.
Apache listens on a port and, for each request, creates a new process to handle it. That means no shared memory.
Also take a look at the processes with the "ps" command; you will see one "httpd" process for each request being handled.
Take a look here for a more detailed explanation of how it works: http://httpd.apache.org/docs/2.0/mod/worker.html
and Google it too :) http://docstore.mik.ua/orelly/weblinux2/apache/ch01_02.htm
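For reference, how many processes are created and kept around is governed by the MPM configuration; a hypothetical prefork snippet (the numbers are examples only, not recommendations) looks like this:

    <IfModule prefork.c>
        StartServers          5     # processes created at startup
        MinSpareServers       5     # idle processes kept ready for new requests
        MaxSpareServers      10
        MaxClients          150     # cap on simultaneous request-handling processes
        MaxRequestsPerChild   0     # 0 = a child is never retired because of request count
    </IfModule>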
Environment: Debian, lighttpd, fam. It is unclear whether the virtual private server's kernel has dnotify enabled or not.
After my CGI creates a file in the file system, it sends a short message to the client (JavaScript running in the browser) notifying it that the file is ready for download. There is one issue here: most of the time the client gets an HTTP 404 error from lighttpd, especially when all of this software runs on localhost. I think this problem is caused by the latency of lighttpd being notified by fam of the file creation. I imagine the events happen in the following sequence:
1. CGI creates a file.
2. CGI responds to the client.
3. The client fetches the file from lighttpd.
4. lighttpd does not find the file in its cache.
I can imagine that this problem will be worse once the file is created on a remote host and lighttpd learns of it via NFS.
My question is: how do I code the CGI to tell lighttpd about the file creation before responding to the client?
That is, how do I "insert" a notification step between steps 1 and 2 above? If I cannot do this with lighttpd but can with nginx, I would consider switching to the latter. Similarly, I would be fine with switching from fam to another file alteration monitor.
lighttpd is not caching the non-existence of the file, so the problem is more likely in your CGI script. The CGI script should create the file and should close it so that the kernel knows that the contents should be flushed to disk, as they might still be buffered in stdio in the CGI process. Only after that should the script send a message to the client.
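A minimal sketch of that ordering, assuming a Java CGI; the output path, message, and content type are placeholders, not anything from your setup:

import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class MakeFileThenRespond {
    public static void main(String[] args) throws IOException {
        byte[] payload = "generated content".getBytes(StandardCharsets.UTF_8);

        // 1. Create the file and close it, so the data leaves this process's buffers.
        FileOutputStream out = new FileOutputStream("/var/www/downloads/result.dat");
        try {
            out.write(payload);
        } finally {
            out.close();            // closed before any response is written
        }

        // 2. Only now tell the client that the file is ready (a CGI responds on stdout).
        System.out.print("Content-Type: text/plain\r\n\r\n");
        System.out.print("file ready: /downloads/result.dat\n");
    }
}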
As you noted, extra steps are required if the CGI is run on a different server from the one that receives the second request for the file, and the file is stored on NFS or other remote filesystem. The easiest solution is to have the client wait a second or two and retry to download the file a few times, as typical NFS caching settings are about 5 seconds.
I am running a simple server app to receive uploads from a fine-uploader web client. It is based on the fine-uploader Java example and is running in Tomcat6 with Apache sitting in front of it and using ProxyPass to route the requests. I am running into an occasional problem where the upload gets to 100% but ultimately fails. In the server logs, as well as on the client, I can see that Apache is timing out on the proxy with a 502 error.
After trying this and seeing it myself, I realized the problem occurs with really large files. The Java server app was taking longer than 30 seconds to reassemble the chunks into a single file, so Apache would kill the connection and stop waiting. I have increased the Apache Timeout to 300 seconds, which should largely correct the problem, but the potential remains.
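The change amounts to something like the following in the Apache virtual host (the proxied path and port here are illustrative, not my exact setup):

    # give the backend up to 300 s before the proxy gives up with a 502
    Timeout      300
    ProxyTimeout 300
    ProxyPass        /uploader http://localhost:8080/uploader
    ProxyPassReverse /uploader http://localhost:8080/uploader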
Any ideas on other ways to handle this so that the connection between Apache and Tomcat is not killed while the app is assembling the chunks on the server? I am currently using 2 MB chunks and was thinking maybe I should use a larger chunk size; perhaps with fewer chunks to assemble the server code could do it faster. I could test that, but unless the speedup is dramatic it seems like the potential for problems remains, and I would just be waiting for a large enough upload to come along to trigger them.
It seems like you have two options:
Remove the timeout in Apache.
Delegate the chunk-combination effort to a separate thread, and return a response to the request as soon as possible.
With the latter approach, you will not be able to let Fine Uploader know if the chunk combination operation failed, but perhaps you can perform a few quick sanity checks before responding, such as determining if all chunks are accessible.
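A rough sketch of that second approach, assuming a Java servlet backend; the class, method, and helper names are illustrative, not part of the Fine Uploader Java example:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ChunkAssembler {
    private final ExecutorService pool = Executors.newSingleThreadExecutor();

    // Called once the final chunk of an upload has been stored on disk.
    public void onLastChunkReceived(String uploadId) {
        // Quick sanity check before answering: are all chunk files present?
        if (!allChunksPresent(uploadId)) {
            throw new IllegalStateException("missing chunks for " + uploadId);
        }
        // Combine the chunks off the request thread so Apache's proxy
        // timeout never sees a long-running request.
        pool.submit(() -> combineChunks(uploadId));
        // The servlet can now return a success response to Fine Uploader immediately.
    }

    private boolean allChunksPresent(String uploadId) { return true; /* check chunk files */ }

    private void combineChunks(String uploadId) { /* append chunks into the final file */ }
}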
There's nothing Fine Uploader can do here; the issue is server-side. After Fine Uploader sends the request, its job is done until your server responds.
As you mentioned, it may be reasonable to increase the chunk size or make other changes to speed up the chunk combination operation to lessen the chance of a timeout (if #1 or #2 above are not desirable).
I am kind of confused, so please go easy on me. Take any standard web application implemented with MVC, like CodeIgniter or Rails. The scripts get executed only when a browser sends a request, right? So when a user logs in and sends a request, the server receives it and sends back a response.
Now consider the scenario where, apart from the regular application, I also need something like a backend process. For example, a script which checks whether a bidding period has closed, emails the bidders that the bidding is closed, and chooses the winning bid. All these actions have to happen automatically as soon as the bidding period ends.
Now, if this script is part of the regular app, it would have to be triggered by the client (browser), but I don't want that to happen. It should be like a bot script which runs on the server, checking the DB for events and patterns like this.
How do I go about doing something like this? Also, is it possible to implement this on regular shared or dedicated hosting where we don't have shell access but only FTP access?
You'd have to write your script as a standalone program and either have it run continuously in the background or have cron (or some other scheduling service; this only works if you're only interested in time-based events) execute it for you.
There are probably hosts that have shell-less ways to do this (fancy GUI interfaces for managing background processes or something), but your run-of-the-mill web host with only FTP access definitely doesn't.
You need a cron job; it's easy to set up on Linux. The cron job will either call the command-line version of PHP with your script or make a local HTTP request with curl or wget.
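For example, a crontab entry like one of these (the schedule, script path, and URL are placeholders):

    # every 5 minutes, run the script with the PHP CLI
    */5 * * * * /usr/bin/php /home/user/app/close_expired_bids.php
    # or hit it over HTTP instead
    */5 * * * * wget -q -O /dev/null http://example.com/cron/close_expired_bids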
If you don't have shell/cron access, then you need an external service that automatically generates periodic HTTP requests. A cheap one is setcronjob.
Is there one? Apparently not. The H2 automatic mixed mode is described here.
Reviving this for future reference.
As stated by @fredt, as far as I know there is no official magic parameter to achieve mixed mode.
Still, you can always start a server programmatically using a Server object so that other processes are able to connect to your database.
I discovered a trick to accomplish something pretty close to mixed mode. In order to do that you will need to set the remote_open property to true and connect using this form of URL.
The idea here is to do something like this (a Java sketch follows the list):
Try connecting to the server using the kind of URL stated above.
If you can't connect, it means the server hasn't been started, so go ahead and start the server programmatically.
When you connect again, one of three things will happen:
If no database file exists, one will be created at the specified file path and the server will start serving it under the URL alias.
If the database file exists and isn't being served yet, the server will open the file from the specified path and start serving it.
If the database file exists and it is being served, the server will simply return a connection.
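A sketch of that pattern, assuming HSQLDB 2.x; the alias, file path, and credentials are examples, and the exact property-setting calls may differ slightly between versions:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

import org.hsqldb.persist.HsqlProperties;
import org.hsqldb.server.Server;

public class SharedDevDb {

    // remote_open lets this URL form open/create the database on demand.
    private static final String URL =
            "jdbc:hsqldb:hsql://localhost/xdb;file:/path/to/devdb";

    public static Connection open() throws Exception {
        try {
            // Step 1: assume some other process is already serving the database.
            return DriverManager.getConnection(URL, "SA", "");
        } catch (SQLException serverNotRunning) {
            // Step 2: nobody is serving it yet, so become the server ourselves.
            HsqlProperties props = new HsqlProperties();
            props.setProperty("server.database.0", "file:/path/to/devdb");
            props.setProperty("server.dbname.0", "xdb");
            props.setProperty("server.remote_open", "true");

            Server server = new Server();
            server.setProperties(props);
            server.setSilent(true);
            server.start();

            // Step 3: retry; the connection now goes through the new server.
            return DriverManager.getConnection(URL, "SA", "");
        }
    }
}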
I'm not sure if it is safe to use this kind of pattern when you plan to spawn a lot of short-lived processes (in particular, I haven't dived into the HSQLDB code to check how it handles database creation/opening for multiple simultaneous requests when remote_open is set). Still, I've been using this pattern to share development databases between web applications for a while and have never run into a single database corruption problem.
The main limitation here is that when the application acting as the server is closed, open connections will stop working and throw exceptions... which is not an issue for my development environment; it generally only means one or two broken requests until another server is started and the connection pool detects and renews its connections.
No, HSQLDB does not support such a mode.
I am designing an application that is going to consist of 3-4 services that run as separate processes and are linked by a suitable IPC. The system is going to have a web interface, and I want to use whatever web server is already there.
The web interface should be accessible under some URL that allows other URLs on the same web server to do totally different things. I'm planning to use the path below that URL to specify what the web interface should do. It has facilities both for use by other applications over the net and for humans to interact with in a browser.
Off the cuff, I'd work as follows:
make the webserver fire up a CGI process for every request it receives (like SetHandler in Apache)
let the CGI connect to the IPC
let it get whatever it needs from the backend services
let the CGI return HTML/XML and an appropriate HTTP status based on the services' answers
Now, what I really want is to avoid the first two steps, or if I can't, avoid the second one, because I'm afraid that I'm wasting performance on unnecessary overhead (the requests coming from other applications might be frequent).
PHP, for example, can open persistent connections to a MySQL database that outlive the script's runtime and don't need to be re-created the next time, though I don't know how it actually does that. Also, as I understand it, Apache modules are loaded once when the server starts, so that might remove the first step but would tie me to Apache.
So, what are good ways to hook a handler for specific URLs into different web servers? I don't want to handle the HTTP myself, otherwise I might just use a proxy setup to a second server, but that seems so much like reinventing the wheel. If you think CGI is fine and have examples where it handles large numbers of requests of a similar structure, please let me know.
OK, I overlooked this previously. Explaining my question here led me to it:
Instead of creating a new process for every request, FastCGI can use a single persistent process which handles many requests over its lifetime. -- Wikipedia: FastCGI
Even under moderate loads, CGI is a pretty unscalable beast. FastCGI is an option, but you'll probably also find a mod_XXXX package where XXXX is the name of your language. There are mods for Ruby, Perl, and Python, for instance, and probably a fair few others.