how to pass information from cli or server to fastcgi (php-fpm) and back again?

i'm trying to write a webserver. i didn't want to write a module for php, so i figured i'd pass information to php-fpm the way nginx and apache do. i did some research, set up two prototypes, and just can't get it to work.
i've set up a php service listening on port 9999 that will print_r($GLOBALS) upon each connection. i've set up nginx to pass php requests to 127.0.0.1:9999. the requests ARE being passed, but only argc (1), argv (the path to the php service), and the $_SERVER vars are populated. the $_SERVER vars have a lot of information about the environment the php process is acting in, but i don't see ANY information about the connected user or their request -no REMOTE_ADDR, no QUERY_STRING, no nothing...
i'm having trouble finding documentation on HOW to pass this information from the cli or from a prototype server to a fastcgi php process. i've found a list of some of the older CGI vars, but no information on HOW to pass them, or whether any of them are outdated with fastcgi.
so, again, i'm asking HOW you pass info from your server prototype or cli to a php-fpm or fastcgi process -or, WHERE can i find proper, clear, and definitive documentation on this subject? (and no, the RFC is not the answer). i've been reading over fastcgi.com and wikipedia as well as numerous other search results...
=== update ===
i've managed to get a working fastcgi "service" up and running via a prototype in php. it listens on 9999, parses a binary fcgi request from the cli and even from nginx, formats a binary fcgi response, and sends it back over the network; the cli displays it fine, and nginx even returns the decoded fcgi response to the browser just like nature intended.
now, when i try to do things the other way around --writing my prototype server that forms a binary fcgi packet and sends it to PHP-FPM, i get NOTHING -no error output on the cli or in the error logs (i can't ever get php-fpm to write to the error logs anyway [-_-]). so, WHY wouldn't php-fpm be giving me SOME kind of response, either as error text, or as a binary network packet, or ANYTHING???
so, i can SEND data from the cli to fastcgi, but i can't get anything back unless it's MY OWN fastcgi process (and no, i didn't take over php-fpm's port -i'm on 9999 and it's on 9000).
=============
TIA \m/(>_<)\m/

got it.
the server passes information to the fastcgi process in the form of a network data packet. much like a dns packet, you have to use your language's binary manipulation functions to formulate the packet, including its header and payload information, then send it across the pipe / network to the fastcgi server, which will then parse the binary packet and formulate a response. fun stuff -could have been better documented, ahem :-\
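for anyone following along, here's a minimal sketch of the client side in php -building the binary records with pack() and sending them to php-fpm. it assumes php-fpm is on 127.0.0.1:9000 and that /var/www/test.php exists (both are assumptions, adjust for your setup), and it glosses over short reads, which a real client would loop on:

<?php
// sketch only: talk to php-fpm on 127.0.0.1:9000 (assumed; adjust to your pool).
// every fastcgi record starts with the same 8-byte header: version, type,
// request id (16-bit big-endian), content length (16-bit big-endian),
// padding length, reserved.
function record(int $type, string $content, int $reqId = 1): string {
    return pack('CCnnCC', 1, $type, $reqId, strlen($content), 0, 0) . $content;
}

// a PARAMS name/value pair in the short form (only valid while both lengths
// are < 128; longer ones use a 4-byte length with the high bit set).
function pair(string $name, string $value): string {
    return pack('CC', strlen($name), strlen($value)) . $name . $value;
}

$params = pair('GATEWAY_INTERFACE', 'CGI/1.1')
        . pair('REQUEST_METHOD',    'GET')
        . pair('SCRIPT_FILENAME',   '/var/www/test.php')   // hypothetical script
        . pair('QUERY_STRING',      'foo=bar')
        . pair('REMOTE_ADDR',       '127.0.0.1')
        . pair('SERVER_PROTOCOL',   'HTTP/1.1');

$request = record(1, pack('nCx5', 1, 0))   // type 1 BEGIN_REQUEST: role responder, no keep-alive
         . record(4, $params)              // type 4 PARAMS: the CGI variables
         . record(4, '')                   // empty PARAMS record closes that stream
         . record(5, '');                  // empty STDIN record: no request body

$sock = fsockopen('127.0.0.1', 9000, $errno, $errstr, 5);
if (!$sock) die("connect failed: $errstr\n");
fwrite($sock, $request);

// read records back until END_REQUEST (type 3); STDOUT is 6, STDERR is 7.
while (($head = fread($sock, 8)) !== false && strlen($head) === 8) {
    $h = unpack('Cver/Ctype/nid/nclen/Cpad/Cres', $head);
    $body = $h['clen'] > 0 ? fread($sock, $h['clen']) : '';
    if ($h['pad'] > 0) fread($sock, $h['pad']);
    if ($h['type'] === 6) echo $body;      // cgi headers + response body
    if ($h['type'] === 3) break;           // request complete
}
fclose($sock);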
oh, and if you prototype a listener in php, you can't access this packet via any php vars; you actually have to read the raw connection you're listening on (because it's a binary network packet, not plain-text POST data).
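and here's that direction as a sketch -a bare-bones listener that reads the raw connection and decodes the 8-byte record header by hand (port 9999 to match the prototype above; the rest is illustrative):

<?php
// sketch of the listener side: php gives you nothing in its superglobals here,
// you have to decode the raw bytes yourself.
$server = stream_socket_server('tcp://127.0.0.1:9999', $errno, $errstr);
while ($conn = stream_socket_accept($server, -1)) {
    while (($head = fread($conn, 8)) !== false && strlen($head) === 8) {
        $h = unpack('Cver/Ctype/nid/nclen/Cpad/Cres', $head);
        $content = $h['clen'] > 0 ? fread($conn, $h['clen']) : '';
        if ($h['pad'] > 0) fread($conn, $h['pad']);
        // 1 = BEGIN_REQUEST, 4 = PARAMS, 5 = STDIN; an empty STDIN record
        // (type 5, zero length) means the request is complete.
        echo "record type {$h['type']}, {$h['clen']} bytes of content\n";
        if ($h['type'] === 5 && $h['clen'] === 0) break;
    }
    fclose($conn);
}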

Related

How to forcefully flush HTTP headers in Apache httpd?

I need to periodically generate HTTP headers for clients, and those headers need to be flushed to the client directly after each header is created. I can't wait for a body or anything else: I create a header and I want Apache httpd to send it to the client.
I've already tried autoflush, manual flushes, large header data (around 8k), disabling the deflate module, and whatever else could stand in my way, but httpd seems to ignore my wishes until all headers are created and only flushes them afterwards. Depending on how fast I generate headers, the httpd process even grows to some hundreds of megabytes of memory, so it seems to buffer all the headers.
Is there any way to get httpd to flush individual headers or is it impossible?
The answer is to use NPH scripts, which bypass the web server's buffering. One needs to name the script nph-*, and a web server should then stop buffering headers and send them exactly as they are printed. This works in my case, though with Apache httpd one needs to be careful:
Apache2 sends two HTTP headers with a mapped "nph-" CGI
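As a rough illustration (the filename and headers here are made up, and this assumes the CLI php binary as the interpreter so nothing rewrites the output), an NPH script must emit the complete HTTP response itself, status line included:

#!/usr/bin/env php
<?php
// save as nph-headers.php -- the "nph-" filename prefix is what tells the
// server to stop buffering and pass the output through untouched.
echo "HTTP/1.1 200 OK\r\n";
echo "Content-Type: text/plain\r\n";
flush();
for ($i = 0; $i < 5; $i++) {
    echo "X-Progress-$i: working\r\n";   // hypothetical per-step header
    flush();                             // push each header out as it is created
    sleep(1);
}
echo "\r\n";                             // blank line ends the header block
echo "done\n";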

Force notification of file creation to FAM or lighttpd

Environment: Debian, lighttpd, FAM. It is unclear whether the virtual private server's kernel has dnotify enabled or not.
After my CGI creates a file in the file system, it responds with a short message to the client (JavaScript running in the browser), notifying it that the file is ready for download. There is one issue here - most of the time the client gets an http 404 error from lighttpd, especially when all this software runs on localhost. I think this problem is caused by the latency of lighttpd being notified by FAM of the file creation. I imagine all the events happen in the following sequence:
CGI creates a file.
CGI responds to client.
client fetches the file from lighttpd.
lighttpd does not find the file in its cache.
I can imagine this problem getting worse once the file is created on a remote host and lighttpd is notified via NFS.
My question is: how do I code the CGI to tell lighttpd about the file creation before responding to the client?
That is, how do I "insert" a notification step between steps 1 and 2 above? If I cannot do this with lighttpd but can with nginx, I would consider switching to the latter. Similarly, I would be fine switching from FAM to another file alteration monitor.
Best Regards,
lighttpd is not caching the non-existence of the file, so the problem is more likely in your CGI script. The CGI script should create the file and close it so that the contents actually reach the filesystem, as they might still be buffered in stdio inside the CGI process. Only after that should the script send a message to the client.
As you noted, extra steps are required if the CGI runs on a different server from the one that receives the second request for the file, and the file is stored on NFS or another remote filesystem. The easiest solution is to have the client wait a second or two and retry the download a few times, as typical NFS caching settings are around 5 seconds.
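A sketch of that ordering in PHP (the path and payload are made up): write, flush, close, and only then answer the client:

<?php
// hypothetical CGI: the file must be fully written and closed *before*
// the client is told it exists.
$path = '/var/www/downloads/report.csv';    // assumed download location
$fh = fopen($path, 'wb');
fwrite($fh, "col1,col2\n1,2\n");            // stand-in for the real payload
fflush($fh);                                // drain PHP's own write buffer
fclose($fh);                                // now other processes can see it
// only now tell the client the file is ready
header('Content-Type: application/json');
echo json_encode(['ready' => true, 'url' => '/downloads/report.csv']);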

Can I pull my local Redis data from a url?

For example, is it possible to do something like:
localhost:6379/?command=keys&a1=*
and be returned the data. Similar to an API.
Webdis (webd.is) does that; it's an HTTP web server written in C (a quick usage sketch follows the feature list below).
Main features:
GET and POST are supported, as well as PUT for file uploads.
JSON output by default, optional JSONP parameter (?jsonp=myFunction or ?callback=myFunction).
Raw Redis 2.0 protocol output with the .raw suffix.
HTTP 1.1 pipelining (70,000 http requests per second on a desktop Linux machine).
Multi-threaded server, configurable number of worker threads.
Connects to Redis using a TCP or UNIX socket.
Restricted commands by IP range (CIDR subnet + mask) or HTTP Basic Auth, returning 403 errors.
Possible Redis authentication in the config file.
URL-encoded parameters for binary data or slashes and question marks. For instance, %2f is decoded as / but not used as a command separator.
Logs, with a configurable verbosity.
Cross-origin requests, usable with XMLHttpRequest2 (Cross-Origin Resource Sharing - CORS).
Optional daemonize.
Default root object: Add "default_root": "/GET/index.html" in webdis.json to substitute the request to / with a Redis request.
HTTP request limit with http_max_request_size (in bytes, set to 128MB by default).
Database selection in the URL, using e.g. /7/GET/key to run the command on DB 7.
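To illustrate the URL scheme above (assuming Webdis on its default port 7379; the key and the responses shown are illustrative):

<?php
// assumes webdis is running locally on its default port 7379
echo file_get_contents('http://127.0.0.1:7379/SET/hello/world'), "\n"; // {"SET":[true,"OK"]}
echo file_get_contents('http://127.0.0.1:7379/GET/hello'), "\n";       // {"GET":"world"}
echo file_get_contents('http://127.0.0.1:7379/KEYS/*'), "\n";          // {"KEYS":["hello",...]}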
Otherwise, there is a very basic project, redis-rest, written in Ruby, that you may want to take a look at.

CouchDB replication is not working properly behind a proxy

Note: Made some updates based on new information. Old ideas have been added as comments below.
Note: Made some updates (again) based on new information. Old ideas have been added as comments below (again).
We are running two instances of CouchDB on separate computers behind Apache reverse proxies. When attempting to replicate between the two instances:
curl -X POST http://user:pass@localhost/couchdb/_replicate -d '{ "source": "db1", "target": "http://user:pass@10.1.100.59/couchdb/db1" }' --header "Content-Type: application/json"
(we started using curl to debug the problem)
we receive an error similar to:
{"error":"case_clause","reason":"{error,\n {{bad_return_value,\n {invalid_json,\n <<\"<!DOCTYPE HTML PUBLIC \\\"-//IETF//DTD HTML 2.0//EN\\\">\\n<html><head>\\n<title>404 Not Found</title>\\n</head><body>\\n<h1>Not Found</h1>\\n<p>The requested URL /couchdb/db1/_local/01e935dcd2193b87af34c9b449ae2e20 was not found on this server.</p>\\n<hr>\\n<address>Apache/2.2.3 (Red Hat) Server at 10.1.100.59 Port 80</address>\\n</body></html>\\n\">>}},\n {child,undefined,\"01e935dcd2193b87af34c9b449ae2e20\",\n {gen_server,start_link,\n [couch_rep,\n [\"01e935dcd2193b87af34c9b449ae2e20\",\n {[{<<\"source\">>,<<\"db1\">>},\n {<<\"target\">>,\n <<\"http://user:pass#10.1.100.59/couchdb/db1\">>}]},\n {user_ctx,<<\"user\">>,\n [<<\"_admin\">>],\n <<\"{couch_httpd_auth, default_authentication_handler}\">>}],\n []]},\n temporary,1,worker,\n [couch_rep]}}}"}
So after further research, it appears that Apache returns this error without attempting to access CouchDB (according to the log files). To be clear, when fed the following URL
/couchdb/db1/_local/01e935dcd2193b87af34c9b449ae2e20
Apache passes the request to CouchDB and returns CouchDB's 404 error. On the other hand, when replication occurs, the URL actually being passed is
/couchdb/db1/_local%2F01e935dcd2193b87af34c9b449ae2e20
which Apache decides refers to a missing document, returning its own 404 error without ever passing the request to CouchDB. This at least gives me some new leads, but I could still use help if anyone has an answer offhand.
The source CouchDB (localhost) is telling you that the remote URL was invalid. Instead of a CouchDB response, the source is receiving the Apache httpd proxy's file-not-found response.
Unfortunately, you may have some reverse-proxy troubleshooting to do. My first guess is the Host header the source is sending to the target. Perhaps it's different from when you connect directly from a third location?
Finally, I think you probably know this, but the path
/couchdb/db1/_local%2F01e935dcd2193b87af34c9b449ae2e20
is not a standard CouchDB path. By the time CouchDB sees a request, it should have the /couchdb prefix stripped, so the query is for a document called _local%2f... in the database called db1.
Incidentally, it is very important not to let the proxy modify the paths before they hit CouchDB. In particular, if you send %2f then CouchDB had better receive %2f, and if you send / then CouchDB had better receive /.
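For what it's worth, a minimal httpd config sketch in that spirit (this assumes an Apache new enough to support these options -- AllowEncodedSlashes NoDecode needs 2.2.18+, so the 2.2.3 in the error above would need an upgrade):

# sketch only: keep %2F intact on its way to CouchDB (default port 5984 assumed)
AllowEncodedSlashes NoDecode
ProxyPass        /couchdb http://127.0.0.1:5984 nocanon
ProxyPassReverse /couchdb http://127.0.0.1:5984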
From official documentation...
Note that HTTPS proxies are in theory supported but do not work in 1.0.1. This is because 1.0.1 ships with ibrowse version 1.5.5. The CouchDB version in trunk (from where 1.1 will be based) ships with ibrowse version 1.6.2. This later ibrowse contains fixes for HTTPS proxies.
Can you see which version of ibrowse is involved? Maybe update that version?
Another thought I have is with regard to the SSL certs. If you don't have any, and I know you don't :), then technically you're doing SSL wrong. In Java we know there are ways around this, but maybe try putting in proper certs, since all SSL stuff basically involves certs.
And for my last contribution (today): have you looked through this document, which seems highly relevant?
http://wiki.apache.org/couchdb/Apache_As_a_Reverse_Proxy

Using a CGI binary in an application

How would an application interface with a CGI binary? For example, say my app is running on Windows and wants to invoke the PHP.exe binary in the context of a web server. How would I do this? By passing some sort of command line arguments? I am using C++, if that helps.
NOTE - I know how to invoke processes and such. I just want to know how CGI fits into this.
The CGI interface works as follows: when a request comes in, the web server runs a process; the HTTP input (i.e., the POST data) is supplied to that process via stdin, and the generated headers and content are emitted via stdout. Server variables are passed via the environment.
Now, your question is not clear enough. Do you have a CGI-compliant local binary that you want to invoke from a program? Or do you want to invoke a CGI-compliant binary somewhere on a Web server?
In the former case, use the regular means of process creation (CreateProcess, fork/exec) with the I/O pipes redirected. In the latter case, use an HTTP client library (curl, WinInet) to issue an HTTP request.
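For the former (local) case, here is a sketch of the mechanics. It's in PHP for brevity rather than C++, and the script path is made up, but the contract is the same whether you use proc_open, fork/exec, or CreateProcess: variables in the environment, body on stdin, response on stdout:

<?php
// sketch: drive a CGI binary (here php-cgi, path assumed) the way a server would
$env = [
    'GATEWAY_INTERFACE' => 'CGI/1.1',
    'REQUEST_METHOD'    => 'GET',
    'QUERY_STRING'      => 'name=world',
    'SCRIPT_FILENAME'   => '/var/www/hello.php',  // hypothetical target script
    'REDIRECT_STATUS'   => '200',                 // php-cgi refuses to run without this
    'SERVER_PROTOCOL'   => 'HTTP/1.1',
];
$spec = [0 => ['pipe', 'r'], 1 => ['pipe', 'w'], 2 => ['pipe', 'w']];
$proc = proc_open('/usr/bin/php-cgi', $spec, $pipes, null, $env);
fclose($pipes[0]);                       // GET request: nothing on stdin
echo stream_get_contents($pipes[1]);     // CGI headers, a blank line, then the body
fclose($pipes[1]);
fclose($pipes[2]);
proc_close($proc);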