I need to periodically generate HTTP headers for clients, and each header needs to be flushed to the client directly after it is created. I can't wait for a body or anything else: as soon as I create a header, I want Apache httpd to send it to the client.
I've already tried autoflush, manual flushing, large header data (around 8 KB), disabling the deflate module and whatever else could stand in my way, but httpd seems to ignore my wishes until all headers are created and only flushes them afterwards. Depending on how fast I generate headers, the httpd process even grows to some hundreds of megabytes of memory, so it seems to buffer all the headers.
Is there any way to get httpd to flush individual headers or is it impossible?
The answer is to use NPH scripts, which by default bypass the web server's buffering. One needs to name the script nph-* and the web server should then stop buffering headers and send them directly, exactly as they are printed. This works in my case, though with Apache httpd one needs to be careful:
Apache2 sends two HTTP headers with a mapped "nph-" CGI
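For illustration, a minimal nph- script might look like the sketch below (the filename, the path to the PHP CLI binary and the loop are assumptions of mine, not a definitive recipe). Because Apache doesn't parse NPH output, the script has to emit the full status line itself:

#!/usr/bin/php
<?php
// nph-headers.php -- hypothetical example; the "nph-" prefix tells Apache
// not to parse or buffer the headers, so we must write the status line too.
echo "HTTP/1.1 200 OK\r\n";
echo "Content-Type: text/plain\r\n";
flush();                                  // status line and first header go out now

for ($i = 1; $i <= 5; $i++) {
    // each additional header is pushed to the client as soon as it exists
    echo "X-Generated-$i: " . gmdate('D, d M Y H:i:s') . " GMT\r\n";
    flush();
    sleep(1);                             // simulate slow, periodic header generation
}
echo "\r\n";                              // blank line finally ends the header block

Using the PHP CLI binary via the shebang keeps PHP itself from post-processing the output, so what you echo is what Apache forwards.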
I'm using memcached and Apache with the following default configuration
CacheEnable socache /
CacheSocache memcache:IP:PORT
MemcacheConnTTL 30
What will the behavior be when the 30 seconds expire and a request for the same URL comes in? Is there a way to configure the cache key? I.e., what information makes a request unique?
What if the server can't get an answer (like a timeout when fetching the newly updated object)? Can it be configured to serve the old object?
Thanks
What will the behavior be when the 30 seconds expire and a request for the same URL comes in
Apache would simply create a new connection to memcached. It doesn't mean anything happens to the data stored in memcached.
https://httpd.apache.org/docs/2.4/mod/mod_socache_memcache.html#memcacheconnttl
Set the time to keep idle connections with the memcache server(s)
alive (threaded platforms only).
If you need to control how long an object will be stored in the cache, check out CacheDefaultExpire.
Is there a way to configure the cache key
The URL is used to build the key, but you can partially configure which parts of the URL are used; check out CacheIgnoreQueryString and CacheIgnoreURLSessionIdentifiers.
I.e., what information makes a request unique
https://httpd.apache.org/docs/2.4/mod/mod_cache.html#cacheenable
The CacheEnable directive instructs mod_cache to cache urls at or
below url-string
Note that not all requests can be cached; there are a lot of rules about that.
What if the server can't get an answer? Can it be configured to serve the old object
You need CacheStaleOnError
https://httpd.apache.org/docs/2.4/mod/mod_cache.html#cachestaleonerror
When the CacheStaleOnError directive is switched on, and when stale
data is available in the cache, the cache will respond to 5xx
responses from the backend by returning the stale data instead of the
5xx response
I sense I'm going to end up embarrassed for asking such a simple question, but I've been researching for days and can't find any useful information.
What determines the HTTP response header that a server sends? If I control the server (if we need concreteness, let's say Apache), then what file can I edit to change the response header? For example, to set it to include Content-Length instead of Transfer-Encoding: chunked?
I'm aware that PHP and Java Servlets can be used to manipulate headers. The existence and content of response headers is fundamental to HTTP, though, so there ought to exist a way to edit these without using outside technology, no?
Certain headers are set automatically. They are part of the HTTP spec and the server takes care of them for you. That's what a web server is for, and it's why it differs from, say, an FTP server or a file share. For example, Content-Length is easily calculated by the web server and needs to be set, so the server just does it.
Certain other headers are set based on config. Apache usually loads a main config file (often called httpd.conf or apache2.conf) but then, to keep that file from becoming a big unwieldy mess, it often loads other files from within it. Those files are just text files with lines of configuration text that change the behaviour of the server. Other web servers may use XML configuration files and may have a GUI to control the config (e.g. IIS).
So, for some of the headers, you might not explicitly set the header value; you basically configure the server and it then uses that config to figure out the appropriate headers to send. For example, you can configure the server to gzip certain files (e.g. text files but not jpgs, which are already compressed). In Apache this is handled by the mod_deflate module and the config options it gives you. Once the appropriate config is added, the server will do the necessary processing (e.g. gzip the file or not, depending on type) and then automatically add the headers. So an Apache module is basically something that changes how the server works, and this may or may not also involve setting headers. Another example is sending caching headers to tell the browser how long to cache files for. This is controlled by adding the mod_expires module and the config options it allows. While some of these headers could be hardcoded (e.g. Cache-Control), others depend on Apache doing calculations (e.g. Expires), so it's better to let the module do this for you based on your config.
And finally, you can explicitly set headers in your server (in Apache this is done using the mod_headers module). This is useful, for example, for new browser features (e.g. HSTS, CSP or HPKP) where the server doesn't need to do anything beyond adding the header, and the client (e.g. the web browser) knows what to do with it. You could add a JonahHuron header, for example, by adding this config to httpd.conf:
Header always set JonahHuron "Some Value"
Whether that header is used depends entirely on the program receiving the response.
Is there any php.ini setting which could cause isset($_SERVER["CONTENT_LENGTH"]) to never be set on one server, but work fine on another, where the application code is the same and the php.ini upload settings are the same? In an uploader library I'm using, the content-length check always fails because of this issue. This is on PHP 5.3, CentOS and Apache. Thanks for any help.
EDIT: I should add that the request headers do include Content-Length: 33586, but when trying to process $_SERVER["CONTENT_LENGTH"], it isn't set.
Content-Length on a response is set by the server application; it isn't taken from the HTTP request. Your application could set it explicitly, but you should not do that from within PHP, as PHP does it automatically.
If you're dealing with input from something like an upload, then you will only get a Content-Length if the HTTP request is not chunked. When a chunked request is sent, the data length is not known to the recipient until all the chunks have been received.
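As a rough sketch (the variable handling is mine, not the uploader library's), you can guard against the missing header like this; note that php://input is not available for multipart/form-data uploads:

<?php
// Trust CONTENT_LENGTH when it is present, otherwise measure the raw body.
if (isset($_SERVER['CONTENT_LENGTH'])) {
    $length = (int) $_SERVER['CONTENT_LENGTH'];
} else {
    // chunked transfer or a proxy stripped the header: count what actually arrived
    $length = strlen(file_get_contents('php://input'));
}
if ($length === 0) {
    header('HTTP/1.1 411 Length Required');   // or treat it as an empty upload
    exit;
}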
I'm trying to write a webserver. I didn't want to write a module for PHP, so I figured I'd pass information to php-fpm the way nginx and Apache do. I did some research, set up two prototypes, and just can't get it to work.
I've set up a PHP service listening on port 9999 that will print_r($GLOBALS) upon each connection. I've set up nginx to pass PHP requests to 127.0.0.1:9999. The requests ARE being passed, but only argc (1), argv (the path to the PHP service) and the $_SERVER vars are populated. The $_SERVER vars hold a lot of information about the environment the PHP process is running in, but I don't see ANY information about the connected user or their request: no REMOTE_ADDR, no QUERY_STRING, no nothing...
I'm having trouble finding documentation on HOW to pass this information from the CLI or from a prototype server to a FastCGI PHP process. I've found a list of some of the older CGI vars, but no information on HOW to pass them, or whether any of them are outdated with FastCGI.
So, again, I'm asking HOW you pass info from your server prototype or the CLI to a php-fpm or FastCGI process, or WHERE I can find proper, clear and definitive documentation on this subject (and no, the RFC is not the answer). I've been reading over fastcgi.com and Wikipedia as well as numerous other search results...
=== update ===
I've managed to get a working FastCGI "service" up and running via a prototype in PHP. It listens on 9999, parses a binary FCGI request from the CLI and even from nginx, formats a binary FCGI response and sends it back over the network; the CLI displays it fine, and nginx even returns the decoded FCGI response to the browser just like nature intended.
Now, when I try to do things the other way around and write my prototype server that forms a binary FCGI packet and sends it to PHP-FPM, I get NOTHING: no error output on the CLI or in the error logs (I can't ever get php-fpm to write to the error logs anyway [-_-]). So WHY wouldn't php-fpm give me SOME kind of response, either as error text, or as a binary network packet, or ANYTHING???
So, I can SEND data from the CLI to FastCGI, but I can't get anything back unless it's MY OWN FastCGI process (and no, I didn't take over php-fpm's port: I'm on 9999 and it's on 9000).
=============
TIA \m/(>_<)\m/
Got it.
The server passes information to the FastCGI process in the form of a network data packet. Much like a DNS packet, you have to use your language's binary string manipulation functions to build the packet, including its header and payload information, then send it across the pipe / network to the FastCGI server, which then parses the binary packet and produces a response. Fun stuff; it could have been better documented, ahem :-\
Oh, and if you prototype a listener in PHP, you can't access this packet via any PHP vars; you actually have to read from the connection you're listening on (because it's a binary network packet, not plain-text 'POST' data).
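For illustration, here is a minimal sketch of such a packet exchange written in PHP, talking to php-fpm on 127.0.0.1:9000 as in the question. The script path, query string and parameter set are assumptions to adjust for your setup. It sends FCGI_BEGIN_REQUEST, the CGI variables as FCGI_PARAMS, and an empty FCGI_STDIN, then dumps php-fpm's raw reply (FCGI_STDOUT records followed by FCGI_END_REQUEST):

<?php
// Hypothetical minimal FastCGI client; record layout follows the FastCGI spec.
// SCRIPT_FILENAME must point at a real .php file that php-fpm may execute.

function fcgi_record($type, $content, $requestId = 1) {
    // 8-byte header: version, type, request id, content length, padding, reserved
    return pack('CCnnCC', 1, $type, $requestId, strlen($content), 0, 0) . $content;
}

function fcgi_len($s) {
    // name/value length encoding: 1 byte if < 128, else 4 bytes with the high bit set
    $l = strlen($s);
    return $l < 128 ? chr($l) : pack('N', $l | 0x80000000);
}

function fcgi_pair($name, $value) {
    return fcgi_len($name) . fcgi_len($value) . $name . $value;
}

$params = fcgi_pair('GATEWAY_INTERFACE', 'FastCGI/1.0')
        . fcgi_pair('REQUEST_METHOD',    'GET')
        . fcgi_pair('SCRIPT_FILENAME',   '/var/www/html/info.php')  // assumed path
        . fcgi_pair('QUERY_STRING',      'foo=bar')
        . fcgi_pair('REMOTE_ADDR',       '127.0.0.1')
        . fcgi_pair('SERVER_PROTOCOL',   'HTTP/1.1');

$request = fcgi_record(1, pack('nCx5', 1, 0))  // FCGI_BEGIN_REQUEST: responder role, no keep-alive
         . fcgi_record(4, $params)             // FCGI_PARAMS carrying the CGI variables
         . fcgi_record(4, '')                  // empty FCGI_PARAMS record closes the params stream
         . fcgi_record(5, '');                 // empty FCGI_STDIN record: no request body

$fp = stream_socket_client('tcp://127.0.0.1:9000', $errno, $errstr, 5);
if (!$fp) {
    die("connect failed: $errstr ($errno)\n");
}
fwrite($fp, $request);

// php-fpm closes the connection after FCGI_END_REQUEST (keep-alive flag was 0),
// so this loop ends on its own; the output is raw FCGI records to eyeball.
while (!feof($fp)) {
    echo fread($fp, 8192);
}
fclose($fp);

If php-fpm stays silent, the usual suspects are a SCRIPT_FILENAME that doesn't exist and a pool whose listen.allowed_clients doesn't permit your client's address.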
I'm evaluating the front end performance of a secure (SSL) web app here at work and I'm wondering if it's possible to compress text files (html/css/javascript) over SSL. I've done some googling around but haven't found anything specifically related to SSL. If it's possible, is it even worth the extra CPU cycles since responses are also being encrypted? Would compressing responses hurt performance?
Also, I want to make sure we're keeping the SSL connection alive so we're not doing SSL handshakes over and over. I'm not seeing Connection: Keep-Alive in the response headers. I do see Keep-Alive: 115 in the request headers, but that only asks to keep the connection alive for 115 seconds (it seems like the app server is closing the connection after a single request is processed?). Wouldn't you want the server to set that response header for as long as the session inactivity timeout?
I understand browsers don't cache SSL content to disk so we're serving the same files over and over and over on subsequent visits even though nothing has changed. The main optimization recommendations are reducing the number of http requests, minification, moving scripts to bottom, image optimization, possible domain sharding (though need to weigh the cost of another SSL handshake), things of that nature.
Yes, compression can be used over SSL; it takes place before the data is encrypted, so it can help over slow links. It should be noted, though, that this is a bad idea: combining compression with SSL also opens up a vulnerability.
After the initial handshake, SSL is less of an overhead than many people think* - even if the client reconnects, there's a mechanism to continue existing sessions without renegotiating keys, resulting in less CPU usage and fewer round-trips.
Load balancers can screw with the continuation mechanism, though: if requests alternate between servers then more full handshakes are required, which can have a noticeable impact (~few hundred ms per request). Configure your load balancer to forward all requests from the same IP to the same app server.
Which app server are you using? If it can't be configured to use keep-alive, compress files and so on then consider putting it behind a reverse proxy that can (and while you're at it, relax the cache headers sent with static content - HttpWatchSupport's linked article has some useful hints on that front).
(*SSL hardware vendors will say things like "up to 5 times more CPU" but some chaps from Google reported that when Gmail went to SSL by default, it only accounted for ~1% CPU load)
You should probably never use TLS compression. Some user agents (at least Chrome) will disable it anyway.
You can selectively use HTTP compression
You can always minify
Let's talk about caching too
I am going to assume you are using an HTTPS Everywhere style web site.
Scenario:
Static content like css or js:
Use HTTP compression
Use minification
Long cache period (like a year)
etag is only marginally useful (due to long cache)
Include some sort of version number in the URL in your HTML pointing to this asset so you can cache-bust
HTML content with ZERO sensitive info (like an About Us page):
Use HTTP compression
Use HTML minification
Use a short cache period
Use etag
HTML content with ANY sensitive info (like a CSRF token or bank account number):
NO HTTP compression
Use HTML minification
Cache-Control: no-store, must-revalidate (see the sketch after this list)
etag is pointless here (due to revalidation)
some logic to redirect the page after session timeout (taking into account multiple tabs). If someone presses the browser's Back button, the sensitive info is not displayed due to the cache header.
You can use HTTP compression with sensitive data IF:
You never return user input in the response (got a search box? don't use HTTP compression)
Or you do return user input in the response but randomly pad the response
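Here's a minimal PHP sketch of the "sensitive HTML" case from the list above (the asset-version constant at the end is an illustrative assumption for the cache-busting point):

<?php
// Sensitive page: forbid storing the response anywhere, so Back/refresh
// always has to revalidate with the server.
header('Cache-Control: no-store, must-revalidate');
header('Pragma: no-cache');   // for old HTTP/1.0 intermediaries
header('Expires: 0');
// No ETag header: with no-store there is nothing cached to revalidate.

// Static assets, by contrast, get a long cache period plus a version
// number in the URL so a deploy can "bust" the old copy.
define('ASSET_VERSION', '2016011801');
echo '<link rel="stylesheet" href="/css/site.min.css?v=' . ASSET_VERSION . '">';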
Using compression with SSL opens you up to vulnerabilities like BREACH, CRIME, and other chosen-plaintext attacks.
You should disable compression, as SSL/TLS currently has no way to mitigate these length-oracle attacks.
To your first question: SSL operates at a different layer than compression. In a sense, these are two web server features that work together without overlapping. Yes, by enabling compression you'll use more CPU on your server, but you'll have less outgoing traffic, so it's a tradeoff.
To your second question: Keep-Alive behavior really depends on the HTTP version. You could move your static content to a non-SSL server (this may include images, movies, audio, etc.).