We've got an Apache module in place for authentication. If the user is able to authenticate, the REMOTE_USER environment variable is set to their username, where it's available to any CGI they access.
I'd like to add a feature/module so that we can get additional information about the user from an LDAP data source and make it available to CGI and FCGI applications.
Since I know we can put information into the environment, is it appropriate to store a more complex data structure (such as JSON) in an environment variable? That seems clunky to me. Is there a better way to do it?
If it's language-dependent, then I'm most interested in Perl, but it would be best if I could make this data available to any type of CGI or FCGI application. We're using Apache 2.2 on RHEL 5.0 (with SELinux enabled).
If you need to work with plain CGI, environment variables seem to be the only option (with mod_perl you can access Apache's internal data structures).
If the data is too large for the environment, you could create a temporary file and pass just the file name, or store the information in a database. In both cases you probably need to worry about cleaning up the temporary data, and about race conditions with concurrent access.
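For instance, here is a minimal Perl CGI sketch of the environment-variable approach; REMOTE_USER_INFO and REMOTE_USER_INFO_FILE are hypothetical names for whatever your module exports (a JSON blob directly, or a temp-file name for larger payloads):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use JSON::PP qw(decode_json);   # core since Perl 5.14; on older Perls install JSON or JSON::PP from CPAN

    # REMOTE_USER comes from the Apache auth module; the other two variable
    # names are hypothetical and depend on how you choose to export the data.
    my $user = defined $ENV{REMOTE_USER} ? $ENV{REMOTE_USER} : 'anonymous';

    my $info = {};
    if (defined $ENV{REMOTE_USER_INFO}) {
        # small payloads: JSON blob directly in the environment
        $info = decode_json($ENV{REMOTE_USER_INFO});
    }
    elsif (defined $ENV{REMOTE_USER_INFO_FILE}) {
        # larger payloads: the environment only carries a temp-file name
        open my $fh, '<', $ENV{REMOTE_USER_INFO_FILE} or die "open: $!";
        local $/;                                   # slurp the whole file
        $info = decode_json(<$fh>);
    }

    print "Content-Type: text/plain\r\n\r\n";
    print "User: $user\n";
    print "Mail: ", (defined $info->{mail} ? $info->{mail} : 'unknown'), "\n";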
If you already have persistent server-side session data (a session file or directory or shared memory segment), you may want to place it in there.
I am doing some reverse engineering on a website.
We are using a LAMP stack under CentOS 5, without any commercial/open-source framework (Symfony, Laravel, etc.), just plain PHP with an in-house framework.
I wonder if there is any way to know which files on the server have been used to produce a response to a given request.
For example, let's say I am requesting http://myserver.com/index.php.
Let's assume that index.php calls other PHP scripts (e.g. to connect to the database and retrieve some info) and also includes a couple of other HTML files, etc.
How can I get the list of those accessed files?
I already tried enabling the server-status handler in Apache, and although it is working I can't get what I want (I also passed the 'refresh' parameter).
I also used lsof -c httpd, as suggested in other forums, but it produces a very large output and I can't find what I'm looking for.
I also read the apache logs, but I am only getting the requests that the server handled.
Some other users suggested adding PHP directives like 'self', but that means I would need to know beforehand which files to modify to include that directive (which I don't), and that is precisely what I am trying to find out.
Is it actually possible to trace the internal activity of the server and get those file names and locations?
Regards.
Not that I have tried this yet, but it looks like mod_log_config is the answer to my own question.
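Presumably something like this in the Apache config would do it: mod_log_config's %f token logs the file the request was mapped to (note it will only show the entry script, not files pulled in via PHP include/require). Log name and path below are made up:

    # %f is the filename on disk that the request resolved to
    LogFormat "%t %h \"%r\" %f" served_files
    CustomLog logs/served_files_log served_files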
So, a common practice these days is to put connection strings and passwords in environment variables to avoid placing them in a file. This is all fine and dandy, but I'm not sure how to make it work when setting up a continuous-deployment workflow with a configuration management tool such as Salt/Ansible or Chef/Puppet.
Specifically, I have the following questions in environments using the above mentioned configuration management tools:
Where do you store connection strings/passwords/keys separate from codebases?
Do you keep those items in a code-repo of some type (git, etc.)?
Do you use some structure built-in to your tool?
How do you keep those same items secure?
Do you track changes/back-up these items, and if so, how?
In Chef you can:
- store passwords or API tokens in either encrypted data bags or chef-vault; they are then decrypted while Chef does the provisioning (encrypted data bags use a shared secret, chef-vault uses the existing PKI of the Chef client) -- see the sketch below;
- set environment variables when calling external software using the environment parameter of e.g. the execute resource;
- as for keeping them managed: I'd say you don't really manage them this way -- you set the variables only for the command that needs them, not e.g. for the whole Chef run.
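A rough sketch of how the first two points can fit together in a recipe; the 'secrets'/'myapp' names and the command are made up for illustration:

    # the shared secret is read from Chef::Config[:encrypted_data_bag_secret] on the node
    secrets = Chef::EncryptedDataBagItem.load('secrets', 'myapp')

    # expose the password only to this one command, not to the whole Chef run
    execute 'run database migrations' do
      command 'bundle exec rake db:migrate'
      cwd     '/srv/myapp/current'
      environment('DATABASE_PASSWORD' => secrets['db_password'])
    end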
With puppet, the preferred way is probably to store the secrets in Hiera files, which are just plain YAML files. That means that all secrets are stored on the master, separate from the manifest files.
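A hedged sketch with made-up key names: a value kept in a Hiera YAML file on the master and looked up from a manifest (the file path depends on your hiera.yaml hierarchy):

    # hieradata/common.yaml
    myapp::db_password: 's3cr3t'

    # in a manifest -- newer Puppet uses lookup(), older versions hiera()
    $db_password = lookup('myapp::db_password')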
TrueCrypt virtual encrypted disks are cross-platform and independent of tooling. Mount the volume read-write to change the secrets in the files it contains, unmount it, and then commit/push the encrypted disk image into version control. Mount it read-only for automation.
ansible-vault can be used to encrypt sensitive data files. A CI server like Jenkins, however, is not the safest place to store access credentials. If you add HashiCorp Vault and Ansible Tower/AWX, you can provide a secure solution for several teams.
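For illustration, the basic ansible-vault workflow looks roughly like this (file names are arbitrary):

    # create or edit an encrypted vars file
    ansible-vault create group_vars/all/vault.yml
    ansible-vault edit group_vars/all/vault.yml

    # supply the vault password when running the playbook
    ansible-playbook site.yml --ask-vault-pass
    # or non-interactively, e.g. from a CI job:
    ansible-playbook site.yml --vault-password-file ~/.vault_pass.txt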
My scripts (PHP, Python, etc.) and the scripts of other users on my Linux system are executed by the Apache user, aka "www-data". Please correct me if I'm wrong, but this might lead to several awkward situations:
1. I'm able to read the source code of other users' scripts by using a script. I might find hardcoded database passwords.
2. Files written by scripts and uploads are owned by www-data and might be unreadable or undeletable by the script owner.
3. Users will want their upload folders to be writable by www-data. Using a script I can now write into other users' upload directories.
4. Users frustrated with these permission problems will start to set file and directory permissions to 777 (just take a look at the WordPress support forum…).
5. One single exploitable script is enough to endanger all the other users. OS file-permission security won't help much to contain the damage.
So how do people deal with this nowadays? What's a reasonable (architecturally correct?) approach to supporting several web frameworks on a shared system without weakening traditional file-permission-based security? Is using FastCGI still the way to go? How do contemporary interfaces (WSGI) and performance strategies fit in?
Thanks for any hints!
As far as I understand this -- please correct me if I am wrong!
Ad 1.-4.: with WSGI you have the possibility to change, and therefore restrict, the user/group on a per-process basis.
http://code.google.com/p/modwsgi/wiki/ConfigurationDirectives#WSGIDaemonProcess
Ad 5.: with WSGI you can isolate processes.
http://code.google.com/p/modwsgi/wiki/ConfigurationDirectives#WSGIProcessGroup
Quote from the mod_wsgi page:
"An alternate mode of operation available with Apache 2.X on UNIX is 'daemon' mode. This mode operates in similar ways to FASTCGI/SCGI solutions, whereby distinct processes can be dedicated to run a WSGI application. Unlike FASTCGI/SCGI solutions however, neither a separate process supervisor or WSGI adapter is needed when implementing the WSGI application and everything is handled automatically by mod_wsgi.
Because the WSGI applications in daemon mode are being run in their own processes, the impact on the normal Apache child processes used to serve up static files and host applications using Apache modules for PHP, Perl or some other language is much reduced. Daemon processes may if required also be run as a distinct user ensuring that WSGI applications cannot interfere with each other or access information they shouldn't be able to."
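A hedged sketch of what that looks like in the Apache config; the user names, paths and process counts are examples only:

    # run alice's application in its own daemon processes, as user alice
    WSGIDaemonProcess alice_app user=alice group=alice processes=2 threads=15
    WSGIScriptAlias /alice /home/alice/webapp/app.wsgi

    <Location /alice>
        WSGIProcessGroup alice_app
    </Location>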
All your points are valid.
Points 1, 3 and 5 are solved by setting the open_basedir directive in your Apache config.
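For example, something along these lines in each user's vhost (paths illustrative) confines that user's PHP scripts to their own tree:

    <VirtualHost *:80>
        DocumentRoot /home/alice/public_html
        php_admin_value open_basedir "/home/alice/public_html/:/tmp/"
    </VirtualHost>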
Points 2 and 4 are truly annoying, but files uploaded by a web application are also (hopefully) removable through the same application.
One of the responsibilities of my Rails application is to create and serve signed XMLs. Any signed XML, once created, never changes, so I store every XML in the public folder and redirect the client appropriately to avoid unnecessary processing in the controller.
Now I want a new feature: every XML is associated with a date, and I'd like to implement the ability to serve a compressed file containing every XML whose date lies in a period specified by the client. Nevertheless, the period cannot be limited to less than one month for the feature to be useful, which implies that some of the zip files served will be as big as 50 MB.
My application is deployed as a Passenger module of Apache, so it's totally unacceptable to serve the file with send_data, since the client would have to wait for the entire compressed file to be generated before the actual download begins. Although I have an idea of how to implement the feature in Rails so the compressed file is produced while being served, I'm afraid my server will run short on resources once some lengthy Ruby/Passenger processes are tied up serving big zip files.
I've read about a better solution to serve static files through Apache, but not dynamic ones.
So, what's the solution to the problem? Do I need something like a custom Apache handler? How do I inform Apache, from my application, how to handle the request, compressing the files and streaming the result simultaneously?
Check out my mod_zip module for Nginx:
http://wiki.nginx.org/NgxZip
You can have a backend script tell Nginx which URL locations to include in the archive, and Nginx will dynamically stream a ZIP file to the client containing those files. The module leverages Nginx's single-threaded proxy code and is extremely lightweight.
The module was first released in 2008 and is fairly mature at this point. From your description I think it will suit your needs.
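If I read the mod_zip docs right, the backend just replies with an X-Archive-Files: zip header plus a plain-text list of files, one per line (CRC-32 or a dash if not precomputed, size in bytes, URL, name inside the archive). A rough sketch with made-up paths:

    X-Archive-Files: zip
    Content-Type: text/plain

    - 412075 /xmls/2012-01/doc-0001.xml 2012-01/doc-0001.xml
    - 398412 /xmls/2012-01/doc-0002.xml 2012-01/doc-0002.xml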
You simply need to use whatever API you have available to create a zip file and write it to the response, flushing the output periodically. If this is serving large zip files, or will be requested frequently, consider running it in a separate process with a high nice/ionice value (low priority).
Worst case, you could run a command-line zip in a low priority process and pass the output along periodically.
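As a sketch of that worst case (paths made up; Info-ZIP writes the archive to stdout when the archive name is "-"):

    # build the archive at low CPU/IO priority and stream it to stdout,
    # where the web app (or a thin CGI wrapper) passes it through to the client
    nice -n 19 ionice -c3 zip -q -r - /var/xmls/2012-01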
It's tricky to do, but I've made a gem called zipline ( http://github.com/fringd/zipline ) that gets things working for me. I want to update it so that it can support plain file handles or paths; right now it assumes you're using CarrierWave...
Also, you probably can't stream the response with Passenger... I had to use Unicorn to make streaming work properly... and certain Rack middleware can even screw that up (calling response.to_s breaks it).
If anybody still needs this, bother me on the GitHub page.
I am designing an application that is going to consist of 3-4 services that run as separate processes and are linked by a suitable IPC. The system is going to have a web interface and I want to use whatever webserver is there.
The web interface should be accessible under some URL while other URLs on the same webserver do totally different things. I'm planning to use the path below that URL to specify what the web interface should do. It has facilities for use by other applications over the net and for humans to interact with in a browser.
Off the cuff, I'd work as follows:
make the webserver fire up a CGI process for every request it receives (like SetHandler in Apache)
let the CGI connect to the IPC
let it get whatever it needs from the backend services
let the CGI return HTML / XML and whatever HTTP Status based on the services' answers
Now, what I really want is to avoid the first two steps, or if I can't, avoid the second one, because I'm afraid I'm wasting performance on unnecessary overhead (the requests coming from other applications might be frequent).
PHP, for example, can open persistent connections to a MySQL database that survive the script's runtime and don't need to be recreated next time, though I don't know how they actually do it. Also, as I understand it, the Apache modules are loaded once when the server starts, so that might remove the first step but would tie me to Apache.
So, what are good ways to hook a handler for specific URLs into different webservers? I don't want to handle the HTTP myself; otherwise I might just use a proxy setup to a second server, but that just seems like reinventing the wheel. If you think CGI is fine and have examples where it handles large numbers of requests of a similar structure, please let me know.
OK, I overlooked this previously. Explaining my question here brought me onto it:
Instead of creating a new process for every request, FastCGI can use a single persistent process which handles many requests over its lifetime. -- Wikipedia: FastCGI
Even under moderate loads, CGI is a pretty unscalable beast. FastCGI is an option, but you'll probably also find a mod_XXXX package, where XXXX is the name of your language. There's a mod for Ruby, Perl, and Python, for instance, and probably a fair few others.
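For illustration, here is roughly what the FastCGI model looks like with Perl's FCGI module: the expensive setup (e.g. the connection to your backend IPC) happens once, outside the request loop. The connect_to_backend() helper is hypothetical:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use FCGI;

    # one-time setup that survives across requests, e.g. the connection
    # to the backend services:
    # my $backend = connect_to_backend();   # hypothetical IPC handle

    my $request = FCGI::Request();

    # Accept() blocks until the webserver hands over the next request
    while ($request->Accept() >= 0) {
        my $path = $ENV{PATH_INFO} || '/';
        print "Content-Type: text/plain\r\n\r\n";
        print "handled $path in persistent process $$\n";
    }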