Allowing a PHP script to SSH, using sudo (Apache)

I need to allow a PHP script on my local web server to SSH to another machine to perform a specified task on some files. My httpd runs as _www with low permissions, so setting up direct passwordless SSH is difficult, not to say ill-advised.
The way I do it now is to have a minimal PHP script that sudo-execs (as me) a shell script living outside the document root. The shell script in turn calls (as me) the PHP code that does the actual SSH work and prints its output. Here's the code.
read_remote_files.php (The script I call from my browser):
exec('sudo -u me -n /home/me/run_php.sh /path/to/my_prog.php', $results);
print implode("\n", $results); // exec() returns the output lines as an array
/home/me/run_php.sh (Runs as me, calls whatever it's given):
php "$1" 2>&1
sudoers:
_www ALL = (me) NOPASSWD: /home/me/run_php.sh
This all works: my_prog.php is called as me and can SSH as me. It seems not too insecure, since run_php.sh can't be called directly from a browser (it's outside the document root). The issue I'm having is that my_prog.php isn't invoked as an HTTP program, so it doesn't have access to the HTTP environment variables (DOCUMENT_ROOT etc.).
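A workaround I could presumably use is forwarding the specific variables I need as extra arguments through the chain -- a sketch (untested; the argument forwarding is my own assumption, not something I've wired up yet):
# read_remote_files.php would append the value, e.g.:
#   exec('sudo -u me -n /home/me/run_php.sh /path/to/my_prog.php '
#        . escapeshellarg($_SERVER['DOCUMENT_ROOT']), $results);
# and run_php.sh would re-export it for the inner PHP process:
DOCUMENT_ROOT="$2" php "$1" 2>&1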
Two questions:
Am I making this too complicated?
Is there an easy way for my final script to get the HTTP variables?
Thanks!
Andy

Many systems handle tasks like this with a (privileged) cron job that periodically checks for the existence of a file, a database record, or some other resource, and then performs actions when one is present.
The huge advantage of this is that there is no direct interaction between the PHP script and the privileged script at all: the PHP script leaves the instructions in a resource, and the privileged script fetches them. As long as the instructions can't lead to the system getting compromised or damaged, it's definitely more secure than sudoing.
The disadvantage is that you can't push changes whenever you like; you have to wait until the cron job runs again. But maybe it's an option anyway?
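A hedged sketch of the cron-driven, privileged side of that pattern (the spool path, the job-file format, and the do_task command are all assumptions; the real script must strictly validate whatever it reads before acting on it):
#!/bin/sh
# runs from a privileged crontab every minute; picks up request files
# dropped by the PHP script and performs the remote task for each one
SPOOL=/var/spool/myapp/requests        # hypothetical spool directory
for req in "$SPOOL"/*.job; do
  [ -e "$req" ] || continue            # glob matched nothing: no work
  target=$(head -n 1 "$req")           # whitelist/validate this value!
  ssh me@otherhost "do_task $target"   # do_task is a placeholder
  rm -f -- "$req"
done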

"I need to allow a PHP script on my local web server, to SSH to another machine to perform a specified task on some files."
I think that you are phrasing this in terms of a solution that you are having difficulty getting to work, rather than as a requirement. Surely what you should be saying is "I want to invoke a task on machine B from a PHP script running under Apache on machine A", and then research solutions to this -- of which there are many, from a simple roll-your-own RPC tunnelled over HTTP(S) to using an XML-RPC or SOA framework.
Two caveats:
Do a phpinfo(); on both machines to check which extensions are available, and
also check your php.ini settings to make sure that your service provider hasn't disabled any functions that you expect to use (or use a quick-and-dirty script: echo 'disable_functions = ' . ini_get('disable_functions') . "\n"; ...)
If you browse here and the wider internet you'll find many examples. Here is one that I use for a similar purpose.
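At the roll-your-own end of that spectrum, machine A can simply call a small authenticated HTTPS endpoint on machine B; a sketch (the endpoint name, credentials, and action parameter are placeholders):
# from machine A, e.g. via shell_exec() or a cron job:
curl -fsS --user apiuser:secret \
  'https://machine-b.example.com/task.php?action=read_files'
# task.php runs locally on machine B, with full access to B's files,
# and prints its output back as the HTTP response body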

Related

Can I front-load user input, automating the Google Cloud SDK's interactive gcloud init command?

I have a very similar question to this one. #cherba already gave a rich and helpful dissection of the gcloud init command.
So what I really want to do in automating gcloud init is:
Front-load my interactive input: I want users to supply all input at the beginning and not be prompted again.
Request a token before gcloud is even installed, probably from a static permalink; the resulting token should be usable only once, probably with a limited lifetime, maybe an hour. This is very similar to how gcloud init --console-only already works, except with an unchanging initial URL.
I specifically want this to be for a user account, not a service account.
This would allow me to prompt the user up front for all configuration input and build the fully configured system automatically, over lunch or a long coffee break, with no additional babysitting.
The goal here is distinct development environments, not deploying to an array of boxes.
How can I accomplish this?
This is not supported officially and is not recommended. Service accounts are meant for this kind of thing. You should use service accounts as explained in the earlier answer.
What the SDK essentially does is submit a token request to https://accounts.google.com/o/oauth2/auth with the following scopes:
'https://www.googleapis.com/auth/userinfo.email'
'https://www.googleapis.com/auth/cloud-platform'
'https://www.googleapis.com/auth/appengine.admin'
'https://www.googleapis.com/auth/compute'
'https://www.googleapis.com/auth/accounts.reauth'
For this to succeed you need to provide the regular OAuth parameters like client_id and client_secret. To generate these, you will need to register your app as an OAuth app in the developer console.
This may not work if third-party authorizations are not supported. I have not tried it.
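A rough, untested sketch of that token request with curl (the client credentials come from your own OAuth app registration, and the oob redirect URI is what console-only flows have historically used):
CLIENT_ID='your-client-id'           # from the developer console
CLIENT_SECRET='your-client-secret'
SCOPE='https://www.googleapis.com/auth/cloud-platform'
printf 'Visit: https://accounts.google.com/o/oauth2/auth?client_id=%s&redirect_uri=urn:ietf:wg:oauth:2.0:oob&response_type=code&scope=%s\n' "$CLIENT_ID" "$SCOPE"
printf 'Paste the authorization code here: '
read CODE
# exchange the one-time code for access/refresh tokens
curl -s https://oauth2.googleapis.com/token \
  -d client_id="$CLIENT_ID" -d client_secret="$CLIENT_SECRET" \
  -d code="$CODE" -d redirect_uri='urn:ietf:wg:oauth:2.0:oob' \
  -d grant_type=authorization_code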
You said "front load my interactive input" and also "request a token, before gcloud is even installed". The problem with your request is that you will need to install gcloud at some point, and gcloud will use its own authentication methods to connect; in other words, authentication has to happen after gcloud is installed, because you will always be using a "gcloud ..." command to connect. The previous post that you linked explains this.
Because of this, I suspect that you need a workflow where simultaneous gcloud commands run for multiple users/projects at the same time, by running gcloud many times in parallel. A shell runs one foreground command at a time, so "front loading" the authentication (as you call it) means either using the screen command inside one SSH session or running multiple SSH sessions at the same time. If that's not what you need, then a simple shell script should do; it will run commands one after the other rather than in parallel.
For example, let's say that you want to install a package that will take a long time and be able to run another command at the same time, then you could do the following:
$ screen
$ sudo apt-get install [package-name]
(press Ctrl-A, then d, to detach and temporarily leave this session)
$ ... (do another process here)
$ screen -r (re-attaches the session to continue the apt-get install above)
The example above is roughly the equivalent of having multiple SSH sessions open at the same time. You could open multiple screens and launch multiple authentications at the same time, thereby also controlling when you want to stop a session. Keep in mind that if you run things in parallel, you will definitely need to load the authentication file as mentioned in the post you linked. Otherwise, you can use simple shell scripting and pass arguments. Since I'm not sure of the process that comes before/after your authentication, it's hard for me to provide a more precise example; there's a lot to consider and many unknowns about your workflow. I've included references below that show all the possibilities.
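If parallel interactive logins turn out not to be the real requirement, the service-account route recommended above reduces to a short non-interactive script; a sketch, assuming a downloaded key file key.json and a placeholder project ID:
# authorize gcloud without any prompts, then run scripted commands
gcloud auth activate-service-account --key-file=key.json
gcloud config set project my-project-id
gcloud compute instances list   # ...followed by whatever scripted work you need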
References:
- https://www.linode.com/docs/networking/ssh/using-gnu-screen-to-manage-persistent-terminal-sessions/
- https://www.geeksforgeeks.org/screen-command-in-linux-with-examples/
- https://www.lifewire.com/pass-arguments-to-bash-script-2200571
- https://cloud.google.com/sdk/gcloud/reference/auth/activate-service-account
- https://cloud.google.com/sdk/gcloud/reference/auth/login
- https://cloud.google.com/sdk/docs/scripting-gcloud

How to write Puppet modules for packages such as tigervnc or openvpn that require the user to set passwords or default settings?

I am learning Puppet and am trying to write modules to install services such as tigervnc and openvpn.
The problem is that tigervnc requires an initial password to be set by the user. I have tried using:
exec { 'vncpasswd': command => '/usr/bin/echo password | /usr/bin/vncpasswd > ~/.vnc/passwd' }
This works if I run it on the command line while logged in as the user, but it does not work when run via Puppet.
The problem with openvpn is that it requires a lot of user interaction for the default settings for certificate generation, the certificate authority, and key generation.
I have tried using execs with the "pkitool" methods, which work up to a point but are not very reliable or stable. I am also wary of using many execs if there is a better way to do it.
So to sum up, my main question is how to deal with these user interactions when trying to automate installations with Puppet. Is there a better way than running lots of execs, which to me seem like a last resort?
Thanks
If setting up a piece of software requires user interaction, I don't really see a way around exec. Keeping its use to a minimum is indeed a sensible design goal.
An economic approach is to
create a script that does all the necessary lifting that Puppet resources cannot perform
make Puppet deploy that script to the agent
run it at appropriate times via exec (along with good creates or onlyif queries); see the sketch after this list
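For the tigervnc case, the deployed script might look like the following sketch (it assumes TigerVNC's vncpasswd accepts -f to read the plaintext password on stdin and write the obfuscated form to stdout; the early exit keeps repeated exec runs idempotent, much like a creates guard):
#!/bin/sh
# set_vnc_password.sh -- hypothetical helper deployed to the agent by Puppet
set -e
PASSFILE="$HOME/.vnc/passwd"
[ -f "$PASSFILE" ] && exit 0   # password already set; nothing to do
mkdir -p "$HOME/.vnc"
echo 'password' | /usr/bin/vncpasswd -f > "$PASSFILE"   # assumes -f is supported
chmod 600 "$PASSFILE"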
Scripts that drive installation wizards relying on interactive input should probably use expect and friends.

How can I generate a web page that checks the status of a process or service?

I have a dedicated server that runs a few lightweight game servers. The server is already running Apache. However, I am cheap, the server hardware is not exactly robust, and not all the game servers we use run concurrently. I want to be able to generate a web page, say /stats, that has some info like:
Game 1: Online <uptime>
Game 2: Offline
...etc
I'm certain that I could run a script from a cron job that just uses ps + grep, logging to a file, and then parse that file for information on the servers, but I'm looking for a more dynamic option that checks as the page is generated.
You have at least a few options (other people may have additional suggestions beyond what is listed here):
Cron a shell script to generate a stats.html or stats.txt
PHP's shell_exec (could run ps |grep... for example) or exec
PHP's variety of posix functions may help (http://php.net/manual/en/ref.posix.php)
If you have Perl available there may be a few options there as well
My suggestion is to evaluate shell_exec or exec before any of the others.
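For example, the command handed to shell_exec could be a small check along these lines (the process name pattern is a placeholder):
# prints a status line for one game server; pgrep exits successfully
# if any running process matches the pattern
if pgrep -f 'game1-server' > /dev/null; then
    echo 'Game 1: Online'
else
    echo 'Game 1: Offline'
fi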
If you need additional assistance please post what you have tried and the results.

Can I execute a shell script when restarting (starting) the Apache web server

I have an application with a caching backend, and I want to clear the cache whenever the web server is restarted.
Is there an Apache configuration directive or any other way to execute a shell script upon web server (re)start?
Thanks,
Phil
Adding some more information, as requested in some of the answers:
The base system is of course Linux-based; in this exact situation, CentOS.
Modifying the startup script is unfortunately not an option, as pointed out in one of the comments already: it is not a configuration file within the respective RPM packages and would therefore be replaced by updates. Also, I think modifying the startup script would be a bad thing in general.
I see that "restarting the webserver" and "clearing my app cache" are not things that should really be tied together. I will consider other alternatives.
My situation is as follows: I can define what the virtual host config looks like, but I cannot define what the rest of the server's configuration looks like.
The application is actually PHP-based (and runs on the Symfony framework). Symfony pre-compiles a lot of stuff into dynamic PHP files from what it finds in the static configuration files. We deploy our apps via RPM, and after deployment a web server restart is already initiated, so I thought it might make sense to tie the cache cleanup to it. But after getting all your feedback, it looks like it is better to put the cache cleanup into the installation process itself.
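Since the apps are deployed via RPM, that cleanup can be as small as a %post scriptlet in the spec file; a sketch (the cache path is an assumption):
%post
# clear the pre-compiled symfony cache once the new files are installed;
# the web server restart that deployment already triggers then starts clean
rm -rf /var/www/myapp/cache/*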
You haven't provided a lot of detail here, so it's hard to give a concrete answer, but I would suggest that your best option is to write a script which handles restarting Apache and clearing your cache. It would look something like this:
#!/bin/sh
# restart apache
/etc/init.d/httpd graceful
# whatever needs to be done to clear cache
rm -rf /my/cache/dir
Ramy suggests modifying the system startup script for Apache -- this is a bad idea! If and when you update Apache on your server, there is a good chance that your change will be lost.
Dirk suggests that what you are trying to do is probably misguided, and I think he's right. You haven't told us what platform you are running, but I can think of few situations where restarting your webserver and clearing a cache actually need to happen together.
You can modify the startup script for the Apache web server in /etc/init.d/httpd and write your own commands inside it. To keep a package update from silently replacing your modified script, you could mark it immutable:
chattr +i /etc/init.d/httpd
If you have (root) access to the server you could do this with shell scripts, but I would consider whether relying on Apache restarts is really the best way to manage the cache.

cron jobs to upload a file via FTP

Is it possible to use cron to upload a file via FTP? If so, how can I invoke FTP to run an upload?
Assuming a UNIX-like operating system, you could set up a cron job that points to a shell script like the following:
#!/bin/sh
cd [source directory]
ftp -n [destination host] <<END
user [user] [password]
put [source file]
quit
END
Depending on your FTP client's defaults and the source file type, you may need to specify binary prior to the put.
You may use ncftp -- it comes with a handy tool called "ncftpput".
It is easier than using expect -- it is just a single command with a useful return code.
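A sketch of that single command, suitable for calling straight from a cron job (host, credentials, and paths are placeholders):
# uploads one file; the exit code reports success or failure
ncftpput -u myuser -p mypassword ftp.example.com /remote/dir /local/path/file.txt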
You probably are looking for a program called "expect" which is designed for dealing with interactive processes.
http://expect.nist.gov/
If you have "cron", you likely already have "expect" as well, these days.
Schedule a script call from cron.
In the script,
Use Public Key Authentication to open a Secure FTP communication with your server
Execute a batch file of PUTs to your server (there is a -b option in sftp)
For this,
you will need to set up public key authentication between the server and your client machine.
you will need an sftp client on the client machine (there are clients for all platforms -- PuTTY, Winscp.net; Unix variants usually have this already installed).
finally, try the put manually with public key authentication and note down the commands -- you can write them into the batch file for automation (see the sketch after this list).
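Putting those steps together, a sketch (the batch file contents are shown as a comment; host and paths are placeholders):
# upload.batch contains the commands you noted down, e.g.:
#   put /local/path/file.txt /remote/dir/
# then, from cron, with public key authentication already in place:
sftp -b upload.batch me@sftp.example.com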
Some other notes:
expect is overkill for this requirement.
Moreover, any scheme that requires the password to be scripted is bad.
ncftp is good for an interactive session (not for this kind of automation).
I do not know if wput allows public key authentication (probably not), in which case it's not good for such automation either.
You could also create a cron job that calls wget to download your file via FTP; note that wget can only download, so for uploads you would need something like curl -T instead.