Running untrusted code using chroot - virtual-machine

I wish to run some untrusted code using chroot.
However, many claim that chroot is not a security feature and can easily be broken out of.
Therefore my question is: how do apps like https://ideone.com/ manage to run untrusted code quickly and securely? And if chroot can be broken out of, wouldn't it also be possible to break out of the chroot on https://ideone.com/?

I'm not sure what untrusted code you're referring to, but chroot just changes the apparent root of the file system -- in theory you can't see above a certain level. But symbolic links can still work, so if your chrooted directory has a symlink to a directory above it, chroot doesn't do you much good.
It's also possible for applications to get access to resources through other applications. Some clever hackers know how to exploit running apps, and if you set up the ideone environment as root, well, anything is possible.
More theoretically, you could install a master application that has full access to the file system, and then run code in a chrooted environment. If that master app is running and listening, it can relay resources to your chrooted application.
Quickly? Sure. Securely? Well... in my example the master app acts as a gatekeeper, but it's still only as secure as your trust in that master app.
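To make that concrete, a bare-bones way to combine chroot with dropping root might look like the line below. This is only a sketch, not a full sandbox: the jail path, user and program name are made up, the jail still has to be populated with whatever binaries and libraries the program needs, and a real service would layer much more on top of this.

    # Rough sketch only -- /srv/jail and /usr/bin/untrusted-program are placeholders.
    # The key point: drop root (here via GNU chroot's --userspec) before running
    # anything untrusted, because a process that stays root inside a chroot has
    # well-known ways to break back out.
    sudo chroot --userspec=nobody:nogroup /srv/jail /usr/bin/untrusted-program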

Related

How to run graphical tool (e.g. deja-dup) as root on Ubuntu 17.10

When trying to set up automated backups under Ubuntu 17.10 using Deja Dup, I realized that one cannot back up the root directory, since a normal user starting the deja-dup application does not have the rights to access all files in /.
(A German discussion about a rather similar situation can be found here: https://forum.ubuntuusers.de/topic/wie-sichert-an-mit-deja-dup-ein-systemverzeich/)
The usual workaround of starting the deja-dup application via gksu no longer works on Ubuntu 17.10. It seems that a deliberate decision has been made to prevent users from starting graphical applications as root, since this is often a bad/risky thing to do.
However, to create regular backups of a system's / directory with deja-dup, the application has to be configured and later started as root.
Since the typical approaches like gksu, gksudo and sudo -H do not work under Ubuntu 17.10, I would highly appreciate any advice on a secure way to run deja-dup as root. Can someone help with advice?

Typechecking Hack code on VirtualBox via NFS shared folder

It seems prudent to first mention this issue and then this aptly-named edit, which seems related and has made hh_server refuse to run on NFS file systems. I am not very familiar with file systems and have never touched OCaml before, so in trying to accomplish the question title, I have tried editing what I know: /etc/hh.conf and /etc/hhvm/{php, server}.ini, adding hhvm.[server.]enable_on_nfs = true by pure guesswork. No dice.
As I understand it from the issue, the change stems from the hh_server daemon being unable to register changes to files via inotify on NFS drives, which is totally understandable. However, my VirtualBox VM is purely a test server for familiarizing myself with Hack (i.e. only running the typechecker), and I've successfully run hh_client on sshfs-mounted (osxfuse) drives before. Is there another problem I'm not aware of that makes this a bad idea? If not, how might I enable hh_server --check to run on my VBox NFS shared folder?
The main issue is the lack of inotify support for NFS, so hh_server may respond with stale data.
If you accept the risk, you can add enable_on_nfs = true to /etc/hh.conf, which will enable hh_server to check folders on NFS.
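In other words, assuming the stock location of /etc/hh.conf, the whole change is this single line:

    enable_on_nfs = true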

Apache custom module, where to put its own log file?

I have an Apache module that acts as a security filter, allowing requests to pass or not. This is a custom-made module; I don't want to use any existing module.
I have actually two questions:
The module has its own log file. I'm thinking that the best location would be /var/log/apache2/, but since the Apache process runs as the www-data user, it cannot create files in that path. I want a solution for the log file that is as unintrusive as possible (in terms of security) for a typical web server. Where would be the best place, and what kind of security attributes should be set?
The module communicates with another process using pipes. I would like to spawn this process from the Apache module only when I need it. Where should I put this binary, and how should I set its privileges to be as unintrusive as possible?
Thanks,
Cezane.
Apache starts as the superuser first and performs the module initialization (calling the module_struct::register_hooks function). There you can create the log files and either chown them to www-data or keep the file descriptor open so it can later be used from the forked and setuid'd worker processes.
(And if you need an alternative, I think it's also possible to log with syslog and configure it to route your log messages to your log file).
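A minimal sketch of the first approach, assuming a hypothetical module called mymodule and a log at /var/log/apache2/mymodule.log (both names are placeholders, not from the question). The post_config hook still runs as root, so the file can be created there and then handed over to www-data:

    #include <sys/types.h>
    #include <unistd.h>
    #include <pwd.h>
    #include "httpd.h"
    #include "http_config.h"
    #include "apr_file_io.h"

    #define MYMODULE_LOG "/var/log/apache2/mymodule.log"  /* assumed path */

    static int mymodule_post_config(apr_pool_t *pconf, apr_pool_t *plog,
                                    apr_pool_t *ptemp, server_rec *s)
    {
        apr_file_t *logfile;
        struct passwd *pw;

        /* Still running as root here: create the file under /var/log/apache2/. */
        if (apr_file_open(&logfile, MYMODULE_LOG,
                          APR_FOPEN_CREATE | APR_FOPEN_APPEND | APR_FOPEN_WRITE,
                          APR_FPROT_UREAD | APR_FPROT_UWRITE, pconf) != APR_SUCCESS)
            return HTTP_INTERNAL_SERVER_ERROR;

        /* Hand the file to the worker user ("www-data" on Debian/Ubuntu) so the
         * forked, unprivileged children can keep appending to it.  Alternatively,
         * keep the open apr_file_t around and write through it later. */
        pw = getpwnam("www-data");
        if (pw != NULL) {
            if (chown(MYMODULE_LOG, pw->pw_uid, pw->pw_gid) != 0) {
                /* non-fatal: the open handle above still works */
            }
        }
        return OK;
    }

    static void mymodule_register_hooks(apr_pool_t *p)
    {
        ap_hook_post_config(mymodule_post_config, NULL, NULL, APR_HOOK_MIDDLE);
    }

    module AP_MODULE_DECLARE_DATA mymodule_module = {
        STANDARD20_MODULE_STUFF,
        NULL, NULL, NULL, NULL, NULL,   /* no per-dir/per-server config, no directives */
        mymodule_register_hooks
    };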
Under the worker process you are already running as the www-data user so there isn't much you can do to further secure the execution. For example, AFAIK, you can't setuid to yet another user or chroot to protect the filesystem.
What you can do to improve security is to use a mandatory access control system. For example, with AppArmor you could tell the operating system which binaries your Apache module may execute, stopping it from running any unwanted binaries. And you can limit that binary's filesystem access, preventing it from touching www-data files that don't belong to it.
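To illustrate (the paths are made up; on Debian/Ubuntu the real profile would live under /etc/apparmor.d/), the relevant AppArmor rules for the Apache binary could look roughly like this fragment:

    /usr/sbin/apache2 {
      # let the module append to its own log
      /var/log/apache2/mymodule.log w,
      # allow exactly one helper binary, confined by its own profile
      /usr/local/lib/mymodule/helper Px,
    }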

Executing scripts as the Apache user (www-data) insecure? What does a contemporary setup look like?

My scripts (php, python, etc.) and the scripts of other users on my Linux system are executed by the apache user aka "www-data". Please correct me if I'm wrong, but this might lead to several awkward situations:
I'm able to read the source code of other users' scripts by using a script. I might find hardcoded database passwords.
Files written by scripts and uploads are owned by www-data and might be unreadable or undeletable by the script owner.
Users will want their upload folders to be writable by www-data. Using a script, I can now write into other users' upload directories.
Users frustrated with these permission problems will start to set file and directory permissions to 777 (just take a look at the Wordpress Support Forum…).
One single exploitable script is enough to endanger all the other users. OS file permission security won't help much to contain the damage.
So how do people nowadays deal with this? What's a reasonable (architecturally correct?) approach to support several web-frameworks on a shared system without weakening traditional file permission based security? Is using fastCGI still the way to go? How do contemporary interfaces (wsgi) and performance strategies fit in?
Thanks for any hints!
As far as I understand this (please correct me if I am wrong!):
Ad 1.-4.: with WSGI you have the possibility to change, and therefore restrict, the user/group on a per-process basis.
http://code.google.com/p/modwsgi/wiki/ConfigurationDirectives#WSGIDaemonProcess
Ad 5.: with WSGI you can isolate processes.
http://code.google.com/p/modwsgi/wiki/ConfigurationDirectives#WSGIProcessGroup
Quote from the mod_wsgi page:
"An alternate mode of operation available with Apache 2.X on UNIX is 'daemon' mode. This mode operates in similar ways to FASTCGI/SCGI solutions, whereby distinct processes can be dedicated to run a WSGI application. Unlike FASTCGI/SCGI solutions however, neither a separate process supervisor or WSGI adapter is needed when implementing the WSGI application and everything is handled automatically by mod_wsgi.
Because the WSGI applications in daemon mode are being run in their own processes, the impact on the normal Apache child processes used to serve up static files and host applications using Apache modules for PHP, Perl or some other language is much reduced. Daemon processes may if required also be run as a distinct user ensuring that WSGI applications cannot interfere with each other or access information they shouldn't be able to."
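A minimal sketch of that daemon-mode setup in the Apache config (the process name, paths and the dedicated user/group are placeholders):

    WSGIDaemonProcess user1-site user=user1 group=user1 processes=2 threads=15
    WSGIProcessGroup user1-site
    WSGIScriptAlias / /home/user1/site/app.wsgi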
All your points are valid.
Points 1, 3 and 5 are solved by setting the open_basedir directive in your Apache config.
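For example, per virtual host (this assumes mod_php, and the user and paths are placeholders; it of course only constrains PHP scripts):

    <VirtualHost *:80>
        ServerName user1.example.com
        DocumentRoot /home/user1/public_html
        php_admin_value open_basedir "/home/user1/public_html/:/tmp/"
    </VirtualHost>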
Points 2 and 4 are truly annoying, but files uploaded by a web application are also (hopefully) removable through the same application.

Structuring a central Git remote to serve production Apache sites

I've been referring to http://toroid.org/ams/git-website-howto as a starting point for a production web server running on a managed VPS. The VPS runs cPanel and WHM, and it will eventually host multiple client websites, each with its own cPanel account (thus, each with its own Linux user and home directory from which sites are served). Each client's site is a separate Git repository.
Currently, I'm pushing each repository via SSH to a bare repo in the client's home folder, e.g. /home/username/git/repository.git/. As per the tutorial above, each repo has been configured to check out to another directory via a post-receive hook. In this case, each repo checks out to its own /home/username/public_html (the default DocumentRoot for new cPanel accounts), where the files are then served by Apache. While this works, it requires me to set up my remotes (in my local development environment) like this:
url = ssh://username@example.com/home/username/git/repository.git/
It also requires me to enter the user's password every time I push to it, which is less than ideal.
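For reference, the post-receive hook from that howto is essentially this, with the paths used above:

    #!/bin/sh
    # lives in /home/username/git/repository.git/hooks/post-receive
    GIT_WORK_TREE=/home/username/public_html git checkout -f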
In an attempt to centralize all of my repositories in one folder, I also tried pushing to /root/git/repository.git as root and then checking out to the appropriate home directory from there. However, this causes all of the checked-out files to be owned by root, which prevents Apache from serving the site, with errors like
[error] [client xx.xx.xx.xx] SoftException in Application.cpp:357: UID of script "/home/username/public_html/index.php" is smaller than min_uid
(which is a file ownership/permissions issue, as far as I can tell)
I can solve that problem with chown and chgrp commands in each repo's post-receive hook--however, that also raises the "not quite right" flag in my head. I've also considered gitosis (to centralize all my repos in /home/git/), but I assume that I'd run into the same file ownership problem, since the checked-out files would then be owned by the git user.
Am I just approaching this entire thing the wrong way? I feel like I'm completely missing a third, more elegant solution to the overall problem. Or should I just stick to one of the methods I described above?
It also requires me to enter the user's password every time I push to it, which is less than ideal
That shouldn't be necessary if you publish your public SSH key to the destination account's .ssh/authorized_keys file.
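For example, from the development machine (this assumes a key pair already exists under ~/.ssh; ssh-copy-id ships with OpenSSH on most distributions):

    ssh-copy-id username@example.com
    # or, done by hand:
    cat ~/.ssh/id_rsa.pub | ssh username@example.com 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'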
See also locking down ssh authorized keys for instance.
But see also the official reference, the Pro Git book's "Setting Up the Server" section.