Is it possible to execute a branch as a different user in *nix?

Is it possible to execute a method as a different user in Linux (or SELinux specifically)? The programs I have run in individual sandboxes, each with a different user and process ID. I have a situation where I have to execute a branch of code as a different user with a different process ID, to prevent it from accessing the memory and disk space of the code that spawns it.
If not possible, can you throw some light on how much of the kernel code would have to be changed to achieve it? (I understand it's subjective. Alternatively, if you can suggest what to do and how to go about it, that would be very helpful.)

Protecting some resources from other code executing on the same machine is precisely what led to the invention of processes and UIDs.
If you are searching for a mechanism that looks like a simple function call, I would say it's impossible, because it would require the memory to be shared between the caller and the callee. However, using fork/exec (or wrappers like system()) will give you some isolation, as long as you exchange parameters and results through system objects like program arguments or pipes.
Moreover, since *nix users are meant to protect processes from one another, an explicit relationship must be established between two users before one can act on behalf of the other.
Actually, you may want to:
define a sudoers policy that gives your first user the right to run a command (or a particular command) as the second one.
use popen() (or system()) in your first program to call the less privileged code.
pass the parameters, if any, and parse the result from stdout
As an extra, you may use the same binary for both executions; this way, all the code can be in the same place.
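For instance, a minimal sketch of the popen() step in C, assuming a sudoers entry such as "youruser ALL=(sandbox) NOPASSWD: /usr/local/bin/worker" (the user "sandbox" and the program "worker" are placeholders, not anything from the original setup):

#include <stdio.h>

int main(void)
{
    char line[256];
    FILE *p;

    /* Run the less-privileged code as the other user; parameters go on
       the command line, results come back over the pipe (stdout). */
    p = popen("sudo -u sandbox /usr/local/bin/worker param1", "r");
    if (p == NULL)
        return 1;
    while (fgets(line, sizeof line, p) != NULL)
        printf("result: %s", line);
    return pclose(p);
}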

Related

A safe way to avoid ABAP program running in productive ERP system

I need to develop an ABAP program that performs some actions for SAP Basis. This program will be run in test/development systems only; it is not safe to run it in a productive system.
I need a safe way to prevent the program from running in a productive system. I can read the category field in table T000 and check whether the system is productive or not, but this way is not 100% safe: any user with debug/variable-modification authorizations will be able to bypass it.
A possible solution is not to import the ABAP program into the productive system at all. At the same time, we have a system copy from productive to QA (the Oracle DB is copied from PROD to QA completely and renamed). This means the new program will be erased in QA after each PROD->QA copy and we will need to import it from DEV to QA again. So this way is not convenient.
Is there a safer way?
There are very few safeguards against someone who maliciously uses the debugger to change values in a running program (and has the permissions to do so). If someone with that permission wants to actively harm your system, they will be able to do so one way or another.
Manage that risk through strict permissions management.
If that is not sufficient, do not transport the program, however inconvenient that may seem.
Still, you should guard against accidental execution, and for that, checking the role of the client (it can be "productive", "customizing", or "test"; it is set via transaction SCC4, stored in table column T000-CCCATEGORY, and readable via function module TR_SYS_PARAMS) should be sufficient.
Anyone with developer/debug authorization can do basically everything in your system. Even if you do not ship your program, anyone with a developer role can create a Z-program that does the same thing yours does.
So let's focus on your phrase "productive system": how many users have developer authorization there? That should be strictly controlled by your admin.
In addition to the T000 "productive" check, you can also add an authority check (for example, on S_ADMI_FCD) and logging in your code to restrict access and safeguard the program.
Hope it helps. Thank you!
Another solution would be to call an operating system command that exists only in the test/quality system and not on the productive system.

Is it possible to accurately log what applications the user has launched through the linux kernel?

My goal is to write to a file, with a timestamp, whenever the user launches an application (such as Firefox).
The tricky part is having to do this from the kernel (or a module loaded into the kernel).
From the research I've done so far (sources listed below), the execve system call seemed the most viable, since it carries the filename of the program being executed, which seemed like gold at the time. But I quickly learned it wasn't as useful as I thought, since this system call isn't limited to applications the user launches.
So then I thought of using ps -ef, as it lists all the currently running processes, and I would just have to filter out which ones were applications opened by the user.
But the issue with that method is that I would have to poll every X seconds, so it has the potential to miss something if the user launched and closed an application between two polls.
I've also realized that writing to a file would be a challenge as well, since you don't have access to the standard library from the kernel. My guess there would be to make use of /proc somehow, so the user can actually access the information that I'm trying to log.
Basically I'm running out of leads and I'd greatly appreciate it if anyone could point me in the right direction.
Thanks.
Sources:
http://tldp.org/LDP/lkmpg/2.6/html/x978.html (not very recent)
https://0xax.gitbooks.io/linux-insides/content/SysCall/syscall-4.html
First, writing to or reading a regular file from the kernel is a bad idea, and it is not done in kernel code. There are of course VFS files, like those under /sys or /proc, but these are a special case, and that is allowed.
See this article in Linux Journal,
"Driving Me Nuts - Things You Never Should Do in the Kernel" by Greg Kroah-Hartman:
http://www.linuxjournal.com/article/8110
Every new process created in Linux adds an entry under /proc
as /proc/pidNum, where pidNum is the process ID of the new process.
You can find out the name of the new application which was invoked simply by
cat /proc/pidNum/cmdline.
So for example, if your crond daemon has pid 1336, then
$ cat /proc/1336/cmdline
will give
cron
And there are ways in Linux to monitor the addition of entries to a directory.
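For completeness, here is a minimal user-space sketch (not kernel code) that reads /proc/<pid>/cmdline for a given pid; note that the arguments in cmdline are separated by NUL bytes:

#include <stdio.h>

int main(int argc, char *argv[])
{
    char path[64], buf[4096];
    size_t n, i;
    FILE *f;

    if (argc < 2)
        return 1;
    snprintf(path, sizeof path, "/proc/%s/cmdline", argv[1]);
    f = fopen(path, "r");
    if (f == NULL)
        return 1;
    n = fread(buf, 1, sizeof buf, f);
    fclose(f);
    /* cmdline separates arguments with NUL bytes; make them printable. */
    for (i = 0; i + 1 < n; i++)
        if (buf[i] == '\0')
            buf[i] = ' ';
    printf("%.*s\n", (int)n, buf);
    return 0;
}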

Setting permissions based on the program trying to access a kernel module

I have written a kernel module that creates a /proc file and reads values written into it from a user program, say user.c.
Now I want to restrict permissions on this /proc file. I have restricted permissions based on user ID using the current kernel variable, by checking current->euid.
My question: is there a way to restrict this based on the program too? I.e., only user.c should be able to write to this proc file and not any other program. I could not find any fields in task_struct that would help me do this. Can you please suggest a way to do this?
In your proc writer implementation (that is, inside the kernel module) the best you can do is check the value of current (a struct task_struct *), which holds (among other things) valuable fields such as comm (the 16-byte executable name), pid, uid, etc. - basically, everything you see in /proc/<pid>/status. You can also check the original exe name (like you see in /proc/<pid>/exe), to see if it's a well-known path. You can then return an error.
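A minimal sketch of such a check inside the module's write handler; the allowed name "user" is a placeholder, and comm holds at most 15 characters plus the terminating NUL:

#include <linux/proc_fs.h>
#include <linux/sched.h>
#include <linux/string.h>

static ssize_t my_proc_write(struct file *file, const char __user *ubuf,
                             size_t count, loff_t *ppos)
{
    /* current is the task that issued this write() */
    if (strcmp(current->comm, "user") != 0)
        return -EPERM;          /* reject every other program */

    /* ... copy_from_user() and process the data as before ... */
    return count;
}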
Caveat: Anyone could rename their opening process to be one of your "allowed" programs, if you go by "comm", and there are ways to defeat the "exe" protection. This will only make it slightly harder, but not impossible for someone to get around. A more comprehensive and stronger solution would require you to peek at the user mode memory of the program, which is possible, but too complicated for a brief answer.
Note: Permission parameters won't work, don't even bother. They go by classic UNIX ACL, which is u/g/o - so you can't filter by PID.

How to run an application as root without asking for an admin password?

I am writing a program in Objective-C (Xcode 3.2, on Snow Leopard) that is capable of either selectively blocking certain sites for a duration or only allow certain sites (and thus block all others) for a duration. The reasoning behind this program is rather simple. I tend to get distracted when I have full internet access, but I do need internet access during my working hours to get to a number of work-related websites. Clearly, this is not a permanent block, but only helps me to focus whenever I find myself wandering a bit too much.
At the moment, I am using a Unix script that is called via AppleScript to obtain Administrator permissions. It then activates a number of ipfw rules and clears those after a specific duration to restore full internet access. Simple and effective, but since I am running as a standard user, it gets cumbersome to enter my administrator password each and every time I want to go "offline". Furthermore, this is a great opportunity to learn to work with XCode and Objective-C. At the moment, everything works as expected, minus the actual blocking. I can add a number of sites in a list, specify whether or not I want to block or allow these websites and I can "start" the blocking by specifying a time until which I want to stay "offline".
However, I find it hard to obtain clear information on how I can run a privileged Unix command from Objective-C. Ideally, I would like to be able to store information for the Administrator account in the Keychain and use it later, so that I can move into "offline" mode with the convenience of clicking a button. Even more ideally, there might be some class in Objective-C with which I can block access to some/all websites for this particular user without needing to rely on privileged Unix commands. A third possibility is starting this program with root permissions and then dropping the privileges until I need them, but since this is a GUI application that lives in the menu bar of OS X, the results are rather awkward, and getting it to run each and every time with root permission is no easy task.
Anyone who can offer me some pointers or advice? Please, no security-warnings, I am fully aware that what I want to do is a potential security threat.
If you want to do something with admin privileges, and you don't want to have to authenticate each time, it sounds like you need to look at setuid.
Make a little command-line executable to do the rule changing, then set that tool's owner to root and set the setuid bit. Now you can run it as a user and it will run as root.
Look here for more info:
http://en.wikipedia.org/wiki/Setuid
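As a sketch (the helper name "netblock" and the ipfw rule are placeholders, not part of the original program): first the one-time setup,

$ sudo chown root /usr/local/bin/netblock
$ sudo chmod u+s /usr/local/bin/netblock

and then a minimal helper in C:

#include <unistd.h>

int main(void)
{
    /* The binary runs with effective uid 0 thanks to the setuid bit;
       promote the real uid too, since some tools check it. */
    if (setuid(0) != 0)
        return 1;
    execl("/sbin/ipfw", "ipfw", "add", "deny", "tcp",
          "from", "me", "to", "any", "80", (char *)NULL);
    return 1;   /* execl only returns on failure */
}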
You have to create a separate process that runs with higher privileges. Have a look at the BetterAuthorizationSample on how to run such helper applications using launchd.

Platform independent file locking?

I'm running a very computationally intensive scientific job that spits out results every now and then. The job is basically to just simulate the same thing a whole bunch of times, so it's divided among several computers, which use different OSes. I'd like to direct the output from all these instances to the same file, since all the computers can see the same filesystem via NFS/Samba. Here are the constraints:
Must allow safe concurrent appends. Must block if some other instance on another computer is currently appending to the file.
Performance does not count. I/O for each instance is only a few bytes per minute.
Simplicity does count. The whole point of this (besides pure curiosity) is so I can stop having every instance write to a different file and manually merging these files together.
Must not depend on the details of the filesystem. Must work with an unknown filesystem on an NFS or Samba mount.
The language I'm using is D, in case that matters. I've looked, there's nothing in the standard lib that seems to do this. Both D-specific and general, language-agnostic answers are fully acceptable and appreciated.
Over NFS you face some problems with client-side caching and stale data. I have written an OS-independent lock module to work over NFS before. The simple idea of creating a [datafile].lock file does not work well over NFS. The basic idea to work around it is to create a lock file [datafile].lock whose presence means the file is NOT locked; a process that wants to acquire the lock renames the file to a different name, like [datafile].lock.[hostname].[pid]. The rename is an atomic enough operation that it works well enough over NFS to guarantee exclusivity of the lock. The rest is basically a bunch of fail-safes, loops, error checking, and lock recovery in case the process dies before releasing the lock and renaming the lock file back to [datafile].lock.
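A minimal sketch of that rename trick in C; the file names and retry policy are placeholders, and all the fail-safe logic is omitted:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char mine[256], host[64];

    gethostname(host, sizeof host);
    snprintf(mine, sizeof mine, "data.lock.%s.%d", host, (int)getpid());

    /* rename() is atomic: exactly one contender gets the lock. */
    while (rename("data.lock", mine) != 0)
        sleep(1);               /* somebody else holds it; retry */

    /* ... append to the data file ... */

    rename(mine, "data.lock");  /* release the lock */
    return 0;
}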
The classic solution is to use a lock file, or more accurately a lock directory. On all common OSes, creating a directory is an atomic operation, so the routine is:
try to create a lock directory with a fixed name in a fixed location
if the create failed, wait a second or so and try again - repeat until success
write your data to the real data file
delete the lock directory
This has been used by applications such as CVS for many years across many platforms. The only problem occurs in the rare cases when your app crashes while writing and before removing the lock.
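A minimal sketch of the routine in C (the directory name is a placeholder; the crash case is not handled):

#include <errno.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    /* Steps 1-2: try to create the lock directory, retrying until success. */
    while (mkdir("results.lockdir", 0777) != 0) {
        if (errno != EEXIST)
            return 1;           /* a real error, not contention */
        sleep(1);
    }

    /* Step 3: write to the real data file here. */

    /* Step 4: release the lock. */
    rmdir("results.lockdir");
    return 0;
}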
Why not just build a simple server which sits between the file and the other computers?
Then if you ever wanted to change the data format, you would only have to modify the server, and not all of the clients.
In my opinion building a server would be much easier than trying to use a Network file system.
Lock File with a twist
Like other answers have mentioned, the easiest method is to create a lock file in the same directory as the datafile.
Since you want to be able to access the same file from multiple PCs, the best solution I can think of is to include in the lock file the identifier of the machine currently writing to the data file.
So the sequence for writing to the data file would be:
Check if there is a lock file present
If there is a lock file, see if I'm the one owning it by checking that its content has my identifier.
If that's the case, just write to the data file then delete the lock file.
If that's not the case, just wait a second or a small random length of time and try the whole cycle again.
If there is no lock file, create one with my identifier and try the whole cycle again to avoid a race condition (re-check that the lock file is really mine).
Along with the identifier, I would record a timestamp in the lock file and check whether it's older than a given timeout value.
If the timestamp is too old, assume the lock file is stale and just delete it, as it would mean one of the PCs writing to the data file crashed or lost its connection.
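A minimal sketch of this scheme in C; the identifier is a placeholder, the stale-timeout handling is left out, and note that O_EXCL is only dependable on NFSv3 and later:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char id[128], back[128];
    int fd;
    FILE *f;

    snprintf(id, sizeof id, "host42:%d\n", (int)getpid());  /* my identifier */

    for (;;) {
        /* Create the lock file only if it does not exist yet. */
        fd = open("data.lock", O_CREAT | O_EXCL | O_WRONLY, 0644);
        if (fd >= 0) {
            write(fd, id, strlen(id));
            close(fd);
        }
        /* Re-read the lock file: is it really mine? */
        f = fopen("data.lock", "r");
        if (f != NULL && fgets(back, sizeof back, f) != NULL
            && strcmp(back, id) == 0) {
            fclose(f);
            break;              /* lock acquired */
        }
        if (f != NULL)
            fclose(f);
        sleep(1);               /* owned by someone else; retry */
    }

    /* ... write to the data file ... */

    unlink("data.lock");        /* release */
    return 0;
}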
Another solution
If you are in control of the format of the data file, another solution could be to reserve a structure at the beginning of the file to record whether it is locked or not.
If you just reserve a byte for this purpose, you could assume, for instance, that 00 would mean the data file isn't locked, and that other values would represent the identifier of the machine currently writing to it.
Issues with NFS
OK, I'm adding a few things because Jiri Klouda correctly pointed out that NFS uses client-side caching, which can leave the actual lock file in an undetermined state.
A few ways to solve this issue:
mount the NFS directory with the noac or sync options. This is easy but doesn't completely guarantee data consistency between client and server, so there may still be issues, although in your case it may be OK.
Open the lock file or data file using the O_DIRECT, O_SYNC or O_DSYNC flags. This is supposed to disable caching altogether.
This will lower performance but will ensure consistency.
You may be able to use flock() to lock the data file, but its implementation is spotty and you will need to check whether your particular OS actually uses the NFS locking service; it may do nothing at all otherwise (a minimal flock() sketch follows this list).
If the data file is locked, then another client opening it for writing will fail.
Oh yeah, and it doesn't seem to work on SMB shares, so it's probably best to just forget about it.
Don't use NFS and just use Samba instead: there is a good article on the subject and why NFS is probably not the best answer to your usage scenario.
You will also find in this article various methods for locking files.
Jiri's solution is also a good one.
Basically, if you want to keep things simple, don't use NFS for frequently-updated files that are shared amongst multiple machines.
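The flock() sketch referenced above; whether it actually locks anything over NFS depends on the OS and on the NFS lock daemon being available:

#include <fcntl.h>
#include <sys/file.h>
#include <unistd.h>

int main(void)
{
    int fd = open("results.dat", O_WRONLY | O_APPEND | O_CREAT, 0644);
    if (fd < 0)
        return 1;
    if (flock(fd, LOCK_EX) != 0)    /* blocks until the lock is free */
        return 1;
    write(fd, "a few bytes\n", 12);
    flock(fd, LOCK_UN);             /* release */
    close(fd);
    return 0;
}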
Something different
Use a small database server to save your data into and bypass the NFS/SMB locking issues altogether or keep your current multiple data files system and just write a small utility to concatenate the results.
It may still be the safest and simplest solution to your problem.
I don't know D, but I think using a mutex file to do the job might work. Here's a rough C sketch you might find useful:
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int mutex;

    /* Try to create a new file to use as a mutex.
       O_EXCL makes open() fail if the file already exists. */
    while ((mutex = open("lock_file", O_CREAT | O_EXCL | O_WRONLY, 0644)) == -1)
        sleep(1);   /* another process holds the mutex; retry */

    /* Open your log file for appending and write the results. */
    FILE *log_file = fopen("the_log_file", "a");
    fprintf(log_file, "results go here\n");
    fclose(log_file);

    /* Free the mutex and allow other processes to create the same file. */
    close(mutex);
    unlink("lock_file");
    return 0;
}
So, all processes will try to create the mutex file but only the one who wins will be able to continue. Once you write your output, close and delete the mutex so other processes can do the same.