root access from script - passwords

Is there a way to gain root access without user input?
I want a script that automatically reads a stored password and then authenticates with that password. Is there a way to do this without user input? I tried man su and man sudo, but these don't support this. Is this possible?
P.S: I know python, sh and bash.

Check out this link; it explains how to do this in Python: Root Access Python
The basic idea is to change the permissions of the script itself.
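If you would rather stay in the shell, a different technique from the linked one is to have sudo read the password from standard input via its -S flag. A minimal sketch, assuming the password sits in a file readable only by the user running the script (the path is a placeholder, and keeping a password on disk is itself a risk):

#!/bin/sh
# hypothetical location of the stored password; keep it owned by this user, mode 600
PASS_FILE=/etc/myscript/password
# -S makes sudo read the password from stdin; -p '' suppresses the prompt
sudo -S -p '' whoami < "$PASS_FILE"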

Option 1 - Use cron
If the script is a cron job, it can be kicked off by root's crontab. Then it is automatically running as root. Be REALLY careful though as you are running as root. Anyone who can change your script has just become root.
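A minimal sketch of that setup, assuming a hypothetical script at /usr/local/bin/nightly-task.sh:

# open root's crontab for editing
sudo crontab -e

# example entry: run the script every night at 02:30, as root
30 2 * * * /usr/local/bin/nightly-task.sh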
Option 2 - Set up a new id
If the task can be done by a non-root user, you don't actually need root each time.
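For example (a sketch; the account name and paths are placeholders), create a dedicated unprivileged account, give it ownership of the data the task touches, and run the script as that account instead of as root:

# create an unprivileged system account with no login shell (nologin path varies by distro)
sudo useradd --system --shell /usr/sbin/nologin taskrunner
# hand it the directory the script needs to write to
sudo chown -R taskrunner: /srv/taskdata
# run the script as that account
sudo -u taskrunner /usr/local/bin/task.sh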

Related

Running docker commands with an user without root privileges (possibly with www-data user of Apache)

I am developing a simple Flask application (served by an Apache webserver) that provides a web interface for Docker management. My Apache server runs as the 'www-data' user, and it uses the same user for all of its API operations.
But I get a 'Permission denied' error for the following,
docker images
docker run, etc…
as the 'www-data' user is not allowed to run the above commands.
Can you please give me a suggestion on using the 'www-data' user for Docker operations?
I don't want to add the 'www-data' user to the sudoers list.
Would adding the user to the docker group alone be a proper solution?
Or please suggest a best-practice solution for this.
Thanks
GuruPrasad
It would be easier, clearer, and no less dangerous to tell Apache to run your process as root.
Remember that, if you can run any Docker command at all, you can trivially get unrestricted root-level access to anything on the system. For example, if your tool decides it really does want www-data to be in the host's sudoers list, it can
docker run --rm -v /:/host busybox \
sh -c 'echo www-data ALL = (ALL) NOPASSWD: ALL >> /host/etc/sudoers'
Depending on what your management tool does, it is potentially offering the same unprotected root-level access to the host to anyone who can reach the web page. Even if it isn't, you need to be extremely careful with how you invoke Docker (another SO answer I was looking at had the potential to root the system if a user could create a directory with an arbitrary name and run the script from there, for instance).
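For what it's worth, if you decide that risk is acceptable, the group change the question asks about is just a couple of commands. A sketch, assuming the group is called docker (the default on stock Debian/Ubuntu installs of Docker) and Apache is managed by systemd:

# add the Apache user to the docker group; this effectively grants it root on the host
sudo usermod -aG docker www-data
# restart Apache so the new group membership takes effect
sudo systemctl restart apache2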

Can't add files to the website using Filezilla

I've been working with the server for only 2 days, so I am sorry if this is a simple question. I looked everywhere but didn't find an answer.
So I have a Google compute engine account and I have owner privileges. When I run
gcloud compute ssh instance --zone us-central1-a
it works, but it creates a key with a username that it takes from my computer account.
So when I am in the Google shell I can add or remove files using sudo. But when I go to Filezilla I have to use the SSH key file and the username from that key, and the only folder accessible with that username is its own home folder. I am not sure what the problem is, so I gave all the facts I could.
I'm not entirely sure I'm answering the right question, but I'll take a stab at it. The SSH keys created and used by gcloud are specific to a particular Linux user on your VM. As you note, you can use sudo when SSH'd in to edit files and directories owned by other users; the way this works is that you (roughly speaking) temporarily switch to root while making the edit.
An SCP client like Filezilla isn't going to be able to switch users that way, so you'll need a different technique to edit files with Filezilla.
I suggest SSHing in to your VM and using chmod or chown to change the ownership of the files and directories you want to work on with Filezilla. Alternatively, you could use usermod -aG to add your username to a group that can edit the files you care about.
Exactly what you'll do depends on the security policy you want to enforce for your files, but there are lots of decent options. The key test to run: can you get to a state where you can edit the files when logged in over SSH without using sudo? If so, then you should be able to edit the files with Filezilla.
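As a concrete sketch of that group-based option (the group name, username and web root below are placeholders, not anything gcloud sets up for you):

# create a group for accounts allowed to edit the site
sudo groupadd webedit
# add your SSH login to the group (log out and back in for it to take effect)
sudo usermod -aG webedit myuser
# hand the web root to the group and let group members read and write it
sudo chgrp -R webedit /var/www/html
sudo chmod -R g+rwX /var/www/html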

Backup server permissions

Currently I'm developing a control website for my home server. The server has LDAP set up for the Macs to log in, and the home directories are also on the server. I want to create a backup tool for my family, so they can back up while I'm away. I don't want this to be scheduled only (at least not always, since they must be able to start a backup right away).
I got stuck when I was trying to find a way to run the rsync commands as a privileged user.
I've got some ideas on this but I would like to hear the cons and pros of the options.
Create a simple daemon that runs as root and backs up folder -arg1 to -arg2, minding the old backup in -arg3.
Run rsync as the logged-in user by remembering the user's password at login to the control panel. (Problem: running ps will reveal the password.)
Create a special rsync user. (Problem: the rsync user can read everything.)
The project is located at https://github.com/hermanbanken/ldap-control and this issue is also on GitHub at https://github.com/hermanbanken/ldap-control/issues/1.
sudo is available on later versions of OS X.
sudo rsync .....
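If the control panel should be able to kick that rsync off without a password prompt, one common approach (not from the answer above, just a sketch with placeholder names and paths) is a narrowly scoped sudoers rule, for example a file under /etc/sudoers.d:

# /etc/sudoers.d/backup  (edit with: visudo -f /etc/sudoers.d/backup)
# members of the "backup" group may run exactly this rsync command as root, no password asked
%backup ALL = (root) NOPASSWD: /usr/bin/rsync -a /Users/ /Volumes/Backup/Users/

The panel then invokes sudo /usr/bin/rsync -a /Users/ /Volumes/Backup/Users/ with exactly those arguments; anything else is refused.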

cpanel change ownership of files

I'm in a totally new situation.
I have root access on a reseller account.
One of the clients of that reseller has a file he can't modify. It is a file installed by a WordPress plugin.
That much I understood. He is not the owner of the file, so I have to change the owner of that file.
I have shell access and cron access, but I was not able to use them to solve the problem.
The solution I have come up with so far, which doesn't work, is adding a new cron job (copied and pasted from a forum):
#!/bin/bash
cd /var/cpanel/users
for user in *; do
    chown -R $user.$user /home/$user/public_html/*
done
It doesn't work! First of all, there is only one line in the cron field for code, so the code above goes onto a single line and ends up looking like it is commented out. Second, I have no idea whether the cron job runs at all: I put in my email address to be notified when a cron job is executed, and I don't get any email.
The only thing I care about is changing the ownership of one file.
(Answered by the OP in a question edit. Converted to a community wiki answer. See Question with no answers, but issue solved in the comments (or extended in chat).)
The OP wrote:
Solved!
I used http://winscp.net/eng/index.php
Steps:
In the WinSCP Login/Session dialog, use your data for Host name, User name and Password
At File protocol choose SCP
Click Login, then click Yes to accept the key
Go up a directory by clicking ..
Go to Home and navigate to your files from there
Right-click the file; under Properties you will find the owner of the file, and there you can change it
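For reference, the same change can be made from the shell access mentioned in the question with a single chown; the account name and file path below are placeholders for the client's real ones:

# run as root: make the client account the owner (and group owner) of the file
chown clientuser:clientuser /home/clientuser/public_html/wp-content/plugins/example-plugin/example-file.php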

Run script as another user without sudo/su privileges

I'm trying to write a script so that it can be called by one user but is executed as another user. I thought that setuid might be able to do this, so I enabled setuid using chmod u+s with the owner of the script being user1. I call the script (which only contains whoami right now) as user2 and it still shows user2 instead of user1. How can I make this run as user1?
My end goal is that I want one user to be able to call this script and have it ssh into another server and execute a command as another user.
You can copy that user's key (id_rsa) and pass it to ssh when connecting to the server:
ssh -i user1_id_rsa user1@server
However, this is rather a bad solution, security-wise. Adding the user's key to the authorized keys on the server, as I said in the comment, is the proper way to do it, and you should really look into that.
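(The whoami experiment fails because Linux ignores the setuid bit on interpreted scripts, so some wrapper around ssh is needed in any case.) The authorized-keys route recommended above would look roughly like this; a sketch, assuming user2 is the caller and user1 is the account on the remote server:

# as user2 on the calling machine: create a key pair if there isn't one yet
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
# install the public key into user1's authorized_keys on the remote server (asks for user1's password once)
ssh-copy-id -i ~/.ssh/id_ed25519.pub user1@server
# from then on, user2 can run a command on the server as user1 with no password prompt
ssh user1@server 'whoami'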
Sounds like you need a third user in your security model, who can run the program, but is otherwise unprivileged. This third user is an assumable identity for a number of users so they can run the process on the remote server.
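One way to realize that model (a sketch; the group, account and script names are placeholders, and it does rely on a narrow sudo rule rather than the sudo-free setup the question hoped for) is a sudoers entry letting a group of callers run only one wrapper script as the shared unprivileged account:

# /etc/sudoers.d/remote-task  (edit with: visudo -f /etc/sudoers.d/remote-task)
# members of "callers" may run only this wrapper as the shared "runner" account
%callers ALL = (runner) NOPASSWD: /usr/local/bin/remote-task.sh

A caller then runs sudo -u runner /usr/local/bin/remote-task.sh, and the wrapper itself holds the ssh invocation and runner's key.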