I have an account on a server, and I need to give another person sftp access to it. This person, however, only needs access to a small subset of directories. Is it possible, without creating another user account, to restrict an ssh key to that subset of directories?
Basically, the website on which these directories are located lives within the home directory of a specific user account. I would prefer not to have to create a separate user account just to lock access down to those directories. If it is possible to lock down access to specific directories using an ssh key, that would be ideal.
It's possible, but it's sort of a hack. The much preferred, simpler way is just to grant that user permissions only on the specific files and directories.
This is an answer on how to accomplish your goal using ssh rather than sftp. It has some chance of being acceptable because it still uses the ssh tool chain.
This technique uses a feature of ssh that lets the host run a fixed command based on the key presented to it. When the host sees that key, it runs the associated command instead of a shell. For the command we will use cat, which dumps the file to stdout.
add a line that looks like this to ~mr_user/.ssh/authorized_keys2 (or ~mr_user/.ssh/authorized_keys on current OpenSSH):
command="/usr/bin/cat ~/sshxfer/myfile.tar.gz.uu",no-port-forwarding ssh-dss xxxPUBLIC_KEYxxx mr_user#tgtmach
populate the file like this:
uuencode -m myfile.tar.gz /dev/stdout >~mr_user/sshxfer/myfile.tar.gz.uu
transfer the file by running this on the target machine:
ssh -i ~/keys/privatekey.dsa mr_user@srcmach |sed -e's/
//g' |uudecode >myfile.tar.gz
The tricky part of that command is the literal newline inside the sed expression; it is there to strip the newlines from the .uu file.
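If that embedded newline proves fragile, a variant of the same idea (only a sketch, assuming base64 from coreutils is available on both machines, and reusing the same hypothetical paths) sidesteps it, because base64 -d accepts multi-line input:
# on the source machine, populate the file with base64 instead of uuencode
base64 myfile.tar.gz >~mr_user/sshxfer/myfile.tar.gz.b64
# point the forced command in authorized_keys at the .b64 file, then on the target machine:
ssh -i ~/keys/privatekey.dsa mr_user@srcmach |base64 -d >myfile.tar.gz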
I did not find a way to pass in the name of a file to transfer, so I had to make a separate key for each file I wanted to transfer. That was fine for my use case because I only had two files to transfer.
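For what it's worth, one way to keep a single key and still choose the file is to point command= at a small wrapper script and inspect SSH_ORIGINAL_COMMAND, which sshd sets to whatever command the client requested. This is only a sketch (the script name and the character whitelist are hypothetical), and the validation matters because the client controls that variable:
#!/bin/sh
# ~/sshxfer/send-file.sh -- forced-command wrapper (make it executable with chmod +x)
# the client requests a file with:  ssh -i ~/keys/privatekey.dsa mr_user@srcmach myfile.tar.gz.uu
case "$SSH_ORIGINAL_COMMAND" in
  ""|*[!A-Za-z0-9._-]*)
    # reject empty names and anything containing slashes, spaces, or other odd characters
    echo "refusing file name" >&2
    exit 1
    ;;
esac
exec /usr/bin/cat ~/sshxfer/"$SSH_ORIGINAL_COMMAND"
and in authorized_keys:
command="~/sshxfer/send-file.sh",no-port-forwarding ssh-dss xxxPUBLIC_KEYxxx mr_user@tgtmach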
How would you, in Ansible, make one remote node connect to another remote node?
My goal is to copy a file from remote node a to remote node b and untar it on the target; however, one of the files is extremely large.
So doing it the normal way, fetching to the controller, copying from the controller to remote b, and then unarchiving, is unacceptable. Ideally, I would run something like this from _remote_a_:
ssh remote_b cat filename | tar -x
This is to speed things up. I can use the shell module to do this, but my main problem is that this way I lose Ansible's handling of SSH connection parameters: I have to manually pass an SSH private key (if there is one), or a password in a non-interactive way, or whatever else is needed to reach _remote_b_. Is there any better way to do this without multiple copying?
Also, doing it over SSH is a requirement in this case.
Update/clarification: I actually know how to do this from the shell, and I could do the same in Ansible; I was just wondering whether there is a better, more Ansible-like way. The file in question is really large. The main problem is that when Ansible executes commands on remote hosts, I can configure everything in the inventory, but to get a similar level of configurability/flexibility for the parameters of that manually established ssh connection I would have to write it from scratch (maybe even as an Ansible module) or something similar. Otherwise, for example, just using ssh hostname command would require a passwordless login or the default private key; I would not be able to reuse the private key path from the inventory without adding it manually, and for the ssh connection plugin there are actually two possible variables that may be used to set a private key.
This looks like more of a shell question than an Ansible one.
If the two nodes cannot talk to each other, you can do a
ssh remote_a cat file | ssh remote_b tar xf -
if they can talk (one of the nodes can connect to the other) you can tell one remote node to connect to the other, like
ssh remote_b 'ssh remote_a cat file | tar xf -'
(the quoting may need adjusting; launching ssh under ssh is sometimes confusing).
In this last case you will probably need to provide a password or set up public/private ssh keys properly.
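If copying a private key to remote_b is the sticking point, agent forwarding is a common middle ground, assuming your local key is already authorized on remote_a and you trust remote_b enough to forward an agent to it:
ssh -A remote_b 'ssh remote_a cat file | tar xf -'
The -A flag lets the inner ssh running on remote_b authenticate against remote_a using your local agent, so no key material needs to live on remote_b.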
I had a user account set up by my colleague weeks ago to access our server (RHEL). Now I am asked to copy my key so I can log in to other servers in the cluster.
My first approach was to copy my /home/user/.ssh folder from the (already set up) server to the new one. This obviously fails; I found out with ls -a that my .ssh directory contains only one file, known_hosts.
I am a bit confused by my search results: is it necessary to create a new private/public key pair (I have no record of creating one for the first server, so it was probably set up for me), or is it sufficient to copy the files from the first server and set up owners and permissions?
What you're probably looking for is the file ~/.ssh/authorized_keys on the server. If you have your key set up, your public key should be stored there. If there is no such file, then you don't have your keys set up (do you have private key files on your desktop?).
Please note that ssh usually requires strict access permissions (e.g. 700 on ~/.ssh and 600 on authorized_keys) for your ~/.ssh directory and the authorized_keys file.
Also, you can use as many or as few keys as you wish, depending on your security needs, so using a single key pair for multiple servers is possible.
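For example, from the server where your account already works, the usual sequence looks roughly like this (the host name is a placeholder):
ssh-keygen -t rsa               # creates ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub if you have none
ssh-copy-id user@other-server   # appends your public key to ~/.ssh/authorized_keys over there
ssh user@other-server           # should now log in without a password
ssh-copy-id will also create ~/.ssh on the remote side if it does not exist yet.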
I've been working with the server for only 2 days, so I am sorry if this is a simple question. I looked everywhere, but didn't find an answer.
So I have a Google compute engine account and I have owner privileges. When I run
gcloud compute ssh instance --zone us-central1-a
it works, but it creates a key with username that it takes from my computer account.
So when I am in the Google shell I can add or remove files using sudo. But when I go to Filezilla I have to use the ssh key file and the username from that key, and the only folder accessible with that username is its own home folder. I am not sure what the problem is, so I have given all the facts I could.
I'm not entirely sure I'm answering the right question, but I'll take a stab at it. The ssh keys created and used by gcloud are specific to a particular Linux user on your VM. As you note, you can use sudo when ssh'd in to edit files/directories owned by other users; the way this works is that you (roughly speaking) temporarily switch to the root user while doing the edit.
An scp client like Filezilla isn't going to be able to switch users that way. So you'll need a different technique to edit files with Filezilla.
I suggest ssh-ing in to your VM and using chmod or chown to change the ownership or permissions of the files/directories that you want to manage with Filezilla. Alternatively, you could use usermod -a -G to add your username to a group that can edit the files you care about.
Exactly what you'll do depends on the security policy you want to enforce for your files, but there are lots of decent options. The key test: can you get to a state where you can edit the files when logged in over SSH without using sudo? If so, then you should be able to edit them with Filezilla too.
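As a rough sketch (the web root, user name, and group name here are placeholders, not anything gcloud sets up for you):
sudo chown -R my-ssh-user /var/www/html        # simplest: take ownership of the web root
# or keep current ownership and grant group write access instead:
sudo chgrp -R www-data /var/www/html
sudo chmod -R g+w /var/www/html
sudo usermod -a -G www-data my-ssh-user        # add your login user to that group
If you go the group route, the new membership only takes effect on your next login, so reconnect in Filezilla afterwards.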
I know that we should do
ssh user@target
but where do we specify the password?
Hmm thanks for all your replies.
My requirement is that I have to start up some servers on different machines, and all of them should be started with one shell script. Entering the password every time seems a little bad, but I guess I will have to resort to that option. One reason I don't want to save public keys is that I may not connect to the same machines every time, though it is easy to go back and modify the script to change the target addresses.
The best way to do this is by generating a private/public key pair, and storing your public key on the remote server. This is a secure way to login w/o typing in a password each time.
Read more here
This cannot be done with a simple ssh command, for security reasons. If you insist on the password route with ssh, the following link shows some scripts to get around this:
Scripts to automate password entry
The ssh command will prompt for your password. It is unsafe to specify passwords on the command line, as the full command that is executed is typically world-visible (e.g. via ps aux) and also gets saved in plain text in your command history file. Any well-written program (including ssh) will prompt for the password when necessary, and will disable teletype echoing so that it isn't visible on the terminal.
If you are attempting to execute ssh from cron or from the background, use ssh-agent.
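The usual interactive pattern for that looks like this (assuming your key lives at ~/.ssh/id_rsa):
eval "$(ssh-agent -s)"       # start an agent and export SSH_AUTH_SOCK / SSH_AGENT_PID
ssh-add ~/.ssh/id_rsa        # asks for the passphrase once
ssh user@target              # later logins in this session reuse the cached key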
The way I have done this in the past is just to set up a pair of authentication keys.
That way, you can log in without ever having to specify a password and it works in shell scripts. There is a good tutorial here:
http://linuxproblem.org/art_9.html
SSH keys are the standard/suggested solution. The keys must be set up for the user that the script will run as.
For that script user, see if you have any keys set up in ~/.ssh/ (public key files end with a .pub extension)
If you don't have any keys setup you can run:
ssh-keygen -t rsa
which will generate ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub (the -t option supports other key types as well)
You can then copy the contents of id_rsa.pub to ~remote-user/.ssh/authorized_keys on the remote machine.
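One way to do that copy in a single command (remote-user and remote-machine are the same placeholders as below) is:
cat ~/.ssh/id_rsa.pub | ssh remote-user@remote-machine 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
You may also need chmod 700 ~/.ssh and chmod 600 ~/.ssh/authorized_keys on the remote machine if sshd is strict about permissions.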
As the script user, you can test that it works by:
ssh remote-user@remote-machine
You should be logged in without a password prompt.
Along the same lines, when your script is now run as that user, it can ssh to the remote machine automatically.
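So a start-up script along the lines of the original requirement could be as simple as this sketch (the hosts, user, key path, and start command are all hypothetical):
#!/bin/sh
# start the servers on each machine over passwordless ssh
for host in host1.example.com host2.example.com; do
    ssh -i ~/.ssh/id_rsa user@"$host" '/opt/myapp/bin/start-server'
done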
If you really want to use password authentication, you can try expect. See here for an example.
I guess I'm being a little hesitant, but I deal with VCSs occasionally and always get asked for some sort of prompt; of course, I'm attempting to access an external machine which I'm sshing into.
Basically my question is: say I don't have root access on this machine, would it still be possible to set this up? I've skimmed through the reading a couple of times and I'm pretty sure I have the method down: you generate a public/private key pair, sftp to the machine, and add your public key to some authorized_keys file. How is this managed with multiple users, for example? Could the generic file name (the .pub) get overwritten, or am I completely misunderstanding the process here and it's set up to allow multiple keys natively?
If I'm not a sudoer and one of the server's directories needs to be chmod'd to say 700 whereas it's 655, I can't really do anything other than ask for su access, right?
If you have ssh access to the remote machine, you can generate the key pair on your local machine, add the public key to the authorized_keys file on the remote machine, and then use this for authentication. You don't need root privileges to do this. The keys and authorized_keys files usually reside under each user's home directory (myhome/.ssh/authorized_keys, etc.), so they don't get confused between users.
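On the overwriting concern: each account has its own ~/.ssh/authorized_keys, and that file is plain text with one public key per line, so adding a key is just appending a line and nothing gets overwritten. A hypothetical example of its contents (key data truncated):
ssh-rsa AAAAB3Nza...first-key-data... alice@laptop
ssh-rsa AAAAB3Nza...second-key-data... bob@workstation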
Your question about setting directory permissions is unrelated, but if you own the directory (or its parent, or its parent's parent...) you will be able to set permissions on the files in that directory.
Sounds to me like it might be time to curl up with a general *nix administration book, perhaps? Not light reading, but it can be useful and I always find it most informative to learn the details when I'm actually struggling with them.
I ssh all the time into a machine that allows su or sudo. But it's set up not to allow ssh via "ssh root@machine". So to answer your question, yes, it's possible.
You can only change the directory permissions if you own the directory or if you have root access.