Limit SSH - bash with no commands - ssh

So I have been working on this for some time and would like to know if there is a better way or if I am on the right track.
I would basically like to allow some users to log in to my server via SSH and then tunnel Squid traffic through that SSH connection.
The tricky part, however, is that I don't want these users to be able to execute ANY commands. I mean NOTHING at all.
So at this stage I have set up a jail via jailkit. The user is then placed in the jail and given bash as their shell.
The next step would be to remove all the commands in the /jail/bin/ directories, etc., so that they are not able to execute any commands.
Am I on the right path here? What would you suggest?
Also, I see that this will give them many "command not found" errors; how do I get rid of those?
Is there any other shell I could look at giving them that would not let them do anything?

You could set their shell to something like /bin/true, or maybe a simple script that outputs an informational message, and then have them log on using ssh -N (see the ssh manual page). I believe that allows them to use port forwarding without having an actual shell on the system.
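For example, with such a no-op shell in place, a user could still reach Squid on the server by forwarding a local port. A minimal sketch, assuming Squid listens on its default port 3128 and the hostname is just a placeholder:
# -N requests no remote command at all; -L forwards local port 3128
# to the Squid port on the server.
ssh -N -L 3128:localhost:3128 user@your-server.example.com
The client then points its browser at localhost:3128 as the proxy.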
EDIT:
The equivalent of ssh -N in PuTTY is checking the "Don't start a shell or command at all" checkbox in its SSH configuration tab (Connection->SSH).
EDIT2:
As an alternative to this you could use a script that enters an infinite sleep loop. Until it is interrupted using Ctrl-C the connection will remain alive. I just tried this:
#!/bin/sh
echo "DNSH: Do-Nothing Shell"
while sleep 3600; do :; done
If you use this as a shell (preferably with a more helpful message) your users will be able to use port forwarding without an actual shell and without having to know about ssh -N and friends.
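To wire it up, save the script somewhere stable, make it executable, register it as a valid login shell and assign it to the user. A rough sketch (the path and user name are placeholders):
# Install the do-nothing shell and allow it as a login shell.
sudo install -m 755 dnsh.sh /usr/local/bin/dnsh
echo /usr/local/bin/dnsh | sudo tee -a /etc/shells
# Assign it to the restricted user.
sudo chsh -s /usr/local/bin/dnsh restricteduser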

Related

SSH over two hops

I have to upload, compile and run some code on a remote system. It turned out that the following mechanism works fine:
rsync -avz /my/code me@the-remote-host.xyz:/my/code
ssh me@the-remote-host.xyz 'cd /my/code; make; ./my_program'
While it's maybe not the best looking solution, it has the advantage that it's completely self-contained.
Now, the problem is: I need to do the same thing on another remote system which is not directly accessible from the outside by ssh, but via a proxy node. On that system, if I just want to execute a plain ssh command, I need to do the following:
[my local computer]$ ssh me@the-login-node.xyz
[the login node]$ ssh me@the-actual-system.xyz
[the actual system]$ make
How do I need to modify the above script in order to "tunnel" rsync and ssh via the-login-node to the-actual-system? I would also prefer a solution that is completely contained in the script.
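One way to keep it self-contained (a sketch, assuming a reasonably recent OpenSSH client that supports the -J/ProxyJump option; the host names are the ones from the question) is to hop through the login node on every connection:
# -J (ProxyJump) routes the connection through the login node transparently.
rsync -avz -e 'ssh -J me@the-login-node.xyz' /my/code me@the-actual-system.xyz:/my/code
ssh -J me@the-login-node.xyz me@the-actual-system.xyz 'cd /my/code; make; ./my_program'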

How do I run multiple programs on different machines at the same time?

I have a 12-computer cluster with the same Java program on each one, and I want to run these programs at the same time. How can I do this?
I can already copy (scp) files from one computer to another via ssh, like this:
#!/bin/bash
scp /route1/file1 user@computerX:/route2
scp /route1/file1 user@computerY:/route2
so I was wondering if something like this can be done to run the programs that I have on each computer
You can run commands via
#!/bin/bash
ssh user@host1 <command>
ssh user@host2 <command>
You will need to use Key Based Auth to avoid entering your password when the script runs.
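To actually start them at the same time rather than one after another, the ssh calls can be put in the background and waited on. A rough sketch, assuming key-based auth is already in place (the host names and the java command are placeholders):
#!/bin/bash
# Start the same command on every machine in parallel, then wait for all of them.
# (-n stops ssh from reading the script's stdin while it runs in the background.)
for host in computerX computerY computerZ; do
    ssh -n user@"$host" 'java -jar /route2/program.jar' &
done
wait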
Alternatively take a look at Fabric for a neat way of controlling multiple hosts.
I recommend typing:
man ssh
and reading what it says. ssh will run commands remotely for you.

Prevent interactive ssh prompts

I have an ssh script that uses a local key for login to the remote host - nothing too exciting there. The key has a passphrase and I usually add it to an agent to avoid prompting.
Occasionally I run the program before the agent is running and it will hang waiting for the unlock phrase. In such cases, rather than prompt interactively, I want the command to simply fail.
Anyone know if there's an option for this?
Sure is.
ssh REMOTE_HOST -o "BatchMode yes"
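If you always want that behaviour for a given host, the same option can live in the client config instead of on the command line. A sketch of the equivalent ~/.ssh/config entry (using the placeholder host name from above):
# ~/.ssh/config
Host REMOTE_HOST
    BatchMode yes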

How to use ssh command in shell script?

I know that we should do
ssh user@target
but where do we specify the password?
Hmm thanks for all your replies.
My requirement is that I have to start up some servers on different machines. All servers should be started with one shell script. Well, entering the password every time seems a little bad, but I guess I will have to resort to that option. One reason why I don't want to save the public keys is that I may not connect to the same machines every time. It is easy to go back and modify the script to change target addresses, though.
The best way to do this is by generating a private/public key pair, and storing your public key on the remote server. This is a secure way to log in without typing in a password each time.
Read more here
This cannot be done with a simple ssh command, for security reasons. If you want to use the password route with ssh, the following link shows some scripts to get around this, if you are insistent:
Scripts to automate password entry
The ssh command will prompt for your password. It is unsafe to specify passwords on the commandline, as the full command that is executed is typically world-visible (e.g. ps aux) and also gets saved in plain text in your command history file. Any well written program (including ssh) will prompt for the password when necessary, and will disable teletype echoing so that it isn't visible on the terminal.
If you are attempting to execute ssh from cron or from the background, use ssh-agent.
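For an interactive session the usual pattern is to start an agent and add the key once (standard OpenSSH agent usage; the key path shown is just the default as an example):
eval "$(ssh-agent -s)"   # start an agent for this session
ssh-add ~/.ssh/id_rsa    # prompts for the passphrase once and caches the key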
The way I have done this in the past is just to set up a pair of authentication keys.
That way, you can log in without ever having to specify a password and it works in shell scripts. There is a good tutorial here:
http://linuxproblem.org/art_9.html
SSH keys are the standard/suggested solution. The keys must be set up for the user that the script will run as.
For that script user, see if you have any keys set up in ~/.ssh/ (public key files end with a .pub extension).
If you don't have any keys setup you can run:
ssh-keygen -t rsa
which will generate ~/.ssh/id_rsa (the private key) and ~/.ssh/id_rsa.pub (the public key); the -t option accepts other key types as well
You can then copy the contents of this file to the remote user's ~/.ssh/authorized_keys on the remote machine.
As the script user, you can test that it works by:
ssh remote-user@remote-machine
You should be logged in without a password prompt.
Along the same lines, now when your script is run from that user, it can auto SSH to the remote machine.
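For reference, the same steps condensed into a few commands; this is only a sketch and assumes ssh-copy-id is available (it ships with OpenSSH on most distributions):
#!/bin/bash
# Generate a key pair if one does not already exist (press Enter for defaults).
[ -f ~/.ssh/id_rsa.pub ] || ssh-keygen -t rsa
# Append the public key to the remote account's authorized_keys.
ssh-copy-id remote-user@remote-machine
# This should now log in without asking for a password.
ssh remote-user@remote-machine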
If you really want to use password authentication, you can try expect. See here for an example

How can I remotely log on to a machine, execute a script which sets up an environment, then accept user input?

I've been trying to figure out a way to do this for a few hours now, and am having no luck.
I have a large environment file that I have saved as a ksh script. This script works perfectly if I type . ./setEnv.sh
However, what I'm trying to do is use either ssh or rsh to log on to a remote system, execute this script, then allow me to use the system in its modified form. I am able to successfully execute the script, but the connection always closes after execution. I would like to be able to keep this connection open.
Any idea on how I can do this?
At the moment, it does not matter if I use SSH or RSH to accomplish this. RSH is preferable. I am using a variety of Linux and Solaris operating systems, so a catch-all method would be nice.
Thanks,
Matt
Couldn't you do something like this?
ssh user@host "./setEnv.sh && your-command"
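If the goal is to stay logged in interactively after the environment is set up, forcing a TTY and replacing the remote command with an interactive shell is one option. A sketch, assuming bash is available on the remote hosts and that setEnv.sh exports the variables it sets:
# -t allocates a terminal; after sourcing the environment file we replace
# the remote command with an interactive shell, so the connection stays open.
ssh -t user@host '. ./setEnv.sh; exec bash -i'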