When you do an "ssh second_machine" you connect to second_machine in your home directory.
But usually I am working on my_machine in a directory with a very long path, and I want to connect to second_machine and move to my working directory right away. So every time I have to:
ssh second_machine
cd /very/long/path/to/directory/
Is there a way to make it automatic? (i.e. have ssh automatically go to the desired directory)
This should work for you
ssh -t second_machine "cd /very/long/path/to/directory/; bash"
This assumes you want to run bash; substitute a different shell if required.
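For example, if the shell you actually want on second_machine were zsh (just a hypothetical substitution), the same pattern would be:
ssh -t second_machine "cd /very/long/path/to/directory/; zsh"
The -t flag forces a pseudo-terminal so that the shell you start on the remote side stays interactive.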
To make it permanent, use RemoteCommand in your ~/.ssh/config file, e.g.
Host myhost
HostName IP
User ubuntu
IdentityFile ~/.ssh/id_rsa
RemoteCommand cd /path/to/directory; $SHELL -il
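If the session started this way is not interactive (or exits immediately), you may also need to add RequestTTY force to the same Host block; this is an assumption based on typical OpenSSH behaviour, since a RemoteCommand by itself does not request a terminal:
RequestTTY force
With that in place, a plain ssh myhost should drop you straight into /path/to/directory.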
Related:
SSH Config File Alias To Get To a Directory On Server
How can I automatically change directory on ssh login?
Run a remote command using ssh config file
You could do something like what I'm using. Make an alias like the one below.
alias ssh 'ssh -t \!* "cd $PWD; csh"'
(here, csh could also be replaced by bash)
This brings you directly to the 'current' path on the other machine.
The usage would be: ssh somemachine
However, I find that it works slowly, so I'm looking for an alternative.
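A bash equivalent of that alias, as a rough sketch (it assumes bash on both machines; sshcd is a made-up name), would be a small function in ~/.bashrc:
# Open an interactive shell on the remote host in the same directory you are in locally
sshcd() {
    ssh -t "$1" "cd '$PWD'; bash"
}
Usage: sshcd second_machine (note it will break if the current path contains single quotes).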
I'm trying to create a function that connects me to an SSH server, becomes su, and then SSHes into another server, so I did the following:
function test
ssh -t testuser@server1 'sudo ssh -t testuser@server2'
end
When I execute it I get the following error: ssh: command not found
But when I execute it directly in the terminal, it works with no problems.
This sounds like a path issue on server1.
From the command line, what do you see if you type the following?
ssh -t testuser@server1 sudo which ssh
If SSH is not in the path for root, you might need to specify full paths, such as something like:
ssh -t testuser@server1 sudo /usr/bin/ssh testuser@server2
You may need to adjust the paths to match your environment, of course.
Also, if you're trying to connect from root at server1 to server2, can you just ssh directly to root@server1? If so, you could perhaps use the "ProxyJump" functionality that was added with OpenSSH 7.3. This depends upon the ability to log in remotely as root, which may not be an option, depending on your environment.
ssh -J root@server1 testuser@server2
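If ProxyJump works for you, it can also be made permanent in ~/.ssh/config (a sketch; the host and user names are just the ones from the example above):
Host server2
    HostName server2
    User testuser
    ProxyJump root@server1
After that, a plain ssh server2 hops through server1 automatically.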
My problem was solved when I added each of my functions to the fish functions folder:
~/.config/fish/functions
I just created a file called myfunction.fish and pasted the function definition inside that file:
function myfunction
ssh -t testuser@server1 'sudo ssh -t testuser@server2'
end
Saved it, exited fish, and now that function is permanent.
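Alternatively, fish can write that file for you with funcsave (assuming a reasonably recent fish; the function body is the same as in the question):
function myfunction
    ssh -t testuser@server1 'sudo ssh -t testuser@server2'
end
funcsave myfunction
funcsave stores the function under ~/.config/fish/functions/, which has the same effect as creating the file by hand.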
pwd over ssh returns the local present working directory. How can I easily access the remote pwd?
edit
I use ssh agent forwarding once: local -> server1 -> server2. There I want to execute these scripts https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant.html, e.g. authority=${PWD}/ssl/ca.pem, but instead of the remote working directory my local directory from the local computer is used.
If you run
ssh host echo $PWD
the $PWD variable is evaluated in your local shell and not in the remote one. If you want the variable to be evaluated remotely, you need to escape the $ sign:
ssh host echo \$PWD
or put the command into single quotes:
ssh host 'echo $PWD'
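A quick way to see the difference (the paths are only illustrative):
ssh host "echo $PWD"     # expands locally, prints something like /home/me/current/dir
ssh host 'echo $PWD'     # expands remotely, prints the remote login directory, e.g. /home/me
For the scripts in the question, that means the authority=${PWD}/ssl/ca.pem line has to run in a remote shell (or be single-quoted) so it resolves on the remote machine.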
I'm stuck in the Permission denied (publickey) hell trying to copy a public key to a remote server so Jenkins can rsync files during builds.
Running:
sudo ssh-copy-id -i id_rsa.pub ubuntu@xx.xx.xx.xx
I have done this for another server, but that one has a separate key pair for SSH assigned by EC2, and my current guess is that ssh-copy-id is trying to use the wrong private key for this connection. Is there a way to pass -vv to ssh-copy-id so I can see what key it's trying to use? I've looked into the -o switch, but can't seem to get it right.
Thank you.
So here's what I've done:
Added the following to /etc/ssh/ssh_config:
Host xx.xx.xx.xx
User ubuntu
IdentityFile ~/.ssh/key-name-for-that-machine.pem
Then copied key-name-for-that-machine.pem into /var/lib/jenkins/.ssh
I didn't run ssh-copy-id again; I simply have rsync use that key file when moving stuff. Here's the rsync script:
rsync -rvh -e 'ssh -v' "/tmp/project-DEV-${BUILD_ID}/" ubuntu@xx.xx.xx.xx:"/www/www.project-dir.net/"
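An equivalent that skips the ssh_config change and points rsync's ssh at the key directly (same key path as above, still an assumption about your setup):
rsync -rvh -e 'ssh -i /var/lib/jenkins/.ssh/key-name-for-that-machine.pem' "/tmp/project-DEV-${BUILD_ID}/" ubuntu@xx.xx.xx.xx:"/www/www.project-dir.net/"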
My guess would be to run it without sudo. But that depends on how you normally log into the server.
If you normally log in using ssh ubuntu@xx.xx.xx.xx, then lose the sudo.
If not, then try to log in with sudo ssh ubuntu@xx.xx.xx.xx
Reading your question, at least one of these should fail.
I want to sync a directory /var/sites/example.net/ from a remote machine to a directory at the same path on my local machine.
The remote machine only authenticates SSH connections with keys, not passwords.
On my local machine I have an alias set up in ~/.ssh/config so that I can easily run ssh myserver to get in.
I'm trying rsync -a myserver:/var/sites/example.net/ /var/sites/example.net/ but it fails because my local user does not have permission to edit the local directory /var/sites/example.net/.
If I try sudo rsync -a myserver:/var/sites/example.net/ /var/sites/example.net/ (just adding sudo), I can fix the local permission issue, but then I encounter a different issue -- my local root user does not see the proper ssh key or ssh alias.
Is there a way I can accomplish this file sync by modifying this rsync command? I'd like to avoid changing anything else (e.g. no changes to file perms or ssh setup)
Try this:
sudo rsync -e "sudo -u localuser ssh" -a myserver:/var/sites/example.net/ /var/sites/example.net/
This runs rsync as root, but the -e flag causes rsync to run ssh as your local user (using sudo -u localuser), so the ssh command has access to the necessary credentials. Rsync itself is still running as root, so it has the necessary filesystem permissions.
Just improving on top of larsks's response:
sudo rsync -e "sudo -u $USER ssh" ...
So in your case change rsync -a myserver:/var/sites/example.net/ /var/sites/example.net/ to sudo rsync -e "sudo -u $USER ssh" -a myserver:/var/sites/example.net/ /var/sites/example.net/.
With regards to @larsks' answer: if you have your key loaded into the ssh agent, which is my use case, you can instead do:
sudo rsync -e "env SSH_AUTH_SOCK=$SSH_AUTH_SOCK ssh" /source/path /destination/path
Instead of the double sudo.
My use case, if anyone is interested in replicating, is that I'm SSHing to a non-root sudo-er account on remote A, and need to rsync root-owned files between remote A and remote B. Authentication to both remotes is done using keys I have on my real local machine and I use -A to forward the ssh-agent authentication socket to remote A.
Guss's answer works well if you want to use sudo rsync for local file permissions but want to utilise your user's SSH session. However, it falls short when you also want to use your SSH config file.
You can follow Wernight's approach by using sudo to switch the user for the SSH connection and supplying a path to the config file, but this won't work if you have to enter a passphrase. So, you can combine both approaches by making use of the --preserve-env flag:
sudo --preserve-env=SSH_AUTH_SOCK rsync -e "sudo --preserve-env=SSH_AUTH_SOCK -u $USER ssh" hostname:/source/path /destination/path
Note that it's necessary to cascade this flag through both sudo commands so it does look a bit messy!
As requested by Derek above:
When sudo asks for a password, you need to modify the sudoers config with sudo visudo and add an entry with NOPASSWD: in front of the rsync command.
For details you can consult man sudoers.
This will work in every mode, even via cron, at, systemd.service+timer, etc.
Test it with: ssh <user>@<your-server> "sudo <your-rsync-command>"
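A minimal sudoers entry could look like this (the user name and rsync path are placeholders; prefer putting it in a drop-in file edited with sudo visudo -f /etc/sudoers.d/rsync):
someuser ALL=(root) NOPASSWD: /usr/bin/rsync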
I am using ssh to connect to a remote machine.
Is there a way I can copy an entire directory from a local machine to the remote machine?
I found this link to do it the other way round, i.e. copying from the remote machine to the local machine.
The easiest way is scp:
scp -r /path/to/local/storage user@remote.host:/path/to/copy
rsync is best when you want to update a copy that has previously been transferred.
If that doesn't work, rerun with -v and see what the error is.
It is very easy with rsync as well:
rsync /path/to/local/storage user@remote.host:/path/to/copy
I recommend using rsync over scp, because it is highly likely that you will one day need a feature that rsync offers, and then you will benefit from your experience with the tool.
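For the 'update a previously made copy' case mentioned above, a typical incremental invocation looks like this (the flag set is just a common example, not the only valid one):
rsync -avh --progress /path/to/local/storage/ user@remote.host:/path/to/copy/
Only files that changed since the last run are transferred; the trailing slash on the source makes rsync copy the directory's contents rather than creating a nested copy.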
This worked for me:
rsync -avz -e 'ssh' /path/to/local/dir user@remotehost:/path/to/remote/dir
Use this if you have to use an SSH port other than 22:
rsync -avzh -e 'ssh -p sshPort' /my/local/dir/ remoteUser@host:/path/to/remote/dir
This works if your remote server uses the default port 22:
rsync -avzh /my/local/dir/ remoteUser@host:/path/to/remote/dir
This worked for me.
Follow this link for detailed understanding.
We can do this by using the scp command, for example:
scp -r /path/to/local/machine/directory user@remotehost(server IP address):/path/to/server/directory
In case of a different port:
By default, scp connects over SSH, which uses port 22, but this can be overridden by supplying the -P flag followed by the port number, for example:
scp -P 8563 -r /path/to/local/machine/directory user@remotehost(server IP address):/path/to/server/directory
NOTE: we use the -r flag to copy a directory's files/folders recursively instead of a single file.