I'm trying to copy a file from my local machine (macOS 10.11.6) to a remote server (a DigitalOcean droplet running Ubuntu 18.04) using scp.
I tried
scp -r /path/to/local/file username@ipaddress:/path/to/folder/where/to/copy
and I got permission denied.
I SSH'd into the server, ran cd /path/to/folder/where/to/copy, and tested with touch index.txt, which also got permission denied.
Running touch index.txt with sudo worked after entering the password.
I then tried installing and using sshpass:
sshpass -p 'mypassword' scp -r /path/to/local/file username@ipaddress:/path/to/folder/where/to/copy
And I got permission denied again.
What is the correct way to copy that file from local to the remote server, passing the user's password?
It sounds like your user on the remote server doesn't have write permission on the destination directory; that's why the touch fails.
You can either copy the file somewhere else (your user's home directory?) or use sudo and chmod to give your user write permission on the destination directory, as sketched below.
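For example, a minimal sketch reusing the username and path from the question (run on the server; whether to change ownership or just the permission bits depends on your setup):
sudo chown -R username /path/to/folder/where/to/copy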
As mentioned in SSH SCP Local file to Remote in Terminal Mac Os X, I had to do it in two steps.
scp -r /path/to/local/file username@ipAddress:/home/username
then
ssh username@ipAddress
sudo mv file /path/to/destination/folder
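If rsync is available on both machines, the same thing can be done in one step by telling the remote end to run rsync under sudo. A sketch, assuming the remote user can run rsync with sudo without a password prompt (paths reuse the ones from the question):
rsync -r --rsync-path="sudo rsync" /path/to/local/file username@ipAddress:/path/to/destination/folder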
Related
How do I grant myself permission to transfer a .crt file from my local machine to an AWS Ubuntu 12.04 server?
I am using the following command from my machine and receiving a permission denied response.
scp -i /Users/me/key_pair.pem /Users/me/ssl-bundle.crt ubuntu@ec2-50-150-63-20.eu-west-1.compute.amazonaws.com:/etc/ssl/certs/
I am following Comodo's instructions; refer to the heading "Configure your nginx Virtual Host" in the link. I have not set anything up with regard to user permissions. This is a little new to me, and I would appreciate further sources of information.
I changed the permissions of the path on the server and transferred the file!
With reference to File Permissions, I gave the /etc/ssl/certs/ path "other write & execute" permission with this chmod command while SSH'd into the Ubuntu server:
sudo chmod o+wx /etc/ssl/certs/
Then, on my local machine, the following command copied a file on my directory and transferred it to destination:
scp -i /Users/me/key_pair.pem /Users/me/ssl-bundle.crt ubuntu@ec2-50-150-63-20.eu-west-1.compute.amazonaws.com:/etc/ssl/certs/
It is the write permission you need; depending on your use case, use the appropriate chmod command.
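Note that o+wx leaves the directory writable by everyone; if that is a concern, you can revert it after the transfer:
sudo chmod o-wx /etc/ssl/certs/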
The simplest way to transfer files from local to EC2 (or EC2 to local) is FileZilla.
You can connect to your instance with FileZilla, then transfer files from local to server and vice versa.
I want to sync a directory /var/sites/example.net/ from a remote machine to a directory at the same path on my local machine.
The remote machine only authenticates SSH connections with keys, not passwords.
On my local machine I have an alias set up in ~/.ssh/config so that I can easily run ssh myserver to get in.
I'm trying rsync -a myserver:/var/sites/example.net/ /var/sites/example.net/ but it fails because my local user does not have permission to edit the local directory /var/sites/example.net/.
If I try sudo rsync -a myserver:/var/sites/example.net/ /var/sites/example.net/ (just adding sudo), I can fix the local permission issue, but then I encounter a different one: my local root user does not see the proper SSH key or SSH alias.
Is there a way I can accomplish this file sync by modifying this rsync command? I'd like to avoid changing anything else (e.g. no changes to file permissions or SSH setup).
Try this:
sudo rsync -e "sudo -u localuser ssh" -a myserver:/var/sites/example.net/ /var/sites/example.net/
This runs rsync as root, but the -e flag causes rsync to run ssh as your local user (using sudo -u localuser), so the ssh command has access to the necessary credentials. Rsync itself is still running as root, so it has the necessary filesystem permissions.
Just improving on top of larsks's response:
sudo rsync -e "sudo -u $USER ssh" ...
So in your case change rsync -a myserver:/var/sites/example.net/ /var/sites/example.net/ to sudo rsync -e "sudo -u $USER ssh" -a myserver:/var/sites/example.net/ /var/sites/example.net/.
With regard to @larsks' answer: if you have your key loaded into the ssh-agent, which is my use case, you can instead do:
sudo rsync -e "env SSH_AUTH_SOCK=$SSH_AUTH_SOCK ssh" /source/path /destination/path
This avoids the double sudo.
My use case, if anyone is interested in replicating it: I'm SSHing to a non-root sudoer account on remote A and need to rsync root-owned files between remote A and remote B. Authentication to both remotes is done using keys on my real local machine, and I use -A to forward the ssh-agent authentication socket to remote A.
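A sketch of that setup, with placeholder hostnames and paths:
# on the real local machine: forward the agent to remote A
ssh -A user@remote-a
# on remote A: reuse the forwarded agent socket while rsync runs as root
sudo rsync -a -e "env SSH_AUTH_SOCK=$SSH_AUTH_SOCK ssh" /var/data/ user@remote-b:/var/data/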
Guss's answer works well if you want to use sudo rsync for local file permissions but want to utilise your user's SSH session. However, it falls short when you also want to use your SSH config file.
You can follow Wernight's approach by using sudo to switch the user for the SSH connection and supplying a path to the config file, but this won't work if you have to enter a passphrase. So, you can combine both approaches by making use of the --preserve-env flag:
sudo --preserve-env=SSH_AUTH_SOCK rsync -e "sudo --preserve-env=SSH_AUTH_SOCK -u $USER ssh" hostname:/source/path /destination/path
Note that it's necessary to cascade this flag through both sudo commands so it does look a bit messy!
As requested by Derek above:
When sudo asks for a password, you need to modify the sudoers config with sudo visudo and add an entry with NOPASSWD: in front of the rsync command.
For details, consult man sudoers.
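For example, a hypothetical entry for a user named deploy could look like this (the binary path may differ on your system; check it with which rsync):
deploy ALL=(ALL) NOPASSWD: /usr/bin/rsync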
This will work in every mode, even via cron, at, systemd.service+timer, etc.
Test it with: ssh <user>@<your-server> "sudo <your-rsync-command>"
From my laptop, I often ssh into another machine in my university department. I have to put in a password every time currently.
Could someone give me an idiot's guide to having the password entered automatically each time I log in, please?
Thank you in advance.
You can avoid this by enabling passwordless authentication, but you need to install keys (public, private) first.
Execute the following commands on the local server.
Local $> ssh-keygen -t rsa
Press ENTER for all options prompted; no values need to be typed.
Local $> cd .ssh
Local $> scp id_rsa.pub user@targetmachine:
(enter the password when prompted)
Connect to the remote server using the following command:
Local $> ssh user@targetmachine
(enter the password when prompted)
Execute the following commands on the remote server:
Remote $> mkdir .ssh
Remote $> chmod 700 .ssh
Remote $> cat id_rsa.pub >> .ssh/authorized_keys
Remote $> chmod 600 .ssh/authorized_keys
Remote $> exit
Execute the following command on the local server to test password-less authentication.
It should connect without asking for a password.
Local $> ssh user@targetmachine
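On most systems, the remote-side steps above can be replaced by a single command that copies the key and sets the permissions for you:
Local $> ssh-copy-id user@targetmachine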
I assume you are using Linux. This is already documented in lots of places on the internet.
For example(s):
http://www.rebol.com/docs/ssh-auto-login.html
http://www.linuxproblem.org/art_9.html
You can log in without providing a password if public-key authentication is set up.
Otherwise you'll have to look for an SSH client that can store passwords and that supports your operating system.
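For example, sshpass (mentioned in the first question above) can feed a stored password to ssh non-interactively; be aware the password ends up in your shell history and process list:
sshpass -p 'yourpassword' ssh user@targetmachine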
Use a tool (such as AutoHotkey, assuming you are using Windows) to record and replay key sequences: http://www.autohotkey.com/
I have managed to connect to a remote server through SSH tunneling. Now, how can I copy files from the remote server to my local computer? Note that I only want to copy from the remote server to my local computer.
I don't know how to write this command:
"scp file/I/want/to/copy localhost/home/folder"
Thanks a lot.
Example:
scp username@server:/home/username/file_name /home/local-username/file-name
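To copy a whole directory instead of a single file, add -r (dir_name is a placeholder):
scp -r username@server:/home/username/dir_name /home/local-username/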
check this:
http://www.garron.me/linux/scp-linux-mac-command-windows-copy-files-over-ssh.html
scp -r (source)hostname:/(location of the file to be copied)/(file name) (destination)hostname:/(location of the folder where the file should be copied to)
For example:
scp -r ram.desktop.overflow.com:/home/Desktop/Ram/abcd.txt rajesh.desktop.overflow.com:/home/documents/
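If the two remote hosts can't reach each other directly, scp's -3 option routes the transfer through your local machine instead:
scp -3 -r ram.desktop.overflow.com:/home/Desktop/Ram/abcd.txt rajesh.desktop.overflow.com:/home/documents/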
When you do ssh second_machine, you connect to second_machine in your home directory.
But usually I am working on my_machine in a directory with a very long path, and I want to connect to second_machine and move to my working directory right away. So every time I have to:
ssh second_machine
cd /very/long/path/to/directory/
Is there a way to make it automatic, so that ssh automatically goes to the desired directory?
This should work for you:
ssh -t second_machine "cd /very/long/path/to/directory/; bash"
This assumes you want to run bash; substitute a different shell if required.
To make it permanent, use RemoteCommand in your ~/.ssh/config file, e.g.
Host myhost
  HostName IP
  User ubuntu
  IdentityFile ~/.ssh/id_rsa
  # without a TTY, the interactive shell started by RemoteCommand won't work
  RequestTTY yes
  RemoteCommand cd /path/to/directory; $SHELL -il
Related:
SSH Config File Alias To Get To a Directory On Server
How can I automatically change directory on ssh login?
Run a remote command using ssh config file
You could do something like what I'm using. Make an alias like the one below (this is csh alias syntax):
alias ssh 'ssh -t \!* "cd $PWD; csh"'
(here, csh could also be replaced by bash)
This brings you directly to the 'current' path on the other machine.
The usage would be: $ ssh second_machine
However, I find that it works slowly, so I'm looking for an alternative.
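If your local shell is bash, a rough equivalent as a function for ~/.bashrc (the name sshcd is my own):
sshcd() {
    # connect and open an interactive bash in the current local directory
    ssh -t "$1" "cd '$PWD'; bash"
}
Usage: sshcd second_machine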