I have a remote headless server (macOS Big Sur 11.3.1). When I log in via ssh (as either the root user or a regular user), I am unable to save the crontab.
When I use the following command:
% crontab -e
I can see a cronjob that I saved when I was logged in locally (not via ssh). After editing and exiting the crontab, I get the following error:
crontab: installing new crontab
crontab: tmp/tmp.1028: Operation not permitted
crontab: edits left in /tmp/crontab.kKYx3tt4c1
While logged in over ssh, I instead tried to edit the crontab with this command:
% sudo crontab -e
To my surprise, the cronjob that I saved when logged in locally is not listed. It is as if it is a different crontab for a different user. In any case, I can't save to the crontab when using sudo either. It gives the exact same error as above.
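(Side note: that part is expected behavior. Crontabs are per-user, so sudo crontab -e edits root's crontab, not the regular user's. A quick way to see both:
crontab -l #the current user's crontab
sudo crontab -l #root's crontab, which is a separate file)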
I have followed the advice of a few internet posts suggesting granting the cron and sshd executables "Full Disk Access" in macOS System Preferences. However, the same error persists.
I'm not sure what to try next.
So the issue was solved by giving sshd-keygen-wrapper Full Disk Access. Don't ask me why it needs that, but it is working now. I hope this helps anyone with the same issue.
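If you want to verify the fix over ssh, a quick test (the host name and touch target are just illustrative):
ssh user@server
crontab -e #add a throwaway entry such as: * * * * * /usr/bin/touch /tmp/cron-test
Saving should now report "crontab: installing new crontab" with no "Operation not permitted" error.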
We have a VirtualBox (via Vagrant) environment. By mistake I made a bad entry in /etc/security/limits.conf [without having a root shell open :( ] and now I am unable to ssh in (the connection drops immediately).
Previously we had a similar scenario (the limits were broken by someone else); that time I was able to fix it using the vboxmanage guestcontrol copyto CLI to overwrite limits.conf, after which ssh worked again. This time around the vboxmanage CLI also hangs.
I tried opening the VM in the GUI, went to the console, and tried a few options, but could not get to single-user mode.
Since you already tried the vbox CLI commands and they hang, even VirtualBox cannot access the system or open a shell.
In this case you will have to bring up an Ubuntu VM and use the qemu-nbd module to fix this. The steps are given below.
Bring up a simple Ubuntu VM using HashiCorp's bionic64 box on the same host machine by executing the following steps:
mkdir bionic
cd bionic
vagrant box add hashicorp/bionic64
vagrant init
Open the Vagrantfile and change config.vm.box = "base" to config.vm.box = "hashicorp/bionic64".
Also mount the host folder where the VM's .vdi file is located by adding the following line to the Vagrantfile (replace the path with the correct one for your system; a combined sketch follows below). Here /nbd2 will be created on the Ubuntu machine and will contain the files, including the .vdi file:
config.vm.synced_folder "/home/topcat/VirtualBox\ VMs/your_vm", "/nbd2"
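Put together, a minimal Vagrantfile for this recovery VM might look like this (the host path is illustrative; adjust it to wherever your broken VM's .vdi actually lives):
Vagrant.configure("2") do |config|
  config.vm.box = "hashicorp/bionic64"
  # expose the host folder containing the broken VM's .vdi inside the guest at /nbd2
  config.vm.synced_folder "/home/topcat/VirtualBox VMs/your_vm", "/nbd2"
end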
Now do vagrant up
Once the machine boots up
vagrant ssh #to ssh as vagrant
sudo su #to become root
apt-get update #This will refresh the apt cache
apt-get install qemu-utils #qemu-utils provides the qemu-nbd tool
modprobe nbd (loads the nbd kernel module; exits without any output on success)
qemu-nbd -c /dev/nbd1 "/nbd2/box-disk001.vdi" (attaches the .vdi as a block device; change the path to whatever you gave in the config.vm.synced_folder property)
mkdir -p /mnt/vdi-boot
mount /dev/nbd1p1 /mnt/vdi-boot (mounts the first partition of the attached disk)
cd /mnt/vdi-boot/etc/security (this folder contains the files exactly as they are in your VM)
touch limits.conf (if the broken file is already there, delete it first, then create this empty one)
chmod 644 limits.conf
chown root:root limits.conf
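An empty limits.conf is fine, but if you would rather leave a valid entry, the format is <domain> <type> <item> <value>. A harmless illustrative line (example values only) would be:
* soft nofile 4096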
open the /mnt/vdi-boot/etc/nsswitch.conf file (note: nsswitch.conf lives in /etc, not /etc/security) and check that the following three lines are present
passwd: files
shadow: files
group: files
umount /mnt/vdi-boot (unmounts the mounted path)
qemu-nbd -d /dev/nbd1 (disconnects from qemu-nbd)
Exit the Ubuntu VM, then start your original VM.
Open another shell and try to ssh into it. It should go through fine this time.
I have a vagrant box with CentOS7 running under KVM/QEMU (libvirt) on my Fedora 29 host. vagrant up works fine. vagrant ssh fails with:
/usr/share/vagrant/gems/gems/vagrant-2.1.2/lib/vagrant/util/safe_exec.rb:39:
in `exec': Permission denied - /home/username/bin/ssh (Errno::EACCES)
The doc says: Vagrant will attempt to use the local SSH client installed on the host machine. However, which ssh correctly returns /usr/bin/ssh. So why doesn't Vagrant use it?
The path /home/username/bin was included in the PATH env when the box was created, and it contains a directory (!) named ssh. Vagrant seems to have stored this information somewhere. Removing the directory from PATH didn't help. Only when I rename or remove the ssh directory does vagrant ssh work.
Can anyone tell me where vagrant stored the wrong info?
Edit: The Vagrantfile is nearly empty; only config.vm.box is contained...
Guess I found the reason - it seems to be a bug or strange behavior of the vagrant version 2.1.2 that I use:
I still had the directory /home/username/bin in the PATH env. Vagrant apparently walks every entry of every directory included in PATH to look for ssh, finds /home/username/bin/ssh, and does not realize that it is a directory ...
After removing /home/username/bin the command vagrant ssh works as expected. So unless vagrant is improved I have to permanently rename my /home/username/bin/ssh directory ...
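If you want to check whether anything in your own PATH could trigger the same collision, here is a quick one-liner (nothing Vagrant-specific, just plain shell; it prints every PATH entry that contains a directory named ssh):
(IFS=:; for d in $PATH; do [ -d "$d/ssh" ] && echo "$d/ssh is a directory"; done)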
I want to be able to transfer a directory and all its files from my local machine to my remote one. I don't use scp much, so I am a bit confused.
I am connected to my remote machine via ssh and I typed in the command
scp name@127.0.0.1:local/machine/path/to/directory filename
The local/machine/path/to/directory part is the value I got from running pwd in the desired directory on my local host.
I am currently getting the error
No such file or directory
Looks like you are trying to copy to a local machine with that command.
An example scp command looks more like the ones below:
Copy the file "foobar.txt" from the local host to a remote host
$ scp foobar.txt your_username@remotehost.edu:/some/remote/directory
scp "the_file" your_username@the_remote_host:the/path/to/the/directory
to send a directory:
Copy the directory "foo" from the local host to a remote host's directory "bar"
$ scp -r foo your_username@remotehost.edu:/some/remote/directory/bar
scp -r "the_directory_to_copy" your_username@the_remote_host:the/path/to/the/directory/to/copy/to
and to copy from remote host to local:
Copy the file "foobar.txt" from a remote host to the local host
$ scp your_username@remotehost.edu:foobar.txt /your/local/directory
scp your_username@the_remote_host:the_file /your/local/directory
and to include port number:
Copy the file "foobar.txt" from a remote host with port 8080 to the local host
$ scp -P 8080 your_username@remotehost.edu:foobar.txt /your/local/directory
scp -P port_number your_username@the_remote_host:the_file /your/local/directory
From a windows machine to linux machine using putty
pscp -r <directory_to_copy> username@remotehost:/path/to/directory/on/remote/host
I had a similar problem. I tried to copy a file from a server to my desktop and always got the same message about the local path. The problem was that I was already logged in to the server via ssh, so scp was looking for the "local" path on the server itself.
Solution: I had to log out, run the command again from my own machine, and it worked.
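In other words, run scp from the machine the file should end up on. A minimal illustration (host name and paths are placeholders):
exit #leave the ssh session on the server first
scp your_username@remotehost:/remote/path/to/file . #run from your local shell; "." means the current local directory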
In my case I had to specify the port number:
scp -P 2222 username@hostip:/directory/ /localdirectory/
Your problem can be caused by different things. I will describe three possible scenarios in Linux:
The file location
When you use scp name, you are saying that your file name is in the home directory. When it is in your home directory but inside another folder, for example my_folder, you should write:
scp /home/my-username/my_folder/name my-username@127.0.0.1:/Path....
The file permissions
You must know what permissions your file has. If it is read-only, you should change that.
To change the permissions:
As root, use caja (the default file manager for the MATE desktop) or another file manager: right-click the file name, select Properties, then Permissions, and change Group and Others to Read and write.
Or with chmod.
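For example, a typical chmod invocation granting the owner read/write and everyone else read (the path is only illustrative):
chmod 644 /home/my-username/my_folder/name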
The port number
Maybe your remote machine or server can only communicate on a specific port, so you should pass -P and the port number:
scp -P 22 /home/my-username/my_folder/name my-username@127.0.0.1:/var/www/html
You also need to check what is in the .bashrc file of the remote user.
I also got this ridiculous error because I had put cd and ls commands in there, intending to show the current files and directories whenever the user logs in via ssh.
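scp starts a non-interactive shell on the remote side, so output or directory changes from .bashrc can break the transfer. A common fix (a sketch, assuming bash; the cd target is illustrative) is to guard such commands so they only run in interactive sessions:
case $- in
  *i*)
    cd ~/projects #only runs when the shell is interactive
    ls
    ;;
esac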
The filename should go at the end of the path to the directory; that is, the source should be the full path to the file. You are doing this from a command line, and that command line has a working directory (on your local machine); this is the directory your file will be downloaded to. The final argument in your command is only what you want the file to be named locally. So, first, change directory to where you want the file to land. I'm doing this from Git Bash on a Windows machine, so it looks like this:
cd C:\Users\myUserName\Downloads
Now that I have my working directory where I want the file to go:
scp -i 'c:\Users\myUserName\.ssh\AWSkeyfile.pem' ec2-user@xx.xxx.xxx.xxx:/home/ec2-user/IwantThisFile.tar IgotThisFile.tar
Or, in your case:
cd /local/path/where/you/want/the/file/to/land
scp name@127.0.0.1:/local/machine/path/to/directory/filename filename
Be sure the folder you are sending the file from does not contain a space!
I was trying to send a file to a remote server from my Windows machine in the VS Code terminal, and I got this error even though the file was there.
It was because the name of the folder containing the file had a space in it...
If you want to copy everything in a folder and also need a specific port, use this one.
Works for me between Ubuntu 18.04 and a local machine running macOS.
-r for recursive
-P for the port
scp -rP 1234 /Your_Directory/Source_Folder/ username@yourdomain.com:/target/folder
As @Astariul said, the path to the file might be the cause of this error.
In addition, any parent directory whose name contains a non-ASCII character, for example Chinese, can cause this.
In that case, you should rename the parent directory.
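A hypothetical rename (the directory names, file, and host are illustrative):
mv ~/下载 ~/downloads #rename the non-ASCII parent directory to an ASCII name
scp ~/downloads/file.txt user@host:/some/dir/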
This happened to me and I solved it.
This problem can occur because the file you are trying to get does not exist (a typo in the file or folder name?) or because it is not accessible to the user you log in as with scp.
The problem in my case was that the files I wanted to get from the remote machine were created by another user (root in my case), so those files were not readable by my own user.
To fix, I did:
ssh myuser#myserver
chown myuser:myuser myfile
exit
scp myuser@myserver:/home/myuser/myfile /localfolder/myfile
For me on my Mac, I just had to run the command from my Mac terminal:
scp -r root@ip_address:/root/source /Users/path/Desktop/others/destination
I have a problem with rsync over SSH, specifically with how to enter the password. I can't enter it immediately after entering the line:
$ rsync -avz -e ssh remoteuser@remotehost:/remote/dir /this/dir/
and I have no idea how to do it. Any ideas?
Put
eval `keychain --eval id_rsa` #or id_dsa / whatever your key is called
in your .bash_profile and log in to a terminal somewhere (or you could just run it directly).
Then stick it in your script (you will have to have run it once somewhere else first, as stated above, unless your script is interactive).
You will need to install keychain and read a tutorial on making keys with ssh-keygen beforehand.
This is a rough answer for a rough question.
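For completeness, a minimal sketch of the key-based setup this answer assumes (host names and paths are placeholders):
ssh-keygen -t ed25519 #generate a key pair; optionally protect it with a passphrase
ssh-copy-id remoteuser@remotehost #install the public key on the server; prompts for the password once
rsync -avz -e ssh remoteuser@remotehost:/remote/dir /this/dir/ #now authenticates with the key
If you did set a passphrase, the keychain setup above keeps ssh from asking for it on every run.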