I don't know how to mount a remote directory
"remote_dir" on computer "remote" while having the remote gid "wgrp".
Help is welcome.
me@local$ sshfs me@remote:/remote_dir remote_as_wgrp ...wanted_options...
A workaround is to create a new user "me_wgrp" belonging to group "wgrp".
But the problem seems conceptually so simple that I'm sure there is a solution.
Context:
I'm able to connect to the remote computer using ssh, and then change my gid:
me@local$ ssh me@remote
me#remote$ newgrp wgrp
Now I can create files in directories which are only writable by the group "wgrp".
I have tried
sshfs me@remote:/remote_dir remote_as_wgrp -o ssh_command='newgrp wgrp'
but sshfs appears to hang.
Also, if I try
ssh me@remote 'newgrp wgrp'
ssh doesn't give the prompt, but it accepts commands.
You should consider changing the default group on the remote host, as newgrp will never return because it opens a new shell.
"newgrp - return you to a prompt of a new shell."
Unfortunately it looks like newgrp group (or newgrp group -) causes sshfs to stop working in the newgrp environment. You won't be able to cd into the sshfs directory or ls it; you'll get a permission denied error (and it's not due to actual permission problems). You'll also notice that df no longer shows the sshfs fuse mount.
Ten years after the question was asked, I needed this myself.
I found that it is now possible to add a parameter to sshfs:
sshfs -o sftp_server="sg <group> -c '/usr/lib/misc/sftp-server -u <umask>'" ...
where /usr/lib/misc/sftp-server is the location of sftp-server on the remote system, <group> is the desired group on the remote system, and -u <umask> is an optional way of providing a default umask for the particular sftp-server.
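For the setup in the question, a full invocation might look like this (assuming /usr/lib/misc/sftp-server is correct for the remote system, and a umask of 0002 so group members can write):
sshfs -o sftp_server="sg wgrp -c '/usr/lib/misc/sftp-server -u 0002'" me@remote:/remote_dir remote_as_wgrp
This runs the remote sftp-server under the group wgrp via sg, so files created through the mount get that group.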
I am trying to use WinSCP to transfer files over to a Linux Instance from Windows.
I'm using the private key for my instance to log in to the Amazon instance as ec2-user. However, ec2-user does not have write access on the Linux instance.
How do I sudo su - to access the root directory and write to the Linux box, using WinSCP or any other file transfer method?
Thanks
I know this is old, but it is actually very possible.
1. Go to your WinSCP profile (Session > Sites > Site Manager)
2. Click on Edit > Advanced... > Environment > SFTP
3. Insert sudo su -c /usr/lib/sftp-server in "SFTP Server" (note: this path might be different on your system)
4. Save and connect
AWS Ubuntu 18.04: there is an option in WinSCP that does exactly what you are looking for, the "SFTP Server" setting described above.
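On Ubuntu 18.04 the server binary typically lives at /usr/lib/openssh/sftp-server (verify the path on your instance), so the "SFTP Server" value would be:
sudo /usr/lib/openssh/sftp-server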
AFAIK you can't do that.
What I did at my place of work is transfer the files to your home (~) folder (or really any folder where you have full permissions, i.e. chmod 777 or variants) via WinSCP, and then SSH to your Linux machine and sudo from there to your destination folder.
Another solution would be to change the permissions of the directories you plan to upload the files to, so your user (which is without sudo privileges) can write to those dirs.
I would also read about WinSCP Remote Commands for further detail.
Usually all users will have write access to /tmp.
Place the file in /tmp, then log in with PuTTY; you can then sudo and copy the file.
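A hedged sketch of that workflow (the host, key, and file names are hypothetical; run from a machine with an OpenSSH client):
scp -i mykey.pem myfile.conf ec2-user@my-ec2-host:/tmp/
ssh -i mykey.pem ec2-user@my-ec2-host 'sudo mv /tmp/myfile.conf /etc/myapp/myfile.conf'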
I just wanted to mention that for SUSE Enterprise Server 15.2 on an EC2 instance, the command to add to WinSCP's SFTP server setting is:
sudo su -c /usr/lib/ssh/sftp-server
I didn't have enough reputation points to add a comment to the original answer, but I had to fish this out, so I wanted to add it here.
SSH to FreePBX and run the commands below in your terminal:
sudo nano /etc/sudoers.d/my_config_file
Add this line to the file:
YourUserName ALL=(ALL) NOPASSWD:ALL
Then restart sshd:
sudo systemctl restart sshd
WinSCP:
under Session > Login > Advanced > SFTP,
change "SFTP Server" to:
sudo /usr/libexec/openssh/sftp-server
I have the same issue, and I am not sure whether it is possible or not;
the above solutions did not work for me.
As a workaround, I am moving the files to my HOME directory, then editing and replacing the files over SSH.
Tagging this answer, which helped me; it might not answer the actual question.
If you are using a password instead of a private key, please refer to this answer for a tested working solution on Ubuntu 16.04.5 and 20.04.1:
https://stackoverflow.com/a/65466397/2457076
I'm sure you will find this question similar to many other posts on Stack Overflow or elsewhere on the internet; however, I could not find a solution to my precise problem. I have a list of tasks to be run on a remote server, and passing a script, while it works, does not suit the requirement.
I'm running the following from my server to connect to the remote server:
ssh -t user@server << 'HERE'
sudo su - <diff_user>
do task as diff_user
HERE
ssh -tt user@server << 'HERE'
sudo su - <diff_user>
do task as diff_user
HERE
With the first option (-t), I'm still not able to sudo; it says:
sudo: sorry, you must have a tty to run sudo
With the second option above (-tt), I'm getting the remote input/output echoed back into my current server session, a total mess.
I tried passing the content as a script for SSH to run on the remote host, but got similar results.
Is there a way other than commenting out the following in the /etc/sudoers file?
Defaults requiretty
I have not tried the above, though I know Red Hat has approved removing/commenting it out in a future version, whenever that is. If I go that route, I will have to get the above done on hundreds of VMs (moreover, I don't have permission to edit the file on the VMs to give it a try).
Bug 1020147
Hence, my issue remains the same as before. It would be great if I could get some input from the experts here :)
Additional info: using Red Hat RHEL 6, 2.6.32-573.3.1.
I do have access to the remote host, and once I'm in, my ID does not require a password to switch to diff_user.
When you ask this way, I guess you don't have passwordless sudo.
You can't communicate with the remote process (sudo) when you put the script on stdin.
You should rather use ssh and the su command directly:
ssh -t user@server "sudo su - <diff_user> -c 'do task as diff_user'"
but it might not work. An interactive session can be initiated using expect (there are a lot of questions about that around here).
I was trying to connect to another machine in an automated fashion and check some logs only accessible to root/sudo.
This was done by passing the password, server, user, etc. in a file — I know this is not safe and neither a good practice, but this is the way it will be done in my company.
I had several problems:
tcgetattr: Inappropriate ioctl for device;
tty-related problems that I don't remember exactly;
sudo: sorry, you must have a tty to run sudo; etc.
Here is the code that worked for me:
#!/bin/bash

# Reads connection details from a parameter file and tails a log file on the
# remote machine as root, feeding the sudo password on stdin (sudo -S).
function checkLog(){
    FILE=$1
    readarray -t LINES < "$FILE"
    machine=${LINES[4]}
    user=${LINES[5]}
    password=${LINES[6]}
    fileName=${LINES[7]}
    numberOfLines=${LINES[8]}

    # Build the remote command; the unquoted EOT lets the variables expand locally.
    IFS='' read -r -d '' SSH_COMMAND <<EOT
sudo -S <<< '$password' tail $fileName -n $numberOfLines
EOT

    # -tt forces tty allocation, working around "Defaults requiretty" on the remote.
    RESULTS=$(sshpass -p "$password" ssh -tt "$user@$machine" "${SSH_COMMAND}")
    echo "$RESULTS"
}

checkLog "$1"
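For reference, a sketch of the parameter file this function expects; readarray is zero-based, so the machine name sits on the fifth line of the file. All values below are hypothetical:
unused
unused
unused
unused
server.example.com
myuser
secret
/var/log/messages
50
and call it with: bash checkLog.sh params.txt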
I am trying to follow this vagrant tutorial. I get an error after my first two commands. I ran these two commands from the command line:
$ vagrant init hashicorp/precise64
$ vagrant up
After I ran the vagrant up command, I got this message:
The private key to connect to the machine via SSH must be owned
by the user running Vagrant. This is a strict requirement from
SSH itself. Please fix the following key to be owned by the user
running Vagrant:
/media/bcc/Other/Linux/vagrant3/.vagrant/machines/default/virtualbox/private_key
And then if I run any command I get the same error; even if I run vagrant ssh I get the same error message. Please help me fix the problem.
I am on Linux Mint and using VirtualBox as well.
Exactly as the error message tells you:
The private key to connect to the machine via SSH must be owned
by the user running Vagrant.
Therefore check the ownership and permissions of the file using
stat /media/bcc/Other/Linux/vagrant3/.vagrant/machines/default/virtualbox/private_key
check what user you are running as using
id
or
whoami
and then modify the owner of the file:
chown `whoami` /media/bcc/Other/Linux/vagrant3/.vagrant/machines/default/virtualbox/private_key
Note that this might not be possible if your /media/bcc/ is some non-Linux filesystem that does not support Linux permissions. In that case you should choose a more suitable location for your private key.
Jakuje has the correct answer - if the file system you are working on supports changing the owner.
If you are trying to mount the vagrant box off of NTFS, it is not possible to change the owner of the key file.
If you want to mount the file on NTFS and you are running a local instance you can try the following which worked for me:
vagrant halt
[remove the vagrant box]
[add the following line to the Vagrantfile]
config.ssh.insert_key = false
[** you may need to remove and clone your project again]
vagrant provision
This solution may not be suitable for a live instance - it uses the default insecure ssh key. If you require more security you might be able to find a more palatable solution here: https://www.vagrantup.com/docs/vagrantfile/ssh_settings.html
If you put vagrant data on NTFS you can use this trick to bypass the keyfile ownership/permissions check.
Copy your key file to $HOME/.ssh/ or where-ever on a suitable filesystem where you can set it to the correct ownership and permissions. Then simply create a symlink (!) to it inside the NTFS directory (where you have set $VAGRANT_HOME, for example) like this:
ln -sr $HOME/.ssh/your_key_file your_key_file
I'm using Flask with Apache (mod_wsgi).
I get an error when I run ssh as an external command with subprocess.call("ssh ...", shell=True).
(My Python Flask code; the code itself is not wrong:)
ssh = "sshpass -p \""+password+"\" ssh -p 6001 "+username+"#"+servername+" \"mkdir ~/MY_SERVER\""
subprocess.call(ssh, shell=True)
I got this error in the Apache error_log: Failed to get a pseudo terminal: Permission denied
How can I fix this?
I've had this problem under RHEL 7. It's due to SELinux blocking the apache user from accessing the pty. To solve it:
Disable SELinux or set it to permissive (check your security needs): edit /etc/selinux/config and reboot.
Allow apache to control its home directory for storing SSH keys (the path below assumes RHEL's default of /usr/share/httpd):
chown apache /usr/share/httpd
Then, running as the apache user (e.g. via sudo -u apache), ssh to the desired host and accept the host key.
I think apache's login shell is /sbin/nologin.
If you want to allow apache to run shell commands, modify /etc/passwd and change the login shell to another shell like /bin/bash.
However, this method is a security risk. Many Python SSH modules are available on the internet; use one of them.
What you are doing seems frightfully insecure. If you cannot use a Python library for your SSH connections, then you should at least plug the hole that is shell=True. There is very little here which is done by the shell anyway; doing it in Python affords you more control, and removes a big number of moving parts.
subprocess.call(['/usr/bin/sshpass', '-p', password,
    '/usr/bin/ssh', '-T', '-p', '6001', '{0}@{1}'.format(username, servername),
    'mkdir ~/MY_SERVER'])
If you cannot hard-code the paths to sshpass and ssh, you should at least make sure you have a limited, controlled PATH variable in your environment before doing any of this.
The fix for Failed to get a pseudo-terminal is usually to add a -T flag to the ssh command line. I did that above. If your real code actually requires a tty (which mkdir obviously does not), perhaps experiment with -t instead, and/or redirecting standard input and standard output.
I have a headless Ubuntu server. I ran a command on the server (snapraid sync) over SSH from my Mac. The command said it would take about 6 hrs, so I left it over night.
When I came down this morning, the Terminal on the Mac said: "Write failed: broken pipe"
I'm not sure if the command executed fully. Is this a timeout issue? If so, how can I keep the SSH connection alive overnight?
This should resolve the problem for Mac OS X version 10.8.2.
add:
ServerAliveInterval 120
TCPKeepAlive no
to this file:
~/.ssh/config
Or, if you want it to be a global change in the SSH client, to this file
/private/etc/ssh_config
"ServerAliveInterval 120" basically says to "ping" the server with a NULL packet every 120s, and "TCPKeepAlive no" means to not set the SO_KEEPALIVE socket option (since you shouldn't need it with ServerAliveInterval already set, and apparently it's "spoofable" or some odd).
The servers similarly have something they could set for the same effect (ClientKeepAliveInterval) but typically you don't have control over those settings as much.
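For example, a complete ~/.ssh/config stanza (the host name is hypothetical; use Host * to apply it to every host):
Host myserver
    HostName myserver.example.com
    ServerAliveInterval 120
    TCPKeepAlive no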
You can use "screen" util for that. Just connect to the server over SSH, start screen session by "screen" command execution, start your command there and disconnect (don't exit screen session). When you think your command already done you can connect to the server and attach to your screen session where you can see the command execution result/progress (in case one should be).
See "man screen" for more details.
This should resolve the problem for Ubuntu and Linux Mint:
add:
ServerAliveInterval 120
TCPKeepAlive yes
to
the /etc/ssh/ssh_config file
Instead of screen I'd recommend tmux, an (arguably) better competitor to screen:
tmux new-session -s {name}
That command creates a session (detach from it with Ctrl-b d). Any time after that, when you want to connect:
tmux a -t {name}
There are two solutions:
To update the server (and then restart sshd):
echo "ClientAliveInterval 60" | sudo tee -a /etc/ssh/sshd_config
To update the client:
echo "ServerAliveInterval 60" >> ~/.ssh/config
After having tried to change many of the above parameters in sshd_config (ClientAliveInterval, ClientAliveCountMax, TCPKeepAlive...) nothing had changed. I had spent hours and days looking for a solution on forums and blogs...
It turned out that the broken pipe problem, which prevented connecting with ssh/sftp, came from the permission settings on the ChrootDirectory.
The ChrootDirectory has to be owned by root:root with 755 permissions.
Lower permissions (765/766/775...) won't work, but stronger ones do (e.g. 700).
If you need to give write permission to the connected user, you can give it on sub-directories.
If the chroot is owned by sftpUser:sftpGroup, it won't work either.
chroot-> root:root 755
|
---subdirectories-> sftpUser:sftpGroup 700 up to 770
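A sketch of commands implementing the layout above (the chroot path /srv/sftp is hypothetical):
sudo chown root:root /srv/sftp                    # chroot itself: root-owned
sudo chmod 755 /srv/sftp                          # 755, per the rule above
sudo mkdir -p /srv/sftp/upload
sudo chown sftpUser:sftpGroup /srv/sftp/upload    # writable sub-directory
sudo chmod 770 /srv/sftp/upload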
Hope it helps.
If you're still having problems after editing /etc/ssh/sshd_config, or if ~/.ssh/config simply does not exist on your machine, then I highly recommend reinstalling ssh. This solution took about a minute and fixed both "Broken pipe" errors and "closed by remote host" errors.
sudo apt-get purge openssh-server
sudo apt update
sudo apt install openssh-server
jeremyforan's answer is correct; however, I've found that if you are trying to use scp, it is necessary to explicitly point it to a config file configured as described, since it seems not to obey the normal config hierarchy. For example:
scp -F ~/.ssh/config myfile joe@myserver.com:~
works, while omitting the -F still results in the broken pipe error.
Ubuntu:
ssh -o ServerAliveInterval=5 -o ServerAliveCountMax=1 user@x.x.x.x
I use an ASUS router with two internet input lines. I assigned my IP to a particular line, and it works.