Sudo over SSH mixes up password tty and stdin

Setup:
Local *nix machine with a SQL script script.sql (Postgres).
Remote machine remote (Debian 7) with Postgres.
I can SSH in as some_user, who is a sudoer.
Anything with Postgres needs to be done as postgres user.
The server only listens on localhost:5432.
How do I execute script.sql on remote without copying it there first?
This works well:
ssh -t some_user@remote 'sudo -u postgres psql -c "COMMANDS FOO BAR"'
The -t flag forces pseudo-terminal allocation, so sudo asks for some_user's password correctly on the local terminal.
One thing remains: being able to pipe script.sql to psql. This does not work:
ssh -t some_user@remote 'sudo -u postgres psql' < script.sql
It fails with the message:
Pseudo-terminal will not be allocated because stdin is not a terminal.
sudo: no tty present and no askpass program specified
Edit: simplified example
Postgres and psql don't seem to figure much in the problem. The following code has the same issues:
ssh some_user@remote xargs sudo ls < input_file
The problem seems to be that we need to send two inputs to sudo: the password, which requires a tty, and the stdin to pass on to ls.
Edit: even simpler
ssh localhost xargs sudo ls < input_file
sudo: no tty present and no askpass program specified
Adding -t does not work:
$ ssh -t localhost xargs sudo ls < input_file
Pseudo-terminal will not be allocated because stdin is not a terminal.
sudo: no tty present and no askpass program specified
Adding another -t does not work either:
$ ssh -t -t localhost xargs sudo ls < input_file
<content of input_file>
<waiting on a prompt>

ssh -T some_user@remote "sudo -u postgres psql -f-" < script.sql
"-f-" will read the script from STDIN. Just redirect the file in there, and there you go.
Don't bother with -t option to ssh, you don't need a full terminal for this.
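Note that this works only if sudo does not actually have to prompt for a password in that session (cached credentials or a passwordless rule), since with -T there is no tty for the prompt. One way to guarantee that, sketched here with a hypothetical drop-in file, is a narrowly scoped sudoers rule on remote:
# /etc/sudoers.d/psql-nopass (hypothetical file; edit with: visudo -f /etc/sudoers.d/psql-nopass)
some_user ALL=(postgres) NOPASSWD: /usr/bin/psql
With that in place stdin stays free for the data, and the redirect above works without any terminal tricks.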

ssh -T ${user}@${ip} sudo -u postgres DEBIAN_FRONTEND=noninteractive psql -f- < test.sql
Use DEBIAN_FRONTEND=noninteractive (or the equivalent for your distribution) to resolve the 'no tty present' error.

Related

Error: 'you must have a tty to run sudo' while using sshpass

I have a GitLab CI job with a script section like the one below:
stage: permissions
script:
  - sshpass -p "${PASSWORD}" ssh ${USER}@${HOST} sudo chown -cv user_a:user_a ${directory}/test.txt
The above gives me the following error:
sudo: sorry, you must have a tty to run sudo
If I add -t to ssh I get:
Pseudo-terminal will not be allocated because stdin is not a terminal.
sudo: sorry, you must have a tty to run sudo
If I add -tt to ssh, the job keeps waiting for me to enter the password.
My requirement is to execute a remote command using ssh and a plain-text password, i.e. sshpass. Is there a way I can achieve this without changing any sudoers permissions on the server?
Use something like:
sshpass -p "${PASSWORD}" ssh ${USER}@${HOST} sh -c "echo ${PASSWORD} | sudo -S chown -cv user_a:user_a ${directory}/test.txt"
Example of feeding the password to sudo from something that is not a tty:
echo ${PASSWORD} | sudo -S command
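Putting that together for the CI job above, a rough sketch with the same variable names as in the question (-S makes sudo read the password from stdin, -p '' suppresses the prompt so it does not clutter the job log):
sshpass -p "${PASSWORD}" ssh ${USER}@${HOST} "echo '${PASSWORD}' | sudo -S -p '' chown -cv user_a:user_a ${directory}/test.txt"
Note that the password is still expanded on the runner's command line, so keep it in a masked CI variable.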
P.S. For configuring servers, use Ansible; it handles such tasks very easily.

How to enable sshpass output to console

When using scp and entering the password interactively, the file copy progress is shown on the console, but there is no console output when using sshpass in a script to scp files.
$ sshpass -p [password] scp [file] root@[ip]:/[dir]
It seems sshpass is suppressing or hiding the console output of scp. Is there a way to enable the sshpass scp output to console?
After
sudo apt-get install expect
the file send-files.exp works as desired:
#!/usr/bin/expect -f
# FILES and DEST are placeholders; set them to your source files and scp destination
set FILES "local_files"
set DEST "user@host:/target/dir"
spawn scp -r $FILES $DEST
match_max 100000
expect "*?assword:*"
send -- "12345\r"  ;# the password
expect eof
Not exactly what was desired, but better than silence:
SSHPASS="12345" sshpass -e scp -v -r $FILES $DEST 2>&1 | grep -v debug1
Note that -e is considered a bit safer than -p.
Output:
Executing: program /usr/bin/ssh host servername, user username, command scp -v -t /src/path/dst_file.txt
OpenSSH_6.6.1, OpenSSL 1.0.1i-fips 6 Aug 2014
Authenticated to servername ([10.11.12.13]:22).
Sending file modes: C0600 590493 src_file.txt
Sink: C0600 590493 src_file.txt
Transferred: sent 594696, received 2600 bytes, in 0.1 seconds
Bytes per second: sent 8920671.8, received 39001.0
In this way:
output=$(sshpass -p $PASSWD scp -v $filename root@192.168.8.1:/root 2>&1)
echo "Output = $output"
you redirect the console output into the variable output.
Or, if you only want to see the console output of the scp command, just add the -v option to your sshpass command:
sshpass -p $PASSWD scp -v $filename root@192.168.8.1:/root
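Building on the variable-capture approach above, a small sketch (same names as in the answer) that also records the exit status so a script can react to failures:
output=$(sshpass -p "$PASSWD" scp -v "$filename" root@192.168.8.1:/root 2>&1)
status=$?
echo "Output = $output"
if [ "$status" -ne 0 ]; then
    echo "scp failed with exit status $status" >&2
fi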

how does fabric execute commands?

I am wondering how Fabric executes commands.
Let's say I give it env.user=User and env.host=HOST, then ask it to run sudo('ls').
Is that equivalent to me typing in a shell: ssh User@host 'sudo /bin/ls'
or is it more like: ssh User@host first, then the sudo ls command in a second step?
I'm asking because sometimes, using a shell, if the TTY has a bad configuration (I am a bit blurry on this), ssh User@Host 'sudo /bin/ls'
returns: sudo: no tty present and no askpass program specified
but you can first log in with ssh User@Host, then run sudo ls, and it works.
I don't know how to replicate the no-tty error, but I know it can occur. Would this block the sudo command from Fabric?
Basically how it works is:
First a connection is established (equivalent to doing ssh User@host).
Over this connection a command is executed as follows:
sudo -S -p 'sudo password:' /bin/bash -l -c "your_command"
You can also tell Fabric not to request a pty, with either the pty=False argument, env.always_use_pty=False, or the --no-pty command-line option.
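As a rough shell equivalent of what that amounts to (a sketch, not Fabric's literal code path), a sudo('ls') call with a pty is comparable to:
ssh -t User@host "sudo -S -p 'sudo password:' /bin/bash -l -c 'ls'"
which is why a broken tty/pty setup can break Fabric's sudo() in the same way it breaks the plain ssh one-liner.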

Permission denied using ssh command in shell

I'm trying to execute this shell script from the command line:
host="192.168.X.XXX"
user="USERNAME"
pass="MYPASS"
sshpass -p "$pass" scp -o StrictHostKeyChecking=no /home/MYPATH/File.import "$user#$host:/"home/MYPATH/
It is meant to copy a file from my local server to a remote server. The remote server is a copy of the local server, but when I try to execute this shell script I get this error:
**PERMISSION DENIED, PLEASE TRY AGAIN**
I don't understand why this command works when I execute it on the command line directly:
USERNAME@MYSERVER:~$ sshpass -p 'MYPASS' scp -o StrictHostKeyChecking=no /home/MYPATH/File.import USERNAME@192.168.X.XXX:/home/MYPATH/
Does somebody have a solution?
Please use a pipe or the -e option for the password anyway.
export SSHPASS=password
sshpass -e ssh user@remote
Your command with the -e option:
export SSHPASS=password
sshpass -e scp -o StrictHostKeyChecking=no /home/MYPATH/File.import user@192.168.X.XXX:/home/MYPATH/
Please remove the wrong quotes from your command:
sshpass -p "$pass" scp -o StrictHostKeyChecking=no /home/MYPATH/File.import $user#$host:/home/MYPATH/
You should also be able to remove the quotes around $pass.
Please ensure that you have no special characters in your pass variable or escape them correctly (and no typos anywhere).
For simplicity, use an ssh command instead of scp for testing.
Use the -v or -vvv option for the scp command to check what scp is trying to do. Also check the secure log or auth.log on the remote server.
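For example, a quick test along those lines (a sketch, using the paths and variables from the question):
sshpass -p "$pass" ssh -o StrictHostKeyChecking=no "$user@$host" 'echo login ok'
sshpass -p "$pass" scp -v -o StrictHostKeyChecking=no /home/MYPATH/File.import "$user@$host:/home/MYPATH/"
If the first line already fails, the problem is authentication (wrong password, wrong user, or password authentication disabled on the server), not scp.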
You have to install the "sshpass" command, then use the snippet below:
export SSHPASS=password
sshpass -e sftp user@hostname << !
cd sftp_path
put filename
bye
!
A gotcha that I encountered was escaping special characters in the password, which wasn't necessary when entering it in interactive ssh mode.
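A hedged sketch of one way to avoid most of that escaping: single-quote the password when assigning it, or hand it over via the SSHPASS environment variable with -e (the password below is a made-up example):
pass='p@$$w0rd#!'
sshpass -p "$pass" ssh user@hostname 'echo ok'
# or, keeping it off the command line entirely:
export SSHPASS='p@$$w0rd#!'
sshpass -e ssh user@hostname 'echo ok'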

Running ssh command and keeping connection

Is there a way to execute a command before accessing a remote terminal?
When I enter this command:
$> ssh user@server.com 'ls'
The ls command is executed on the remote computer but ssh quits and I cannot continue in my remote session.
Is there a way of keeping the connection? The reason that I am asking this is that I want to create a setup for ssh session without having to modify the remote .bashrc file.
I would force the allocation of a pseudo tty and then run bash after the ls command:
syzdek#host1$ ssh -t host2.example.com 'ls -l /dev/null; bash'
-rwxrwxrwx 1 root other 27 Apr 1 2005 /dev/null
bash-4.1$
You can try using process substitution on the init file of bash. In the example below, I define a function myfunc:
myfunc () {
    echo "Running myfunc"
}
which I transform into a properly-escaped one-liner echoed in the <(...) construct for process substitution, passed as the --init-file argument of bash:
$ ssh -t localhost 'bash --init-file <( echo "myfunc() { echo \"Running myfunc\" ; }" ) '
Password:
bash-3.2$ myfunc
Running myfunc
bash-3.2$ exit
Note that once connected, my .bashrc is not sourced but myfunc is defined and properly usable in an interactive session.
It might prove a little difficult for more complex bash functions, but it works.
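For more complex init code, one alternative sketch is to base64-encode a local init file so it survives the quoting, then decode it on the remote side (my_init.sh is a hypothetical local file; this assumes the remote login shell is bash and base64 is installed there):
enc=$(base64 < my_init.sh | tr -d '\n')
ssh -t user@server.com "bash --init-file <(echo $enc | base64 -d)"
You end up in an interactive shell with everything from my_init.sh defined, still without touching the remote .bashrc.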