I have Raspbmc running on my Raspberry Pi. It's already configured to use SSH keys, but I want to disable the password login option entirely.
I have added the -s option in /etc/default/dropbear:
# any additional arguments for Dropbear
DROPBEAR_EXTRA_ARGS= -s
I also added it to /etc/init.d/dropbear
However, I then read that xinetd is used to manage SSH and launch Dropbear, so I went over to /etc/xinetd.d/ssh and changed the following, adding -s:
server_args = -i -s
Now when I stop the dropbear service and restart the xinetd service, I still see dropbear being launched with only -i, and password logins still work.
Not sure where else I'd have to change the command line arguments, any hints would be very much appreciated!
From the Dropbear man page
-w Disallow root logins.
-s Disable password logins.
-g Disable password logins for root.
nano /etc/default/dropbear
Find the DROPBEAR_EXTRA_ARGS parameter and change it as shown below.
DROPBEAR_EXTRA_ARGS="-s -g"
Finally, restart dropbear.
/etc/init.d/dropbear restart
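If (as the question describes) dropbear is actually launched by xinetd rather than by its init script, DROPBEAR_EXTRA_ARGS is never read, and the flag has to appear in the server_args line of /etc/xinetd.d/ssh, followed by a restart of xinetd. A quick way to confirm which arguments the running daemon really received is to check the process list once a session is open:
ps ax | grep dropbear
# the listed command line should now contain -s, e.g. /usr/sbin/dropbear -i -s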
I'm facing a weird behavior trying to run rsync as sudo through ssh with passwordless login.
This is something I do with dozens of servers, but I'm having this frustrating problem connecting to a couple of Ubuntu 18.04.4 servers.
PREMISE
The passwordless SSH from CLIENT to SERVER with account USER works nicely.
When I'm logged in on SERVER I can sudo everything with account USER.
On SERVER I've added the following to /etc/sudoers:
user ALL=NOPASSWD:/usr/bin/rsync
Now, if I launch this simple test from machine CLIENT as user USER, I receive the following sudo error message:
$ ssh user@192.168.200.135 -p 2310 sudo rsync
sudo: no tty present and no askpass program specified
Moreover, looking in the SERVER's /var/log/auth.log I found these errors:
sudo: pam_unix(sudo:auth): conversation failed
sudo: pam_unix(sudo:auth): auth could not identify password for [user]
I am not a PAM expert, but I tested the following solution and it works on Ubuntu 16.04.5 and 20.04.1.
NOTE: the configuration in /etc/ssh/sshd_config is left at its defaults.
$ sudo visudo -f /etc/sudoers.d/my_config_file
add the below lines
my_username ALL=(ALL) NOPASSWD:ALL
and don't forget to restart sshd
$ sudo systemctl restart sshd
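To verify the setup without triggering the prompt, a quick check from CLIENT (reusing the question's port and address as placeholders) could be:
$ ssh -p 2310 user@192.168.200.135 sudo -n rsync --version
# sudo -n refuses to ask for a password, so this only succeeds if the NOPASSWD rule applies in a non-interactive session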
I've found a solution thanks to CentOS. In fact, because of the more complex configuration of /etc/sudoers on CentOS (compared to Ubuntu or Debian), I was forced to put my additional configuration in a separate file under /etc/sudoers.d/ rather than directly into /etc/sudoers.
SOLUTION:
Putting additional configurations directly into /etc/sudoers wouldn't work
Putting the needed additional settings in a file within the directory /etc/sudoers.d/ will work
e.g., these are the config lines put in a file named /etc/sudoers.d/my_config_file:
Host_Alias MYSERVERHOST=192.168.1.135,localhost
# User that will execute Rsync with Sudo from a remote client
rsyncuser MYSERVERHOST=NOPASSWD:/usr/bin/rsync
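Once that rule is in place, a sketch of the actual transfer from CLIENT would pair it with rsync's --rsync-path option so the remote side runs rsync under sudo (the source and destination paths below are made-up placeholders):
rsync -av -e "ssh -p 2310" --rsync-path="sudo rsync" rsyncuser@192.168.1.135:/remote/protected/dir/ /local/backup/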
Why didn't /etc/sudoers work? It's unknown to me even after two days' worth of Internet searching. I find this very obscure and awful.
What follows is a quote from this useful article: https://askubuntu.com/a/931207
Unlike /etc/sudoers, the contents of /etc/sudoers.d survive system upgrades, so it's preferable to create a file there rather than to modify /etc/sudoers.
For editing any configuration file to be used by sudo, the visudo command is preferable.
i.e.
$ sudo visudo -f /etc/sudoers.d/my_config_file
I had a similar problem on a custom Linux server, and the solution was related to the answers above.
As soon as I removed the line your_user ALL=(ALL) NOPASSWD:ALL from /etc/sudoers, the errors were gone.
I'm using Flask with Apache (mod_wsgi).
When I run ssh as an external command with subprocess.call("ssh ......", shell=True)
(my Python Flask code, which is not the problem):
ssh = "sshpass -p \""+password+"\" ssh -p 6001 "+username+"@"+servername+" \"mkdir ~/MY_SERVER\""
subprocess.call(ssh, shell=True)
I got this error on Apache error_log : Failed to get a pseudo terminal: Permission denied
How can I fix this?
I've had this problem under RHEL 7. It's due to SELinux blocking the apache user from accessing a pty. To solve:
Disable SELinux or set it to permissive (check your security needs): edit /etc/selinux/config and reboot.
Allow apache to control its directory for storing SSH keys:
sudo -u apache
chown apache /etc/share/httpd
ssh to the desired host and accept the key.
I think apache's login shell is "/sbin/nologin".
If you want to allow apache to use shell commands, modify /etc/passwd and change its login shell to another shell such as "/bin/bash".
However, this method is a security risk. Many Python SSH modules are available on the Internet; use one of them.
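For instance, a minimal sketch using the paramiko module (assuming it is installed; the variables are the same ones used in the question) could be:
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # accepts unknown host keys; tighten this for production
client.connect(servername, port=6001, username=username, password=password)
stdin, stdout, stderr = client.exec_command("mkdir -p ~/MY_SERVER")  # no pty needed, so the original error is avoided
print(stdout.read().decode(), stderr.read().decode())
client.close()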
What you are doing seems frightfully insecure. If you cannot use a Python library for your SSH connections, then you should at least plug the hole that is shell=True. There is very little here which is done by the shell anyway; doing it in Python affords you more control, and removes a big number of moving parts.
subprocess.call(['/usr/bin/sshpass', '-p', password,
'/usr/bin/ssh', '-T', '-p', '6001', '{0}@{1}'.format(username, servername),
'mkdir ~/MY_SERVER'])
If you cannot hard-code the paths to sshpass and ssh, you should at least make sure you have a limited, controlled PATH variable in your environment before doing any of this.
The fix for Failed to get a pseudo-terminal is usually to add a -T flag to the ssh command line. I did that above. If your real code actually requires a tty (which mkdir obviously does not), perhaps experiment with -t instead, and/or redirecting standard input and standard output.
I am trying to connect to a remote host from my local host with the command below. However, the remote host is set up so that, as soon as we log in, it prompts for a badge ID, a password, and a reason for logging in, because that is coded in the profile file on remote-host. How can I skip those steps and log in directly and non-interactively, without disturbing the code in the profile?
jsmith@local-host$ ssh -t -t generic_userID@remote-host
Enter your badgeID, < exit > to abort:
Enter your password for <badgeID> :
Enter a one line justification for your interactive login to generic_userID
Small amendment: to get past the remote server's prompts the expect approach is required, but when a local script connects to a bunch of remote servers whose configuration may be broken, just use SSH options:
ssh -f -q -o BatchMode=yes -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null USER@TARGETSYSTEM
This will skip the password prompt when no SSH key is set up, exit silently, and continue with the script/other hosts.
The -f option puts ssh into the background, which is required when calling the ssh command from an sh (batch) file to remove the local console redirect to the remote input (it implies -n).
Look into setting up a wrapper script around expect. This should do exactly what you're looking for.
Here are a few examples you can work from.
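As a rough sketch (the prompt patterns and the badge ID/password/justification values below are placeholders inferred from the question, not tested against the real remote profile), a wrapper could be as simple as:
expect -c '
  spawn ssh -t -t generic_userID@remote-host
  expect "Enter your badgeID"     { send "MYBADGE\r" }
  expect "Enter your password"    { send "MYPASSWORD\r" }
  expect "one line justification" { send "automated login\r" }
  interact
'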
I have upvoted Marvin Pinto's answer because there is every reason to script this, in case there are other features in the profile that you need, such as the message of the day (motd).
However, there is a quick and dirty alternative if you don't want to make a script and you don't want other features from the profile. Depending on your preferred shell on the remote host, you can insist that the shell bypasses the profile files. For example, if bash is available on the remote host, you can invoke it with:
ssh -t -t generic_userID@remote-host bash --noprofile
I tested the above on the macOS 10.13 version of OpenSSH. Normally the command at the end of the ssh invocation is run non-interactively, but the -t flag allows bash to start an interactive shell.
Details are in the Start-up files section of the Bash Reference Manual.
The first thing I do after vagrant ssh is usually attaching to a tmux session.
I want to automate this, so I try: vagrant ssh -c "tmux attach", but it fails and says "not a terminal".
After some googling I found this article and learned that I should force pseudo-tty allocation before executing a screen-based program, which can be done with the -t option of ssh.
But I don't know how to use this option with vagrant ssh.
According to this documentation, you should try adding -- to the command.
As I have not used vagrant, I am unsure of the formatting, but assume it would be similar to:
vagrant ssh -- -t
Unless you need to include the username and host, in which case add them.
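Applied to the tmux case from the question, and assuming the arguments after -- are passed straight through to ssh, the call would look something like:
vagrant ssh -- -t "tmux attach"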
Question as title.
Why is this? I have used the ssh command:
ssh -i mykey.pem root@xxx-xxx-xx-xx-xxx.compute-1.amazonaws.com
But I get that error and find nothing on Google. What am I doing wrong?
You log in as ec2-user as Klaus suggested:
ssh -i key.pem ec2-user@host
... and then you use sudo to run commands. E.g., to edit the /etc/hosts file which is owned by root and requires root privileges: sudo nano /etc/hosts.
Or you run sudo su to become the root user.
By default the root user is not allowed to log in, but you can use ec2-user as indicated by others.
Once you log in as ec2-user, you can switch to root and change the SSH configuration.
To become the root user you run:
sudo su -
Edit the SSH daemon configuration file /etc/ssh/sshd_config, e.g. by using vi, and replace the PermitRootLogin entry with the following:
PermitRootLogin without-password
Reload the SSH daemon configuration by running:
/etc/init.d/sshd reload
The message Please login as the ec2-user user rather than root user. is displayed because a command is executed when you log in with the private key. To remove that command, edit the ~/.ssh/authorized_keys file and remove the command option. The line should then start with the key type (e.g. ssh-rsa).
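For illustration only, the relevant ~/.ssh/authorized_keys entry is typically a single line shaped roughly like this (the options, message text, and key material below are abbreviated placeholders, not the exact contents):
before: no-port-forwarding,...,command="echo 'Please login as the user ...';sleep 10" ssh-rsa AAAA...key... my-key
after:  ssh-rsa AAAA...key... my-key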
(*) Do this at your own risk. I recommend that you always leave a console open, just in case you're not able to log in after you make the configuration changes.
For reference you can read the man pages:
man sshd_config
man sshd
I have encountered a similar problem when setting up a hadoop cluster on Amazon ec2.
My head node needs root SSH access to each worker/slave node. I aliased the connections by adding each slave node's IP address, private address, and alias name to the /etc/hosts file. (I get that data by running the command echo -e "`hostname -i`\t`hostname -f`\talias-name", where alias-name is what I call each node, head or n1 for example.) Then I put that output for each node in every node's /etc/hosts file.
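The resulting /etc/hosts entries might look something like this (the addresses and internal hostnames here are made-up placeholders):
10.0.0.10   ip-10-0-0-10.ec2.internal   head
10.0.0.11   ip-10-0-0-11.ec2.internal   n1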
The problem I have been encountering is that when I type ssh n1 while in my head node to ssh into my first slave node, I get that same error message: Please login as the user "ec2-user" rather than the user "root".
So after doing some research, I figured out how to fix it.
First:
ssh into your server. Non-root (ec2-user) access is fine here.
Then su - your way into root. Now vi /etc/ssh/sshd_config and un-comment the line PermitRootLogin yes.
Exit the vi editor.
Now restart the ssh daemon by typing service sshd stop, then service sshd start.
Second:
Now, here is the part I had to dig for:
Run vi /root/.ssh/authorized_keys.
Comment out everything up to ssh-rsa: just put a # at the beginning of the file's content, before no-port-forwarding..., and hit Enter on ssh-rsa to move it to the next line (this way you don't have to delete anything in case you want to backtrack).
Exit the vi editor.
Now you should be able to log in as root without that error message popping up.
Also, if you are using aliases for a cluster setup, repeat the same steps on each node: first ssh in using ec2-user, then follow the steps.
After adding the IP address, private address, and alias name info to your /etc/hosts file you should be able to ssh into each node's root using the alias name for example ssh n1.
The tutorial I followed is here: https://www.youtube.com/watch?v=xrxQXfE7t9A
But it didn't discuss the problem with root login.
Hope that helps! It worked for me.
*Keep in mind that I haven't taken security into consideration. This is simply a practice/dev setup.
I think it's just asking you to login with another username. Do you happen to have a user called ec2-user? If so, try this instead:
ssh -i mykey.pem ec2-user@xxx-xxx-xx-xx-xxx.compute-1.amazonaws.com
I faced the same problem when I tried to access my EC2 instance as 'root' through the Windows PuTTY client; this is how I solved the problem.
Access and edit the SSH configuration file to allow root login and password authentication.
Login as ec2-user (by default it is allowed)
Enter below command to open ssh config
sudo vi /etc/ssh/sshd_config
Edit the SSH configuration file as below using vi (see how to use the vi editor):
PermitRootLogin yes (remove the # at the beginning if it is present)
PasswordAuthentication yes
Restart SSH
sudo /etc/init.d/sshd restart
Change/set root password
sudo passwd root
type new password and re-enter it (at least 8 characters)
Exit current session and close PuTTY
exit
Try logging in as root again and type the previously set password.
Solved!
Try comparing the root key file and the user key file:
diff /root/.ssh/authorized_keys /home/user/.ssh/authorized_keys
...and see what differs.
For anyone like me that created a new user, copied root's .ssh dir to the new user, set ownership and STILL got this error - look at the new user's ~/.ssh/authorized_keys file. It has SSH params specified that force the prompt. Delete everything from that line up to the ssh-rsa and you'll be good to go.
Or - copy /home/ec2-user/.ssh to the new user homedir instead of /root/.ssh
Edit /etc/ssh/sshd_config, and make sure this is set:
PasswordAuthentication yes
Then reload SSH:
systemctl reload sshd.service
You can now log in as users other than ec2-user.
ssh -i mykey.pem root@xxx-xxx-xx-xx-xxx.compute-1.amazonaws.com
Just replace the above command with this:
ssh -i mykey.pem ubuntu@xxx-xxx-xx-xx-xxx.compute-1.amazonaws.com
It works in my case.
For those who are looking for a single, simple line:
sudo ssh -i ./mykey.pem ec2-user@ec2-x-xx-xxx-xxx.us-east-2.compute.amazonaws.com
Note that you can get the part after the @ from the Public IPv4 DNS section on your instance summary page.