RPC Authentication error

Last week I was using RPC and could run my RPC server program just fine. However, today I tried to start it again and I am getting this error:
Cannot register service: RPC: Authentication error; why = Client credential too weak
unable to register (X_PROG, X_VERS, udp)
Can anybody tell me what the cause of this error can be?
rpcinfo gives me this:
program version netid address service owner
100000 4 tcp6 ::.0.111 portmapper superuser
100000 3 tcp6 ::.0.111 portmapper superuser
100000 4 udp6 ::.0.111 portmapper superuser
100000 3 udp6 ::.0.111 portmapper superuser
100000 4 tcp 0.0.0.0.0.111 portmapper superuser
100000 3 tcp 0.0.0.0.0.111 portmapper superuser
100000 2 tcp 0.0.0.0.0.111 portmapper superuser
100000 4 udp 0.0.0.0.0.111 portmapper superuser
100000 3 udp 0.0.0.0.0.111 portmapper superuser
100000 2 udp 0.0.0.0.0.111 portmapper superuser
100000 4 local /run/rpcbind.sock portmapper superuser
100000 3 local /run/rpcbind.sock portmapper superuser
The weird thing is that I haven't even been using this PC for the past week.
Are there any services that should be running?
Hope you can help me out.
Grtz Stefan

This error is linked to rpcbind, so you should stop the portmap service first:
sudo -i service portmap stop
then start rpcbind with the -i (insecure) and -w (warm start) flags:
sudo -i rpcbind -i -w
and finally start the portmap service again:
sudo -i service portmap start
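To confirm rpcbind came back up and is accepting registrations before retrying your own program, a quick sanity check is:
sudo rpcinfo -p localhost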

I realize this is an older thread, but Google finds it among the top 3 results and people are still running into this NFS service error. Even the fix from Red Hat's RHN didn't work.
As of December 2013 on RHEL 6.4 (x64), patched as of November 2013, the only solution was changing the permissions on the tcp_wrappers config files. Because we had secured the box pretty heavily, we had permissions of 640 on /etc/hosts.allow and /etc/hosts.deny, both owned by root:root. We did try giving these files different group ownership, but nothing corrected the issue when nfs started.
Once we put the permissions back to the out-of-the-box 644, the nfs (rquotad) service started up as expected. Moving hosts.allow/hosts.deny out of the way entirely also worked.
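For reference, restoring the stock permissions amounts to roughly the following (the nfs restart is just so rquotad re-registers):
chmod 644 /etc/hosts.allow /etc/hosts.deny
chown root:root /etc/hosts.allow /etc/hosts.deny
service nfs restart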
What a pain that was to figure out. The selinux logs may have helped if I had looked sooner.
Now if we had left selinux in enforcing mode this MAY not have been an issue. I still have to test that theory.
Good luck.

Making the change persistent on Ubuntu 12.04
(assuming the security implications of running rpcbind with -i are irrelevant to you):
echo 'OPTIONS="-w -i"' | sudo tee /etc/default/rpcbind
sudo service portmap restart

Yet Another Solution: CentOS 7.3 edition
In addition to rpcbind, I also had to allow mountd in /etc/hosts.allow:
rpcbind : ALL : allow
mountd : ALL : allow
This finally allowed me to not only execute rpcinfo, but showmount and mount as well.
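With both entries in place, a quick way to verify that everything registers (localhost is just the simplest target to test against) is:
rpcinfo -p localhost
showmount -e localhost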

None of the solutions presented here so far worked for me on the Debian Squeeze to Wheezy upgrade.
In my case the sole thing I had to do was replace all occurrences of "portmapper" (or "portmap", I'm no longer sure which) in /etc/hosts.allow with "rpcbind". That was all. (Otherwise ypbind couldn't connect to rpcbind via localhost.)
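If you want to do the substitution in one go, something like this should work (back up the file first; the regex assumes the entries are written as "portmap" or "portmapper"):
sudo cp /etc/hosts.allow /etc/hosts.allow.bak
sudo sed -i -E 's/portmap(per)?/rpcbind/g' /etc/hosts.allow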

This also happens if iptables is in use and it is blocking UDP connections to localhost. I ran into this today: once I stopped iptables, connections started working.
You will then need to figure out which rules broke it.
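If you would rather not leave iptables stopped, a common fix (a sketch; the exact placement depends on your ruleset) is to explicitly accept loopback traffic near the top of the INPUT chain:
sudo iptables -I INPUT 1 -i lo -j ACCEPT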

I think it is worth mentioning that if you see errors like:
0-rpc-service: Could not register with portmap
it can be caused by hosts.allow and hosts.deny being in use while hosts.allow lacks an entry permitting localhost.
I had this kind of problem with setting NFS with GlusterFS.
In my /etc/hosts.allow file I have added:
ALL: 127.0.0.1 : ALLOW
and the problem with registering the service with portmap went away; everything is working.
Note: with GlusterFS, remember to restart the glusterd service:
/etc/init.d/glusterd restart

I was receiving an error like this on RHEL 7:
ypserv: Cannot register service: RPC: Authentication error; why = Client credential too weak
when starting ypbind. I tried everything, including passing '-i' to rpcbind as described above. In the end, as XTaran mentioned, modifying /etc/hosts.allow by adding this line:
rpcbind: 127.0.0.1
worked for me.

FWIW, here's an 'alternative' solution.
Check the /etc/hosts.deny file. It should say something like:
rpcbind mountd nfsd statd lockd rquotad : ALL
Ensure that there is a blank last line in this file.
Check the /etc/hosts.allow file. It should say something like:
rpcbind mountd nfsd statd lockd rquotad: 127.0.0.1 192.168.1.100
Ensure that there is a blank last line in this file.
The "trick" (for me) was the blank final line in the file(s).

Related

"client_loop: send disconnect: Broken pipe" while running long experiments with bash script

I am connected through ssh to a linux virtual machine to run long experiments (3 hours per program) for academic research. When my computer is not in use I get the error message: client_loop: send disconnect: Broken pipe. I have looked at this forum and tried many of the solutions, such as:
creating a config file in my ~/.ssh (and setting its permissions with chmod 644 ~/.ssh/config) and adding the following lines:
ServerAliveInterval 60
ServerAliveCountMax 100000
In /etc/ssh/ssh_config I have added the following:
Host *
ServerAliveInterval 60
ServerAliveCountMax 100000
And finally in /etc/ssh/sshd_config I have added the following:
TCPKeepAlive yes
ClientAliveInterval 60
ClientAliveCountMax 100000
I have set all my MacBook settings so that it won't go to sleep, using the command sudo pmset -a disablesleep 1 and by changing all power-saving options.
However, when I step away from the computer for ~1 hour without using it actively (so the screensaver is on), I get this message.
I really don't know where to look at this point. The only things left I can think of are MaxStartups 10:30:100 in /etc/ssh/sshd_config or ConnectTimeout 0 in /etc/ssh/ssh_config, but I'm not entirely sure what the impact of changing these would be.
Any suggestions to solve this problem would be appreciated!
Thanks!
edit/update: I notice that when I leave my computer on overnight but am not running a bash script, I do not get the broken pipe error.
edit/update 2: I find that I can leave my computer unattended for at least 30 minutes without a broken pipe error.
I solved it by adding the following line under Host * in /etc/ssh/ssh_config on my MacBook:
Host *
ServerAliveInterval 60

Can autossh be used to monitor "ssh -D" local dynamic port forwarding (SOCKS proxy)? If so, how? If not, alternatives?

When I'm teleworking, I need to access some internal web servers. I use ssh -f -N -D 4000 someserver.mywork.com on my home computer to setup local dynamic port forwarding. Then, I configure my web browser to use localhost port 4000 as a SOCKS host, and everything works great, even with HTTPS.
The problem is that the proxy stops working every couple of days. When this happens, the ssh process prints messages like the following:
accept: Too many open files
In this scenario, I have to kill the ssh process and restart it in order to get it working again. Based on my research into this error message, I could increase the limit on the number of open files, but that doesn't seem like a permanent or an ideal solution.
I was hoping autossh might be able to monitor the connection and restart it automatically. Is that possible?
I have tried the following command:
autossh -f -M 0 -N -D 4000 someserver.mywork.com
But it didn't work. The proxy stopped working, and autossh did not restart it. Any suggestions or alternative solutions for automatically restarting my ssh proxy?
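For reference, with -M 0 autossh does no monitoring of its own and relies on ssh itself noticing the dead connection, so it is normally combined with ssh-level keepalives; something along these lines (the option values are only examples) is the usual suggestion:
autossh -f -M 0 -N -D 4000 -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -o "ExitOnForwardFailure yes" someserver.mywork.com
Note that this would not fix the underlying "Too many open files" condition; it only restarts the tunnel when the connection drops.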

Can't Access Webmin on GCE Instance on port 10000

I have a GCE instance running Debian with 1 vCPU and 1.7 GB of RAM. I followed the tutorial below and installed Webmin on it.
https://www.howtoforge.com/tutorial/how-to-install-webmin-on-ubuntu-15-04/
The installation went successfully. Then I created a firewall exception using UFW and allowed port 10000:
sudo ufw allow 10000/tcp
But I was not able to access Webmin through the browser at:
https://my-gce-instance-ip-address:10000
Then I created a firewall exception using the Google Cloud Console and tried the URL again; it didn't work.
Then I thought this might be because Webmin is in HTTPS mode, so I opened /etc/webmin/miniserv.conf, changed ssl=0, and restarted Webmin:
/etc/init.d/webmin restart
Then I tried the URL with HTTP; I still can't access it.
I tried the command below and checked the output. According to it, Webmin is running correctly and listening on port 10000:
netstat -tulpn | grep :10000
I can't figure out what I am doing wrong. I have now spent several days on this without any solution in sight. Hope someone can kindly help me?
Try this ... it's working for me:
iptables -I INPUT 1 -p tcp --dport 10000 -j ACCEPT
service iptables save
/etc/init.d/iptables restart
Open both links in the browser:
https://your-IP:10000
and
http://your-IP:10000
You need to allow port 10000 in iptables:
sudo iptables -A INPUT -p tcp -m tcp --dport 10000 -j ACCEPT
This works for me; I'm using Ubuntu 16.04.
You don't need to do any firewall configuration in the instance itself. All firewall configuration is done in the Google Cloud console.
The steps I typically follow, as you seem to have figured out in your comment, are:
Create the firewall rule, in it opening the particular port you need (10000 in the case of Webmin) for ingress TCP traffic, accepting connections from some IP range (e.g. 0.0.0.0/0), and specifying target tags to be later assigned to instances to which that rule shall apply.
Add one of those tags to the "network tags" section of some particular instance.
This alone should work, opening the port for your instance in the firewall.
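If you prefer the command line over the console, the same two steps can be done with gcloud; the rule name, tag, instance name, and zone below are only examples:
gcloud compute firewall-rules create allow-webmin --allow=tcp:10000 --source-ranges=0.0.0.0/0 --target-tags=webmin
gcloud compute instances add-tags my-instance --tags=webmin --zone=us-central1-a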
I was about to create another question here when yours was suggested as a possible duplicate. I had followed the steps above on my Webmin machine, and yet the machine refused to connect on port 10000. As I kept writing the question, I figured out my particular problem: in the firewall rule's source IP range filter, I had set the single meta-address 0.0.0.0 instead of the range 0.0.0.0/0. So, to anyone who has followed the steps above and still can't connect to their Webmin installation, do check that your source range filter is set correctly.

Write failed : broken pipe

I have a headless Ubuntu server. I ran a command on the server (snapraid sync) over SSH from my Mac. The command said it would take about 6 hrs, so I left it over night.
When I came down this morning, the Terminal on the Mac said: "Write failed: broken pipe"
I'm not sure if the command executed fully. Is this a timeout issue? If so, how can I keep the SSH connection alive overnight?
This should resolve the problem for Mac OS X 10.8.2.
add:
ServerAliveInterval 120
TCPKeepAlive no
to this file:
~/.ssh/config
Or, if you want it to be a global change in the SSH client, to this file
/private/etc/ssh_config
"ServerAliveInterval 120" basically says to "ping" the server with a NULL packet every 120s, and "TCPKeepAlive no" means to not set the SO_KEEPALIVE socket option (since you shouldn't need it with ServerAliveInterval already set, and apparently it's "spoofable" or some odd).
The server side has an equivalent pair of options for the same effect (ClientAliveInterval and ClientAliveCountMax in sshd_config), but typically you don't have as much control over those settings.
You can use "screen" util for that. Just connect to the server over SSH, start screen session by "screen" command execution, start your command there and disconnect (don't exit screen session). When you think your command already done you can connect to the server and attach to your screen session where you can see the command execution result/progress (in case one should be).
See "man screen" for more details.
This should resolve the problem for Ubuntu and Linux Mint.
add:
ServerAliveInterval 120
TCPKeepAlive yes
to
/etc/ssh/ssh_config file
Instead of screen I'd recommend tmux, an (arguably) better competitor to screen
tmux new-session -s {name}
That command creates a session. Any time after that you want to connect:
tmux a -t {name}
There are two solutions:
Update the server (then restart sshd):
echo "ClientAliveInterval 60" | sudo tee -a /etc/ssh/sshd_config
Update the client:
echo "ServerAliveInterval 60" >> ~/.ssh/config
After having tried changing many of the above parameters in sshd_config (ClientAliveInterval, ClientAliveCountMax, TCPKeepAlive...) nothing had changed. I spent hours and days looking for a solution on forums and blogs...
It turns out that the broken-pipe problem which prevented connecting with ssh/sftp came from the permission settings on the ChrootDirectory.
The ChrootDirectory has to be owned by root:root with 755 permissions.
Weaker permissions (765/766/775...) won't work, but stricter ones do (e.g. 700).
If you need to give write permission to the connected user, you can grant it on sub-directories.
If the chroot is owned by sftpUser:sftpGroup, it won't work either...
chroot-> root:root 755
|
---subdirectories-> sftpUser:sftpGroup 700 up to 770
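In practice that means something like the following (the directory, user, and group names are just examples):
sudo chown root:root /srv/sftp/chroot
sudo chmod 755 /srv/sftp/chroot
sudo mkdir -p /srv/sftp/chroot/upload
sudo chown sftpUser:sftpGroup /srv/sftp/chroot/upload
sudo chmod 770 /srv/sftp/chroot/upload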
Hope it helps.
If you're still having the problem after editing /etc/ssh/sshd_config, or if ~/.ssh/config simply does not exist on your machine, then I highly recommend reinstalling ssh. This solution took about a minute and fixed both the "Broken pipe" errors and the "closed by remote host" errors.
sudo apt-get purge openssh-server
sudo apt update
sudo apt install openssh-server
jeremyforan's answer is correct; however, I've found that if you are trying to use scp it is necessary to explicitly point it at a config file set up as described, since it seems not to obey the normal config hierarchy. For example:
scp -F ~/.ssh/config myfile joe@myserver.com:~
works, while omitting the -F still results in the broken pipe error.
Ubuntu:
ssh -o ServerAliveInterval=5 -o ServerAliveCountMax=1 user@x.x.x.x
I use an ASUS router with two internet lines. I assign my IP to a particular line, and it works.

Warning: remote port forwarding failed for listen port 52698

I'm using SSH to access my university's afs system. I like to use rmate (remote TextMate), which requires SSH tunneling, so I included this alias in my .bashrc.
alias sshr='ssh -R 52698:localhost:52698 username@corn.myschool.edu'
It has always worked until now.
I had the same problem. In order to find the process that already has the port open, issue this command on the 'corn.myschool.edu' machine:
sudo netstat -plant | grep 52698
And then kill all of the processes that come up with this (replace xxxx with the process ids)
sudo kill -9 xxxx
(UPDATED: changed the option to be -plant as it is a nice mnemonic)
I had another SSH connection open. I just needed to close that connection before I opened my SSH tunnel.
Further Explanation:
Once one ssh connection has been established, subsequent connections will produce a message:
Warning: remote port forwarding failed for listen port 52698
This message is harmless, as the forward can only be set up once and one forward will work for all ssh connections to the same machine. The original ssh session that opened the forward will stay open when you exit the shell until all remote editing sessions are finished.
I experienced this problem, but it was while connecting to a server on which I don't have sudo privileges, so the top response suggesting running sudo netstat ... wasn't feasible for me.
I eventually figured out it was because there were still instances of rmate running, so I used ps to list the running processes and then kill -9 pid (where pid is the process ID of each rmate instance).
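Roughly (the grep pattern is just the obvious one; double-check that you're only killing your own rmate processes):
ps aux | grep rmate
kill -9 <pid>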
This solved the problem I reported here as well. To avoid this warning, "AllowTcpForwarding" needs to be enabled in the SSH server config.
In my case, the problem was that the remote system didn't have DNS properly set up, and it couldn't even resolve its own hostname. Make sure you have a working DNS configuration in /etc/resolv.conf on the remote system.
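A minimal /etc/resolv.conf just names a reachable resolver (the address below is only an example):
nameserver 8.8.8.8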