socket: Too many open files (24) with Apache Bench and lighttpd

When I start an Apache Bench test:
ab -n 10000 -c 1300 http://example.com/test.php
I get error:
socket: Too many open files (24)
When I change it to '-c 1000' it works fine.
Since I can have more than 1000 concurrent users, I would like to fix the "socket: too many open files" problem or raise the limit. How do I do this, and where?
I use lighttpd on CentOS 5.

ulimit -n 10000
That might not work, depending on your system configuration.
Consult this to configure your system.
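For reference, you can check the limits currently in effect for your shell (a quick check, not part of the original answer):
ulimit -Sn   # current soft limit on open files
ulimit -Hn   # current hard limit on open files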

To permanently raise the maximum open files limit, modify /etc/security/limits.conf and log back in (a reboot also works):
echo -ne "
* soft nofile 65536
* hard nofile 65536
" >> /etc/security/limits.conf

Check out the documentation for lighty. You might have to set the server.max-fds option. Also server.max-connections should be changed accordingly (again, see the documentation).
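For example, a minimal sketch of the relevant lighttpd.conf lines (the values are illustrative; pick numbers that fit your workload and stay within the OS limit raised above):
server.max-fds = 16384
server.max-connections = 8192
Keeping max-connections well below max-fds leaves headroom, since a connection can hold more than one descriptor.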

Related

"client_loop: send disconnect: Broken pipe" while running long experiments with bash script

I am connected through ssh to a linux virtual machine to run long experiments (3 hours per program) for academic research. When my computer is idle I get the error message: client_loop: send disconnect: Broken pipe. I have looked at this forum and tried many of the solutions, such as:
creating a config file in my ~/.ssh (setting its permissions with sudo chmod 644 ~/.ssh/config) and adding the following lines:
ServerAliveInterval 60
ServerAliveCountMax 100000
In /etc/ssh/ssh_config I have added the following:
Host*
ServerAliveInterval 60
ServerAliveCountMax 100000
And finally, in /etc/ssh/sshd_config I have added the following:
TCPKeepAlive yes
ClientAliveInterval 60
ClientAliveCountMax 100000
I have configured my MacBook so it won't go to sleep, using sudo pmset -a disablesleep 1 and disabling all power-saving settings.
However, when I leave the computer for ~1 hour without using it actively (so the screensaver is on), I get this message.
I really don't know where to look at this point. The only things I can think of are MaxStartups 10:30:100 in /etc/ssh/sshd_config or ConnectTimeout 0 in /etc/ssh/ssh_config, but I wasn't entirely sure what the impact of changing these would be.
Any suggestions to solve this problem would be appreciated!
Thanks!
edit/update: I notice that when I leave my computer on overnight without running a bash script, I do not get the broken pipe error.
edit/update 2: I find that I can leave my computer unattended for at least 30 minutes without a broken pipe error.
I solved this by adding the following to /etc/ssh/ssh_config on my MacBook:
Host *
ServerAliveInterval 60 #add this line
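As a quick test without editing any config files, the same keepalive options can be passed per invocation (user@remote-host is a placeholder):
ssh -o ServerAliveInterval=60 -o ServerAliveCountMax=120 user@remote-host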

How to disable NFS client caching?

I'm having trouble with NFS client-side file caching. The client reads a file that was removed from the server many minutes earlier.
My two servers are both CentOS 6.5 (kernel: 2.6.32-431.el6.x86_64)
I'm using server A as the NFS server, /etc/exports is written as:
/path/folder 192.168.1.20(rw,no_root_squash,no_all_squash,sync)
And server B is used as the client, the mount options are:
nfsstat -m
/mnt/folder from 192.168.1.7:/path/folder
Flags: rw,sync,relatime,vers=4,rsize=1048576,wsize=1048576,namlen=255,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,hard,noac,nosharecache,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.20,minorversion=0,lookupcache=none,local_lock=none,addr=192.168.1.7
As you can see, the "lookupcache=none,noac" options are already used to disable caching, but they don't seem to work...
I did the following steps:
1. Create a simple text file on server A.
2. Print the file from server B with "cat": it's there.
3. Remove the file on server A.
4. Wait a couple of minutes and print the file from server B again: it's still there!
But if I do "ls" on server B at that point, the file is not in the output. The inconsistent state can last a few minutes.
I think I've checked all the NFS mount options... but I can't find the solution.
Are there any other options I missed? Or maybe the issue isn't about NFS at all?
Any ideas would be appreciated :)
I have tested the same steps you described with the parameters below, and it works perfectly. I added one more option, "fg", on the client-side mount.
sudo mount -t nfs -o fg,noac,lookupcache=none XXX.XX.XX.XX:/var/shared/ /mnt/nfs/fuse-shared/
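If the mount needs to survive reboots, an equivalent /etc/fstab entry would look roughly like this (same placeholder server and paths as above):
XXX.XX.XX.XX:/var/shared  /mnt/nfs/fuse-shared  nfs  fg,noac,lookupcache=none  0  0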

Redis crashes instantly without error

I've got Redis installed on my VM, and I haven't used it in a while. (The last time I used it, it worked; now it doesn't, and nothing has changed in the meantime (about a month).) Needless to say, I'm deeply confused, but I'll post as much info as I can.
$ redis-server
The server starts, but throws a warning about overcommit memory being set to 0. I'm on a VM, so I couldn't change this setting from 0 to 1 even if I wanted to, and I wouldn't want to anyway for my purposes. I've written a custom redis.config file, though, which I want it to use (and which I was using in the past), so starting with the default config file doesn't do me much good. Let's try this again.
$ redis-server redis.config
$
Nothing. Silence. No error message, just didn't start.
$ nohup redis-server redis.config > nohup.out&
I get a process ID, but then running $ ps I see the process is listed as stopped, and it shortly disappears. Again, no errors, and no output in nohup.out or in the Redis log file. Below is the redis.config I'm using (without the comments, to keep it short):
daemonize yes
pidfile [my-user-account-path]/redis/redis.pid
port 0
bind 127.0.0.1
unixsocket [my-user-account-path]/tmp/redis.sock
unixsocketperm 770
timeout 10
tcp-keepalive 60
loglevel warning
logfile [my-user-account-path]/redis/logs/redis.log
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error no
rdbcompression no
rdbchecksum no
dbfilename dump.rdb
dir [my-user-account-path]/redis/db
slave-serve-stale-data yes
slave-priority 100
appendonly no
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
# ADVANCED CONFIG is set to all default settings
I'm sure it's probably something stupid, maybe even a permissions issue somewhere (I've tried running this as root, FYI, to no avail). Has anyone experienced something similar with Redis?
I have been experiencing Redis crashes as well. Just an FYI: the guy responsible for much of Redis' development, Salvatore Sanfilippo, aka antirez, keeps an interesting blog that has some insight on Redis crashes:
http://antirez.com/news/43
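As a general debugging sketch (not from this thread): Redis 2.6+ accepts config overrides on the command line after the config file path, so you can force the server to stay in the foreground and log verbosely to see why it dies:
redis-server redis.config --daemonize no --loglevel verbose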

RPC Authentication error

Last week I was using RPC and could run my RPC server program just fine. However, today I tried to start it again and I am getting this error:
Cannot register service: RPC: Authentication error; why = Client credential too weak
unable to register (X_PROG, X_VERS, udp)
Can anybody tell me what the cause of this error can be?
rpcinfo gives me this:
program version netid address service owner
100000 4 tcp6 ::.0.111 portmapper superuser
100000 3 tcp6 ::.0.111 portmapper superuser
100000 4 udp6 ::.0.111 portmapper superuser
100000 3 udp6 ::.0.111 portmapper superuser
100000 4 tcp 0.0.0.0.0.111 portmapper superuser
100000 3 tcp 0.0.0.0.0.111 portmapper superuser
100000 2 tcp 0.0.0.0.0.111 portmapper superuser
100000 4 udp 0.0.0.0.0.111 portmapper superuser
100000 3 udp 0.0.0.0.0.111 portmapper superuser
100000 2 udp 0.0.0.0.0.111 portmapper superuser
100000 4 local /run/rpcbind.sock portmapper superuser
100000 3 local /run/rpcbind.sock portmapper superuser
The weird thing is that I haven't even used this PC in the past week.
Are there any services that should be running?
Hope you can help me out.
Grtz Stefan
This error is linked to rpcbind, so you should stop the portmap service:
sudo -i service portmap stop
then restart rpcbind (-i for insecure mode, -w for a warm start):
sudo -i rpcbind -i -w
and finally start the portmap service again:
sudo -i service portmap start
I realize this is an older thread, but Google finds it among the top 3 results and people are still hitting the nfs service error. Even Red Hat's RHN fix didn't work.
As of December 2013 on RHEL 6.4 (x64), patched as of November 2013, the only solution was changing the permissions on the tcp_wrappers config files. Because we had secured the box pretty heavily, we had permissions of 640 on /etc/hosts.allow and /etc/hosts.deny, both owned by root:root. We did try giving these files different group ownership, but nothing corrected the issue when nfs started.
Once we put the perms back to "out-of-the-box" (644), the nfs (rquotad) service started up as expected. Moving hosts.allow/hosts.deny out of the way entirely also worked.
What a pain that was to figure out. The selinux logs might have helped if I had looked at them sooner.
Now if we had left selinux in enforcing mode this MAY not have been an issue. I still have to test that theory.
Good luck.
Making the change persistent on Ubuntu 12.04 (assuming the security implications of running rpcbind with -i are acceptable):
echo 'OPTIONS="-w -i"' | sudo tee /etc/default/rpcbind
sudo service portmap restart
Yet Another Solution: CentOS 7.3 edition
In addition to rpcbind, I also had to allow mountd in /etc/hosts.allow:
rpcbind : ALL : allow
mountd : ALL : allow
This finally allowed me to not only execute rpcinfo, but showmount and mount as well.
None of the solutions presented here so far worked for me on the Debian Squeeze to Wheezy upgrade.
In my case, the only thing I had to do was replace all occurrences of "portmapper" (or "portmap", I'm no longer sure which) in /etc/hosts.allow with "rpcbind". That was all. (Otherwise ypbind couldn't connect to rpcbind via localhost.)
This can also happen if iptables is blocking UDP connections on localhost. I ran into this today: I stopped iptables and connections started working.
You will then need to figure out which rules broke it.
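If loopback traffic is being blocked, a common fix is to explicitly accept everything on the lo interface (a sketch; use -I instead of -A if an earlier REJECT rule would shadow these):
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT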
I think that it is worth mentioning that if you see errors like:
0-rpc-service: Could not register with portmap
it can be related to the hosts.allow and hosts.deny files being configured without an entry permitting localhost in hosts.allow.
I had this kind of problem when setting up NFS with GlusterFS.
In my /etc/hosts.allow file I have added:
ALL: 127.0.0.1 : ALLOW
and problem with registering service with portmap went away and everything is working.
Note: with GlusterFS remember to restart the glusterd service
/etc/init.d/glusterd restart
I was receiving an error like this on RHEL 7:
ypserv: Cannot register service: RPC: Authentication error; why = Client credential too weak
when starting ypbind. I tried everything, including passing '-i' to rpcbind as above. In the end, as XTaran mentioned, modifying /etc/hosts.allow by adding this line:
rpcbind: 127.0.0.1
worked for me.
FWIW, here's an 'alternative' solution.
Check the /etc/hosts.deny file. It should say something like:
rpcbind mountd nfsd statd lockd rquotad : ALL
Ensure that there is a blank last line in this file.
Check the /etc/hosts.allow file. It should say something like:
rpcbind mountd nfsd statd lockd rquotad: 127.0.0.1 192.168.1.100
Ensure that there is a blank last line in this file.
The "trick" (for me) was the blank final line in the file(s).

How to change the Admin port on Glassfish inside a script

Got a weird Glassfish issue here. Here's how to reproduce it:
(1) Install Glassfish v3
(2) Start your default domain:
$GLASSFISH_HOME/bin/asadmin start-domain domain1
(3) Change the admin port (you'll need to enter the admin uid & password; in our script we use the -u & -W parameters):
$GLASSFISH_HOME/bin/asadmin set configs.config.server-config.network-config.network-listeners.network-listener.admin-listener.port=34848
(4) Shut down the domain:
$GLASSFISH_HOME/bin/asadmin stop-domain domain1
You'll see this doesn't work. You get:
CLI306 Warning - server is not running.
Command stop-domain executed successfully.
But your Glassfish process is still running. Worse, when you attempt to start the process you'll get a warning that some of your ports are already in use. Of course they are, the old process has still got 'em! Your only way out is killall -9 java
While some config changes are dynamic, it seems this one isn't; but stop-domain assumes it is, and uses the new port to try to execute the command.
Possible solutions are:
(1) Use sed on domain.xml - would prefer not to as it's complicated & risky grepping through XML code. I've seen Glassfish change the order of attributes in this file so we can't just sed for port="4848"
(2) Use the scripted installer rather than the zip file and feed the parameters to the setup program as an answer file - this is problematic for our install scripts, which are required to be idempotent.
(3) Use a custom crafted zip of the Glassfish install archive with domain.xml already changed - not an option as the port we are setting may change in the future.
This is almost the definition of a corner case but one we need to solve. For now we're going to sed domain.xml but it would be nice to know if there was a way that's possible via the CLI.
You might want to do the following instead...
install v3 by unzipping
delete domain1
create a new domain1 using the ports you prefer; the man page for the create-domain subcommand has all the details
start this new domain...
No extra start or stop is necessary (and you can skip step 2 if you are willing to remember to say 'asadmin start-domain mydomain' instead of 'asadmin start-domain').
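For example, a sketch of that approach (the port values here are illustrative):
$GLASSFISH_HOME/bin/asadmin create-domain --adminport 34848 --instanceport 8080 mydomain
$GLASSFISH_HOME/bin/asadmin start-domain mydomain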
Sed wasn't as bad as I thought it might be; here's what I did:
cd $GLASSFISH_HOME
sed -i.bak '/<network-listener[^>]*name="admin-listener"/s/port="4848"/port="34848"/g' glassfish/domains/domain1/config/domain.xml
It's still a bug that asadmin thinks the port change is dynamic when it isn't, but I can live with this hack.
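To confirm the substitution landed before restarting the domain (a quick check, assuming the default domain layout):
grep -n 'admin-listener' glassfish/domains/domain1/config/domain.xml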