Process core dumps are not created after crash

I have configured the system to create process core dumps.
Below are my configurations.
/etc/sysctl.conf
kernel.core_uses_pid = 1
kernel.core_pattern = /var/core/core.%e.%p.%h.%t
fs.suid_dumpable = 2
/etc/security/limits.conf
* soft core unlimited
root soft core unlimited
Here are the steps I am following to generate process core dumps.
1) I restarted the mysql service and ran "kill -s SEGV <mysql_pid>"; a core dump file appeared in /var/core.
2) Then I started the mysql service with "/etc/init.d/mysql start" or "service mysql start". Now if I run "kill -s SEGV <mysql_pid>", no core dump file is created.
3) To get a core file again I have to restart the mysql service; only then does "kill -s SEGV <mysql_pid>" produce a core dump.
Can anyone help me resolve this?

First of all, you can verify that core dumps are disabled for the MySQL process by running:
# cat /proc/`pidof -s mysqld`/limits | egrep '(Limit|core)'
Limit                     Soft Limit           Hard Limit           Units
Max core file size        0                    unlimited            bytes
The "soft" limit is the one to look for, zero in this case means core dumps are disabled.
Limits set in /etc/security/limits.conf by default only apply to programs started interactively. You may have to include 'ulimit -c unlimited' in the mysqld startup script to enable core dumps permanently, as sketched below.
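A minimal sketch of that change, assuming a classic SysV init script at /etc/init.d/mysql (the exact script layout varies by distribution):

#!/bin/sh
# /etc/init.d/mysql (excerpt)
# Raise the core file size limit before the daemon starts so mysqld
# inherits it; limits.conf does not apply to boot-time services.
ulimit -c unlimited
# ... existing start/stop logic follows ...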
If you're lucky, you can enable core dumps for your current shell and restart the daemon using its init.d script:
# ulimit -c unlimited
# /etc/init.d/mysql restart
* Stopping MySQL database server mysqld [ OK ]
* Starting MySQL database server mysqld [ OK ]
* Checking for tables which need an upgrade, are corrupt or were not closed cleanly.
# cat /proc/`pidof -s mysqld`/limits | egrep '(Limit|core)'
Limit                     Soft Limit           Hard Limit           Units
Max core file size        unlimited            unlimited            bytes
As you can see, this works for MySQL on my system.
Please note that this won't work for applications like Apache, which call ulimit internally to disable core dumps, nor for init.d scripts that use upstart.
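For upstart-managed services, the equivalent is a limit stanza in the job file; a sketch, assuming a job file at /etc/init/mysql.conf:

# /etc/init/mysql.conf (excerpt)
# Allow the job to write core files of unlimited size (soft and hard limit).
limit core unlimited unlimited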

Related

redis snapshot location not as specified in config

After running fine for a while, I am getting this write error on my redis instance:
(error) MISCONF Redis is configured to save RDB snapshots, but it is currently not able to persist on disk. Commands that may modify the data set are disabled, because this instance is configured to report errors during writes if RDB snapshotting fails (stop-writes-on-bgsave-error option). Please check the Redis logs for details about the RDB error.
In the log I see:
9948:C 22 Mar 20:49:32.241 # Failed opening the RDB file root (in server root dir /var/spool/cron) for saving: Read-only file system
However, my redis config file is /etc/redis/redis.conf as confirmed by:
redis-cli -p 6379 info | grep 'config_file'
config_file:/etc/redis/redis.conf
And there I have:
dir /mnt/data/redis
And indeed, there is a snapshot there.
But despite the above, redis now thinks my data directory is:
redis-cli -p 6379 CONFIG GET dir
1) "dir"
2) "/var/spool/cron"
This corresponds to the error I was getting, as quoted above.
Can anyone tell me why/how my data directory is changing after redis starts, such that it is no longer what is specified in the config file?
So the answer is that the redis server was hacked and its configuration changed, which turns out to be very easy to do. (I should point out that I had no reason to think it wasn't easy to do; I just assumed security by obscurity was sufficient in this case. Wrong. No matter, this was just a playground, not any sort of production server.)
So don't open your redis port to the world. Use security groups if on AWS to limit access to the machines that need it, or use AUTH (which is still not great, because all clients then need to know the single password, which also apparently gets sent in the clear), or put some middleware in front to control access.
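A minimal hardening sketch for redis.conf (the password value is a placeholder):

# Listen only on loopback so the instance is not reachable from outside.
bind 127.0.0.1
# Require clients to AUTH; note the password still travels in cleartext.
requirepass some-long-random-password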
Hacking redis is easy to do, can compromise your data, and can even enable unauthorized SSH access to your server.

ssh v2 maximum compression (xzip/7zip)

I am on a slow dial-up connection, but I have root access to a fast server.
Currently I use ssh v2 to connect to the server with Compression enabled in ~/.ssh/config. However, this only uses gzip level 6 (as mentioned here: https://serverfault.com/questions/388658/ssh-compression/).
However, it is possible to use better algorithms (like xz with -9e or 7zip with -mx=9) using pipes, as mentioned here: https://serverfault.com/a/586739/506887. The example in that answer:
ssh ${SSH_USER}@${SSH_HOST} "
echo 'string to be compressed' | gzip -9
" | zcat | echo -
compresses a single string with gzip on the remote server and pipes it back.
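The same pipe idea extends to xz for a one-off file transfer; a sketch, assuming xz is installed on both ends and /remote/path/file is a placeholder:

# Compress remotely at maximum level, decompress locally.
ssh ${SSH_USER}@${SSH_HOST} "xz -9e -c /remote/path/file" | xz -dc > file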
1) I would like to do this (compress with xz) for all traffic. How can that be done?
2) To save data, when I run firefox on my client, I use a SOCKS v5 proxy over ssh to take advantage of compression:
ssh -D 8123 -C -v -N root@myserver
and I point firefox to socks://localhost:8123. Again, this uses gzip level 6. Can this example be similarly modified to use xz or 7zip?
I am aware that the bandwidth advantage of xz over gzip may not be significant for a single connection, but I am hoping the savings will accumulate to a significant amount over time.
Thanks

How to disable NFS client caching?

I am having trouble with NFS client file caching. The client reads a file that was removed from the server many minutes earlier.
My two servers are both CentOS 6.5 (kernel: 2.6.32-431.el6.x86_64)
I'm using server A as the NFS server, /etc/exports is written as:
/path/folder 192.168.1.20(rw,no_root_squash,no_all_squash,sync)
And server B is used as the client; the mount options are:
nfsstat -m
/mnt/folder from 192.168.1.7:/path/folder
Flags: rw,sync,relatime,vers=4,rsize=1048576,wsize=1048576,namlen=255,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,hard,noac,nosharecache,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.20,minorversion=0,lookupcache=none,local_lock=none,addr=192.168.1.7
As you can see, the "lookupcache=none,noac" options are already used to disable caching, but they don't seem to work...
I did the following steps (see the sketch below):
Create a simple text file on server A
Print the file from server B with "cat"; it's there
Remove the file on server A
Wait a couple of minutes and print the file from server B again; it's still there!
But if I do "ls" on server B at that time, the file is not in the output. The inconsistent state may last a few minutes.
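The same reproduction as shell commands (hostnames and paths are placeholders):

# on server A (the NFS server)
echo hello > /path/folder/testfile
# on server B (the client)
cat /mnt/folder/testfile   # prints "hello"
# on server A
rm /path/folder/testfile
# on server B, minutes later
cat /mnt/folder/testfile   # still prints "hello" from the stale cache
ls /mnt/folder             # testfile is already gone from the listing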
I think I've checked all the NFS mount options...but can't find the solution.
Is there any other options I missed? Or maybe the issue is not about NFS?
Any ideas would be appreciated :)
I have tested the same steps you gave with the parameters below, and it's working perfectly. I added one more parameter, "fg", to the client-side mount.
sudo mount -t nfs -o fg,noac,lookupcache=none XXX.XX.XX.XX:/var/shared/ /mnt/nfs/fuse-shared/
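To confirm the options took effect after mounting, the active flags can be checked (a quick sketch):

nfsstat -m | grep -E 'noac|lookupcache'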

Redis is configured to save RDB snapshots, but is currently not able to persist on disk

I get the following error whenever I execute any command that modifies data in redis:
Redis is configured to save RDB snapshots, but is currently not able to persist on disk.
Commands that may modify the data set are disabled.
Please check Redis logs for details about the error.
I installed redis using brew on a Mac. How can I find the location of the log files that redis-server writes to? I tried looking for the redis conf file, but couldn't find it either.
What is the default location of [1] the redis conf file and [2] the redis log file?
How do I get rid of the above error so that I can execute commands that modify data in redis?
When installing with brew the logfile is set to stdout. You need to edit /usr/local/etc/redis.conf and change logfile to something else. I set mine to:
logfile /var/log/redis-server.log
You'll also want to make sure the user that runs redis has write permission to the logfile, or redis will simply fail to launch. Then just restart redis:
brew services restart redis
After restarting, it'll take a while for the error to show up in the logs, because it only happens after redis fails its timed flushes. You should see something like:
[7051] 29 Dec 02:37:47.164 # Background saving error
[7051] 29 Dec 02:37:53.009 * 10 changes in 300 seconds. Saving...
[7051] 29 Dec 02:37:53.010 * Background saving started by pid 7274
[7274] 29 Dec 02:37:53.010 # Failed opening .rdb for saving: Permission denied
After a brew install, redis attempts to save to /usr/local/var/db/redis/, and since redis is probably running as your current user and not root, it can't write there. Once redis has permission to write to the directory, your logfile will say:
[7051] 29 Dec 03:08:59.098 * 1 changes in 900 seconds. Saving...
[7051] 29 Dec 03:08:59.098 * Background saving started by pid 8833
[8833] 29 Dec 03:08:59.099 * DB saved on disk
[7051] 29 Dec 03:08:59.200 * Background saving terminated with success
and the stop-writes-on-bgsave-error error will no longer get raised.
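One way to grant the write permission mentioned above, assuming the brew default path and that redis runs as your login user (a sketch, not the only possible fix):

sudo chown -R $(whoami) /usr/local/var/db/redis/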
I guess it is a bit late to add an answer here, but I stumbled on this question because I had the same error. I solved it by changing the dir variable in my redis.conf like this:
# The filename where to dump the DB
dbfilename dump.rdb
# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir /root/path/to/dir/with/write/access/
The default value is ./, so depending on how you launch your redis server you might not be able to save snapshots.
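After editing, restart redis and confirm the new directory took effect (a quick check):

redis-cli CONFIG GET dir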
Hope it helps someone!
In my case I resolved this issue with the steps below.
Cause: By default redis stores data at ./, and if redis runs as the redis user it will not be able to write there, so you will see the above error.
Resolution :
Step 1: Enter a valid location where redis can write:
root@fpe:/var/lib/redis# vim /etc/redis/redis.conf
dir /var/lib/redis # (the redis user must have write permission to this location)
Step 2: Connect to the redis CLI, point the running instance at the writable directory, and trigger a save:
127.0.0.1:6379> CONFIG SET dir "/var/lib/redis"
127.0.0.1:6379> BGSAVE
This will enable redis to write data to the dump file. Note that CONFIG SET only affects the running instance; keep the dir setting in redis.conf so it survives a restart.
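To confirm the background save now succeeds, check the timestamp of the last successful save (a quick check; it should advance once the save completes):

127.0.0.1:6379> LASTSAVE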
Going through the github discussion, the proposed solution is to run
config set stop-writes-on-bgsave-error no
in the redis-cli. Note that this only stops redis from rejecting writes when a background save fails; it does not fix the underlying save error. Here's the link:
https://github.com/redis/redis/issues/584#issuecomment-11416418
Steps to fix this error:
Open the redis CLI by typing redis-cli
127.0.0.1:6379> config set stop-writes-on-bgsave-error no
After that, try to set a key and value:
127.0.0.1:6379> set test_key 'Test Value'
127.0.0.1:6379> get test_key
"Test Value"
Check the following places:
/usr/local/Cellar/redis...
/usr/local/var/log/redis.log
/usr/local/etc/redis.conf
This error often indicates a write-permission issue; make sure your RDB directory is writable.
It is usually because of permission limits. In my case, redis had disabled writes. You can run redis-cli in the shell and then run the following command:
config set stop-writes-on-bgsave-error no

socket: Too many open files (24) Apache bench lighttpd

When I start an Apache Bench test:
ab -n 10000 -c 1300 http://example.com/test.php
I get the error:
socket: Too many open files (24)
When I change it to '-c 1000' it works fine.
Because I can have more than 1000 concurrent users, I would like to fix the "too many open files" problem or increase the limit. How do I do this, and where?
I use lighttpd on CentOS 5.
ulimit -n 10000
That might not work depending on your system configuration.
Consult this to configure your system.
To permanently change the max open files limit, you should modify /etc/security/limits.conf and reboot the system:
echo -ne "
* soft nofile 65536
* hard nofile 65536
" >> /etc/security/limits.conf
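>/etc/security">
After logging back in (or rebooting), the new limit can be verified (a quick check):

ulimit -n   # should print 65536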
Check out the documentation for lighty. You might have to set the server.max-fds option. Also server.max-connections should be changed accordingly (again, see the documentation).
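A sketch of the relevant lighttpd.conf lines (the values are illustrative, not recommendations):

# Raise the file-descriptor budget available to lighttpd.
server.max-fds = 16384
# Cap concurrent connections accordingly (lighttpd suggests roughly half of max-fds).
server.max-connections = 8192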