RabbitMQ server disk space alert

I have been receiving alerts regarding the disk space utilization and would like to increase disk space but not sure where the increased usage occurs. The following alert appears:
'Rabbit-Disk-Alert' threshold
Description: Average Disk utilization during the past 15 minutes exceeds 75%
Now when I log on to the server and run
df -h
It shows the drive that is getting full but I do not know how to find the directory or files that are causing this issue. Is there a way to diagnose this or determine the root of the alert?

Use the du command to find the directory taking up space:
cd /var
du -hs *
More than likely a sub-directory of /var is the culprit. The above command will show you which sub-directory (or directories) takes up the most space, and you can change to those directories, re-run du -hs *, and continue "downward".
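As a rough illustration of that drill-down (the directory names below are only examples; sort -h orders the human-readable sizes so the largest entries end up last):
cd /var
du -hs * | sort -h
cd lib          # suppose lib turned out to be the largest entry
du -hs * | sort -h
On a RabbitMQ host specifically, the message store and Mnesia database usually live under /var/lib/rabbitmq by default, so that subtree is worth checking first.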

Related

How to change block size on XFS

Situation: Server uses an XFS filesystem with a 4 kB block size. There are a lot of small files.
Result: Some directories take 2+ GB of space, but the actual file size is less than 200 MB.
Solution: Change the XFS block size to the smallest possible, 512 B. I am aware this means more overhead and some performance loss; I haven't been able to find out how much.
Question: How to do it?
I am aware XFS uses xfsdump to back up data. So let's presume /dev/sda2 is the XFS filesystem I want to change and /mnt/export is where I want to dump it using xfsdump. The manual says that the dumped block size has to be the same as the restored block size, and that the -b parameter "Specifies the blocksize, in bytes, to be used for the dump." I am a bit worried because the manual also says "The default block size is 1Mb".
Is this then the correct way to dump my complete filesystem (in this case, /dev/sda2 is mounted to /home)?
xfsdump -b 512 -f /mnt/export /home
If the above command is correct, what do I need to do to correctly get my files back, only with a 512 B block size? My guess is to reformat /dev/sda2 to a 512 B block size using
mkfs.xfs /dev/sda2 -b 512
and then use
xfsrestore -f /mnt/export /home
but I am not sure, and there isn't a good way to test and see if I am right.
So I did more research and came up with this:
xfsdump -f /mnt/export_file /home
umount /home
mkfs.xfs -b size=1024 /dev/sda2 -f
#Minimum block size for CRC enabled filesystems is 1024 bytes.
mount /home
xfsrestore -f /mnt/export_file /home
As you can see, I was unable to change the block size to 512 B because of the minimum requirement for CRC-enabled filesystems, but otherwise it was a complete success.
I hadn't considered that I would have to unmount the filesystem first. You won't be able to do that if you are logged in as someone whose home directory is in /home, so log in as root.
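Once the filesystem is remounted, a quick way to confirm the new block size (assuming it is mounted at /home, as above):
xfs_info /home
The bsize value in the data line of the output is the filesystem block size in bytes.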

Could not update .ICEauthority file

Recently I changed the permissions of the file system and gave myself all the rights. I logged out of the system and I couldn't log back in. I got the error message
Could not update ICEauthority file /home/marundu/.ICEauthority
I did a live boot with a Fedora 17 disc and replaced my .ICEauthority file with the live-user version and it worked for a time, until I logged out again. Now, the login progress screen is all that shows. I can log into command mode (Ctrl-Alt-F2) but I can't sudo; I get the error messages:
sudo: /usr/libexec/sudoers.so must be only writable by owner
sudo: fatal error, unable to load plugins
I just found a good link on Ubuntu:
Ask Ubuntu: ICEauthority permissions problem
Some things to note:
I tried the obvious things like changing file permissions, but found my whole home directory was somehow owned by root. I believe this was due to a failed package update.
I used a recovery disk (Knoppix ISO) for ease of use: it has a better UI.
When mounting the bad home partition, I used the most common Linux filesystem type (ext4).
I used 'sudo mount -o rw -t ext4 /dev/sda1 /mnt'
When changing ownership, I used the numeric user:group specification, since the recovery disk doesn't have the symbolic users and groups: 'sudo chown -R 1000:1000 /mnt/home/userdir'
I verified that /home/userdir had rwx for owner, r-x for group / other. This is noted as a valid set of permissions for ICEauthority; others can work. See the linked discussion.
Hope that helps someone...
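Putting those notes together, the whole recovery from a rescue session looks roughly like this sketch (the device name, mount point and the 1000:1000 uid/gid are taken from above and may well differ on your system):
sudo mount -o rw -t ext4 /dev/sda1 /mnt       # mount the affected partition read-write
sudo chown -R 1000:1000 /mnt/home/userdir     # numeric IDs, since the rescue system lacks your users
sudo chmod 755 /mnt/home/userdir              # rwx for owner, r-x for group/other
sudo umount /mnt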
I got the "Could not update ICEauthority file" error and found that my home partition was in "read-only" mode, so this error made sense.
The real question was what caused the "read-only" attribute on the partition. I ran "dmesg | grep -i read-only" and found that there were serious errors with the filesystem on my home partition, which the kernel had set to read-only during the boot process.
I then booted from a USB key (a CD-ROM would do as well) and ran "sudo fsck /dev/sdXY", where /dev/sdXY is the partition containing my home directory. fsck corrected a number of filesystem errors on my home partition.
I then rebooted after removing the USB key/CD-ROM and the problem went away.
Bottom line: Check if your home partition has file system errors. They might be the cause of this error. If so, run fsck from an external device on the partition containing your home directory.
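In practice the check and the fix look like this (the /dev/sdXY placeholder is as above; run fsck only from a live USB/CD while the partition is not mounted):
dmesg | grep -i read-only       # kernel messages about the filesystem being remounted read-only
findmnt -no OPTIONS /home       # lists "ro" among the options if /home is mounted read-only
sudo fsck /dev/sdXY             # from the live system, with the partition unmounted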

rsync "failed to set times on "XYZ": No such files or directory (2)

I have a Dlink NAS (dns-323) in RAID1 that I use to backup family photos, videos and some other data. I also manually rsync to a dedicated backup drive on a little Atom Linux box whenever we add a lot of new files to the NAS. I finally lost a drive on the NAS and through a misstep of my own, also lost the entire volume. No problem, that's what the backup drive is for. I used the same rsync command in reverse to restore files to the NAS after I replaced the bad drive and created a new RAID volume. This worked well, except that after the command finished, I noticed that it did not preserve timestamps. Timestamps were preserved in the NAS->backup direction, but not the backup->NAS direction.
I run the rsync command on the Atom Linux box with these options (this does preserve timestamps):
rsync --archive --human-readable --inplace --numeric-ids --delete /mnt/dns-323 /mnt/dlink_backup --progress --verbose --itemize-changes
The reverse command to restore the volume from the backup (which did not preserve timestamps) is very similar:
rsync --archive --human-readable --inplace --numeric-ids --delete /mnt/dlink_backup/dns-323/ /mnt/dns-323/ --progress --verbose --itemize-changes
which actually restores the files, but gives many errors like:
rsync: failed to set times on "/mnt/dns-323/Rich/Code/.emacs": No such file or directory (2)
I've been googling most of the afternoon and trying different things, but so far haven't solved my problem. I used the 'touch' command to successfully modify the times of one or two files on the NAS, just to prove that it can be done since I believe that is one thing that rsync must do. I've tried doing this as my user and as root. By this I mean that I've run sudo rsync ..... as well as rsync --rsync-path='/usr/bin/sudo /usr/bin/rsync' ..... where ..... is all of the previously mentioned parameters. My /etc/fstab has these entries for the NAS and the backup drive, respectively:
# the dns-323
//192.168.1.202/Volume_1 /mnt/dns-323 cifs guest,rw,uid=1000,gid=1000,nounix,iocharset=utf8,file_mode=0777,dir_mode=0777 0 0
# the dlink_backup drive
/dev/sdb /mnt/dlink_backup ext3 defaults 0 0
It's not absolutely critical to preserve timestamps if it just plain can't be done, but it seems like it should be possible - I'm just stumped.
Thanks in advance. Let me know if I can provide any additional information.
I've earned my "tumbleweed" badge as a result of this one. pats self on back
What I've learned:
My solution:
1) Removed the left hard drive from the dns-323, which is half of the RAID1 volume.
2) Mounted (ext3) this drive using a USB-to-SATA adapter to the machine where I run rsync.
3) Performed the rsync command for the restore outlined above, as shown below. I removed the --delete option, which really shouldn't have been there, and added the --size-only option. With --size-only, timestamps were essentially the only thing that got updated, since the files themselves had already been restored properly.
4) Unmounted the left drive from the Atom machine and returned that drive to the dns-323, while also removing the right drive. The right drive needs to be removed so that the dns-323 recognizes that the RAID volume is degraded.
5) Re-added the right drive to the dns-323 and told it to rebuild the RAID volume.
6) All timestamps are now good.
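For reference, the restore command from step 3 then looks roughly like this (same paths as the original commands; in practice the destination is wherever the pulled drive ends up mounted):
rsync --archive --human-readable --inplace --numeric-ids --size-only /mnt/dlink_backup/dns-323/ /mnt/dns-323/ --progress --verbose --itemize-changes
Because --archive implies --times, rsync still fixes the timestamps on files it otherwise considers up to date, while --size-only keeps it from re-copying the file contents.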
A possible alternate solution:
I've read enough about rsync and NFS/Samba/CIFS now to understand that this problem is most likely related to permissions on the NAS (the dns-323). Internally, the user/group IDs on the dns-323 are 501/501. No permutation of how I mounted the dns-323 on the Atom box would allow rsync to properly set timestamps. I do believe that changing my user account on the Atom box to have a uid/gid of 501/501 would have worked, though. My user had the default 1000/1000 and root had 0/0, IIRC.
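For what it's worth, that untested idea would look roughly like this, with 'rich' as a placeholder username (run it from a root console or another admin account, since it modifies your own account):
sudo usermod -u 501 rich             # change the account's uid to match the NAS
sudo groupmod -g 501 rich            # change the matching group's gid
sudo chown -R 501:501 /home/rich     # fix up ownership of existing local files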

Root space full: how to take a backup?

I'm using Ubuntu and Windows in parallel. On my hard disk I left some space for Windows and some for Linux. Now the disk space is full. How can I transfer some data from the root partition to another drive without affecting any applications? Please suggest the best approach.
I'm attaching a screenshot from the Disk Usage Analyzer.
I experienced the same kind of problem and was able to move /home to a mounted device in two steps after logging in as root (on an Ubuntu 12.04.1 LTS server).
Step 1: Move /home to /mounteddevice/home
Step 2: Update the /etc/passwd file, replacing '/home' with '/mounteddevice/home'
The other directories that consume a lot of space are /lib and /var/lib, and I am still searching for a proper way to move those.
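A minimal sketch of those two steps, run as root (the /mounteddevice path is from the answer above; back up /etc/passwd first and make sure no other users are logged in):
cp /etc/passwd /etc/passwd.bak
mv /home /mounteddevice/home
sed -i 's|:/home/|:/mounteddevice/home/|g' /etc/passwd
A bind mount or a symlink from /home to /mounteddevice/home is a common alternative that avoids editing /etc/passwd at all.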

Setting memory consumption limits with Upstart

I've recently become quite fond of Upstart. Previously I've been using God, Monit and Bluepill but I don't really like these solutions so I'm giving Upstart a try.
I've been using the Foreman gem to generate some basic Upstart configuration files for my processes in /etc/init. However, these generated files only handle respawning a crashed process. I was wondering whether it's possible to tell Upstart to restart a process that is consuming, for example, more than 150 MB of memory, as you would with Monit, God or Bluepill.
I read through the Upstart docs and this looks like the thing I'm looking for, though I have no clue how to configure something like this.
What I basically want is quite simple: I want to restart my web process if its memory usage exceeds 150 MB of RAM. These are the files I have:
|-- myapp-web-1.conf
|-- myapp-web-2.conf
|-- myapp-web-3.conf
|-- myapp-web.conf
|-- myapp.conf
And their contents are:
myapp.conf
pre-start script
bash << "EOF"
mkdir -p /var/log/myapp
chown -R deployer /var/log/myapp
EOF
end script
myapp-web.conf
start on starting myapp
stop on stopping myapp
myapp-web-1.conf / myapp-web-2.conf / myapp-web-3.conf
start on starting myapp-web
stop on stopping myapp-web
respawn
exec su - deployer -c 'cd /var/applications/releases/20110607140607; cd myapp && bundle exec unicorn -p $PORT >> /var/log/myapp/web-1.log 2>&1'
Any help much appreciated!
Appending this to the end of myapp-web-*.conf will cause any allocation calls trying to allocate more than 150 MB of memory to return ENOMEM:
limit rss 157286400 157286400
The process might crash at this point, or it might not. That's up to the process!
Here's a test for this in the Upstart Source.
From the Upstart docs, the limits come from the rlimit system call options. (http://upstart.ubuntu.com/cookbook/#limit)
Since Linux 2.4, setting the rss (resident set size) limit has no effect.
An alternative already suggested in other answers is as, which limits the size of the virtual memory (address space). This has a very different effect from setting a 'real' memory limit.
limit as <soft limit> <hard limit>
Excerpt from man pages for setrlimit:
RLIMIT_AS
The maximum size of the process's virtual memory (address space) in bytes. This limit affects calls to brk(2), mmap(2), and mremap(2), which fail with the error ENOMEM upon exceeding this limit. Also automatic stack expansion will fail (and generate a SIGSEGV that kills the process if no alternate stack has been made available via sigaltstack(2)). Since the value is a long, on machines with a 32-bit long either this limit is at most 2 GiB, or this resource is unlimited.
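Putting the pieces together, a per-worker job such as myapp-web-1.conf would end up looking roughly like this sketch (157286400 bytes is about 150 MB, and the limit applies to address space rather than resident memory):
# myapp-web-1.conf (sketch)
start on starting myapp-web
stop on stopping myapp-web
respawn
# soft and hard address-space limits, in bytes (~150 MB)
limit as 157286400 157286400
exec su - deployer -c 'cd /var/applications/releases/20110607140607; cd myapp && bundle exec unicorn -p $PORT >> /var/log/myapp/web-1.log 2>&1'
As with the rss limit, hitting the limit only makes allocations fail with ENOMEM; whether the worker then exits (and gets respawned) is up to the process.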