How to change block size on XFS

Situation: The server uses an XFS filesystem with a 4 kB block size. There are a lot of small files.
Result: Some directories take up 2+ GB of space, but the actual file size is less than 200 MB.
Solution: Change the XFS block size to the smallest possible, 512 B. I am aware this means more overhead and some performance loss, but I haven't been able to find out how much.
Question: How to do it?
I am aware that XFS uses xfsdump to back up data. So let's presume /dev/sda2 is the actual XFS filesystem I want to change and /mnt/export is where I want to dump it using xfsdump. The manual says that the dumped block size has to be the same as the restored block size, and that the -b parameter "Specifies the blocksize, in bytes, to be used for the dump." I am a bit worried because the manual also says "The default block size is 1Mb".
Is this then the correct way to dump my complete filesystem (in this case, /dev/sda2 is mounted on /home)?
xfsdump -b 512 -f /mnt/export /home
If the above command is correct, what do I need to do to correctly get my files back, only with a 512 B block size? My guess is to reformat /dev/sda2 with a 512 B block size using
mkfs.xfs -b size=512 /dev/sda2
and then use
xfsrestore -f /mnt/export /home
but I am not sure, and there isn't a good way to test and see if I am right.

So I did more research and came up with this:
xfsdump -f /mnt/export_file /home
umount /home
mkfs.xfs -b size=1024 /dev/sda2 -f
# Minimum block size for CRC enabled filesystems is 1024 bytes.
mount /home
xfsrestore -f /mnt/export_file /home
As you can see, I was unable to change the block size to 512 B because of the minimum requirement for CRC-enabled filesystems, but otherwise it was a complete success.
I didn't consider that I had to unmount the filesystem first. You won't be able to do that if you are logged in as someone whose home directory is in /home, so log in as root.
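Once the restore finishes, it is worth confirming that the filesystem really has the new block size and comparing allocated versus apparent space for the small files (note that xfsdump's -b sets the dump media block size, not the filesystem block size, which is why the working sequence above sets the block size in mkfs.xfs instead). A quick sketch; the exact xfs_info output layout varies between xfsprogs versions:
# the data section should now report bsize=1024
xfs_info /home | grep bsize
# allocated space vs. the sum of the file sizes
du -sh /home
du -sh --apparent-size /home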

Related

GnuCOBOL compiler cobc does not work correctly with the -I option for NFS-mounted disks

I am trying to compile a .cbl file that includes statements like
01 WS02-RETURN-STATUS COPY "boolind.ef"..
COPY "boolind.va".
Both files (boolind.ef and boolind.va) are NOT located on a local disk:
df -k /clchome/clc/ccclc/mb_ccclc/proj/clb134/clcgdd
Filesystem 1K-blocks Used Available Use% Mounted on
isiloncorp1:/dev_storage 1073741824 373998592 699743232 35% /AMD/DEV/storage
When I run the compilation like
cobc -c fcd_demo.cbl -I/clchome/clc/ccclc/mb_ccclc/proj/clb134/clcgdd
I receive an error like
fcd_demo.cbl:57: error: boolind.ef: No such file or directory
but when I copy the file boolind.ef to a local directory or to /tmp,
it works fine.
How can I use "include" on a non-local disk?
Could you help?
I have tried to use the -ext option...
Unfortunately it does not help.
When I try to add something like -ext=ef I receive the same compilation error.
Interestingly, it happens only when the copybook files are located on an NFS disk.
When I copy them to a local disk it works well.
I mean: cobc -c myfile.cbl -I/tmp/ttt
Maybe there is some option that allows the use of NFS?
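For what it's worth, a couple of things that might help narrow this down (a sketch, not a confirmed fix): first check that the copybooks are actually readable over NFS by the user running cobc, then try pointing GnuCOBOL's copybook search path at the NFS directory through the COBCPY environment variable instead of -I. The path below is the one from the question.
# confirm the copybooks are visible from this host as the compiling user
ls -l /clchome/clc/ccclc/mb_ccclc/proj/clb134/clcgdd/boolind.ef
# point the copybook search path at the NFS directory and recompile
export COBCPY=/clchome/clc/ccclc/mb_ccclc/proj/clb134/clcgdd
cobc -c fcd_demo.cbl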

Icache and Dcache in Simple.py configuration of gem5

I am trying to understand the models generated using gem5. I ran build/X86/gem5.opt with the gem5/configs/learning_gem5/part1/simple.py configuration file provided in the gem5 repo.
In the output directory I get a .dot graph of the simulated system.
I have the following doubts:
Does this design not have any Instruction and Data Cache? I checked the config.ini file and there were no configuration statistics such as ICache/Dcache size.
What is the purpose of adding the icache_port and dcache_port?
system.cpu.icache_port = system.membus.slave
system.cpu.dcache_port = system.membus.slave
Does this design not have any Instruction and Data Cache? I checked the config.ini file and there were no configuration statistics such as ICache/Dcache size.
I'm not very familiar with that config, but unless caches were added explicitly somewhere, there aren't any caches.
Just compare it to an se.py run e.g.:
build/ARM/gem5.opt configs/example/se.py --cmd hello.out \
--caches --l2cache --l1d_size=64kB --l1i_size=64kB --l2_size=256kB
which definitely has caches; e.g., the config.ini at gem5 commit 211869ea950f3cc3116655f06b1d46d3fa39fb3a contains:
[system.cpu.dcache]
size=65536
What is the purpose of adding the icache_port and dcache_port?
I'm not very familiar with the port system.
I think ports are used as a way for components to communicate, often in master/slave pairs, e.g. the CPU is a master and the cache is a slave. So here I think the CPU port is there, but there is nothing attached to it, so there are no caches.
For example, the above se.py run shows this clearly.
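A quick way to check whether a particular run actually instantiated caches is to look for cache sections in the generated config.ini (a sketch; m5out is gem5's default output directory, so adjust the path if you used --outdir):
# list any cache objects that were actually created
grep -n '^\[system\.cpu\..*cache\]' m5out/config.ini
# show the dcache parameters if present, as in the se.py example above
grep -A 5 '^\[system\.cpu\.dcache\]' m5out/config.ini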

rabbitmq server disk space alert

I have been receiving alerts regarding disk space utilization and would like to increase the disk space, but I am not sure where the increased usage occurs. The following alert appears:
'Rabbit-Disk-Alert' threshold
Description: Average Disk utilization during the past 15 minutes exceeds
75%
Now when I log on to the server and run
df -h
it shows the drive that is getting full, but I do not know how to find the directory or files that are causing this issue. Is there a way to diagnose this or determine the root cause of the alert?
Use the du command to find the directory taking up space:
cd /var
du -hs *
More than likely a sub-directory of /var is the culprit. The above command will show you which sub-directory (or directories) takes up the most space, and you can change to those directories, re-run du -hs *, and continue "downward".
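If you would rather see the biggest offenders in one shot instead of drilling down level by level, something like this also works (a sketch assuming GNU du and sort; -x keeps du from crossing into other mounted filesystems):
# largest directories under /var, biggest first
du -xh /var 2>/dev/null | sort -rh | head -n 20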

rsync "failed to set times on "XYZ": No such files or directory (2)

I have a Dlink NAS (dns-323) in RAID1 that I use to backup family photos, videos and some other data. I also manually rsync to a dedicated backup drive on a little Atom Linux box whenever we add a lot of new files to the NAS. I finally lost a drive on the NAS and through a misstep of my own, also lost the entire volume. No problem, that's what the backup drive is for. I used the same rsync command in reverse to restore files to the NAS after I replaced the bad drive and created a new RAID volume. This worked well, except that after the command finished, I noticed that it did not preserve timestamps. Timestamps were preserved in the NAS->backup direction, but not the backup->NAS direction.
I run the rsync command on the Atom Linux box with these options (this does preserve timestamps):
rsync --archive --human-readable --inplace --numeric-ids --delete /mnt/dns-323 /mnt/dlink_backup --progress --verbose --itemize-changes
The reverse command to restore the volume from the backup (which did not preserve timestamps) is very similar:
rsync --archive --human-readable --inplace --numeric-ids --delete /mnt/dlink_backup/dns-323/ /mnt/dns-323/ --progress --verbose --itemize-changes
which actually restores the files, but gives many errors like:
rsync: failed to set times on "/mnt/dns-323/Rich/Code/.emacs": No such file or directory (2)
I've been googling most of the afternoon and trying different things, but so far haven't solved my problem. I used the 'touch' command to successfully modify the times of one or two files on the NAS, just to prove that it can be done since I believe that is one thing that rsync must do. I've tried doing this as my user and as root. By this I mean that I've run sudo rsync ..... as well as rsync --rsync-path='/usr/bin/sudo /usr/bin/rsync' ..... where ..... is all of the previously mentioned parameters. My /etc/fstab has these entries for the NAS and the backup drive, respectively:
# the dns-323
//192.168.1.202/Volume_1 /mnt/dns-323 cifs guest,rw,uid=1000,gid=1000,nounix,iocharset=utf8,file_mode=0777,dir_mode=0777 0 0
# the dlink_backup drive
/dev/sdb /mnt/dlink_backup ext3 defaults 0 0
It's not absolutely critical to preserve timestamps if it just plain can't be done, but it seems like it should be possible - I'm just stumped.
Thanks in advance. Let me know if I can provide any additional information.
I've earned my "tumbleweed" badge as a result of this one. pats self on back
What I've learned, and my solution:
1) Removed the left hard drive from the dns-323, which is half of the RAID1 volume.
2) Mounted (ext3) this drive using a USB-to-SATA adapter on the machine where I run rsync.
3) Performed the rsync command for the restore outlined above, except that I removed the --delete option, which really shouldn't be there, and added the --size-only option. With --size-only, timestamps were essentially the only thing that got updated, since the files themselves had already been restored properly (see the sketch after these steps).
4) Unmounted the left drive from the Atom machine and returned it to the dns-323, while also removing the right drive. The right drive needs to be removed so that the dns-323 recognizes that the RAID volume is degraded.
5) Re-added the right drive to the dns-323 and told it to rebuild the RAID volume.
6) All timestamps are now good.
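The rsync invocation for that timestamp-fix pass in step 3 would look roughly like this (a sketch; /mnt/left_drive is a made-up mount point for the directly attached left drive):
# same restore as before, but without --delete and with --size-only,
# so only metadata (timestamps) gets updated on already-restored files
rsync --archive --human-readable --inplace --numeric-ids --size-only \
    --progress --verbose --itemize-changes \
    /mnt/dlink_backup/dns-323/ /mnt/left_drive/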
A possible alternate solution:
I've read enough about rsync and NFS/Samba/CIFS now to understand that this problem is likely related to permissions on the file server (the dns-323). Internally, the user/group IDs on the dns-323 are 501/501. No permutation of how I mounted the dns-323 on the Atom box would allow rsync to properly set timestamps. I do believe that changing my user account on the Atom box to have a uid/gid of 501/501 would have worked, though. My user had the default 1000/1000 and root had 0/0, IIRC.
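For completeness, the uid/gid idea in the last paragraph would look something like this (untested, and the nasuser/nasgroup names are made up): create a local account matching the NAS-internal 501/501 IDs and run the restore as that user.
# create a local user and group with the NAS-internal IDs
sudo groupadd -g 501 nasgroup
sudo useradd -u 501 -g 501 -M -s /bin/bash nasuser
# rerun the restore as that user so writes arrive with uid/gid 501
sudo -u nasuser rsync --archive --size-only /mnt/dlink_backup/dns-323/ /mnt/dns-323/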

Setting memory consumption limits with Upstart

I've recently become quite fond of Upstart. Previously I was using God, Monit and Bluepill, but I don't really like those solutions, so I'm giving Upstart a try.
I've been using the Foreman gem to generate some basic Upstart configuration files for my processes in /etc/init. However, these generated files only handle the respawning of a crashed process. I was wondering whether it's possible to tell Upstart to restart a process that's consuming, for example, more than 150 MB of memory, as you would with Monit, God or Bluepill.
I read through the Upstart docs and this looks like the thing I'm looking for, though I have no clue how to configure something like this.
What I basically want is quite simple: I want to restart my web process if its memory usage exceeds 150 MB of RAM. These are the files I have:
|-- myapp-web-1.conf
|-- myapp-web-2.conf
|-- myapp-web-3.conf
|-- myapp-web.conf
|-- myapp.conf
And their contents are:
myapp.conf
pre-start script
bash << "EOF"
mkdir -p /var/log/myapp
chown -R deployer /var/log/myapp
EOF
end script
myapp-web.conf
start on starting myapp
stop on stopping myapp
myapp-web-1.conf / myapp-web-2.conf / myapp-web-3.conf
start on starting myapp-web
stop on stopping myapp-web
respawn
exec su - deployer -c 'cd /var/applications/releases/20110607140607; cd myapp && bundle exec unicorn -p $PORT >> /var/log/myapp/web-1.log 2>&1'
Any help much appreciated!
Appending this to the end of myapp-web-*.conf will cause any allocation calls trying to allocate more than 150 MB of memory to return ENOMEM:
limit rss 157286400 157286400
The process might crash at this point, or it might not. That's up to the process!
Here's a test for this in the Upstart Source.
From the Upstart docs, the limits come from the rlimit system call options. (http://upstart.ubuntu.com/cookbook/#limit)
Since Linux 2.4, setting the rss (resident set size) limit has no effect.
An alternative, already suggested in other answers, is as, which sets the virtual memory (address space) size limit. This has a very different effect from setting a 'real' (resident) memory limit.
limit as <soft limit> <hard limit>
Excerpt from man pages for setrlimit:
RLIMIT_AS
The maximum size of the process's virtual memory (address space) in bytes. This limit affects calls to brk(2), mmap(2), and mremap(2), which fail with the error ENOMEM upon exceeding this limit. Also automatic stack expansion will fail (and generate a SIGSEGV that kills the process if no alternate stack has been made available via sigaltstack(2)). Since the value is a long, on machines with a 32-bit long either this limit is at most 2 GiB, or this resource is unlimited.
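One way to sanity-check that the limit stanza actually applied is to look at the limits the kernel enforces on the running process (a sketch; the job name matches the example files above, and 1234 stands in for the real PID):
# Upstart reports the tracked PID, e.g. "myapp-web-1 start/running, process 1234"
initctl status myapp-web-1
# the "Max address space" line should show the soft/hard values from the conf
grep -i "address space" /proc/1234/limits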