Where is the default max locked memory value coming from? - apc

So on one system, I have values that are pretty wide open:
$ ulimit -a | grep mem
max locked memory (kbytes, -l) 40000
max memory size (kbytes, -m) unlimited
virtual memory (kbytes, -v) unlimited
Another system has much more restrictive values, but I can't for the life of me find out where the 32MB upper limit (it is 32MB despite the mislabeling) is being set:
# ulimit -a | grep mem
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
virtual memory (kbytes, -v) unlimited
The second system is a RHEL 5.5 box. I am looking to increase this limit for at least one user: I need a bigger APC mmap memory allocation, but I can't go above 30 MB without running into the above limit, and I would rather not hack the provided Apache init script. Where should I be trying to override the system default value so I can map a bigger segment of memory? Setting it in limits.conf for the apache user doesn't do a whole lot, probably because the init script doesn't do anything through PAM.

If the per-user setting you tried isn't working, you should first make sure that you've correctly identified which user is hitting the limit.
You should also be able to add a line like this to limits.conf:
* hard memlock 40000
That'll change the default setting for all users.
From the limits.conf manpage:
The syntax of the lines is as follows:
<domain> <type> <item> <value>
The fields listed above should be filled as follows:
<domain>
[snip]
· the wildcard *, for default entry.
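For example, either a wildcard entry or one scoped to the account that actually runs the workers (apache here is an assumption about your setup; values are in KB):

# /etc/security/limits.conf
*        soft    memlock    40000
*        hard    memlock    40000
# or scoped to a single account instead of the wildcard:
apache   soft    memlock    40000
apache   hard    memlock    40000

Then check what a fresh PAM session for that user actually gets, e.g. with su -s /bin/bash -c 'ulimit -l' apache. Keep in mind the caveat you already ran into: a process started from an init script that never goes through PAM won't pick these limits up.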

How to increase the flashing speed using SAM-BA?

I am trying to flash my Yocto-generated image into a SAM9X60-EK board, but it's insanely slow. I think it's because of the write buffer size, which is only 8KB (and the image is around 150MB), as shown below:
$ sam-ba -p serial -b sam9x60-ek -a nandflash -c write:microchip-headless-image-sam9x60ek.ubi:0x800000
Output:
Opening serial port 'ttyACM0' Connection opened.
Detected memory size is 536870912 bytes.
Page size is 4096 bytes.
Buffer is 8192 bytes (2 pages) at address 0x0030aca0.
NAND header value is 0xc1e04e07.
Supported erase block size: 256KB
and then the write progress:
Wrote 8192 bytes at address 0x00800000 (0.01%)
Wrote 8192 bytes at address 0x00802000 (0.01%)
…
…
Wrote 8192 bytes at address 0x0015c000 (99.83%)
Wrote 8192 bytes at address 0x0015e000 (100%)
Is there any way to make this process faster?
You can try running this command before burning:
sam-ba -p serial -b sam9x60-ek -a lowlevel
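That is, run the lowlevel applet first and then your original write command:

sam-ba -p serial -b sam9x60-ek -a lowlevel
sam-ba -p serial -b sam9x60-ek -a nandflash -c write:microchip-headless-image-sam9x60ek.ubi:0x800000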

Why is df command showing incorrect size?

I am using a 64 MB QSPI flash formatted into a few UBI partitions.
df is the busybox 1.27.2 applet.
Here is what it reports:
~ # df -h
Filesystem Size Used Available Use% Mounted on
/dev/ubi0_0 3.1T 1.9T 1.2T 63% /
/dev/ubi1_0 1.6T 21.8G 1.5T 1% /conf
But obviously the sizes can't be right! Anyway, the Use% column seems to be correct, since the files contained in the partitions weigh only a few MB.
How do you explain that?
I have been able to fix the issue.
Busybox 1.28.0 commit d1535216 substitutes the use of statfs with statvfs (https://github.com/mirror/busybox/commit/d1535216ca27047e3962d61b975bd2a638aa45a2).
I applied that commit to my project, which uses Busybox 1.27.2, and now the sizes are correct!
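In case it helps anyone else, one way to backport it (a sketch, assuming you build from a plain busybox 1.27.2 source tree; in a Buildroot/Yocto setup you would carry it as a patch file instead):

# download the upstream commit as a patch and apply it to the 1.27.2 tree
curl -LO https://github.com/mirror/busybox/commit/d1535216ca27047e3962d61b975bd2a638aa45a2.patch
cd busybox-1.27.2
patch -p1 < ../d1535216ca27047e3962d61b975bd2a638aa45a2.patch
make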
Thanks anyway.

Why received ZFS dataset uses less space than original?

I have a dataset on server1 that I want to back up to a second server, server2.
Server1 (original):
zfs list -o name,used,avail,refer,creation,usedds,usedsnap,origin,compression,compressratio,refcompressratio,mounted,atime,lused storage/iscsi/webhost-old produces:
NAME USED AVAIL REFER CREATION USEDDS USEDSNAP ORIGIN COMPRESS RATIO REFRATIO MOUNTED ATIME LUSED
storage/iscsi/webhost-old 67,8G 1,87T 67,8G Út kvě 31 6:54 2016 67,8G 16K - lz4 1.00x 1.00x - - 67,4G
Sending volume to the 2nd server:
zfs send storage/iscsi/webhost-old | pv | ssh -c arcfour,aes128-gcm@openssh.com root@10.0.0.2 zfs receive -Fduv pool/bkp-storage
received 69,6GB stream in 378 seconds (189MB/sec)
Server2 zfs list produces:
NAME USED AVAIL REFER CREATION USEDDS USEDSNAP ORIGIN COMPRESS RATIO REFRATIO MOUNTED ATIME LUSED
pool/bkp-storage/iscsi/webhost-old 36,1G 3,01T 36,1G Pá pro 29 10:25 2017 36,1G 0 - lz4 1.15x 1.15x - - 28,4G
Why is there such a difference in sizes? Thanks.
From what you posted, I noticed 3 things that seemed odd:
the compressratio is 1.15x on system 2, but 1.00x on system 1
on system 2, used is about 1.27x logicalused
the logicalused value and the size reported by zfs receive are ~2.3x higher on system 1 than on system 2
These terms are all defined in the man page, but it's still confusing to reverse-engineer what actually happened in practice.
(1) could happen if you enabled compression on the source dataset after you wrote all the data to it, since ZFS doesn't rewrite the data to compress it when you enable that setting. The data sent by zfs send is uncompressed unless you use -c, but system 2 will try to compress it as it runs zfs receive if the setting is enabled on the destination dataset. If both system 1 and system 2 had the same compression settings before the data was written, they would have the same compressratio as well.
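If you want to check that, a couple of commands worth comparing on both sides (dataset names taken from your output; zfs send -c requires a ZFS version that supports compressed send):

# on server1
zfs get compression,compressratio,used,logicalused storage/iscsi/webhost-old
# on server2
zfs get compression,compressratio,used,logicalused pool/bkp-storage/iscsi/webhost-old
# to send the blocks as they are stored (compressed) instead of re-expanding them:
zfs send -c storage/iscsi/webhost-old | ssh root@10.0.0.2 zfs receive -Fduv pool/bkp-storage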
(2) can happen due to metadata written along with your data, but in this case it's too high for "normal" metadata, which accounts for 1-2% of most pools. It's probably caused by a pool-wide setting, like configuring RAID-Z, or a weird combination of striping and mirroring (like 4 stripes, but with one of them being a mirror).
For (3), I re-read the man page to try to figure it out:
logicalused
The amount of space that is "logically" consumed by this dataset and
all its descendents. See the used property. The logical space
ignores the effect of the compression and copies properties, giving a
quantity closer to the amount of data that applications see.
If you were sending a dataset (instead of a single iSCSI volume) and the send size matched system 2's logicalused value (instead of system 1's), I would guess you forgot to send some child datasets (i.e. by using zfs send -R). However, neither of those are true in this case.
I had to do some additional digging -- this blog post from 2005 might contain the explanation. If system 1 didn't have compression enabled when the data was written (like I guessed above for (1)), the function responsible for not writing zeroed-out blocks (zio_compress_data) would not be run, so you probably have a bunch of empty blocks written to disk, and accounted for in the logicalused size. However, since lz4 is configured on system 2, it would run there, and those blocks would not be counted.
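If you want to see that effect yourself, a rough experiment (dataset names are made up, and the default mountpoints under /pool are assumed):

# without compression, zero-filled blocks are really allocated on disk...
zfs create -o compression=off pool/zerotest-off
dd if=/dev/zero of=/pool/zerotest-off/zeros bs=1M count=1024
# ...with lz4 they should be detected as all-zero and turned into holes
zfs create -o compression=lz4 pool/zerotest-lz4
dd if=/dev/zero of=/pool/zerotest-lz4/zeros bs=1M count=1024
sync
zfs get used,logicalused pool/zerotest-off pool/zerotest-lz4
# cleanup
zfs destroy pool/zerotest-off
zfs destroy pool/zerotest-lz4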

Why MaxNewSize is larger than MaxHeapSize in JVM?

I only set up some of the JVM configuration at startup: -Xmx1024m -Xms1024m -XX:MaxPermSize=128m -XX:PermSize=128m
From HotSpot sources:
product(uintx, MaxNewSize, max_uintx, \
"Maximum new generation size (in bytes), max_uintx means set " \
"ergonomically") \
Since you haven't set MaxNewSize explicitly, the default value (max_uintx) is used, and that value is treated specially.
Anyway, the MaxNewSize value is only a hint, while NewSize holds the real size of the young generation.
The ratio of the young generation to the old generation is controlled by NewRatio. So even though MaxNewSize > MaxHeapSize, with NewRatio=2 the effective split is 1:2: the old generation occupies 2/3 of the heap while the new generation occupies 1/3.
In your case, that is 2/3 × 1024 ≈ 683M for the old space and 1/3 × 1024 ≈ 341M for the new space.
The MaxNewSize limit would only kick in if it were lower than the size implied by NewRatio. Think of these as multiple, independent knobs for configuring memory: the JVM will choose a setting that conforms to all of them.
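If you want to see what the JVM actually settles on for your flags, one way (assuming a HotSpot JVM that supports -XX:+PrintFlagsFinal) is:

# look for NewSize, MaxNewSize, NewRatio and MaxHeapSize in the output
java -Xmx1024m -Xms1024m -XX:+PrintFlagsFinal -version | egrep 'MaxHeapSize|NewSize|NewRatio'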

Redis - why is redis-server decreasing in memory?

I'm running Redis on Windows and have noticed that the memory used by redis-server.exe decreases over time. When I open Redis, it reads from a dump file and loads all of the hashed key values into memory, up to about 1.4 GB. However, over time, the amount of memory that redis-server.exe takes up decreases. I have seen it go down to less than 100 MB.
The only reason I could see for this happening is that the keys are expiring and leaving memory; however, I have set Redis up so that they never expire. I have also made sure that I have given it enough memory.
Some of my settings include:
maxmemory 2gb
maxmemory-policy noeviction
hash-max-zipmap-entries 512
hash-max-zipmap-value 64
activerehashing no
If it's of interest, when I first loaded the keys into Redis, I did it through Python like so:
r.hset(key, field, value)
Any help would be appreciated. I want the keys to be there forever.
This is my output from the INFO command right after I first run it:
redis 127.0.0.1:6379> INFO
redis_version:2.4.6
redis_git_sha1:26cdd13a
redis_git_dirty:0
arch_bits:64
multiplexing_api:winsock2
gcc_version:4.6.1
process_id:9092
uptime_in_seconds:69
uptime_in_days:0
lru_clock:248011
used_cpu_sys:3.34
used_cpu_user:10.06
used_cpu_sys_children:0.00
used_cpu_user_children:0.00
connected_clients:1
connected_slaves:0
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
used_memory:1129560232
used_memory_human:1.05G
used_memory_rss:1129560232
used_memory_peak:1129560144
used_memory_peak_human:1.05G
mem_fragmentation_ratio:1.00
mem_allocator:libc
loading:0
aof_enabled:0
changes_since_last_save:0
bgsave_in_progress:0
last_save_time:1386600366
bgrewriteaof_in_progress:0
total_connections_received:1
total_commands_processed:0
expired_keys:0
evicted_keys:0
keyspace_hits:0
keyspace_misses:0
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:0
vm_enabled:0
role:master
db0:keys=4007989,expires=0
After running it again, once I noticed in Windows Task Manager that the memory had decreased, there are not many differences:
uptime_in_seconds:4412 (from 69)
lru_clock:248445 (from 248011)
used_cpu_sys:4.59 (from 3.34)
used_cpu_user:10.25 (from 10.06)
used_memory:1129561240 (from 1129560232)
used_memory_human:1.05G (same!)
used_memory_rss:1129561240 (from 1129560232)
used_memory_peak:1129568960 (from 1129560144)
used_memory_peak_human:1.05G (same!)
mem_fragmentation_ratio:1.00 (same!)
last_save_time:1386600366 (same!)
total_connections_received:4 (from 1)
total_commands_processed:10 (from 0)
expired_keys:0 (same!)
evicted_keys:0 (same!)
keyspace_hits:0 (same!)
keyspace_misses:2 (from 0)
The lookups are taking longer when the memory size is lower. What is going on here?
What version of Redis are you using?
Do you have a cron job of some sort that removes keys? (Do a grep for the DEL command on your codebase just to be sure.)
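For example, something along these lines (the path is just a placeholder):

grep -rniE '\b(del|expire|expireat)\b' /path/to/your/codebase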
Redis usually runs a single process to manage the in-memory data. However, when the data is persisted to the RDB file, a second process starts to save all the data. During that process, you can see your memory use increase up to double the size of your data set.
I am familiar with how it is done on Linux, but I don't know the details of the Windows port, so maybe the differences in size you are seeing are caused by this second process that is launched periodically. You can easily test whether this is the case by issuing a BGSAVE command in Redis. This will start the synchronization of data to the RDB file in the background, so you can check whether the memory usage pattern matches the one you observed.
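A quick way to try that, assuming redis-cli is available (on Windows you can simply read the full INFO output instead of piping it through grep):

redis-cli BGSAVE
redis-cli INFO | grep -E 'bgsave_in_progress|used_memory'
# watch redis-server.exe in Task Manager while the background save runs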
If that is the case, then you already know what is going on :)
Good luck!