Why is the max memory calculated incorrectly by the JVM when using MaxRAMPercentage in an OpenShift environment? [duplicate] - jvm

I have a 64-bit hotspot JDK version 1.7.0 installed on a 64-bit RHEL 6 machine. I use the following JVM options for my tomcat application.
CATALINA_OPTS="${CATALINA_OPTS} -Dfile.encoding=UTF8 -Dorg.apache.catalina.loader.WebappClassLoader.ENABLE_CLEAR_REFERENCES=false -Duser.timezone=EST5EDT"
# General Heap sizing
CATALINA_OPTS="${CATALINA_OPTS} -Xms4096m -Xmx4096m -XX:NewSize=2048m -XX:MaxNewSize=2048m -XX:PermSize=512m -XX:MaxPermSize=512m -XX:+UseCompressedOops -XX:+DisableExplicitGC"
# Enable the CMS GC policy
CATALINA_OPTS="${CATALINA_OPTS} -XX:+UseConcMarkSweepGC -XX:CMSWaitDuration=15000 -XX:+CMSParallelRemarkEnabled -XX:+CMSCompactWhenClearAllSoftRefs -XX:+CMSConcurrentMTEnabled -XX:+CMSScavengeBeforeRemark -XX:+CMSClassUnloadingEnabled"
# Verbose Garbage Collection Logging
CURRENT_DATE=`date +%Y%m%d%H%M%S`
CATALINA_OPTS="${CATALINA_OPTS} -verbose:gc -XX:+PrintGCDetails -Xloggc:${CATALINA_BASE}/logs/gc-${CURRENT_DATE}.log -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution"
When I analyze the garbage collection logs, they show a maximum available heap of only 3.8GB instead of the 4GB allocated to the JVM. Why is that?

With the default -XX:SurvivorRatio=8, the New Generation (2048M) consists of 80% Eden (1638.4M) and two Survivor Spaces (10%, or 204.8M, each):
Heap
par new generation total 1887488K, used 134226K [0x00000006fae00000, 0x000000077ae00000, 0x000000077ae00000)
eden space 1677824K, 8% used [0x00000006fae00000, 0x00000007031148e0, 0x0000000761480000)
from space 209664K, 0% used [0x0000000761480000, 0x0000000761480000, 0x000000076e140000)
to space 209664K, 0% used [0x000000076e140000, 0x000000076e140000, 0x000000077ae00000)
concurrent mark-sweep generation total 2097152K, used 242K [0x000000077ae00000, 0x00000007fae00000, 0x00000007fae00000)
At any time, one of the survivor spaces is empty (see Generations), so it can never hold live data and is excluded from the reported maximum.
So the useful heap size is 1638.4 + 204.8 + 2048 = 3891.2 MB, i.e. roughly the 3.8GB you are seeing.
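You can confirm this from inside the JVM. A minimal sketch (run it with the same flags as above; the exact figure depends on the collector in use): Runtime.maxMemory() reports the heap maximum minus the one survivor space that is always empty.
public class MaxHeap {
    public static void main(String[] args) {
        // With -Xmx4096m and a 2048m new generation, this prints roughly
        // 3891 MB rather than 4096 MB, because Runtime.maxMemory() leaves
        // out the survivor space that can never hold live data.
        long max = Runtime.getRuntime().maxMemory();
        System.out.printf("maxMemory = %.1f MB%n", max / (1024.0 * 1024.0));
    }
}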

Related

jcmd: difference in heap usage numbers reported by GC.heap_info vs GC.class_histogram vs VM.native_memory

Can someone explain the difference in the heap usage numbers reported by the following commands, run against the same program at around the same time?
root@8fd0f20ba530:/# sudo -u xxx jcmd 477 GC.heap_info
477:
PSYoungGen total 934912K, used 221097K [0x0000000670700000, 0x00000006b2180000, 0x00000007c0000000)
eden space 933376K, 23% used [0x0000000670700000,0x000000067dd9dfd8,0x00000006a9680000)
from space 1536K, 86% used [0x00000006b2000000,0x00000006b214c538,0x00000006b2180000)
to space 32256K, 0% used [0x00000006ae280000,0x00000006ae280000,0x00000006b0200000)
ParOldGen total 672256K, used 244738K [0x00000003d1400000, 0x00000003fa480000, 0x0000000670700000)
object space 672256K, 36% used [0x00000003d1400000,0x00000003e0300928,0x00000003fa480000)
Metaspace used 112665K, capacity 119872K, committed 119936K, reserved 1155072K
class space used 13760K, capacity 15212K, committed 15232K, reserved 1048576K
root@8fd0f20ba530:/# sudo -u xxx jcmd 477 VM.native_memory
477:
Native Memory Tracking:
Total: reserved=18811604KB, committed=2677784KB
- Java Heap (reserved=16494592KB, committed=1615360KB)
(mmap: reserved=16494592KB, committed=1615360KB)
- Class (reserved=1167716KB, committed=132580KB)
(classes #21174)
(malloc=12644KB #49240)
(mmap: reserved=1155072KB, committed=119936KB)
- Thread (reserved=217029KB, committed=217029KB)
(thread #211)
(stack: reserved=215880KB, committed=215880KB)
(malloc=715KB #1260)
(arena=434KB #416)
- Code (reserved=266396KB, committed=96164KB)
(malloc=16796KB #20839)
(mmap: reserved=249600KB, committed=79368KB)
- GC (reserved=613051KB, committed=563831KB)
(malloc=10419KB #1196)
(mmap: reserved=602632KB, committed=553412KB)
- Compiler (reserved=721KB, committed=721KB)
(malloc=587KB #1723)
(arena=135KB #7)
- Internal (reserved=19877KB, committed=19877KB)
(malloc=19845KB #29606)
(mmap: reserved=32KB, committed=32KB)
- Symbol (reserved=26534KB, committed=26534KB)
(malloc=22914KB #244319)
(arena=3620KB #1)
- Native Memory Tracking (reserved=5483KB, committed=5483KB)
(malloc=29KB #331)
(tracking overhead=5454KB)
- Arena Chunk (reserved=203KB, committed=203KB)
(malloc=203KB)
root@8fd0f20ba530:/# sudo -u xxx jcmd 477 GC.class_histogram | more
477:
num #instances #bytes class name
----------------------------------------------
1: 159930 57884816 [C
2: 10625 10583856 [B
3: 14628 5228552 [I
4: 134612 4307584 java.util.concurrent.ConcurrentHashMap$Node
.
.
.
9425: 1 16 sun.util.locale.provider.TimeZoneNameUtility$TimeZoneNameGetter
9426: 1 16 sun.util.resources.LocaleData
9427: 1 16 sun.util.resources.LocaleData$LocaleDataResourceBundleControl
Total 1325671 114470096
So, from the above: jcmd GC.class_histogram shows heap used = 114M,
but jcmd GC.heap_info shows heap used = 221M + 244M = 465M and total = 934M + 672M = 1.6G,
and jcmd VM.native_memory shows total committed = 2.6G (of which the Java Heap line is committed = 1.6G).
This is OpenJDK 64-Bit Server VM version 25.292-b10.
Any pointers on which numbers to follow?
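One way to see the biggest gap: GC.heap_info reports everything currently allocated on the heap, including garbage not yet collected, while GC.class_histogram runs a full GC first and counts only live objects; VM.native_memory reports memory the OS has committed, regardless of use. A minimal sketch illustrating the used-vs-live difference (System.gc() stands in for the collection the histogram triggers):
public class HeapUsedVsLive {
    static volatile int sink; // keeps the allocations from being optimized away

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // Allocate a burst of short-lived objects; none stay reachable.
        for (int i = 0; i < 100_000; i++) {
            byte[] garbage = new byte[8 * 1024];
            sink = garbage.length;
        }
        long usedBefore = rt.totalMemory() - rt.freeMemory();
        System.gc(); // roughly what GC.class_histogram does before counting
        long usedAfter = rt.totalMemory() - rt.freeMemory();
        System.out.printf("used before GC: %d MB, live after GC: %d MB%n",
                usedBefore >> 20, usedAfter >> 20);
    }
}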

Cassandra service cannot start on CentOS 6.4, starting process is mysteriously killed

Cassandra version: 2.2
OS: CentOS 6.4
I just followed the documentation here to run:
[user@centOS conf]$ sudo yum install dsc22
[user@centOS conf]$ sudo service cassandra start
Starting Cassandra: OK
But when I check the process status with "ps", there is no Cassandra process running; the starting process is mysteriously killed. (I run the same thing on my Mac and it all works fine, and typing 'cassandra' directly on the CentOS command line also works. Update: I tried "yum install dsc20" and that works fine too...)
'/var/log/cassandra/cassandra.log' looks like the output below: no error message, no warning. Does anyone have any idea about this? Thanks a lot.
tail: /var/log/cassandra/cassandra.log: file truncated
CompilerOracle: inline org/apache/cassandra/db/AbstractNativeCell.compareTo (Lorg/apache/cassandra/db/composites/Composite;)I
CompilerOracle: inline org/apache/cassandra/db/composites/AbstractSimpleCellNameType.compareUnsigned (Lorg/apache/cassandra/db/composites/Composite;Lorg/apache/cassandra/db/composites/Composite;)I
CompilerOracle: inline org/apache/cassandra/io/util/Memory.checkBounds (JJ)V
CompilerOracle: inline org/apache/cassandra/io/util/SafeMemory.checkBounds (JJ)V
CompilerOracle: inline org/apache/cassandra/utils/AsymmetricOrdering.selectBoundary (Lorg/apache/cassandra/utils/AsymmetricOrdering/Op;II)I
CompilerOracle: inline org/apache/cassandra/utils/AsymmetricOrdering.strictnessOfLessThan (Lorg/apache/cassandra/utils/AsymmetricOrdering/Op;)I
CompilerOracle: inline org/apache/cassandra/utils/ByteBufferUtil.compare (Ljava/nio/ByteBuffer;[B)I
CompilerOracle: inline org/apache/cassandra/utils/ByteBufferUtil.compare ([BLjava/nio/ByteBuffer;)I
CompilerOracle: inline org/apache/cassandra/utils/ByteBufferUtil.compareUnsigned (Ljava/nio/ByteBuffer;Ljava/nio/ByteBuffer;)I
CompilerOracle: inline org/apache/cassandra/utils/FastByteOperations$UnsafeOperations.compareTo (Ljava/lang/Object;JILjava/lang/Object;JI)I
CompilerOracle: inline org/apache/cassandra/utils/FastByteOperations$UnsafeOperations.compareTo (Ljava/lang/Object;JILjava/nio/ByteBuffer;)I
CompilerOracle: inline org/apache/cassandra/utils/FastByteOperations$UnsafeOperations.compareTo (Ljava/nio/ByteBuffer;Ljava/nio/ByteBuffer;)I
INFO 23:47:52 Loading settings from file:/etc/cassandra/default.conf/cassandra.yaml
INFO 23:47:52 Node configuration:[authenticator=AllowAllAuthenticator; authorizer=AllowAllAuthorizer; auto_snapshot=true; batch_size_fail_threshold_in_kb=50; batch_size_warn_threshold_in_kb=5; batchlog_replay_throttle_in_kb=1024; cas_contention_timeout_in_ms=1000; client_encryption_options=<REDACTED>; cluster_name=Test Cluster; column_index_size_in_kb=64; commit_failure_policy=stop; commitlog_directory=/data/cassandra/commitlog; commitlog_segment_size_in_mb=32; commitlog_sync=periodic; commitlog_sync_period_in_ms=10000; compaction_large_partition_warning_threshold_mb=100; compaction_throughput_mb_per_sec=16; concurrent_counter_writes=32; concurrent_reads=32; concurrent_writes=32; counter_cache_save_period=7200; counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000; cross_node_timeout=false; data_file_directories=[/data/cassandra/data]; disk_failure_policy=stop; dynamic_snitch_badness_threshold=0.1; dynamic_snitch_reset_interval_in_ms=600000; dynamic_snitch_update_interval_in_ms=100; enable_user_defined_functions=false; endpoint_snitch=SimpleSnitch; hinted_handoff_enabled=true; hinted_handoff_throttle_in_kb=1024; incremental_backups=false; index_summary_capacity_in_mb=null; index_summary_resize_interval_in_minutes=60; inter_dc_tcp_nodelay=false; internode_compression=all; key_cache_save_period=14400; key_cache_size_in_mb=null; listen_address=localhost; max_hint_window_in_ms=10800000; max_hints_delivery_threads=2; memtable_allocation_type=heap_buffers; native_transport_port=9042; num_tokens=256; partitioner=org.apache.cassandra.dht.Murmur3Partitioner; permissions_validity_in_ms=2000; range_request_timeout_in_ms=10000; read_request_timeout_in_ms=5000; request_scheduler=org.apache.cassandra.scheduler.NoScheduler; request_timeout_in_ms=10000; role_manager=CassandraRoleManager; roles_validity_in_ms=2000; row_cache_save_period=0; row_cache_size_in_mb=0; rpc_address=localhost; rpc_keepalive=true; rpc_port=9160; rpc_server_type=sync; saved_caches_directory=/data/cassandra/saved_caches; seed_provider=[{class_name=org.apache.cassandra.locator.SimpleSeedProvider, parameters=[{seeds=127.0.0.1}]}]; server_encryption_options=<REDACTED>; snapshot_before_compaction=false; ssl_storage_port=7001; sstable_preemptive_open_interval_in_mb=50; start_native_transport=true; start_rpc=false; storage_port=7000; thrift_framed_transport_size_in_mb=15; tombstone_failure_threshold=100000; tombstone_warn_threshold=1000; tracetype_query_ttl=86400; tracetype_repair_ttl=604800; trickle_fsync=false; trickle_fsync_interval_in_kb=10240; truncate_request_timeout_in_ms=60000; windows_timer_interval=1; write_request_timeout_in_ms=2000]
INFO 23:47:52 DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
INFO 23:47:52 Global memtable on-heap threshold is enabled at 121MB
INFO 23:47:52 Global memtable off-heap threshold is enabled at 121MB
INFO 23:47:52 Hostname: centos-vm.com
INFO 23:47:52 JVM vendor/version: OpenJDK 64-Bit Server VM/1.7.0_45
INFO 23:47:52 Heap size: 509214720/509214720
INFO 23:47:52 Code Cache Non-heap memory: init = 2555904(2496K) used = 636288(621K) committed = 2555904(2496K) max = 50331648(49152K)
INFO 23:47:52 Par Eden Space Heap memory: init = 104071168(101632K) used = 50207568(49030K) committed = 104071168(101632K) max = 104071168(101632K)
INFO 23:47:52 Par Survivor Space Heap memory: init = 12976128(12672K) used = 0(0K) committed = 12976128(12672K) max = 12976128(12672K)
INFO 23:47:52 CMS Old Gen Heap memory: init = 392167424(382976K) used = 0(0K) committed = 392167424(382976K) max = 392167424(382976K)
INFO 23:47:52 CMS Perm Gen Non-heap memory: init = 21757952(21248K) used = 12843096(12542K) committed = 21757952(21248K) max = 174063616(169984K)
INFO 23:47:52 Classpath: /etc/cassandra/conf:/usr/share/cassandra/lib/airline-0.6.jar:/usr/share/cassandra/lib/antlr-runtime-3.5.2.jar:/usr/share/cassandra/lib/cassandra-driver-core-2.2.0-rc2-SNAPSHOT-20150617-shaded.jar:/usr/share/cassandra/lib/commons-cli-1.1.jar:/usr/share/cassandra/lib/commons-codec-1.2.jar:/usr/share/cassandra/lib/commons-lang3-3.1.jar:/usr/share/cassandra/lib/commons-math3-3.2.jar:/usr/share/cassandra/lib/compress-lzf-0.8.4.jar:/usr/share/cassandra/lib/concurrentlinkedhashmap-lru-1.4.jar:/usr/share/cassandra/lib/crc32ex-0.1.1.jar:/usr/share/cassandra/lib/disruptor-3.0.1.jar:/usr/share/cassandra/lib/ecj-4.4.2.jar:/usr/share/cassandra/lib/guava-16.0.jar:/usr/share/cassandra/lib/high-scale-lib-1.0.6.jar:/usr/share/cassandra/lib/jackson-core-asl-1.9.2.jar:/usr/share/cassandra/lib/jackson-mapper-asl-1.9.2.jar:/usr/share/cassandra/lib/jamm-0.3.0.jar:/usr/share/cassandra/lib/javax.inject.jar:/usr/share/cassandra/lib/jbcrypt-0.3m.jar:/usr/share/cassandra/lib/jcl-over-slf4j-1.7.7.jar:/usr/share/cassandra/lib/jna-4.0.0.jar:/usr/share/cassandra/lib/joda-time-2.4.jar:/usr/share/cassandra/lib/json-simple-1.1.jar:/usr/share/cassandra/lib/libthrift-0.9.2.jar:/usr/share/cassandra/lib/log4j-over-slf4j-1.7.7.jar:/usr/share/cassandra/lib/logback-classic-1.1.3.jar:/usr/share/cassandra/lib/logback-core-1.1.3.jar:/usr/share/cassandra/lib/lz4-1.3.0.jar:/usr/share/cassandra/lib/metrics-core-3.1.0.jar:/usr/share/cassandra/lib/metrics-logback-3.1.0.jar:/usr/share/cassandra/lib/netty-all-4.0.23.Final.jar:/usr/share/cassandra/lib/ohc-core-0.3.4.jar:/usr/share/cassandra/lib/ohc-core-j8-0.3.4.jar:/usr/share/cassandra/lib/reporter-config3-3.0.0.jar:/usr/share/cassandra/lib/reporter-config-base-3.0.0.jar:/usr/share/cassandra/lib/sigar-1.6.4.jar:/usr/share/cassandra/lib/slf4j-api-1.7.7.jar:/usr/share/cassandra/lib/snakeyaml-1.11.jar:/usr/share/cassandra/lib/snappy-java-1.0.4.1.jar:/usr/share/cassandra/lib/snappy-java-1.1.1.7.jar:/usr/share/cassandra/lib/ST4-4.0.8.jar:/usr/share/cassandra/lib/stream-2.5.2.jar:/usr/share/cassandra/lib/super-csv-2.1.0.jar:/usr/share/cassandra/lib/thrift-server-0.3.7.jar:/usr/share/cassandra/apache-cassandra-2.2.0.jar:/usr/share/cassandra/apache-cassandra-thrift-2.2.0.jar:/usr/share/cassandra/stress.jar::/usr/share/cassandra/lib/jamm-0.3.0.jar
It turns out to be a memory problem. The starting process is not mysteriously killed after all: it's killed by the kernel.
I'm running it on virtual machine with 1G memory.
When I run "ps -ef | grep cassandra" and then "dmesg | egrep -i -B100 'cassandra-pid'", I got
Out of memory: Kill process 9450 (java) score 218 or sacrifice child
Killed process 9450, UID 0, (java) total-vm:1155600kB, anon-rss:797292kB, file-rss:100956kB
That confirmed it's a memory issue. I then modified the memory allocation for Cassandra and everything worked fine.
In cassandra-env.sh, change:
half_system_memory_in_mb=`expr $system_memory_in_mb / 2`
quarter_system_memory_in_mb=`expr $half_system_memory_in_mb / 2`
to be:
half_system_memory_in_mb="300"
quarter_system_memory_in_mb="300"
You can check by typing netstat -tunlp at the prompt; there you will find the process ID listening on port 7199. You then have to kill it if you want to stop Cassandra.
netstat -tunlp tells us the process ID for each port.

Why MaxNewSize is larger than MaxHeapSize in JVM?

I only set up some of the JVM configuration on startup: -Xmx1024m -Xms1024m -XX:MaxPermSize=128m -XX:PermSize=128m
From HotSpot sources:
product(uintx, MaxNewSize, max_uintx, \
"Maximum new generation size (in bytes), max_uintx means set " \
"ergonomically") \
Since you haven't set MaxNewSize explicitly, the default value is taken which is treated specially.
Anyway, the MaxNewSize value is only a hint, while NewSize holds the real size of the young generation.
The size of the young generation relative to the old generation is controlled by NewRatio. So even though MaxNewSize > MaxHeapSize, with the default NewRatio=2 the effective split of new space to old space is 1:2: the old generation occupies 2/3 of the heap while the new generation occupies 1/3.
In your case, that is 2/3 * 1024 ≈ 682.7M for the old space and 1/3 * 1024 ≈ 341.3M for the new space.
The MaxNewSize cap would only kick in if it were lower than the size implied by NewRatio. Think of these as multiple, independent knobs with which to configure memory: the JVM will choose a setting conforming to all of them.
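You can observe the effective split at runtime. A minimal sketch (run with -Xmx1024m -Xms1024m and no explicit young-generation flags) using the standard management API: the young-generation pools (eden plus survivors) should sum to roughly a third of the 1024M heap.
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class PoolSizes {
    public static void main(String[] args) {
        // Each pool (eden, survivor, old gen, ...) reports its own maximum;
        // the MaxNewSize default of max_uintx never shows up here because
        // the JVM sizes the pools ergonomically from NewRatio instead.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            long max = pool.getUsage().getMax();
            System.out.printf("%-25s max = %s%n", pool.getName(),
                    max < 0 ? "undefined" : (max >> 20) + " MB");
        }
    }
}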

Redis - why is redis-server decreasing in memory?

I'm running Redis on Windows and have noticed that the memory footprint of redis-server.exe decreases over time. When I start Redis, it reads from a dump file and loads all of the hashed key values into memory, up to about 1.4 GB. However, over time, the amount of memory that redis-server.exe takes up decreases. I have seen it go below 100 MB.
The only explanation I can see is that the keys are expiring and leaving memory, but I have set Redis up so that they never expire. I have also made sure that I have allocated enough memory.
Some of my settings include:
maxmemory 2gb
maxmemory-policy noeviction
hash-max-zipmap-entries 512
hash-max-zipmap-value 64
activerehashing no
If it's of interest, when I first loaded the keys into Redis, I did it through Python like so:
r.hset(key, field, value)
Any help would be appreciated. I want the keys to be there forever.
This is my output from the INFO command right after I first run it:
redis 127.0.0.1:6379> INFO
redis_version:2.4.6
redis_git_sha1:26cdd13a
redis_git_dirty:0
arch_bits:64
multiplexing_api:winsock2
gcc_version:4.6.1
process_id:9092
uptime_in_seconds:69
uptime_in_days:0
lru_clock:248011
used_cpu_sys:3.34
used_cpu_user:10.06
used_cpu_sys_children:0.00
used_cpu_user_children:0.00
connected_clients:1
connected_slaves:0
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
used_memory:1129560232
used_memory_human:1.05G
used_memory_rss:1129560232
used_memory_peak:1129560144
used_memory_peak_human:1.05G
mem_fragmentation_ratio:1.00
mem_allocator:libc
loading:0
aof_enabled:0
changes_since_last_save:0
bgsave_in_progress:0
last_save_time:1386600366
bgrewriteaof_in_progress:0
total_connections_received:1
total_commands_processed:0
expired_keys:0
evicted_keys:0
keyspace_hits:0
keyspace_misses:0
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:0
vm_enabled:0
role:master
db0:keys=4007989,expires=0
After running it again, once I noticed in Windows Task Manager that the memory had decreased, there are not many differences:
uptime_in_seconds:4412 (from 69)
lru_clock:248445 (from 248011)
used_cpu_sys:4.59 (from 3.34)
used_cpu_user:10.25 (from 10.06)
used_memory:1129561240 (from 1129560232)
used_memory_human:1.05G (same!)
used_memory_rss:1129561240 (from 1129560232)
used_memory_peak:1129568960 (from 1129560144)
used_memory_peak_human:1.05G (same!)
mem_fragmentation_ratio:1.00 (same!)
last_save_time:1386600366 (same!)
total_connections_received:4 (from 1)
total_commands_processed:10 (from 0)
expired_keys:0 (same!)
evicted_keys:0 (same!)
keyspace_hits:0 (same!)
keyspace_misses:2 (from 0)
The lookups are taking longer when the memory size is lower. What is going on here?
What version of Redis are you using?
Do you have a cron job of some sort that removes keys? (Do a grep for the DEL command in your codebase just to be sure.)
Redis usually runs a single process to manage the in-memory data. However, when the data is persisted to the RDB file, a second process starts to save all the data. During that process, you can see your memory use increase up to double the size of your data set.
I am familiar with how it is done on Linux, but I don't know the details of the Windows port, so maybe the differences in size you are seeing are due to this second process that is launched periodically? You can easily test whether this is the case by issuing a BGSAVE command in Redis. This starts the synchronization of data to the RDB file in the background, so you can check whether it produces the memory usage pattern you observed.
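For example, a hypothetical sketch using the Jedis client (an assumption on my part; the question used redis-py, and redis-cli works just as well):
import redis.clients.jedis.Jedis;

public class TriggerBgsave {
    public static void main(String[] args) {
        // Ask the server to fork and write the RDB file in the background,
        // then watch the redis-server.exe footprint in Task Manager.
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            System.out.println(jedis.bgsave()); // "Background saving started"
        }
    }
}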
If that is the case, then you already know what is going on :)
good luck

What is the maximum number of pages that Apache FOP can generate?

Hi, I was working with Apache FOP, and when the number of pages exceeds about 130, it cannot generate the PDF...
Is there any limit on the number of pages or on the length of the XML file?
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.io.BufferedReader.<init>(BufferedReader.java:80)
at java.io.BufferedReader.<init>(BufferedReader.java:91)
at org.apache.xml.dtm.ObjectFactory.findJarServiceProviderName(ObjectFactory.java:579)
at org.apache.xml.dtm.ObjectFactory.lookUpFactoryClassName(ObjectFactory.java:373)
at org.apache.xml.dtm.ObjectFactory.lookUpFactoryClass(ObjectFactory.java:206)
at org.apache.xml.dtm.ObjectFactory.createObject(ObjectFactory.java:131)
at org.apache.xml.dtm.ObjectFactory.createObject(ObjectFactory.java:101)
at org.apache.xml.dtm.DTMManager.newInstance(DTMManager.java:135)
at org.apache.xpath.XPathContext.reset(XPathContext.java:350)
at org.apache.xalan.transformer.TransformerImpl.reset(TransformerImpl.java:505)
at org.apache.xalan.transformer.TransformerImpl.transformNode(TransformerImpl.java:1436)
at org.apache.xalan.transformer.TransformerImpl.transform(TransformerImpl.java:709)
at org.apache.xalan.transformer.TransformerImpl.transform(TransformerImpl.java:1284)
at org.apache.xalan.transformer.TransformerImpl.transform(TransformerImpl.java:1262)
at org.apache.fop.cli.InputHandler.transformTo(InputHandler.java:214)
at org.apache.fop.cli.InputHandler.renderTo(InputHandler.java:125)
at org.apache.fop.cli.Main.startFOP(Main.java:166)
at org.apache.fop.cli.Main.main(Main.java:197)
I've created reports from XML files that were several hundred thousand lines long. However, I have had some issues creating smaller reports filled with SVGs.
Your issue is probably that Java by default only allocates 32 MB of memory (if I recall correctly), so it's running out of memory.
In the fop.bat file (assuming you're running on Windows), add the following setting:
rem Increase standard Java VM heap size, so that bigger reports get enough memory
set JAVAOPTS=-Xmx512M
and alter the execution line as follows:
"%JAVACMD%" %JAVAOPTS% %LOGCHOICE% %LOGLEVEL% -cp "%LOCALCLASSPATH%" org.apache.fop.cli.Main %FOP_CMD_LINE_ARGS%
This should work with FOP 0.95, at least.
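If you call FOP from your own code rather than through fop.bat, the same limit applies to whichever JVM runs it, so start that JVM with e.g. -Xmx512m. A minimal embedding sketch against the 0.9x-era API (the file names are placeholders, not from the question):
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.OutputStream;
import javax.xml.transform.Result;
import javax.xml.transform.Source;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.sax.SAXResult;
import javax.xml.transform.stream.StreamSource;
import org.apache.fop.apps.Fop;
import org.apache.fop.apps.FopFactory;
import org.apache.fop.apps.MimeConstants;

public class RunFop {
    public static void main(String[] args) throws Exception {
        FopFactory fopFactory = FopFactory.newInstance();
        // Launch this class with e.g. "java -Xmx512m RunFop" so that large
        // reports get enough heap, mirroring the fop.bat change above.
        try (OutputStream out = new BufferedOutputStream(
                new FileOutputStream(new File("report.pdf")))) {
            Fop fop = fopFactory.newFop(MimeConstants.MIME_PDF, out);
            Transformer transformer = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(new File("report.xsl")));
            Source src = new StreamSource(new File("report.xml"));
            // Route the XSL-FO output of the transform straight into FOP.
            Result res = new SAXResult(fop.getDefaultHandler());
            transformer.transform(src, res);
        }
    }
}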