I am trying to find a Java API to get the memory, CPU, and disk usage consumed by HSQLDB, for monitoring purposes. As far as I know, HSQLDB doesn't expose any such API.
Can anyone give me some pointers on how to achieve this?
Thanks.
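HSQLDB indeed doesn't expose a resource-usage API, but since it runs inside a JVM (either in-process or as a server), the standard java.lang.management MXBeans can report the memory and CPU that JVM consumes. A minimal sketch, with the caveat that these beans report whole-JVM figures, so attributing them to HSQLDB alone is an approximation, and the database file path below is hypothetical:

import java.io.File;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class JvmResourceMonitor {
    public static void main(String[] args) {
        // Whole-JVM heap usage; if HSQLDB runs in-process, its MEMORY tables live here.
        MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memoryBean.getHeapMemoryUsage();
        System.out.printf("Heap used: %d MB of %d MB max%n",
                heap.getUsed() >> 20, heap.getMax() >> 20);

        // CPU load of this JVM process (com.sun.management extension to the OS bean).
        com.sun.management.OperatingSystemMXBean osBean =
                (com.sun.management.OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
        System.out.printf("Process CPU load: %.1f%%%n", osBean.getProcessCpuLoad() * 100);

        // Disk usage: size of the HSQLDB data file on disk (the path is hypothetical).
        File dataFile = new File("mydb.data");
        System.out.printf("Data file size: %d KB%n", dataFile.length() / 1024);
    }
}

For an HSQLDB server running in a separate process, the same beans can be read remotely over a JMX connection instead of in-process.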
Ignite version : 2.12
OS : Windows 10
I am trying to understand Ignite's heap usage.
I started the Ignite server with the command below and no special JVM args, as suggested by https://ignite.apache.org/docs/latest/quick-start/java:
ignite.bat -v ..\examples\config\example-ignite.xml
After that I started analyzing the server's heap usage with VisualVM, and the heap usage looked like this.
The next thing I tried was increasing the heap size and restarting the server.
Surprisingly, Ignite now consumes even more memory, as seen in this graph.
I know the GC is working its way through the heap, but why does Ignite's memory consumption increase when the heap space increases?
How will this impact a server with ~40-60 GB of memory? How much memory can I expect Ignite to consume?
I'm planning to use Ignite as an in-memory cache along with Cassandra as the DB.
Just like Cassandra, Hadoop, or Kafka, Ignite is Java middleware that uses the Java heap for various needs. But your data is always stored in off-heap memory, which allows utilizing all available memory space without worrying about garbage collection. This gives Ignite complete control over how the data is managed and ensures the long-term performance of the system.
Ignite uses a page memory model for storing everything, including user data, indices, meta information, etc. This lets Ignite manage memory efficiently, improve performance, and spill to the whole disk without any change to the data format.
In other words, direct page memory access happens via memory pointers (outside of the JVM), but some internal tasks, like bootstrapping Ignite itself or performing local SQL processing, do require JVM heap, because Ignite itself is written in Java.
Check the Ignite documentation pages on memory architecture and configuration for details.
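If the concern is how much off-heap memory the page memory may take, the region size is configurable from Java. A minimal sketch, assuming Ignite 2.x; the 256 MB / 4 GB figures and the region name are arbitrary examples:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class IgniteOffHeapConfig {
    public static void main(String[] args) {
        // Cap the default data region: all cache data (off-heap pages) lands here.
        DataRegionConfiguration region = new DataRegionConfiguration()
                .setName("Default_Region")
                .setInitialSize(256L * 1024 * 1024)    // start with 256 MB
                .setMaxSize(4L * 1024 * 1024 * 1024)   // never grow past 4 GB
                .setMetricsEnabled(true);              // needed for the metrics sketch below

        DataStorageConfiguration storage = new DataStorageConfiguration()
                .setDefaultDataRegionConfiguration(region);

        IgniteConfiguration cfg = new IgniteConfiguration()
                .setDataStorageConfiguration(storage);

        Ignite ignite = Ignition.start(cfg);
    }
}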
How will this impact a server with ~40-60 GB of memory? How much memory can I expect Ignite to consume?
You would need the 40-60 GB of RAM plus something for the JVM itself (the Java heap). Recommended values may differ, but 2 GB of Java heap should be enough.
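To see what the off-heap side is actually using at runtime, Ignite exposes per-region metrics. A short sketch, continuing from the configuration example above (it assumes the region was started with setMetricsEnabled(true)); the 4 KB page size is Ignite's default:

for (org.apache.ignite.DataRegionMetrics metrics : ignite.dataRegionMetrics()) {
    // Allocated pages * page size approximates the off-heap footprint of the region.
    System.out.printf("Region %s: %d pages allocated (~%d MB off-heap)%n",
            metrics.getName(),
            metrics.getTotalAllocatedPages(),
            metrics.getTotalAllocatedPages() * 4096 / (1024 * 1024));
}

Watching these numbers next to the VisualVM heap graph helps separate Java-heap churn from the actual data footprint.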
We have a lot of Redis instances, consuming TBs of memory across hundreds of machines.
As our business activity goes up and down, some Redis instances are just not used that frequently any more -- they are "unpopular" or "cold". But Redis stores everything in memory, so a lot of infrequently accessed data that should be sitting on cheap disk is occupying expensive memory.
We are exploring ways to reclaim memory from these unpopular/cold Redis instances, so as to reduce our machine usage.
We cannot delete data, nor can we migrate to another database. Is there some way to achieve our goal?
PS: We are thinking of some Redis-compatible product that can "mix" memory and disk, i.e. one that stores hot data in memory but cold data on disk, using limited resources. We know RedisLabs' "Redis on Flash" (ROF) solution, but it uses RocksDB, which is very memory-unfriendly. What we want is a very memory-restrained product. Besides, ROF is not open source :(
Thanks in advance!
ElastiCache Redis now supports data tiering. Data tiering provides a new cost optimal option for storing data in Redis by utilizing lower-cost local NVMe SSDs in each cluster node in addition to storing data in memory. It is ideal for workloads that access up to 20 percent of their overall dataset regularly, and for applications that can tolerate additional latency when accessing data on SSD. More details about data tiering can be found here.
Your problem might be solved with an orchestrator approach: scale down when not in use, scale up when in demand.
The implementation depends heavily on your infrastructure, but a base requirement is proper monitoring of Redis instance usage.
Based on that, if you are running on Kubernetes, you can leverage pod autoscaling.
Otherwise you can implement Consul and use HAProxy to handle the shutdown/spin-up logic. A starting point for that strategy is this article.
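Whichever orchestration layer you pick, the monitoring prerequisite can start very small: polling each instance's INFO stats is enough to flag cold candidates. A hedged sketch with the Jedis client; the 5 ops/sec threshold is an arbitrary example:

import redis.clients.jedis.Jedis;

public class ColdInstanceCheck {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // INFO stats contains instantaneous_ops_per_sec, a rolling throughput sample.
            for (String line : jedis.info("stats").split("\r\n")) {
                if (line.startsWith("instantaneous_ops_per_sec")) {
                    long opsPerSec = Long.parseLong(line.split(":")[1].trim());
                    System.out.println(opsPerSec < 5 ? "cold candidate" : "active");
                }
            }
        }
    }
}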
Of course Reiner's idea of using swap is a quick win if it works the intended way!
Our application in the prod environment is generating frequent heap/thread dumps while running very large reports, eventually resulting in JVM failure. WebSphere is the server, and the heap size is set to 1024/2048 MB (initial/max) across all nodes.
What are some ways to tackle this issue? I could think of the following options. Is there anything else I am missing?
Set the min/max heap size to 2048 MB or even higher?
Enable verbose garbage collection in WebSphere and analyze the optimal heap size from its output?
Thread Analysis:
Runnable: 123 (67%)
Blocked: 16 (9%)
Waiting on condition: 43 (23%)
A good place to start investigating the OOM is this IBM Knowledge Center topic.
Since it seems you are experiencing an OutOfMemory issue, there are three possibilities to consider:
Your apps consistently need more memory to handle the current load.
Solution: Load test your application with production-like traffic and tune your min/max heap size accordingly.
Your application has a memory leak.
Solution: Analyze the heap dumps/core dumps produced, using the IBM Support Assistant tools. A PMR to IBM would help.
WebSphere itself has a memory leak.
Solution: Open a PMR.
Here is a nice read about Java Memory Management in WAS environments.
Try to capture memory and garbage collection information from the production environment. I am not sure whether GC logging has any performance impact, but jstat is an extremely lightweight tool and can be used in a production environment without any performance impact. Dump the output of jstat at regular intervals using the following command (here I am setting the interval to 1 hour):
jstat -gc <PID> 3600s
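If attaching jstat to the process isn't convenient, a similarly lightweight option is polling the MemoryMXBean from inside the JVM. A minimal sketch; the one-hour interval mirrors the jstat example above:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class HeapLogger {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Log heap usage once per hour; reading the MXBean is nearly free.
        scheduler.scheduleAtFixedRate(() -> {
            MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            System.out.printf("heap used=%dMB committed=%dMB max=%dMB%n",
                    heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
        }, 0, 1, TimeUnit.HOURS);
    }
}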
Redis is "memory monster". Storing data as "compressed json string" minimizes memory usage.
Is there any built-in compression option in Redis Db?
Redis uses the LZF light data compressor at dump time, so it won't lessen the memory consumption. This implies that Redis does not compress the data in memory and stores it as a plain string. You must deploy your own client-side compression code.
Lua scripting also provides compression algorithms, but that branch is relatively new, so it is not advisable to use it at the production level.
No, there isn't any runtime compression option.
However, as dan-boa said, it might be a good idea to implement compression on the application side. Doing it that way saves CPU on the Redis server: your database server won't be burdened with the CPU time needed for compression.
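A minimal sketch of that client-side compression, here with the Jedis client and GZIP; the key and JSON payload are made-up examples, and any client that accepts byte[] keys/values works the same way:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;
import redis.clients.jedis.Jedis;

public class CompressedRedis {
    static byte[] gzip(String s) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
            gz.write(s.getBytes(StandardCharsets.UTF_8));
        }
        return buf.toByteArray();
    }

    static String gunzip(byte[] b) throws Exception {
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(b))) {
            return new String(gz.readAllBytes(), StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws Exception {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            String json = "{\"user\":\"alice\",\"history\":[1,2,3]}";
            // Store the compressed bytes; Redis never sees the plain JSON.
            jedis.set("user:42".getBytes(StandardCharsets.UTF_8), gzip(json));
            byte[] raw = jedis.get("user:42".getBytes(StandardCharsets.UTF_8));
            System.out.println(gunzip(raw));
        }
    }
}

Note the byte[] overloads: the compressed value is binary, so the String-based commands don't apply to it.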
In one of our Redis clusters we saved about 82% of memory (from circa 340 GB down to 60 GB) thanks to GZIPping our JSON-based blobs. Some more thoughts about it and other ways of optimizing memory usage can be found in our article:
http://labs.octivi.com/how-we-cut-down-memory-usage-by-82/
Note: link moved to archive.org backup
I'm rather new to Redis, and before using it I'd like to learn some details that are important to me. So...
Redis uses RAM and HDD for storing data. RAM is used as fast read/write storage, while HDD is used to make this data persistent. When Redis is started, does it load all data from HDD into RAM, or does it load only the frequently queried data? What if I have 500 MB of Redis data on HDD but only 100 MB of RAM for Redis? Where can I read about this?
Redis loads everything into RAM. All the data is written to disk, but will only be read for things like restarting the server or making a backup.
There are a couple of ways you can use it with less RAM than data, though. You can set it up in combination with MySQL or another disk-based store to work much like memcached: you manage cache misses and persistence manually, as in the sketch below.
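A hedged sketch of that cache-aside pattern with the Jedis client; loadFromDatabase is a hypothetical stand-in for the real MySQL query:

import redis.clients.jedis.Jedis;

public class CacheAside {
    // Hypothetical stand-in for the real disk-based store (e.g. a MySQL query).
    static String loadFromDatabase(String key) {
        return "value-for-" + key;
    }

    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            String key = "product:99";
            String value = jedis.get(key);          // try the cache first
            if (value == null) {                    // cache miss
                value = loadFromDatabase(key);      // fall back to the DB
                jedis.setex(key, 3600, value);      // cache it for an hour
            }
            System.out.println(value);
        }
    }
}

With a TTL on every entry, Redis holds only the recently used slice of the data while the disk store stays authoritative.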
Redis has a VM mode where all keys must fit in RAM but infrequently accessed data can be on disk. However, I'm not sure if this is in the stable builds yet.
Recent versions (>2.0) have improved significantly, and memory management is more efficient. See this blog post that explains how to use hashes to optimize the RAM footprint: http://antirez.com/post/redis-weekly-update-7.html
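The trick from that post, roughly: instead of one top-level key per object, objects are bucketed into small hashes so Redis can keep them in its compact encoding. A sketch with the Jedis client; the bucket size of 1000 and the key names are illustrative:

import redis.clients.jedis.Jedis;

public class HashBucketing {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            long userId = 1234567;
            // Bucket user 1234567 into hash "user:1234", field "567":
            // 1000 users per hash keeps each hash small enough for the
            // compact encoding (see hash-max-ziplist-entries in redis.conf).
            String bucket = "user:" + (userId / 1000);
            String field = String.valueOf(userId % 1000);
            jedis.hset(bucket, field, "serialized-user-data");
            System.out.println(jedis.hget(bucket, field));
        }
    }
}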
The feature is called Virtual Memory, and it is officially deprecated:
Redis VM is now deprecated. Redis 2.4 will be the latest Redis version featuring Virtual Memory (but it also warns you that Virtual Memory usage is discouraged). We found that using VM has several disadvantages and problems. In the future of Redis we want to simply provide the best in-memory database (but persistent on disk as usual) ever, without considering at least for now the support for databases bigger than RAM. Our future efforts are focused into providing scripting, cluster, and better persistence.
More information about VM: https://redis.io/topics/virtual-memory