GemFire Persistent Overflow

I'm using GemFire v7.0.1.3 on Linux. Below is my cache XML.
<?xml version.....>
<!DOCTYPE....>
<cache is-server="true">
  <disk-store name="myStore" auto-compact="false" max-oplog-size="1000" queue-size="10000" time-interval="150">
    <disk-dirs>
      <disk-dir>.....</disk-dir>
    </disk-dirs>
  </disk-store>
  <region name="myRegion" refid="PARTITION_PERSISTENT_OVERFLOW">
    <region-attributes disk-store-name="myStore" disk-synchronous="true">
      <eviction-attributes>
        <lru-entry-count maximum="500" action="overflow-to-disk" />
      </eviction-attributes>
    </region-attributes>
  </region>
</cache>
Now I start the cache server with 8GB allocated. When I use a String as the cache key and a custom object (each holding four double arrays of 10,000 elements) as the value, I can store 500 million objects in the cache without any issue. I can see the disk store directory filling with .crf, .krf and .drf files, and if I restart the cache the entries are restored, so all of that works.
But if I use the custom object as both key and value, I start getting a low memory exception after creating roughly 25,000 entries in the region. Is this expected behavior? The GemFire documentation says that when persistence and overflow are used together, all keys and the least recently used values are overflowed to disk, while the most active entry values are kept in memory. So I was expecting to be able to store any number of objects in the region as long as there is space available in my disk store, yet I'm getting a low memory exception. Please help me understand.
Thanks

Keys are never overflowed to disk, so your heap must be large enough to hold all of the keys. For a persistent region the keys are also written to disk, but that is only for recovery purposes. So this behavior is expected if your object keys are much larger than your String keys.
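A rough back-of-envelope check supports this (assuming your key objects are similar in size to the value objects described in the question, i.e. four double arrays of 10,000 elements each):
4 arrays × 10,000 doubles × 8 bytes ≈ 320 KB per key
25,000 keys × 320 KB ≈ 8 GB
That is roughly the heap you gave the cache server, so running out of memory at around 25,000 entries is consistent with all of the keys being held on the heap.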


Can I write around 10MB of value against a single key in Azure Redis Cache

Can I write around 10MB of value (JSON data as a string) against a key (string - xyz) in Azure Redis Cache?
Size - Standard 1 GB
Version - 4.0.14
I am able to insert a 3MB value, but inserting a 7MB value gives a network error.
I am using the StackExchange.Redis 2.1.58 client from a .NET console app.
From Redis website:
Strings are the most basic kind of Redis value. Redis Strings are
binary safe, this means that a Redis string can contain any kind of
data, for instance a JPEG image or a serialized Ruby object.
A String value can be at max 512 Megabytes in length.
You could put the 'syncTimeout' parameter in the connection string, like this:
"RedisConfiguration": { "ConnectionString": "mycache.redis.cache.windows.net:6380,password=$$$$$$$$$$$$$$$$$$=,ssl=True,abortConnect=False,syncTimeout=150000", "DatabaseNumber": 1 }
This parameter sets the "Time (ms) to allow synchronous operations", as described at https://stackexchange.github.io/StackExchange.Redis/Configuration.html. I had this problem when I stored items that took more than 5 seconds to be processed, since 5 seconds is the default value. You can try increasing this value via the parameter in the connection string. I hope this helps, regards.

How to use VkPipelineCache?

If I understand it correctly, I'm supposed to create an empty VkPipelineCache object, pass it into vkCreateGraphicsPipelines, and data will be written into it. I can then reuse it with other pipelines I'm creating, or save it to a file and use it on the next run.
I've tried following the LunarG example to extract the info:
uint32_t headerLength = pData[0];
uint32_t cacheHeaderVersion = pData[1];
uint32_t vendorID = pData[2];
uint32_t deviceID = pData[3];
But I always get headerLength is 32 and the rest 0. Looking at the spec (https://vulkan.lunarg.com/doc/view/1.0.26.0/linux/vkspec.chunked/ch09s06.html Table 9.1), the cacheHeaderVersion should always be 1, as the only available cache header version is VK_PIPELINE_CACHE_HEADER_VERSION_ONE = 1.
Also the size of pData is usually only 32 bytes, even when I create 10 pipelines with it. What am I doing wrong?
A Vulkan pipeline cache is an opaque object that's only meaningful to the driver. There are very few operations that you're supposed to use on it.
Creating a pipeline cache, optionally with a block of opaque binary data that was saved from an earlier run
Getting the opaque binary data from an existing pipeline cache, typically to serialize to disk before exiting your application
Destroying a pipeline cache as part of the proper shutdown process.
The idea is that the driver can use the cache to speed up creation of pipelines within your program, and also to speed up pipeline creation on subsequent runs of your application.
You should not be attempting to interpret the cache data returned from vkGetPipelineCacheData at all. The only purpose for that data is to be passed into a later call to vkCreatePipelineCache.
Also the size of pData is usually only 32 bytes, even when I create 10 pipelines with it. What am I doing wrong?
Drivers must implement vkCreatePipelineCache, vkGetPipelineCacheData, etc. But they don't actually have to support caching. So if you're working with a driver that doesn't have anything it can cache, or hasn't done the work to support caching, then you'd naturally get back an empty cache (other than the header).
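For completeness, here is a minimal sketch of that save/reload round trip (assuming a valid VkDevice; VkResult checks and error handling are omitted, and the function names are just illustrative):

#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

// Save the driver's opaque cache blob to disk, e.g. at application shutdown.
void savePipelineCache(VkDevice device, VkPipelineCache cache, const char* path)
{
    size_t size = 0;
    vkGetPipelineCacheData(device, cache, &size, nullptr);      // first call: query the size
    std::vector<char> blob(size);
    vkGetPipelineCacheData(device, cache, &size, blob.data());  // second call: fetch the data
    if (FILE* f = std::fopen(path, "wb")) {
        std::fwrite(blob.data(), 1, size, f);
        std::fclose(f);
    }
}

// Recreate the cache on the next run, seeding it with the previous run's blob if present.
VkPipelineCache loadPipelineCache(VkDevice device, const char* path)
{
    std::vector<char> blob;
    if (FILE* f = std::fopen(path, "rb")) {
        std::fseek(f, 0, SEEK_END);
        blob.resize(static_cast<size_t>(std::ftell(f)));
        std::fseek(f, 0, SEEK_SET);
        if (!blob.empty())
            std::fread(blob.data(), 1, blob.size(), f);
        std::fclose(f);
    }
    VkPipelineCacheCreateInfo info{};
    info.sType = VK_STRUCTURE_TYPE_PIPELINE_CACHE_CREATE_INFO;
    info.initialDataSize = blob.size();                          // zero on a cold start
    info.pInitialData = blob.empty() ? nullptr : blob.data();
    VkPipelineCache cache = VK_NULL_HANDLE;
    vkCreatePipelineCache(device, &info, nullptr, &cache);
    return cache;
}

Drivers typically validate the blob's header (vendor ID, device ID, pipeline cache UUID) themselves and simply start from an empty cache if the data does not match, so the application never needs to parse it.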

How to read data from GPU memory, not using memcpy?

In the Vulkan API, how can we read data from GPU memory, for example data that was calculated by a compute shader?
First, wait on the fence associated with the compute submission. Then map the memory you wrote the result into; if the memory is not coherent, you also need to invalidate the mapped range.
Then read the data out through the pointer you got from the mapping operation.
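A minimal sketch of that sequence (assuming the compute result was written to a host-visible, possibly non-coherent allocation; VkResult checks omitted):

#include <vulkan/vulkan.h>
#include <cstdint>
#include <cstring>

// Read back 'size' bytes of compute output into 'dst' once the GPU has finished.
void readBackResult(VkDevice device, VkFence computeDone,
                    VkDeviceMemory hostVisibleMemory, VkDeviceSize size, void* dst)
{
    // 1. Wait on the fence signalled by the compute queue submission.
    vkWaitForFences(device, 1, &computeDone, VK_TRUE, UINT64_MAX);

    // 2. Map the host-visible allocation.
    void* src = nullptr;
    vkMapMemory(device, hostVisibleMemory, 0, size, 0, &src);

    // 3. If the memory type is not HOST_COHERENT, make the GPU writes visible to the host.
    VkMappedMemoryRange range{};
    range.sType = VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE;
    range.memory = hostVisibleMemory;
    range.offset = 0;
    range.size = VK_WHOLE_SIZE;
    vkInvalidateMappedMemoryRanges(device, 1, &range);

    // 4. Read the data out through the mapped pointer.
    std::memcpy(dst, src, static_cast<size_t>(size));
    vkUnmapMemory(device, hostVisibleMemory);
}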
I've just gone through the same issue. I think @ratchet freak's comment got to the point. In my case, I was trying to transfer data from a texture (VkImage) to host memory, using a linear buffer (VkBuffer) as the staging buffer. I originally used
VkMemoryPropertyFlags flag = VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT;
and found memcpy() very slow. Then I added VK_MEMORY_PROPERTY_HOST_CACHED_BIT and it became about 10x faster.
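For reference, a small helper of the kind described above for picking the staging buffer's memory type (a sketch; it prefers HOST_CACHED for fast host reads, falls back to HOST_COHERENT, and assumes you pass in the memoryTypeBits from vkGetBufferMemoryRequirements):

#include <vulkan/vulkan.h>
#include <cstdint>
#include <initializer_list>

// Pick a memory type that is host-visible and, ideally, host-cached,
// so that reading the staging buffer back with memcpy() is fast.
uint32_t pickReadbackMemoryType(VkPhysicalDevice gpu, uint32_t memoryTypeBits)
{
    VkPhysicalDeviceMemoryProperties props;
    vkGetPhysicalDeviceMemoryProperties(gpu, &props);

    const VkMemoryPropertyFlags preferred =
        VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_CACHED_BIT;
    const VkMemoryPropertyFlags fallback =
        VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT;

    for (VkMemoryPropertyFlags wanted : {preferred, fallback}) {
        for (uint32_t i = 0; i < props.memoryTypeCount; ++i) {
            if ((memoryTypeBits & (1u << i)) &&
                (props.memoryTypes[i].propertyFlags & wanted) == wanted)
                return i;
        }
    }
    return UINT32_MAX; // no suitable memory type found
}

Note that cached memory is often not coherent, so the invalidate step from the first answer still applies before reading.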

Number of GCs performed for various generations from a dump file

Is there any way to get information about how many garbage collections have been performed for different generations from a dump file? When I try to run some PSSCOR4 commands I get the following.
0:003> !GCUsage
The garbage collector data structures are not in a valid state for traversal.
It is either in the "plan phase," where objects are being moved around, or
we are at the initialization or shutdown of the gc heap. Commands related to
displaying, finding or traversing objects as well as gc heap segments may not
work properly. !dumpheap and !verifyheap may incorrectly complain of heap
consistency errors.
Error: Requesting GC Heap data
0:003> !CLRUsage
The garbage collector data structures are not in a valid state for traversal.
It is either in the "plan phase," where objects are being moved around, or
we are at the initialization or shutdown of the gc heap. Commands related to
displaying, finding or traversing objects as well as gc heap segments may not
work properly. !dumpheap and !verifyheap may incorrectly complain of heap
consistency errors.
Error: Requesting GC Heap data
I can get output from !EEHeap though, but it does not give me what I am looking for.
0:003> !EEHeap -gc
Number of GC Heaps: 1
generation 0 starts at 0x0000000002c81030
generation 1 starts at 0x0000000002c81018
generation 2 starts at 0x0000000002c81000
ephemeral segment allocation context: none
segment begin allocated size
0000000002c80000 0000000002c81000 0000000002c87fe8 0x6fe8(28648)
Large object heap starts at 0x0000000012c81000
segment begin allocated size
0000000012c80000 0000000012c81000 0000000012c9e358 0x1d358(119640)
Total Size: Size: 0x24340 (148288) bytes.
------------------------------
GC Heap Size: Size: 0x24340 (148288) bytes.
Dumps
You can see the number of garbage collections in Performance Monitor. However, the way performance counters work makes me believe that this information is not available in a dump file, and probably not even available during live debugging.
Think of Debug.WriteLine(): once the text was written to the debug output, it is gone. If you didn't have DebugView running at the time, the information is lost. And that's good, otherwise it would look like a memory leak.
Performance counters (as I understand them) work in a similar fashion: various "pings" are sent out for someone else (the performance monitor) to record. If no one does, the ping and all its information is gone.
Live debugging
As already mentioned, you can try performance monitor. If you prefer WinDbg, you can use sxe clrn to see garbage collections happen.
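For example, a rough sketch of the WinDbg side (prompt numbers and output vary by CLR and SOS/PSSCOR version):
0:003> sxe clrn
0:003> g
When the debugger next breaks on a CLR notification, you can inspect the heap at that point with the usual SOS/PSSCOR commands before continuing.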
PSSCOR
The commands you mentioned do not show information about the garbage collection count:
0:016> !gcusage
Number of GC Heaps: 1
------------------------------
GC Heap Size 0x36d498(3,593,368)
Total Commit Size 0000000000384000 (3 MB)
Total Reserved Size 0000000017c7c000 (380 MB)
0:016> !clrusage
Number of GC Heaps: 1
------------------------------
GC Heap Size 0x36d498(3,593,368)
Total Commit Size 0000000000384000 (3 MB)
Total Reserved Size 0000000017c7c000 (380 MB)
Note: I'm using PSSCOR2 here, since I have the same .NET 4.5 issue on this machine. But I expect the output of PSSCOR4 to be similar.

What's the PHP APC cache's apc.shm_strings_buffer setting for?

I'm trying to understand the apc.shm_strings_buffer setting in apc.ini. After restarting PHP, the pie chart in the APC admin shows 8MB of cache is already used, even though there are no cached entries (except for apc.php, of course). I've found this relates to the apc.shm_strings_buffer setting.
Can someone help me understand what the setting means? The config file notes that this is the "shared memory size reserved for strings, with M/G suffixe", but I fail to comprehend.
I'm using APC with PHP-FPM.
The easy part to explain is "with M/G suffixe", which means that if you set it to 8M, 8 megabytes will be allocated, while 1G would allocate 1 gigabyte of memory.
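For example, a hypothetical apc.ini excerpt, just to illustrate the suffix syntax:
apc.shm_strings_buffer=8M   ; reserve 8 megabytes for APC's interned-strings buffer
; apc.shm_strings_buffer=1G would reserve 1 gigabyte instead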
The more difficult bit to explain is that it's a cache for storing strings that are used internally by APC when it's compiling and caching opcode.
The config value was introduced in this change, and the bulk of that change was to add apc_string.c to the APC project. The main function defined in that C file is apc_new_interned_string, which is used by apc_string_pmemcpy in apc_compile.c and by the rest of the APC module to store strings.
For example in apc_compile.c
/* private members are stored inside property_info as a mangled
* string of the form:
* \0<classname>\0<membername>\0
*/
CHECK((dst->name = apc_string_pmemcpy((char *)src->name, src->name_length+1, pool TSRMLS_CC)));
When APC goes to store a string, apc_new_interned_string checks whether that string is already saved in memory by hashing it; if it is already stored, it returns the previous instance of the stored string.
Only if that string is not already stored in the cache does a new piece of memory get allocated to store the string.
If you're running PHP with PHP-FPM, I'm 90% confident that the cache of stored strings is shared amongst all the workers in a single pool, but am still double-checking that.
The whole buffer for shared strings is allocated when PHP starts up; it is not allocated dynamically. So it is to be expected that APC shows the 8MB used for the string cache, even though hardly any strings have actually been cached yet.
Edit
Although this explains what the setting does, I have no idea how to see how much of the shared string buffer is actually in use, so there is no way of knowing what it should be set to.