Can I write around 10MB of value against a single key in azure redis cache - redis

Can I write around 10MB of value (JSON data as a string) against a single key (string "xyz") in Azure Redis Cache?
Size- Standard 1 GB
Version - 4.0.14
I am able to insert a 3MB value, but inserting a 7MB value gives a network error.
I am using the StackExchange.Redis 2.1.58 client from a .NET console app.

From Redis website:
Strings are the most basic kind of Redis value. Redis Strings are
binary safe, this means that a Redis string can contain any kind of
data, for instance a JPEG image or a serialized Ruby object.
A String value can be at max 512 Megabytes in length.

You could put the 'syncTimeout' parameter in the connection string, like this:
"RedisConfiguration": { "ConnectionString": "mycache.redis.cache.windows.net:6380,password=$$$$$$$$$$$$$$$$$$=,ssl=True,abortConnect=False,syncTimeout=150000", "DatabaseNumber": 1 }
This parameter sets the "time (ms) to allow synchronous operations", as described at https://stackexchange.github.io/StackExchange.Redis/Configuration.html. I had this problem when I stored items that took more than 5 seconds to be processed, since 5000 ms is the default value. You can try increasing this value via the parameter in the connection string. I hope this helps, regards.
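If you prefer setting this in code rather than in the raw connection string, StackExchange.Redis exposes the same knob as ConfigurationOptions.SyncTimeout. A minimal sketch (the host name and the generated value are placeholders, and this obviously needs a reachable cache):

```csharp
using StackExchange.Redis;

class Program
{
    static void Main()
    {
        var options = ConfigurationOptions.Parse(
            "mycache.redis.cache.windows.net:6380,ssl=True,abortConnect=False");
        options.SyncTimeout = 150000; // ms allowed for synchronous operations (default is 5000)

        var mux = ConnectionMultiplexer.Connect(options);
        var db = mux.GetDatabase(1);

        // ~10MB payload; well under Redis's 512MB per-string limit,
        // but large enough to exceed the default 5-second sync timeout.
        string largeJson = new string('x', 10 * 1024 * 1024);
        db.StringSet("xyz", largeJson);
    }
}
```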

Related

Is there a way to emulate a memcpy to store value to a redis key?

I have a data buffer that I want to set/store into a Redis DB so it can be reused/fetched by downstream modules. I have a pointer to the data buffer and I know the exact length of the data I want to copy, and I would like to emulate a kind of memcpy directly from the buffer pointer into the Redis key's value.
I can do this in 2 stages:
1) fwrite the buffer into a file, say buffer.bin, for the data length
2) Run redis-cli -x set buffer1 < buffer.bin
I confirmed I can get the file contents back with
redis-cli -x get buffer1 > /home/buffer-copy.bin
But I would like to avoid the extra file operation, which I see as a completely redundant/costly step, if I can store from my memory pointer directly into the Redis key-value. Can you please share your thoughts on how I can do this?
Edit: I am trying to use the C hiredis interfaces to access Redis.
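For what it's worth, hiredis can do exactly this without any intermediate file: the %b format specifier of redisCommand takes a pointer plus an explicit length, so the value is read straight out of your memory buffer, binary-safe, embedded NUL bytes included. A minimal sketch, assuming hiredis is installed and a Redis server is reachable on 127.0.0.1:6379:

```c
#include <stdio.h>
#include <hiredis/hiredis.h>

int main(void) {
    /* Pretend this is your in-memory data buffer (note the embedded NUL). */
    const char buffer[] = {'a', 0x00, 'b', 0x01, 'c'};
    size_t buflen = sizeof(buffer);

    redisContext *c = redisConnect("127.0.0.1", 6379);
    if (c == NULL || c->err) {
        fprintf(stderr, "connect failed\n");
        return 1;
    }

    /* %b takes (pointer, length): no file, no NUL-termination required. */
    redisReply *r = redisCommand(c, "SET %s %b", "buffer1", buffer, buflen);
    freeReplyObject(r);

    /* Read it back; r->str and r->len give you the raw bytes and length. */
    r = redisCommand(c, "GET %s", "buffer1");
    printf("got %zu bytes back\n", (size_t)r->len);
    freeReplyObject(r);
    redisFree(c);
    return 0;
}
```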

Gemfire Persistent Overflow

I'm using Gemfire v7.0.1.3 on Linux. Below is my cache xml.
<?xml version.....>
<!DOCTYPE....>
<cache is-server="true">
<disk-store name="myStore" auto-compact="false" max-oplog-size="1000" queue-size="10000" time-interval="150">
<disk-dirs>
<disk-dir>.....</disk-dir>
</disk-dirs>
</disk-store>
<region name="myRegion" refid="PARTITION_PERSISTENT_OVERFLOW">
<region-attributes disk-store-name="myStore" disk-synchronous="true">
<eviction-attributes>
<lru-entry-count maximum="500" action="overflow-to-disk" />
</eviction-attributes>
</region-attributes>
</region>
</cache>
Now I start the cache server allocating 8GB. When I use a String as the cache key and a custom object (each object has 4 double arrays, each of size 10000) as the value, I can store 500 million objects in the cache without any issue. I can see the disk store directory filling with .crf, .krf and .drf files, and if I restart the cache the elements are restored; all good stuff.
But if I use the custom object as both key and value, I start getting a low memory exception after creating approximately 25000 entries in the region. Is this expected behavior? The Gemfire documentation says that when persistence and overflow are used together, all the keys and the least recently used values are overflowed to disk, and the most active entry values are kept in memory. So I was expecting that I could store any number of objects in the region as long as I have space available in my disk store, but instead I get a low memory exception. Please help me understand.
Thanks
Keys are never overflowed to disk, so your memory must be large enough to accommodate all the keys. For a persistent region the keys are also written to disk, but that is only for recovery purposes. So this behavior is expected if your object keys are much larger than your string keys.
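A back-of-the-envelope check supports this. Each custom key here carries 4 double arrays of 10,000 elements, and 25,000 such keys are roughly an entire 8GB heap (a sketch; object headers and region-entry overhead are ignored, which only makes the real footprint larger):

```java
public class KeyMemoryEstimate {
    // Approximate payload of one key: arrays * length * 8 bytes per double.
    static long approxKeyBytes(int arrays, int len) {
        return (long) arrays * len * 8;
    }

    public static void main(String[] args) {
        long perKey = approxKeyBytes(4, 10_000); // 320,000 bytes, ~312 KB per key
        long total = perKey * 25_000;            // 8,000,000,000 bytes, ~8 GB
        System.out.println("per key:  " + perKey + " bytes");
        System.out.println("25k keys: " + total + " bytes (~8 GB heap)");
    }
}
```

So hitting a low memory exception around 25,000 entries with an 8GB heap is exactly what the keys-stay-in-memory rule predicts.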

Erlang binary protocol serialization

I'm currently using Erlang for a big project, but I have a question regarding the proper approach.
I receive bytes over a TCP socket. The bytes follow a fixed protocol; the sender is a Python client. The Python client uses class inheritance to create the bytes from its objects.
Now, in Erlang, I would like to take the bytes and convert them to their equivalent messages; they all share a common message header.
How can I do this as generically as possible in Erlang?
Kind Regards,
Me
Pattern matching/binary header consumption using Erlang's binary syntax. But you will need to know either exactly which bytes or bits you are expecting to receive, or the field sizes in bytes or bits.
For example, let's say that you are expecting a string of bytes that will either begin with the equivalent of the ASCII strings "PUSH" or "PULL", followed by some other data you will place somewhere. You can create a function head that matches those, and captures the rest to pass on to a function that does "push()" or "pull()" based on the byte header:
operation_type(<<"PUSH", Rest/binary>>) -> push(Rest);
operation_type(<<"PULL", Rest/binary>>) -> pull(Rest).
The bytes after the first four will now be in Rest, leaving you free to interpret whatever subsequent headers or data remain in turn. You could also match on the whole binary:
operation_type(Bin = <<"PUSH", _/binary>>) -> push(Bin);
operation_type(Bin = <<"PULL", _/binary>>) -> pull(Bin).
In this case the "_" variable works like it always does -- you're just checking for the lead, essentially peeking the buffer and passing the whole thing on based on the initial contents.
You could also skip around in it. Say you knew you were going to receive a binary with 4 bytes of fluff at the front, 6 bytes of type data, and then the rest you want to pass on:
filter_thingy(<<_:4/binary, Type:6/binary, Rest/binary>>) ->
    handle_thingy(Type, Rest).  % dispatch on Type; handle_thingy/2 is yours to define
It becomes very natural to split binaries in function headers (whether the data equates to character strings or not), letting the "Rest" fall through to appropriate functions as you go along. If you are receiving Python pickle data or something similar, you would want to write the parsing routine in a recursive way, so that the conclusion of each data type returns you to the top to determine the next type, with an accumulated tree that represents the data read so far.
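As a concrete illustration of that recursive style, suppose each message were a 1-byte tag followed by a 2-byte length and a payload (the frame format and names here are invented for the example, not taken from your protocol):

```erlang
%% Accumulate {Tag, Payload} pairs until the buffer is fully consumed.
parse(Bin) -> parse(Bin, []).

parse(<<>>, Acc) ->
    lists:reverse(Acc);
parse(<<Tag:8, Len:16, Payload:Len/binary, Rest/binary>>, Acc) ->
    parse(Rest, [{Tag, Payload} | Acc]).
```

Each clause consumes one complete frame and recurses on the remainder, so the accumulated list is your decoded message stream.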
I only covered 8-bit bytes above, but there is also a pure bitstring syntax, which lets you go as far into the weeds with bits and bytes as you need with the same ease of syntax. Matching is a real lifesaver here.
Hopefully this informed more than confused. Binary syntax in Erlang makes this the most pleasant binary parsing environment in a general programming language I've yet encountered.
http://www.erlang.org/doc/programming_examples/bit_syntax.html

What's the PHP APC cache's apc.shm_strings_buffer setting for?

I'm trying to understand the apc.shm_strings_buffer setting in apc.ini. After restarting PHP, the pie chart in the APC admin shows 8MB of cache is already used, even though there are no cached entries (except for apc.php, of course). I've found this relates to the apc.shm_strings_buffer setting.
Can someone help me understand what the setting means? The config file notes that this is the "shared memory size reserved for strings, with M/G suffixe", but I fail to comprehend.
I'm using APC with PHP-FPM.
The easy part to explain is "with M/G suffixe", which means that if you set it to 8M, then 8 megabytes are allocated, while 1G would allocate 1 gigabyte of memory.
The more difficult bit to explain is that it's a cache for storing strings that are used internally by APC when it's compiling and caching opcode.
The config value was introduced in this change, and the bulk of that change was to add apc_string.c to the APC project. The main function defined in that C file is apc_new_interned_string, which is then used by apc_string_pmemcpy in apc_compile.c and by the rest of the APC module to store strings.
For example in apc_compile.c
/* private members are stored inside property_info as a mangled
* string of the form:
* \0<classname>\0<membername>\0
*/
CHECK((dst->name = apc_string_pmemcpy((char *)src->name, src->name_length+1, pool TSRMLS_CC)));
When APC goes to store a string, apc_new_interned_string checks whether that string is already saved in memory by hashing it; if it is already stored, it returns the previous instance of the stored string.
Only if that string is not already stored in the cache does a new piece of memory get allocated to store the string.
If you're running PHP with PHP-FPM, I'm 90% confident that the cache of stored strings is shared amongst all the workers in a single pool, but am still double-checking that.
The whole size allocated to storing shared strings is allocated when PHP starts up - it's not allocated dynamically. So it's to be expected that APC shows the 8MB used for the string cache, even though hardly any strings have actually been cached yet.
Edit
Although this answers what it does, I have no idea how to see how much of the shared string buffer is being used, so there's no way of knowing what it should be set to.

Maximal input length/Variable input length for TinyGP

I am planning to use TinyGP to train on a set of input variables (around 400 or so) against a previously set target value. Is there a maximum number of input variables? Do I need to specify the same number of variables each time?
I have a lot of computation power (a 500-core cluster for a weekend), so any thoughts on what parameters to use for such a large problem?
cheers
In TinyGP your constant and variable pool share the same space. The total of these two cannot exceed FSET_START, which is essentially the opcode of your first operator; by default it is 110. So your 400 variables already exceed this. It should just be a matter of increasing the opcode of the first operator to make enough space. You will also want to make sure you still have a big enough constant pool.
You can see this checked by the following lines in TinyGP:
if (varnumber + randomnumber >= FSET_START )
System.out.println("too many variables and constants");
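The sizing rule that check enforces can be illustrated like this (a sketch; FSET_START = 110 is TinyGP's default, and the numbers are only examples):

```java
public class PoolCheck {
    static final int FSET_START = 110;  // opcode of the first operator (TinyGP default)

    // Variables and random constants share the opcodes below FSET_START.
    static boolean fits(int varnumber, int randomnumber) {
        return varnumber + randomnumber < FSET_START;
    }

    public static void main(String[] args) {
        System.out.println(fits(400, 100));  // false: 400 inputs exceed the default
        // Raising FSET_START (and renumbering the operators) makes room, e.g. 600:
        System.out.println(400 + 100 < 600); // true
    }
}
```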