When we try to extract a large table from SQL Server, we get an error:
Containerized process terminated by signal 119.
As I understand it, Kubernetes imposes a limit on how much memory is allocated to each pod's containers.
So if we have a memory limit and the table is expected to be larger than that, what options do we have?
A Container can exceed its memory request if the Node has memory available. But a Container is not allowed to use more than its memory limit. If a Container allocates more memory than its limit, the Container becomes a candidate for termination. If the Container continues to consume memory beyond its limit, the Container is terminated. If a terminated Container can be restarted, the kubelet restarts it, as with any other type of runtime failure.
[source]
There are two possible reasons:
Your container exceeds its memory limit set in the spec.containers[].resources.limits.memory field; or
Your container exceeds the node's available memory.
In the first case you can increase the memory limit by changing the spec.containers[].resources.limits.memory value (see the sketch below).
In the second case you can either increase the node's resources or make sure the pod is scheduled on a node with more available memory.
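As a minimal sketch (the container name "extractor", the deployment name "extract-job", and the 4Gi/8Gi figures are placeholders, not values from the question), the limit lives in the pod spec:

spec:
  containers:
  - name: extractor              # hypothetical container name
    resources:
      requests:
        memory: "4Gi"            # what the scheduler reserves for the container
      limits:
        memory: "8Gi"            # the container becomes a kill candidate above this

The same change can be applied to an existing workload with kubectl, for example:

kubectl set resources deployment extract-job -c=extractor --limits=memory=8Gi --requests=memory=4Gi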
I'm assuming that during replica resynchronisation (full or partial), the master will attempt to send data as fast as possible to the replica. Wouldn't this mean the replica output buffer on the master would rapidly fill up, since the speed at which the master can write is likely to be faster than the throughput of the network? If I have client-output-buffer-limit set for replicas, wouldn't the master end up closing the connection before the resynchronisation can complete?
Yes, the Redis master will close the connection and the synchronization will start from the beginning again. But please consider some details below:
Do you need to touch this configuration parameter at all, and what is its purpose/benefit/cost?
There is an almost zero chance of this happening with the default configuration on reasonably modern hardware.
"By default normal clients are not limited because they don't receive data without asking (in a push way), but just after a request, so only asynchronous clients may create a scenario where data is requested faster than it can read." (from the Redis documentation).
Even if it does happen, replication will start from the beginning again, but it may lead to an almost infinite loop in which replicas continuously ask for synchronization over and over. Each time, the Redis master needs to fork a whole memory snapshot (perform a BGSAVE) and may use up to 3 times the initial snapshot size in RAM during synchronization. That causes higher CPU utilization, memory spikes, network utilization (if any) and IO.
General recommendations to avoid production issues when tweaking this configuration parameter:
Don't decrease this buffer, and before increasing its size make sure you have enough memory on your box.
When sizing total RAM, account for the snapshot memory size (doubled for the copy-on-write BGSAVE process), plus the size of any other configured buffers, plus some extra capacity.
Please find more details here
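For illustration only (the thresholds below are arbitrary placeholders, not a recommendation), the replica class of this limit is set in redis.conf, or at runtime with redis-cli:

client-output-buffer-limit replica 1gb 256mb 120

redis-cli config set client-output-buffer-limit "replica 1073741824 268435456 120"

On Redis versions before 5.0 the class is called slave rather than replica, and setting both the hard and soft limits to 0 disables the limit entirely.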
I'm looking at the backed-up RDB files for a web application's Redis instances. There are 4 such files (for 4 different Redis servers working concurrently); their sizes are: 13G + 1.6G + 66M + 14M = ~15G
However, these same 4 instances seem to be taking 43.8GB of memory (according to New Relic). Why is there such a large discrepancy between how much space the Redis data takes up in memory vs on disk? Could it be a misconfiguration, and can the issue be helped?
I don't think there is any problem.
First of all, the data is stored in a compressed format in the RDB file, so it is smaller on disk than in memory. How much smaller the RDB file is depends on the type of data, but it can be around 20-80% of the memory used by Redis.
Another reason your reported memory usage could be higher than the actual usage (you can compare the memory shown in New Relic to the figure obtained from the redis-cli info memory command) is memory fragmentation. Whenever Redis needs more memory, it gets it allocated from the OS, but it will not release it back easily (when a key expires or is deleted). This is not a big issue, as Redis will ask for more memory only after using the extra memory it already holds. You can also check the memory fragmentation using the redis-cli info memory command.
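As a quick sanity check (assuming redis-cli can reach each instance locally), the relevant INFO fields can be pulled out like this:

redis-cli info memory | grep -E 'used_memory:|used_memory_rss:|mem_fragmentation_ratio:'

used_memory is what Redis has allocated for its data, used_memory_rss is what the operating system sees the process holding, and mem_fragmentation_ratio is roughly the ratio of the two; a ratio well above 1 points to fragmentation rather than extra data.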
I deployed an Ignite cluster on YARN. The cluster has 5 servers. Each server has 10GB of memory and an 8GB heap. I was trying to write a lot of data to an Ignite cache. Each item is an integer array whose length is 100K. The number of backups is 2. When I had written 3980 items to the Ignite cache, the cluster's heap was almost full. But instead of rejecting writes, the servers went down one by one.
My questions are:
Is there a configuration or another way to control how much of the heap the cache can use, so the heap won't fill up and the servers won't go down?
Servers going down when too much is written into the cache does not seem good for users. I'm wondering why Ignite lets this happen if the user runs with the default configuration.
Apache Ignite, like the Java Virtual Machine itself, is NOT responsible for managing or controlling the size of the data sets that are placed into the Java heap. This is why OutOfMemoryError exists in the Java API: it is the application's responsibility to manage its data sets and make sure they fit into the heap.
You can set up an eviction policy, and Ignite will then either move data to an off-heap region or to swap, or remove it from memory completely.
Refer to my foreword above. This is the responsibility of the application. Ignite can assist here with its eviction policy, off-heap mode and ability to scale out.
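As a rough sketch only (assuming an Ignite 1.x Spring XML configuration; the cache name and the maxSize value are placeholders), an LRU eviction policy is attached to the cache configuration along these lines:

<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/>
    <!-- evict least-recently-used entries once the on-heap entry count exceeds maxSize -->
    <property name="evictionPolicy">
        <bean class="org.apache.ignite.cache.eviction.lru.LruEvictionPolicy">
            <property name="maxSize" value="1000"/>
        </bean>
    </property>
</bean>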
We recently migrated to Couchbase 3.1.0. The odd thing is that when performing a full backup of a bucket, the web UI alerts "Hard Out Of Memory Error. Bucket X on node Y is full. All memory allocated to this bucket is used for metadata". The RAM usage numbers in the web UI contradict that: about 75% is used, not 100%. I looked into the logs, but haven't found any similar errors there.
Is that even normal?
This is a known issue in the Couchbase Server 3.x releases.
To understand the problem, we must also first understand Database Change Protocol (DCP), the protocol used to transfer data throughout the system. At a high level the flow-control for DCP is as follows:
The Consumer creates a connection with the Producer and sends an Open Connection message. The Consumer then sends a Control message to indicate per-stream flow control. This message will contain “stream_buffer_size” in the key section and the buffer size the Consumer would like each stream to have in the value section.
The Consumer will then start opening streams so that it can receive data from the server.
The Producer will then continue to send data for the stream that has buffer space available until it reaches the maximum send size.
Steps 1-3 continue until the connection is closed, as the Consumer continues to consume items from the stream.
However, the cbbackup utility does not implement any flow control (data buffer limits), and it will try to stream all vbuckets from all nodes at once, with no cap on the buffer size.
While this does not mean that it will use the same amount of memory as your overall data size (as the streams are being drained slowly by the cbbackup process), it does mean that a large memory overhead is required to be able to store the data streams.
When you are in a heavy DGM (disk greater than memory) scenario, the amount of memory required to store the streams is likely to grow more rapidly than cbbackup can drain them as it is streaming large quantities of data off of disk, leading to very large streams, which take up a lot of memory as previously mentioned.
The slightly misleading message about metadata taking up all of the memory is displayed as there is no memory left for the data, so all of the remaining memory is allocated to the metadata, which when using value eviction cannot be ejected from memory.
The reason that this only affects Couchbase Server versions prior to 4.0 is that 4.0 introduced a server-side improvement to DCP stream management that allows DCP streams to be paused to keep the memory footprint down; this is tracked as MB-12179.
As a result, you should not experience the same issue on Couchbase Server versions 4.x+, regardless of how DGM your bucket is.
Workaround
If you find yourself in a situation where this issue is occurring, then terminating the backup job should release all of the memory consumed by the streams immediately.
Unfortunately if you have already had most of your data evicted from memory as a result of the backup, then you will have to retrieve a large quantity of data off of disk instead of RAM for a small period of time, which is likely to increase your get latencies.
Over time 'hot' data will be brought back into memory as it is requested, so this will only be a problem for a short period; however, it is still a fairly undesirable situation to be in.
The workaround to avoid this issue completely is to only stream a small number of vbuckets at once when performing the backup, as opposed to all vbuckets which cbbackup does by default.
This can be achieved using cbbackupwrapper which comes bundled with all Couchbase Server releases 3.1.0 and later, details of using cbbackupwrapper can be found in the Couchbase Server documentation.
In particular the parameter to pay attention to is the -n flag, which specifies the number of vbuckets to be backed up in a batch at once.
As the name suggests, cbbackupwrapper is simply a wrapper script on top of cbbackup which partitions the vbuckets up and automatically handles all of the directory creation and backup generation, while still using cbbackup under the hood.
As an example, with a batch size of 50, cbbackupwrapper would backup vbuckets 0-49 first, followed by 50-99, then 100-149 etc.
It is suggested that you test with cbbackupwrapper in a testing environment which mirrors your production environment to find suitable values for -n and for -P (which controls how many backup processes run at once); the combination of these two controls the amount of memory pressure caused by the backup, as well as its overall speed.
You should not find that lowering the value of -n from its default of 100 decreases the backup speed; in some cases the backup speed may actually increase because there is far less memory pressure on the server.
You may however wish to sensibly adjust the -P parameter if you wish to speed up the backup further.
Below is an example command:
cbbackupwrapper http://[host]:8091 [backup_dir] -u [user_name] -p [password] -n 50
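For instance, to also limit the number of parallel backup processes, the two flags might be combined like this (the values are illustrative, not a recommendation):

cbbackupwrapper http://[host]:8091 [backup_dir] -u [user_name] -p [password] -n 50 -P 2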
It should be noted that if you use cbbackupwrapper to perform your backup then you must also use cbrestorewrapper to restore the data, as cbrestorewrapper is automatically aware of the directory structures used by cbbackupwrapper.
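A matching restore would then go through the wrapper as well, along these rough lines (the placeholders mirror the backup command above):

cbrestorewrapper [backup_dir] http://[host]:8091 -u [user_name] -p [password]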
When you run a full backup, by default the backup tool streams data from all nodes over the network. This is not the best way, because it causes a lot of extra load and increased memory usage, especially if you run cbbackup on one of the Couchbase nodes. I would use the data-copy mode of cbbackup, which copies data directly from the files on disk:
> sudo /opt/couchbase/bin/cbbackup couchstore-files:///opt/couchbase/var/lib/couchbase/data/ /tmp/backup
Of course, change the data path to wherever your Couchbase data is actually stored. (In my example it runs as sudo because only root has read access to /opt/couchbase/blabla..) Do this on every node, then collect all the backup folders and put them somewhere. Note that the backups are very compressible, so you might want to zip them before copying over the network.
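For example (the archive name is just a placeholder, and the path is the one from the command above):

tar -czf node-backup.tar.gz /tmp/backup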
I am trying to reduce the memory used by Apache on the server.
My current Max Connections Per Child value is 10k.
According to the following recommendation, Max Connections Per Child should be reduced to 1000:
http://www.lophost.com/tutorials/how-to-reduce-high-memory-usage-by-apache-httpd-on-a-cpanel-server/
What is the recommended max value for Max Connections Per Child in Apache configuration?
The only time this directive affects anything is when your Apache workers are leaking memory. One way this happens is that memory is allocated (via malloc() or whatever) and never freed; it's the result of design/implementation flaws in Apache or its modules.
This directive is somewhat of a hack, really -- but if there's some module that's loaded into Apache that leaks, say, 8 bytes every request, then after a lot of requests, you'll run out of memory. So the quick fix is to just kill the process every MaxConnectionsPerChild requests and start a new one.
Changing this will only affect your memory usage if, with MaxConnectionsPerChild set to zero, you see memory gradually increase over the span of many requests.
The default is 0 (which implies no maximum number of connections per child), so unless you have memory leakage I'm unaware of any need to change this setting - I agree with Hut8.
Sharing here FYI from the Apache 2.4 Performance Tuning page:
Related to process creation is process death induced by the MaxConnectionsPerChild setting. By default this is 0, which means that there is no limit to the number of connections handled per child. If your configuration currently has this set to some very low number, such as 30, you may want to bump this up significantly. If you are running SunOS or an old version of Solaris, limit this to 10000 or so because of memory leaks.
And from the Apache 2.4 docs on MaxConnectionsPerChild:
Setting MaxConnectionsPerChild to a non-zero value limits the amount of memory that process can consume by (accidental) memory leakage.
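For reference, a minimal sketch of where the directive sits, assuming the prefork MPM (the surrounding values are illustrative only):

<IfModule mpm_prefork_module>
    StartServers             5
    MinSpareServers          5
    MaxSpareServers         10
    MaxRequestWorkers      150
    # recycle each child process after this many connections; 0 means never
    MaxConnectionsPerChild 1000
</IfModule>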