Cypher query aborts/stops and the memory consumption is not going down - cypher

My query aborts because the memory limit was exceeded. Memgraph keeps running, but even though the query was aborted, the memory isn't being freed. How do I reset this behavior?

You can try running the FREE MEMORY query, which attempts to free unused memory chunks in different parts of the storage.
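For example, from any client session (a minimal sketch; SHOW STORAGE INFO is used here only to compare memory usage before and after, and assumes a Memgraph version that supports it):

SHOW STORAGE INFO;  // note the current memory usage
FREE MEMORY;        // ask the storage to release unused memory chunks
SHOW STORAGE INFO;  // compare the usage after the call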

Related

.NET Core application running on fargate with memory issues

We are running a .NET application on Fargate via Terraform, where we specify CPU and memory in the aws_ecs_task_definition resource.
The service has just one task, e.g.:
resource "aws_ecs_task_definition" "test" {
....
cpu = 256
memory = 512
....
According to the documentation, this is required for Fargate.
You can also specify cpu and memory in the container_definitions, but the documentation states that these fields are optional, and since we were already setting values at the task level we did not set them there.
We observed that our memory kept growing after the tasks started, depending on the application, sometimes quite fast and other times over a longer period.
So we started thinking we had a memory leak and went to profile the applications using the dotnet-monitor tool as a sidecar.
As part of introducing the sidecar, we set cpu and memory values for our .NET application at the container_definitions level.
After we did this, we observed that memory in our applications behaved much better.
From dotnet-monitor traces we see that when we set memory at the container_definitions level:
Working Set is much smaller
Gen 0/1/2 GC count is above 1 (GC occurring early)
Gen 0/1/2 size is smaller
GC Committed Bytes is smaller
So to summarize: when we do not set memory at the container_definitions level, memory keeps growing and no GC occurs until we are almost out of memory.
When we do set memory at the container_definitions level, GC occurs regularly and memory does not spike.
So we have a solution, but we do not understand why this is the case.
We would like to know why.
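For reference, setting the values at the container level looks roughly like this (a sketch only; the container name, image, and values are illustrative, not our actual configuration):

resource "aws_ecs_task_definition" "test" {
  # ...
  cpu    = 256
  memory = 512

  container_definitions = jsonencode([
    {
      name      = "app"            # illustrative
      image     = "my-app:latest"  # illustrative
      cpu       = 256              # container-level CPU units
      memory    = 512              # container-level hard memory limit, in MiB
      essential = true
    }
  ])
}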

Is redis using cache?

I restarted my Redis server after 120 days.
Before the restart, memory usage was 29.5 GB.
After the restart, memory usage was 27.5 GB.
So where does the 2 GB reduction come from?
Is Redis freeing memory in RAM as described in this article? https://redis.io/topics/memory-optimization
Redis will not always free up (return) memory to the OS when keys are removed. This is not something special about Redis, but it is how most malloc() implementations work. For example, if you fill an instance with 5GB worth of data and then remove the equivalent of 2GB of data, the Resident Set Size (also known as the RSS, which is the number of memory pages consumed by the process) will probably still be around 5GB, even if Redis will claim that the user memory is around 3GB. This happens because the underlying allocator can't easily release the memory. For example, often most of the removed keys were allocated in the same pages as the other keys that still exist.
The previous point means that you need to provision memory based on your peak memory usage. If your workload from time to time requires 10GB, even if most of the time 5GB could do, you need to provision for 10GB.
However, allocators are smart and are able to reuse free chunks of memory, so after you free 2GB of your 5GB data set, when you start adding more keys again you'll see the RSS (Resident Set Size) stay steady and not grow more as you add up to 2GB of additional keys. The allocator is basically trying to reuse the 2GB of memory previously (logically) freed.
Because of all this, the fragmentation ratio is not reliable when your memory usage at peak was much larger than the currently used memory. The fragmentation is calculated as the physical memory actually used (the RSS value) divided by the amount of memory currently in use (the sum of all the allocations performed by Redis). Because the RSS reflects the peak memory, when the (virtually) used memory is low since a lot of keys / values were freed, but the RSS is high, the ratio RSS / mem_used will be very high.
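The figures involved can be read directly from the server; a quick sketch, assuming redis-cli against the local instance (the numbers are illustrative and match the 5GB/3GB example above):

redis-cli INFO memory
used_memory_human:3.00G
used_memory_rss_human:5.00G
mem_fragmentation_ratio:1.67

Here mem_fragmentation_ratio is the RSS divided by the used memory, roughly 5/3 in this illustrative case; a high value right after deleting many keys usually reflects pages the allocator has kept rather than a leak.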
Or did it free memory from OS caches that were used by my Redis server?
Is Redis using the cache? A cache of a cache?
Thanks!

Activity monitor - Memory usage when profiling / not profiling

Any idea why my app's memory usage does not increase while using the Instruments profiler (searching for leaks), but does when I don't use any profiler? To the tune of 1MB per operation performed. Instruments does not show any leaks.
OS memory management is a complex thing. It is likely that when you free memory it is not returned immediately to the system, but instead it is still "attached" to your process to make any future allocations your application needs more efficient. Although it is recorded as part of your process's memory space, it would be marked as unused, and when the system is running out of memory (or when your application exits), it would then reclaim the unused memory from your application.
If Instruments isn't reporting any leaks, you should be fine.

How to reduce commit size of memory in vb.net application?

I am developing a Windows application in VB.NET. My problem is that after the application has been running for some time, the commit size of its memory increases. I have used memory profilers (ANTS Profiler, CLR Profiler) to identify the problem in the application. They suggested disposing of objects that remain alive or are not unregistered after the form is closed. Accordingly, I disposed of all the objects that could contribute to the memory leak.
But I still can't get the commit size to come back down once it goes high.
Can anyone give me a suggestion on what to do?
The .NET garbage collector does not guarantee to free memory in any particular timeframe. It might, for example, wait until the memory is needed before freeing up used memory.
You can force a garbage collection by calling:
GC.Collect()
These articles explain things in a bit more depth:
http://msdn.microsoft.com/en-us/library/ms973837.aspx
http://www.simple-talk.com/dotnet/.net-framework/understanding-garbage-collection-in-.net/

Recover memory from w3wp.exe

Is it possible to recover memory lost from w3wp.exe? I thought a Session.Abandon() should clear up resources like that? The thing is, I have a web application, and certain pages make w3wp.exe grow significantly, like from 40 MB to 400 MB. Now I am definitely going to optimize my code to reduce this, but for whatever amount w3wp.exe grows, is there no way to recover the lost memory even when the user has logged out and closed the browser?
I know this worker process will recycle after 30 minutes (the default) of idle time, but what if there is no idle time for a long period and the worker process still holds that portion of memory and just keeps growing? Any thoughts, people?
The garbage collector will take care of whatever memory needs to be freed, provided that you dispose things correctly, etc. The GC doesn't immediately kick in every time you call Session.Abandon(), as that would be a major performance hit.
That said, every application has a "normal" memory usage, i.e. a stable memory usage (again, provided you don't have leaks), and this figure is different for every application. 400MB can be a lot or it can be nothing, depending on what your app does. I have apps that hover around 400MB and others around 1.5GB and that's OK as long as memory usage stabilizes somewhere. If you see unbounded memory usage then you most likely have a leak somewhere in your app.
Storing large amounts of data in the in-proc session can also quickly rack up the memory usage. Instead, use a file or a database to store this data.
Unless you are leaking memory, the memory manager will reuse it, so you should not see the process memory keep growing.