What causes a Vue.js memory leak? - vue.js

A memory leak occurs in a browser with limited memory (448 MB). The Chrome debugger was used to determine the cause, and the graph from the test is shown in the image below. It has been confirmed that JS Heap, Nodes, and EventHandlers all drop at the time of GC. Can the increase in Nodes and EventHandlers also cause a memory leak?
Unfortunately, we can't allocate more memory...

An increase in Nodes means more retained DOM elements, so it will contribute to the memory leak. Beyond that, there are other factors that can cause memory leaks. Please check the link below to learn ways to find and fix those problems:
https://nolanlawson.com/2020/02/19/fixing-memory-leaks-in-web-applications/

Related

NuttX heap allocation failed: heap size is zero

I'd like to allocate some memory on mm_heap, but its size is zero:
debug mm_heap
This causes the memory allocation to fail.
How can I debug this problem?
For reference, I'm using NuttX on an STM32F765.
The heap size is zero because nothing was ever added to the heap. You can see this because the number of memory regions (mm_nregions) is also zero.
Memory regions are added to the heap by mm_addregion(), which is called from mm_initialize(). It is guaranteed to be called at least once to add at least one memory region; if the number of memory regions is zero, that call failed for some reason.
The only way that the function can fail is if it is passed bad parameters. Those parameters come from the implementation of up_allocate_heap() that you are using.
So what you must do is look at up_allocate_heap() to understand what is being passed. Then perhaps put a breakpoint on mm_addregion() to see exactly what it is unhappy about.
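As a rough illustration of what to look at, below is a sketch of a typical up_allocate_heap() for an ARM port. The symbol names g_idle_topstack, CONFIG_RAM_START, and CONFIG_RAM_SIZE are assumptions borrowed from common NuttX ports, not from the asker's board code; the real logic typically lives in the chip's stm32_allocateheap.c.

/* Sketch of a typical up_allocate_heap() for an ARM port.  The symbol
 * names used here (g_idle_topstack, CONFIG_RAM_START, CONFIG_RAM_SIZE)
 * are assumptions based on common NuttX ports; check the
 * stm32_allocateheap.c of your own chip for the real logic.
 */

#include <nuttx/config.h>
#include <nuttx/arch.h>

#include <stdint.h>
#include <sys/types.h>

/* Set by the startup code to the end of the IDLE thread's stack
 * (assumed name).
 */

extern uintptr_t g_idle_topstack;

void up_allocate_heap(FAR void **heap_start, size_t *heap_size)
{
  /* The heap normally starts where the IDLE stack ends and runs to the
   * end of internal SRAM.  If either value is wrong (for example zero,
   * or beyond the end of RAM), mm_addregion() is handed a bad region
   * and the heap size stays zero, which is exactly the symptom in the
   * question.
   */

  uintptr_t end = CONFIG_RAM_START + CONFIG_RAM_SIZE;

  *heap_start = (FAR void *)g_idle_topstack;
  *heap_size  = end - g_idle_topstack;
}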
Thank you very much for your answer.
I was able to solve the problem.
There was a little mix-up in stm32_boot.c and stm32_appinitialize.c in my program (copy-paste error).
Also, I had not activated "BOARD_LATE_INITIALIZE" in menuconfig -> RTOS Features -> RTOS hooks.
Therefore, the GPIO initialization function was called before the initialization of the heap, which caused the error I described in the question.

How can I change the maximum available heap size for a task in FreeRTOS?

I'm creating a list of elements inside a task in the following way:
l = (dllist*)pvPortMalloc(sizeof(dllist));
dllist is 32 bytes big.
My embedded system has 60 kB of SRAM, so I expected that my 200-element list could be handled easily by the system. I found out that after allocating space for 8 elements, the system crashes on the 9th malloc call (256 bytes+).
If possible, where can I change the heap size inside FreeRTOS?
Can I somehow query the current status of the heap?
I couldn't find this information in the documentation so I hope somebody can provide some insight in this matter.
Thanks in advance!
(Yes - FreeRTOS pvPortMalloc() returns void*.)
If you have 60K of SRAM, and configTOTAL_HEAP_SIZE is large, then it is unlikely you are going to run out of heap after allocating 256 bytes unless you had hardly any heap remaining beforehand. Many FreeRTOS demos will just keep creating objects until all the heap is used, so if your application is based on one of those, then you would be low on heap before your code executed. You may also have done something like use up loads of heap space by creating tasks with huge stacks.
heap_4 and heap_5 will combine adjacent blocks, which will minimise fragmentation as far as practical, but I don't think that will be your problem - especially as you don't mention freeing anything anywhere.
Unless you are using heap_3.c (which just makes the standard C library malloc and free thread safe) you can call xPortGetFreeHeapSize() to see how much free heap you have. You may also have xPortGetMinimumEverFreeHeapSize() available to query how close you have ever come to running out of heap. More information: http://www.freertos.org/a00111.html
You could also define a malloc() failed hook (http://www.freertos.org/a00016.html) to get instant notification of pvPortMalloc() returning NULL.
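As a rough sketch (assuming heap_4 or heap_5, where both query functions are available), checking the remaining heap and catching failed allocations might look like the following; the hook is only called if configUSE_MALLOC_FAILED_HOOK is set to 1 in FreeRTOSConfig.h, and vReportHeap() is just an illustrative helper name:

#include <stdio.h>

#include "FreeRTOS.h"
#include "task.h"

/* Illustrative helper: log the current free heap and the low-water
   mark (both provided by heap_4/heap_5). */
void vReportHeap(void)
{
    size_t xFree = xPortGetFreeHeapSize();
    size_t xMinEver = xPortGetMinimumEverFreeHeapSize();

    printf("free heap: %u bytes, minimum ever free: %u bytes\n",
           (unsigned) xFree, (unsigned) xMinEver);
}

/* Called by the kernel when pvPortMalloc() is about to return NULL,
   provided configUSE_MALLOC_FAILED_HOOK is 1 in FreeRTOSConfig.h. */
void vApplicationMallocFailedHook(void)
{
    /* Stop here (or log and reset) so the failing allocation is
       immediately visible in the debugger. */
    taskDISABLE_INTERRUPTS();
    for (;;)
    {
    }
}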
For the standard allocators you will find a config option in FreeRTOSConfig.h.
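For reference, the relevant lines in FreeRTOSConfig.h might look like the excerpt below; the 20 KB figure is only an illustrative assumption, not a recommendation, and has to be chosen to fit in the 60 kB of SRAM after task stacks and static data:

/* FreeRTOSConfig.h (excerpt; the values are assumptions for illustration) */
#define configTOTAL_HEAP_SIZE           ( ( size_t ) ( 20 * 1024 ) )
#define configUSE_MALLOC_FAILED_HOOK    1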
However:
It is very well possible you have already run out of memory, depending on the allocator used. IIRC there is one that does not free() any blocks (free() is just a dummy), so any block returned is lost. This is still useful if you only allocate memory, e.g. at startup, and then work with what you've got.
Other allocators might just not merge adjacent blocks once they are returned, increasing fragmentation much faster than a full-blown allocator would.
Also, you might lose memory to fragmentation. Depending on your alloc/free pattern, you might quickly end up with a heap that looks like Swiss cheese: many holes between allocated blocks. So while there is still enough free memory in total, no single block is big enough for the size required.
If you only allocate blocks of that size there, you might be better off using your own allocator or a pool (blocks of fixed size). That would be statically allocated (e.g. an array) and chained into a linked list during startup. Alloc/free would then just be a push/pop on a stack (or put/get on a queue). That would also be very fast and have complexity O(1) (and be interrupt-safe if properly written).
Note that normal malloc()/free() are not interrupt-safe.
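A minimal sketch of such a fixed-size pool is shown below, assuming the 32-byte element size and 200-element count from the question; all names (pool_init, pool_alloc, pool_free, POOL_BLOCKS) are made up for illustration, and the push/pop would need a critical section or a FreeRTOS queue if blocks are taken or returned from ISRs or from multiple tasks:

#include <stddef.h>

#define POOL_BLOCKS     200   /* assumption: enough for the 200-element list */
#define POOL_BLOCK_SIZE 32    /* sizeof(dllist) according to the question */

/* Each free slot doubles as a node of the free list; the union keeps
   the storage suitably aligned for a pointer. */
typedef union pool_block
{
    union pool_block *next;
    unsigned char storage[POOL_BLOCK_SIZE];
} pool_block_t;

static pool_block_t pool[POOL_BLOCKS];  /* statically allocated, no heap use */
static pool_block_t *free_list;

/* Chain every block onto the free list once during startup. */
void pool_init(void)
{
    size_t i;

    for (i = 0; i < POOL_BLOCKS - 1; i++)
    {
        pool[i].next = &pool[i + 1];
    }

    pool[POOL_BLOCKS - 1].next = NULL;
    free_list = &pool[0];
}

/* O(1) alloc: pop the head of the free list; returns NULL when the
   pool is exhausted. */
void *pool_alloc(void)
{
    pool_block_t *blk = free_list;

    if (blk != NULL)
    {
        free_list = blk->next;
    }

    return blk;
}

/* O(1) free: push the block back onto the free list. */
void pool_free(void *p)
{
    pool_block_t *blk = p;

    blk->next = free_list;
    free_list = blk;
}

A task would then call pool_alloc()/pool_free() instead of pvPortMalloc()/vPortFree(); wrapping the two list operations in taskENTER_CRITICAL()/taskEXIT_CRITICAL() makes the pool safe to share between tasks.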
Finally: do not cast the void *. (Well, that is actually what standard malloc() returns, and I expect the FreeRTOS variant does the same.)

How to reduce the commit size of memory in a VB.NET application?

I am developing a Windows application in VB.NET. My problem is that after the application has been running for some time, the commit size of its memory increases. I have used memory profilers (ANTS Profiler, CLR Profiler) to identify the problem in the application. They suggested disposing of objects that are still alive or not unregistered after the form is closed. Accordingly, I disposed of all the objects that could contribute to the memory leak.
But I still can't get the commit size to come back down once it has gone up.
Can anyone give me a suggestion on what to do?
The .NET garbage collector does not guarantee to free memory in any particular timeframe. It might, for example, wait until the memory is needed before freeing up used memory.
You can force a garbage collection by calling
GC.Collect
These articles explain things in a bit more depth:
http://msdn.microsoft.com/en-us/library/ms973837.aspx
http://www.simple-talk.com/dotnet/.net-framework/understanding-garbage-collection-in-.net/

Is too much memory allocation in Instruments bad?

I am playing around with Instruments. I just recorded/profiled for memory leaks; I had very few leaks, but an overwhelming number of allocations just keeps growing, even right after my app opens. Here is a screenshot after using the app for less than 10 seconds.
And as I keep using the app it just keeps increasing and increasing.
The weirdest part is that most of the allocations are coming from frameworks and libraries I don't know, like:
Foundation
Altitude
libdispatch.dylib
But it could be from SBJson and the other classes I imported and added for JSON and XML.
But is this a lot of memory allocation? Is too much bad?
Yes and no; it depends on what you are doing. If you allocate, for example, a lot of strings, let's say 1000 of them, those allocations per se are not bad. It depends on the logical view of your application: if you really need all 1000 strings at once, allocated and alive through every step of your application, then there is nothing you can do; your application simply needs a lot of memory.
On the other hand, you may find ways to structure your application differently; for example, you could allocate each of the 1000 strings only once you actually need it.
A very abstract answer: if your app requires a lot of memory and there is no way to apply techniques such as lazy loading or caching, then you don't have any other option.
But if you can restructure your application to use lazy loading, caching, or allocation pools, that would be better.
Please note that you can let the iOS SDK help you by correctly implementing the memory warning callbacks in your application, so that whenever you receive a warning you start releasing any resources you don't currently need.
Also, do you have Zombies on? Zombies default to not actually removing any allocations, so memory is never deallocated. Always test for memory leaks with Zombies off.

Memory usage drops after a week

I have an app written in VB.NET with WinForms that shows some stats and pictures on a big-screen monitor. I also monitor the memory usage of said app by using this:
Process.WorkingSet64
I know Windows does not always report the correct usage, but I just wanted to check whether I had any little memory leaks, which I did, but they are solved now. The first week the memory usage was around 100 MB, and the second week it showed around 50 MB.
So why did it all of a sudden drop while still running the exact same code?
I can hardly imagine that the garbage collector kicked in this late, since the app refreshes every 10 seconds and has ample time in between those refreshes to do its thing.
Or perhaps there is just a better, more reliable way to get the memory usage of a process.
Process.WorkingSet64 doesn't report the full memory usage; it omits the memory that has been swapped to disk:
The value returned by this property represents the current size of working set memory used by the process. The working set of a process is the set of memory pages currently visible to the process in physical RAM memory. These pages are resident and available for an application to use without triggering a page fault. (MSDN)
Even if your system was never low on free memory, you may have minimized the application window, which caused Windows to trim its working set.
If you want to look for memory leaks you should probably use Process.PrivateMemorySize64 instead. Your shared memory is going to contain just executable code and it's going to remain more or less constant throughout the life of the process, so you should focus on the private memory.