VB.NET: holding a big vector of structures in memory

I need to load a huge dataset and keep it in a vector.
I have managed to keep memory usage just under the roughly 1.2 GB limit that applications seem to be allowed.
But when I tried to serialize the vector, VB.NET finally gave up and threw an out-of-memory exception.
I have tried just about everything; I cannot reduce the size of the dataset.
In fact, it is already heavily reduced.
Changing the internals of my application is not an option.
Is there any way to allow my application to use more than 1.2 GB (or whatever this magic limit is)?
I tried the same in VB6, and the limit was also 1.2 GB.

This was what I Googled:
.net memory limit for application
These are the first 4 results on Google:
Hitting a memory limit slows down the .Net application
.Net Memory limit
Is there a memory limit for a single .NET process
Memory limitations in a 64-bit .Net application?


What is the maximum amount of memory one can use in .NET managed code? Does it depend on the actual architecture (32/64 bit)?
There is no hard, exact figure for .NET code.
If you run on 32-bit Windows, your process can address up to 2 GB, or 3 GB if the /3GB switch is used on Windows Server 2003.
If you run a 64-bit process on a 64-bit box, your process can address up to 8 TB of address space, though how much of that is actually usable depends on the RAM and page file available.
This is not the whole story, however, since the CLR takes some overhead for each process. At the same time, .NET tries to allocate new memory in chunks, and if the address space is fragmented, that might mean you cannot allocate more memory even though some is available.
There is also a 2 GB limit on the size of a single object in managed code; this is a CLR limitation rather than something specific to C# 2.0 or 3.0.
The amount of memory your .NET process can address depends both on whether it is running on a 32- or 64-bit machine and on whether it runs as a CPU-agnostic or CPU-specific process.
By default a .NET process is CPU agnostic, so it runs with the process type that is natural to the version of Windows: on 64-bit Windows it will be a 64-bit process, and on 32-bit Windows a 32-bit process. You can, however, force a .NET process to target a particular CPU, for example making it run as a 32-bit process on a 64-bit machine.
Excluding the large-address-aware setting, the breakdown is as follows:
A 32-bit process can address 2 GB.
A 64-bit process can address 8 TB.
Here is a link to the full breakdown of addressable space based on the various options Windows provides.
http://msdn.microsoft.com/en-us/library/aa366778.aspx
For 64-bit Windows the virtual address space is 16 TB, divided equally between user and kernel mode, so user processes can address 8 TB (8192 GB). That is far less than the full 16 EB theoretically addressable with 64 bits, but it is still a whole lot more than what we're used to with 32 bits.
I have recently been doing extensive profiling around memory limits in .NET in a 32-bit process. We are all bombarded with the idea that we can allocate up to 2 GB (2^31 bytes) in a .NET application, but unfortunately this is not true :(. The application process has that much address space to use, and the operating system does a great job of managing it for us; however, .NET itself seems to have its own overhead, which accounts for approximately 600-800 MB in typical real-world applications that push the memory limit. This means that as soon as you allocate an array of integers that takes about 1.4 GB, you should expect an OutOfMemoryException.
Obviously this limit occurs much later on 64-bit (let's chat again in 5 years :)), but the general size of everything in memory also grows (I am finding it is roughly 1.7 to 2 times larger) because of the increased word size.
What I know for sure is that the operating system's virtual memory definitely does NOT give you virtually endless allocation space within one process. It is only there so that the full 2 GB is addressable by all the (many) applications running at one time.
I hope this insight helps somewhat.
I originally answered something related here (I am still a newbie, so I am not sure how I am supposed to do these links):
Is there a memory limit for a single .NET process
The .NET runtime can allocate all of the free memory available to user-mode programs on its host. Mind that this doesn't mean all of that memory will be dedicated to your program, as some (relatively small) portions will be dedicated to internal CLR data structures.
On 32-bit systems, assuming a setup with 4 GB or more (even if PAE is enabled), you should be able to get at the very most roughly 2 GB allocated to your application. On 64-bit systems you should be able to get 1 TB. For more information concerning Windows memory limits, please review this page.
Every figure mentioned there has to be divided by 2, as Windows reserves the upper half of the address space for code running in kernel mode (ring 0).
Also note that whenever the limit for a 32-bit system exceeds 4 GB, use of PAE is implied, and you still can't really exceed the 2 GB per-process limit unless the OS supports 4GT, in which case you can reach up to 3 GB.
Yes, in a 32-bit environment you are limited to a 4 GB address space, but Windows claims about half of it. On a 64-bit architecture it is, well, a lot bigger; I believe it's 4G * 4G.
And on the Compact Framework it is usually on the order of a few hundred MB.
I think the other answers are being quite naive; in the real world, after 2 GB of memory consumption your application will behave really badly. In my experience, GUIs generally become massively clunky and unusable under heavy memory consumption.
That was my experience; of course, the actual cause can be that objects grow too big, so that all operations on them take too much time.
The following blog post has detailed findings on x86 and x64 max memory. It also has a small tool (source available) which allows easy testing of the different memory options:
http://www.guylangston.net/blog/Article/MaxMemory.

How to properly assign huge heap space for JVM

I'm trying to work around an issue that has been bugging me for a while. In a nutshell: on what basis should one assign the max heap space for a resource-hogging application, and is there a downside to it being too large?
I have an application used to visualize huge medical data sets, which can eat up several gigabytes of memory if several imaging volumes are opened side by side. Caching the data to be viewed is essential for a fluent workflow. The software runs on Windows workstations and is started with a bootloader, which assigns the heap size and launches the main application. The actual memory needed by the main application is directly proportional to the data being viewed and cannot be determined by the bootloader, because that would require reading the data, which would ultimately consume too much time.
So, to ensure that the JVM has enough memory during launch, we currently set -Xmx as large as we dare, based on the maximum physical memory of the workstation. However, is there any downside to this? I've read (in a post from 2008) that it is possible for native processes to hog excess heap space, which can lead to memory errors during runtime. Should I maybe also sniff for free virtual memory or paging-file size prior to assigning heap space? How would you deal with this situation?
Oh, and this is my first post to these forums. Nice to meet you all and be gentle! :)
Update:
Thanks for all the answers. I'm not sure if I phrased it well, but my problem arose from the fact that I have zero knowledge of the hardware this software will be run on, yet I would nevertheless like to assign as much heap space to the software as possible.
I settled on assigning a heap of 70% of physical memory if a sufficient amount of virtual memory is available, and less otherwise.
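As an illustration only, here is a minimal sketch of how a bootloader might compute that 70% figure before launching the main application. It assumes a HotSpot-style JVM where the com.sun.management extension of OperatingSystemMXBean is available; the viewer.jar command line and the 0.70 factor are placeholders, not details from the question.

import com.sun.management.OperatingSystemMXBean;
import java.lang.management.ManagementFactory;

public class Bootloader {
    public static void main(String[] args) throws Exception {
        // Query physical memory through the HotSpot-specific MXBean.
        OperatingSystemMXBean os = (OperatingSystemMXBean)
                ManagementFactory.getOperatingSystemMXBean();
        long physicalBytes = os.getTotalPhysicalMemorySize();

        // Give 70% of physical RAM to the heap, as described above.
        long heapMb = (long) (physicalBytes * 0.70) / (1024 * 1024);
        String xmx = "-Xmx" + heapMb + "m";

        // Launch the main application with the computed heap limit.
        new ProcessBuilder("java", xmx, "-jar", "viewer.jar")
                .inheritIO()
                .start()
                .waitFor();
    }
}

In practice you would also want to subtract a margin for the OS and other processes, as the answers below point out.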
You can have heap sizes of around 28 GB with little impact on performance, especially if you have large objects (lots of small objects can increase GC pause times).
Heap sizes of 100 GB are possible but have downsides, mostly because they can have high pause times. If you use Azul Zing, it can handle much larger heap sizes significantly more gracefully.
The main limitation is the size of your physical memory. If your heap exceeds that, your application and your computer will run very slowly or become unusable.
A standard way around these issues in mapping software (which has to be able to map the whole world, for example) is to break your images into tiles. This way you only load and display the tiles that are on the screen (or the portions that are on the screen). If you need to be able to zoom in and out, you might need to store the data at two to four levels of scale. Using this approach you can view a map of the whole world on your phone.
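As a rough sketch of the tiling idea (the 256-pixel tile size and the index layout are made up for illustration), the core of it is mapping the viewport rectangle onto a small range of tile indices and loading only those tiles:

public class TileGrid {
    // Hypothetical fixed tile edge length in pixels.
    static final int TILE_SIZE = 256;

    // Returns {firstCol, firstRow, lastCol, lastRow}: the inclusive range
    // of tile indices intersecting the viewport, so only those tiles need
    // to be loaded and drawn.
    static int[] visibleTiles(int viewX, int viewY, int viewWidth, int viewHeight) {
        int firstCol = viewX / TILE_SIZE;
        int firstRow = viewY / TILE_SIZE;
        int lastCol = (viewX + viewWidth - 1) / TILE_SIZE;
        int lastRow = (viewY + viewHeight - 1) / TILE_SIZE;
        return new int[] { firstCol, firstRow, lastCol, lastRow };
    }
}

For zooming, the same computation is repeated per scale level against that level's own tile set.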
It is best not to set the JVM max memory to more than 60-70% of workstation memory, and in some cases even lower, for two main reasons. First, what the JVM consumes on the physical machine can be 20% or more greater than the heap, due to GC mechanics. Second, the representation of a particular data entity in the JVM heap may not be the only physical copy of that entity in the machine's RAM, since the OS keeps caches and buffers around the various IO devices from which it reads these objects.

MongoDB high CPU usage

I have installed MongoDB 2.4.4 on Amazon EC2 with a 64-bit Ubuntu OS and 1.6 GB of RAM.
Only MongoDB is running on this server, nothing else.
But sometimes CPU usage reaches 99% and the load average is 500.01, 400.73, 620.77.
I have also installed MMS on the server to monitor what's going on.
Here are the MMS details.
As per the MMS details, indexing is working perfectly for each query.
The suspect details are as below:
1) HIGH non-mapped virtual memory
2) HIGH page faults
Can anyone help me understand what exactly is causing the high CPU usage?
EDIT:
After the comments from @Dylan Tong, I have reduced the active connections, but there is still high non-mapped virtual memory.
Here's a summary of a few things to look into:
1. Observed a large number of connections and cursors (13k):
- fix: make sure your connection pool size is appropriate. For reporting, at your current request rate, you only need a few connections at most (see the driver sketch after this list). Also, I'm guessing you have an m1.small instance, which means you only have 1 core.
2. Review queries and indexes:
- run your queries with explain() to observe how they are executed. The right model normally results in queries pulling only a few documents and making use of an index.
3. Memory (compact and readahead setting):
- make the best use of memory. 1.6 GB is low. Check how much free memory you have, and compare it to what is reported as resident. A common cause of low resident memory is fragmentation: if there are a lot of documents moving around or changing size, you should run the compact command to defragment your data files. A bad readahead setting can also lead to poor use of memory. Check your readahead setting (http://manpages.ubuntu.com/manpages/lucid/man2/readahead.2.html) and try a few values, starting with low ones (http://docs.mongodb.org/manual/administration/production-notes/). The production notes recommend 32 (for standard 512-byte blocks); sometimes higher values are optimal if your documents are larger. The hope is that resident memory ends up close to your available memory and your page faults start to drop.
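As a rough illustration of the connection-pool point in item 1 (this assumes the MongoClientOptions API of the MongoDB Java driver; the host name and pool size are placeholders, not values taken from the question):

import com.mongodb.MongoClient;
import com.mongodb.MongoClientOptions;
import com.mongodb.ServerAddress;

public class PooledClient {
    public static void main(String[] args) {
        // Cap the pool at a handful of connections instead of the driver
        // default; a reporting workload on a single-core instance only
        // needs a few connections at most.
        MongoClientOptions options = MongoClientOptions.builder()
                .connectionsPerHost(5)
                .build();

        MongoClient client = new MongoClient(
                new ServerAddress("example-host", 27017), options);

        // Reuse this single client (and its pool) across the whole
        // application instead of opening new connections per request.
        // ... application work goes here ...
        client.close();
    }
}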
If you're using resources to the fullest after this, and you're still capped out on CPU then it means you need to up your resources.

boost::serialization high memory consumption during serialization

Just as the topic suggests, I've come across a slight issue with boost::serialization when serializing a huge amount of data to a file. The problem is that the memory footprint of the serialization part of the application is around 3 to 3.5 times that of the objects being serialized.
It is important to note that the data structure I have is a three dimensional vector of base class pointers and a pointer to that structure. Like this:
using namespace std;
vector<vector<vector<MyBase*> > >* data;
This is later serialized with code analogous to this:
ar & BOOST_SERIALIZATION_NVP(data);
boost/serialization/vector.hpp is included.
Classes being serialised all inherit from "MyBase".
Now, since the start of my project I've used different archives for serialization: the typical binary archive, text, XML, and finally polymorphic binary/XML/text. Every single one of them behaves in exactly the same way.
Typically this wouldn't be a problem if I had to serialize small amounts of data, but the number of objects I have is in the millions (ideally around 10 million), and the memory usage, as far as I've been able to test it, consistently shows that the memory allocated by the boost::serialization part of the code is around 2/3 of the application's whole memory footprint while writing the file.
This amounts to around 13.5 GB of RAM taken for 4 million objects, where the objects themselves take 4.2 GB. This is as far as I've been able to take my code, since I don't have access to a machine with more than 8 GB of physical RAM. I should also note that this is a 64-bit application being run on Windows 7 Professional x64, but the situation is similar on an Ubuntu box.
Does anyone have an idea how I would go about troubleshooting this? It is unacceptable for me to have such high memory requirements for an application that will not use as much memory while running as it does while serializing.
Deserialization isn't as bad; it allocates around 1.5 times the needed memory. That is something I could live with.
I tried turning tracking off with boost::archive::archive_flags::no_tracking, but it behaves exactly the same.
Anyone have any idea what I should do?
Using Valgrind, I found that the main source of memory consumption is a map inside the library that is used to track pointers. If you are certain that you do not need pointer tracking (i.e. you are sure there is no pointer aliasing), disable tracking. You can find the main concepts of disabling tracking here. In short, you must do something like this:
BOOST_CLASS_TRACKING(vector<vector<vector<MyBase*> > >, boost::serialization::track_never)
In my question I wrote a version of this macro with which you can disable tracking of a template class. This should have a significant impact on your memory consumption.
Also note that there are pointers inside the containers; if you want track_never, you must disable tracking for them too. Currently I have not found a way to do this properly.

What is the memory footprint for .NET Framework Compact Edition?

What is the memory footprint for .NET Framework Compact Edition?
Thanks.
According to this Wikipedia page, it's about 12 MB.
But then again, this page says it'll run in 128 KB to 1 MB.
My guess is that it's going to vary based on how much memory you have available and it'll swap pieces in and out of memory depending on circumstances. Quoting from the second link:
Random access memory (RAM) is used to store dynamic data structures and JIT-compiled code. The .NET Compact Framework uses available RAM, up to a limit specified by the device, to cache generated code and data structures and then frees the memory when appropriate.
The common language runtime uses a code-pitching technique to free blocks of JIT-compiled code at run time when memory is low. This enables larger programs to run on RAM-constrained systems with minimal performance penalty.
Although this article is not about the compact framework (it's about the micro version), it shows a comparison between the Micro and Compact frameworks, noting that the .NET Compact Framework has a memory footprint of 12 MB.