Can a 32-bit process access 64 GB of memory?

I have a strange situation: a server with 64 GB of memory runs a 64-bit SQL Server process which consumes 32 GB of memory. There is about 17 GB of memory available.
MS Dynamics NAV is running on top of SQL Server.
Besides the 64-bit SQL Server process, there is another SQL Server process and a NAS, both running as 32-bit processes.
Every now and then, an error message is logged in the Event Viewer, saying:
There is not enough memory to execute this function.
If you work in a single-user installation, you can try reducing the
value of the 'cache' program property. You can find information about
how to optimize the operating system in the documentation for yo
Now I'm wondering what the problem is, since there is still about 17 GB of memory available. Is it possible that a 32-bit process cannot allocate memory in the last segment (60 to 64 GB)?

32-bit processes are limited to about 4 GB of memory usage. The x64 architecture allows a 32-bit process to be backed by physical memory anywhere in the machine's RAM, but the process itself is still limited by its maximum addressable space (~4 GB, and only 2 GB by default unless the executable is marked large-address-aware).
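If you want to confirm which of the server's processes are actually 32-bit (and therefore capped this way), here is a minimal C# sketch using the IsWow64Process Win32 API; the process name "nas" below is just a placeholder for whatever executable you want to inspect:

    using System;
    using System.Diagnostics;
    using System.Runtime.InteropServices;

    class BitnessCheck
    {
        // IsWow64Process reports whether a process is a 32-bit process
        // running under WOW64 on 64-bit Windows.
        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool IsWow64Process(IntPtr hProcess, out bool wow64Process);

        static void Main()
        {
            // "nas" is a placeholder process name; substitute the actual
            // executable (without .exe) that you want to inspect.
            foreach (Process p in Process.GetProcessesByName("nas"))
            {
                bool isWow64;
                if (IsWow64Process(p.Handle, out isWow64))
                {
                    // A WOW64 process is 32-bit and therefore capped at a
                    // 2-4 GB address space no matter how much RAM is installed.
                    Console.WriteLine(p.ProcessName + " (PID " + p.Id + "): 32-bit = " + isWow64);
                }
            }
        }
    }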

Related

Why does the amount of memory available to x86 applications fluctuate in vb.net? [duplicate]

What is the maximum amount of memory one can use in .NET managed code? Does it depend on the actual architecture (32/64 bits)?
There is no hard, exact figure for .NET code.
If you run on 32-bit Windows, your process can address up to 2 GB, or 3 GB if the /3GB switch is used on Windows Server 2003.
If you run a 64-bit process on a 64-bit box, your process can address up to 8 TB of address space, if that much RAM is present.
This is not the whole story, however, since the CLR takes some overhead for each process. At the same time, .NET will try to allocate new memory in chunks, and if the address space is fragmented, that might mean that you cannot allocate more memory even though some is available.
In C# 2.0 and 3.0 there is also a 2 GB limit on the size of a single object in managed code.
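To see that per-object limit in isolation, here is a small C# sketch: allocating roughly 2.4 GB as a single array throws OutOfMemoryException even when plenty of memory is free (unless you are on .NET 4.5+ with gcAllowVeryLargeObjects enabled):

    using System;

    class SingleObjectLimit
    {
        static void Main()
        {
            try
            {
                // 300 million longs = ~2.4 GB in one object: this exceeds the
                // 2 GB per-object limit and throws even in a 64-bit process
                // (unless gcAllowVeryLargeObjects is enabled on .NET 4.5+).
                long[] big = new long[300000000];
                Console.WriteLine("Allocated " + big.LongLength + " elements.");
            }
            catch (OutOfMemoryException)
            {
                Console.WriteLine("Hit the single-object size limit.");
            }
        }
    }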
The amount of memory your .NET process can address depends both on whether it is running on a 32-bit or 64-bit machine and on whether it is running as a CPU-agnostic or CPU-specific process.
By default a .NET process is CPU agnostic, so it will run with the process type that is natural to the version of Windows: on 64-bit it will be a 64-bit process, and on 32-bit it will be a 32-bit process. You can, however, force a .NET process to target a particular CPU, for example making it run as a 32-bit process on a 64-bit machine.
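A quick way to check which mode a given build actually ends up running in is something like this small C# snippet (nothing assumed beyond .NET 4.0+ for the Environment properties):

    using System;

    class Bitness
    {
        static void Main()
        {
            // An AnyCPU build prints True/True on 64-bit Windows; a build
            // forced to x86 (or marked "Prefer 32-bit") prints False/True.
            Console.WriteLine("64-bit process: " + Environment.Is64BitProcess);
            Console.WriteLine("64-bit OS:      " + Environment.Is64BitOperatingSystem);
            Console.WriteLine("Pointer size:   " + IntPtr.Size + " bytes");
        }
    }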
If you exclude the large-address-aware setting, the breakdown is as follows:
A 32-bit process can address 2 GB.
A 64-bit process can address 8 TB.
Here is a link to the full breakdown of addressable space based on the various options Windows provides.
http://msdn.microsoft.com/en-us/library/aa366778.aspx
For 64-bit Windows the virtual memory size is 16 TB, divided equally between user and kernel mode, so user processes can address 8 TB (8192 GB). That is less than the entire 16 EB space addressable by 64 bits, but it is still a whole lot more than what we're used to with 32 bits.
I have recently been doing extensive profiling around memory limits in .NET in a 32-bit process. We all get bombarded by the idea that we can allocate up to 2 GB (2^31 bytes) in a .NET application, but unfortunately this is not true :(. The application process has that much space to use, and the operating system does a great job managing it for us; however, .NET itself seems to have its own overhead, which accounts for approximately 600-800 MB for typical real-world applications that push the memory limit. This means that as soon as you allocate an array of integers that takes about 1.4 GB, you should expect to see an OutOfMemoryException.
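If you want to measure the practical ceiling on your own machine, a rough probe is simply to allocate chunks until the runtime gives up; compiled as x86, this is a sketch of the kind of profiling described above (the chunk size is arbitrary):

    using System;
    using System.Collections.Generic;

    class PracticalLimitProbe
    {
        static void Main()
        {
            var blocks = new List<byte[]>();
            long totalBytes = 0;
            try
            {
                while (true)
                {
                    // 64 MB chunks: big enough to fill the address space quickly,
                    // small enough to cope with fragmentation better than one
                    // huge allocation would.
                    var block = new byte[64 * 1024 * 1024];
                    blocks.Add(block);
                    totalBytes += block.Length;
                }
            }
            catch (OutOfMemoryException)
            {
                Console.WriteLine("Allocated ~" + (totalBytes / (1024 * 1024)) +
                                  " MB before OutOfMemoryException.");
            }
        }
    }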
Obviously in 64-bit this limit occurs way later (let's chat in 5 years :)), but the general size of everything in memory also grows (I am finding it's ~1.7 to ~2 times) because of the increased word size.
What I know for sure is that the virtual memory idea from the operating system definitely does NOT give you virtually endless allocation space within one process. It is only there so that the full 2 GB is addressable by each of the (many) applications running at one time.
I hope this insight helps somewhat.
I originally answered something related here (I am still a newbie, so I am not sure how I am supposed to do these links):
Is there a memory limit for a single .NET process
The .NET runtime can allocate all the free memory available for user-mode programs in its host. Mind that it doesn't mean that all of that memory will be dedicated to your program, as some (relatively small) portions will be dedicated to internal CLR data structures.
On 32-bit systems, assuming a setup with 4 GB or more (even if PAE is enabled), you should be able to get at the very most roughly 2 GB allocated to your application. On 64-bit systems you should be able to get 1 TB. For more information concerning Windows memory limits, please review this page.
Every figure mentioned there has to be divided by 2, as Windows reserves the higher half of the address space for use by code running in kernel mode (ring 0).
Also, please mind that whenever the limit for a 32-bit system exceeds 4 GB, use of PAE is implied, and thus you still can't really exceed the 2 GB per-process limit unless the OS supports 4GT (the /3GB switch), in which case you can reach up to 3 GB.
Yes, in a 32-bit environment you are limited to a 4 GB address space, but Windows claims about half of it. On a 64-bit architecture it is, well, a lot bigger; I believe it's 4G * 4G.
And on the Compact Framework it is usually in the order of a few hundred MB.
I think the other answers are being quite naive; in the real world, after 2 GB of memory consumption your application will behave really badly. In my experience, GUIs generally become massively clunky and unusable under heavy memory consumption.
This was my experience; obviously the actual cause can be that objects grow too big, so all operations on those objects take too much time.
The following blog post has detailed findings on x86 and x64 max memory. It also has a small tool (source available) which allows easy testing of the different memory options:
http://www.guylangston.net/blog/Article/MaxMemory.

Frequent CPU spike on Openfire on Windows 2008

We are running Openfire version 3.9.1 on a Windows 2008 R2 server in a 64-bit JVM.
Very recently, we have started seeing frequent CPU spikes on the server. The threads that are taking up most of the CPU time are blocked at this offset in the JVM:
jvm!JVM_FindSignal+2d7d
We are not seeing any out-of-memory exceptions. Also, the CPU spike is generally seen during non-peak hours. As a first resolution for this issue we recently increased the max heap memory from 1024 MB to 2048 MB, but that seems to have made the spikes more frequent. The server has a total of 8 GB of memory, of which more than 4 GB is free.
Please see attached screenshot for JVM version.
Any idea what this offset refers to? We are not sure what is stressing the CPU so much, or whether this is an indication of a problem that can get bigger.
Any help is much appreciated
jvm!JVM_FindSignal is an internal function inside the JVM library that listens for signals from the native operating system and returns them to Java.
The signal can be one of SIGABRT, SIGFPE, SIGSEGV, SIGINT, SIGTERM, SIGBREAK, or SIGILL.
We need to inspect vmstat and iostat information to figure out the actual issue.
You can file an issue at http://bugreport.java.com/ with the vmstat and iostat information, and we will get back to you.
You are using JDK 8 Update 91. Please upgrade to the latest version, JDK 8 Update 112.

Microsoft Visual Studio 2015 increase max Process Memory (over 2GB)

On Windows 8:
Is there a way to increase the 2 GB process memory limit? My script needs 2.5 GB of RAM to run, even after I performed garbage collection to the best of my knowledge.
I need to run in 64-bit (this is not related to largeaddressaware).

Total Memory shown in Task Manager less than Hyper-v Manager Assigned Memory

I am running VMs on 2008 R2 and just tried to add memory to one. So I turned the machine off, increased the (static) memory, and started it again. The "Assigned Memory" says "40970 MB", but Windows Task Manager in the VM says "32768" in the total row for physical memory.
Has anyone experienced this before, and can anyone help explain why this is happening and how to address it?
Sounds like this could be a limitation of your guest OS. Please verify that your guest OS supports more than 32 GB; 32 GB is the max for Server 2008 R2 Standard Edition.
According to this article, Hyper-V assigns a memory buffer, which you can edit under the "Memory Management" page, as described in "Step 3".
The reason there is more "Assigned Memory" is that Hyper-V has allocated more RAM to the VM than it is actively using, because the dynamic memory feature is enabled.
The dynamic memory feature lets VMs consume memory dynamically based on the current workload. If an application on a VM is designed to use a fixed amount of memory, it's better to give that VM exactly the amount of memory it needs instead of using dynamic memory, in order to make full use of the installed memory.

Why is there a 2 GB limitation on a redis.io database on a 32-bit machine?

Why is there a 2 GB limitation for a redis.io database on a 32-bit machine? How can I overcome that limitation on a 32-bit machine?
32-bit systems can't handle addresses beyond 2^32, which gives a 4 GB address space, and the available memory for an individual process is obviously going to be lower than that.
The recommended approach is to split your data across multiple smaller redis instances.
This can even make sense on a 64-bit machine, since Redis requires significantly less memory if it can use 32-bit instead of 64-bit pointers for internal addressing.
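As a sketch of the "split across instances" idea, the routing can be as simple as a stable hash over the key. The endpoints below are hypothetical, and you would use your Redis client of choice (for example StackExchange.Redis) to talk to whichever instance a key maps to:

    using System;
    using System.Text;

    class RedisShardPicker
    {
        // Hypothetical endpoints: three small Redis instances on one box.
        static readonly string[] Instances =
        {
            "localhost:6379", "localhost:6380", "localhost:6381"
        };

        // FNV-1a: a stable hash, unlike string.GetHashCode(), so a key
        // always maps to the same instance across runs and machines.
        static uint Fnv1a(string key)
        {
            uint hash = 2166136261u;
            foreach (byte b in Encoding.UTF8.GetBytes(key))
            {
                hash ^= b;
                hash *= 16777619u;
            }
            return hash;
        }

        static string InstanceFor(string key)
        {
            return Instances[Fnv1a(key) % (uint)Instances.Length];
        }

        static void Main()
        {
            Console.WriteLine("user:42   -> " + InstanceFor("user:42"));
            Console.WriteLine("session:9 -> " + InstanceFor("session:9"));
        }
    }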