Currently I am using IntelliJ IDEA 14.0.3 (earlier I was using 12.1.4) on 64-bit Windows 8.1.
When you install it, the installer creates shortcuts in the Start menu and elsewhere that default to the 32-bit .exe, even on a 64-bit system.
I know I can use the 64-bit executable to run IDEA in 64-bit mode, as described in this SO answer.
But is there any significant performance difference between the two versions of the IDE?
And which executable is recommended for 64-bit systems? Should I keep using the 32-bit version, or switch to the 64-bit one?
The difference between running the 32-bit and the 64-bit launcher is which JRE is used to start the IDE and which .vmoptions file supplies the JVM parameters passed to it.
When starting the 32-bit one, IDEA uses its own bundled 32-bit JRE. If there is none, IDEA tries to find a 32-bit JRE in several places, in a specific order (%IDEA_HOME%, %JDK_HOME%, %JAVA_HOME%). The values in idea.exe.vmoptions are passed to it.
When starting the 64-bit one, it tries to find a 64-bit JRE in several places, in a specific order. The values in idea64.exe.vmoptions are passed to it.
So if you want to allocate 2 GB of RAM or more (with -Xmx), that is not going to happen with 32-bit Java (and hence not with 32-bit IDEA). For large projects, using less than 2 GB causes the IDE to hang a lot; for smaller projects I don't think you'll feel any difference.
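For illustration, an idea64.exe.vmoptions tuned for a large project might look like this (a minimal sketch; the values are examples, not IDEA's shipped defaults):

    -Xms512m
    -Xmx2048m
    -XX:ReservedCodeCacheSize=240m
    -ea

A 2048m heap like this only starts reliably under the 64-bit launcher; a 32-bit JVM on Windows generally cannot reserve a contiguous 2 GB heap.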
For reference, this is the bug report about it; so far they are not acting on it:
https://youtrack.jetbrains.com/issue/IDEA-146040
Related
What is the maximum amount of memory one can use in .NET managed code? Does it depend on the actual architecture (32/64 bits)?
There is no hard, exact figure for .NET code.
If you run on 32-bit Windows, your process can address up to 2 GB, or 3 GB if the /3GB switch is used on Windows Server 2003.
If you run a 64-bit process on a 64-bit box, your process can address up to 8 TB of address space, if that much RAM plus page file is present to back it.
This is not the whole story, however, since the CLR takes some overhead for each process. At the same time, .NET will try to allocate new memory in chunks, and if the address space is fragmented, that might mean that you cannot allocate more memory even though some is available.
In C# 2.0 and 3.0 there is also a 2 GB limit on the size of a single object in managed code.
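To see that single-object limit in isolation, here is a minimal C# console sketch (assuming a plain console project; on .NET 4.5+ the <gcAllowVeryLargeObjects> config element relaxes the byte-size limit for 64-bit processes):

    using System;

    class SingleObjectLimit
    {
        static void Main()
        {
            try
            {
                // int.MaxValue bytes is just over the per-object cap,
                // so this throws even with plenty of free RAM.
                byte[] huge = new byte[int.MaxValue];
                Console.WriteLine("Allocated {0} bytes", huge.LongLength);
            }
            catch (OutOfMemoryException)
            {
                Console.WriteLine("A single object of ~2 GB was rejected.");
            }
        }
    }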
The amount of memory your .NET process can address depends both on whether it is running on a 32-bit or 64-bit machine and on whether it is running as a CPU-agnostic or CPU-specific process.
By default a .NET process is CPU agnostic, so it runs with the process type natural to the version of Windows: on 64-bit Windows it will be a 64-bit process, on 32-bit Windows a 32-bit process. You can, however, force a .NET process to target a particular CPU and, say, make it run as a 32-bit process on a 64-bit machine.
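A quick way to check which mode your process actually ended up in (Environment.Is64BitProcess and Environment.Is64BitOperatingSystem exist from .NET 4.0 on):

    using System;

    class BitnessCheck
    {
        static void Main()
        {
            // 4 bytes in a 32-bit process, 8 bytes in a 64-bit one.
            Console.WriteLine("Pointer size:   {0} bytes", IntPtr.Size);
            Console.WriteLine("64-bit process: {0}", Environment.Is64BitProcess);
            Console.WriteLine("64-bit OS:      {0}", Environment.Is64BitOperatingSystem);
        }
    }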
If you exclude the large-address-aware setting, the breakdown is as follows:
a 32-bit process can address 2 GB
a 64-bit process can address 8 TB
Here is a link to the full breakdown of addressable space based on the various options Windows provides.
http://msdn.microsoft.com/en-us/library/aa366778.aspx
For 64-bit Windows the virtual address space is 16 TB, divided equally between user and kernel mode, so user processes can address 8 TB (8192 GB). That is far less than the full 16 EB theoretically addressable with 64 bits, but still a whole lot more than what we're used to with 32 bits.
I have recently been doing extensive profiling around memory limits in .NET in a 32-bit process. We all get bombarded by the idea that we can allocate up to 2 GB (2^31 bytes) in a .NET application, but unfortunately this is not true. The process has that much address space, and the operating system does a great job of managing it for us; however, .NET itself has overhead that accounts for approximately 600-800 MB in typical real-world applications that push the memory limit. That means that as soon as you allocate arrays of integers totalling about 1.4 GB, you should expect to see an OutOfMemoryException.
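A rough sketch of that kind of experiment (chunk size is arbitrary; actual results vary with runtime version and address-space fragmentation):

    using System;
    using System.Collections.Generic;

    class AllocateUntilOom
    {
        const int ChunkBytes = 64 * 1024 * 1024; // 64 MB per chunk

        static void Main()
        {
            var blocks = new List<byte[]>();
            long total = 0;
            try
            {
                while (true)
                {
                    blocks.Add(new byte[ChunkBytes]);
                    total += ChunkBytes;
                }
            }
            catch (OutOfMemoryException)
            {
                // In a 32-bit process this typically lands well below 2 GB.
                Console.WriteLine("OutOfMemoryException after ~{0} MB",
                                  total / (1024 * 1024));
            }
        }
    }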
Obviously in 64-bit this limit is hit much later (let's chat again in five years :)), but the overall size of everything in memory also grows (I am finding roughly 1.7 to 2 times) because of the larger word size.
What I know for sure is that the operating system's virtual memory definitely does NOT give you virtually endless allocation space within one process. It is only there so that the full 2 GB is addressable by all the (many) applications running at one time.
I hope this insight helps somewhat.
I originally answered something related here (I am still a newbie, so I am not sure how I am supposed to do these links):
Is there a memory limit for a single .NET process
The .NET runtime can allocate all the free memory available for user-mode programs in its host. Mind that this doesn't mean all of that memory will be dedicated to your program, as some (relatively small) portions will be dedicated to internal CLR data structures.
In 32-bit systems, assuming a setup with 4 GB or more (even if PAE is enabled), you should be able to get at the very most roughly 2 GB allocated to your application. On 64-bit systems you should be able to get 1 TB. For more information concerning Windows memory limits, please review this page.
Every figure mentioned there has to be divided by 2, as Windows reserves the upper half of the address space for code running in kernel mode (ring 0).
Also, mind that whenever the limit for a 32-bit system exceeds 4 GB, use of PAE is implied, and thus you still can't really exceed the 2 GB per-process limit unless the OS supports 4GT, in which case you can reach up to 3 GB.
Yes, in a 32-bit environment you are limited to a 4 GB address space, but Windows claims about half of it. On a 64-bit architecture it is, well, a lot bigger; I believe it's 4G * 4G, i.e. 2^64 bytes.
And on the Compact Framework it is usually on the order of a few hundred MB.
I think the other answers are quite naive; in the real world, after 2 GB of memory consumption your application will behave really badly. In my experience, GUIs generally become massively clunky and unusable under heavy memory consumption.
That was my experience; the actual cause may of course be that objects grow too big, so all operations on them take too much time.
The following blog post has detailed findings on x86 and x64 max memory. It also has a small tool (source available) which allows easy testing of the different memory options:
http://www.guylangston.net/blog/Article/MaxMemory
Why is there a 2 GB limit on a redis.io database on a 32-bit machine, and how can I overcome that limitation on a 32-bit machine?
32-bit systems can't address more than 2^32 bytes, which is 4 GB, and the memory available to an individual process is obviously going to be lower than that.
The recommended approach is to split your data across multiple smaller redis instances.
This can even make sense on a 64-bit machine, since redis requires significantly less memory when it can use 32 bits instead of 64 bits for internal addressing.
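As a sketch of that approach, here is minimal client-side sharding in C# (assuming two local redis instances on ports 6379 and 6380 and the StackExchange.Redis client; a production setup would use consistent hashing so keys don't move when shards are added):

    using System;
    using StackExchange.Redis;

    class ShardedRedis
    {
        // Assumed: two redis instances running locally on these ports.
        static readonly ConnectionMultiplexer[] Shards =
        {
            ConnectionMultiplexer.Connect("localhost:6379"),
            ConnectionMultiplexer.Connect("localhost:6380"),
        };

        // A deterministic hash of the key picks the shard.
        static IDatabase DbFor(string key)
        {
            int h = 0;
            foreach (char c in key) h = h * 31 + c;
            return Shards[(h & 0x7FFFFFFF) % Shards.Length].GetDatabase();
        }

        static void Main()
        {
            DbFor("user:42").StringSet("user:42", "alice");
            Console.WriteLine(DbFor("user:42").StringGet("user:42"));
        }
    }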
I have created an application in .NET. When I compile a 64-bit version and a 32-bit version of the same software, the 64-bit executable is smaller.
However, when you run them both, the 64-bit version uses more RAM.
I'm sure something is happening "under the hood", and I was just interested in why. (It's not a worry either way.)
Thanks.
EDIT: C#.NET 4.0 if it matters.
In 32-bit applications pointers are 32 bits, i.e. 4 bytes, whereas in 64-bit applications they are 64 bits, i.e. 8 bytes. So pointers (e.g. object references) take up twice as much memory.
Also, in 32-bit applications objects have an overhead of 12 bytes per object, whereas in 64-bit applications they have an overhead of 24 bytes. Double again.
These effects show up at runtime, not in the DLL size.
Pointers are twice as big in 64-bit mode. That could explain some (sometimes much) of the difference in RAM usage.
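A crude way to measure this yourself (exact numbers depend on runtime version and padding, but the 32-bit vs 64-bit gap should show):

    using System;

    class Node
    {
        // Three reference fields: 12 bytes of references on 32-bit,
        // 24 bytes on 64-bit, before the object header is counted.
        public Node Next;
        public Node Prev;
        public object Payload;
    }

    class ObjectSizeDemo
    {
        static void Main()
        {
            const int Count = 1000000;
            var nodes = new Node[Count];           // array allocated up front
            long before = GC.GetTotalMemory(true);  // so it isn't counted below
            for (int i = 0; i < Count; i++) nodes[i] = new Node();
            long after = GC.GetTotalMemory(true);
            Console.WriteLine("~{0} bytes per object", (after - before) / Count);
            GC.KeepAlive(nodes);
        }
    }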
Is there a way to convert from the OMF 16-bit object file format to the COFF 32-bit object file format?
I seriously doubt one exists. Code designed to run in a 16-bit environment is binary-incompatible with 32-bit mode. For example, there is a prefix that tells the CPU to flip the operand size for the upcoming instruction: in 16-bit mode that prefix is needed to use 32-bit operands, but the very same prefix byte is needed to use 16-bit operands in 32-bit mode.
Whether a series of opcodes is to be decoded as 16-bit or 32-bit is specified in the segment descriptor.
Anyway, if you have 16-bit code that you'd like to use in 32-bit mode and it has no OS dependencies, you can disassemble it using IDA and then recompile it with a 32-bit assembler. Of course, only if that's permitted by its license (although this could be fair use, but IANAL).
If the code is also tied to the underlying OS, this could be a lot more difficult and would require rewriting perhaps significant portions of the code.
Presumably the OMF16 code targets 16-bit x86 real mode or 286 protected mode? That being the case, the object file format is not really your issue; the code itself is entirely incompatible, since it uses different register sizes and a different addressing scheme.
Moreover, if the code targets DOS, Win16 or OS/2 (i.e. the systems that used OMF16), then retargeting it to a 32-bit platform is not just a case of converting the object file format.
You need to rebuild from source, which, given the tags on the question, is either C or C++. Either that, or you have a significant reverse-engineering task on your hands!
I've searched on the net, and found these links:
The first one is a collection of tools:
http://sourceware.org/binutils/
The second one is a tool I think you need:
http://sourceware.org/binutils/docs/binutils/objcopy.html
They do not work in all cases (see bazsi77 above), so just test it.
I am not sure what is meant by 16-bit or 32-bit applications. Is a 16-bit application one that will never require more than 2^16 bytes of memory? Does the 16 bits refer to the maximum size of the application?
It means the application has been compiled for a processor with 16 bits of memory addressing, or 32 bits of memory addressing; the same goes for 64-bit applications.
The number refers to the maximum amount of memory that the application can address.
See Wikipedia: 16-bit, 32-bit, 64-bit (and more).
A 32-bit application is software that runs in a 32-bit flat address space.
Answers to common questions
Will a 64-bit CPU run a standard (32-bit) program on a 64-bit version of an OS?
Yes, it will. 64-bit systems are backward compatible with their 32-bit counterparts.
Will a 64-bit OS run a standard application on a 64-bit processor?
Again, it will, because of backward compatibility.
Can I run W2K and WXP on a 64-bit CPU and use old software?
Yes, a 32-bit OS (W2K or WXP) will run on a 64-bit processor, and you should also be able to run "old software" on a 64-bit OS.
The number (16 or 32) in the assembler directive for the address mode (e.g. "[use16]" and "[use32]") does not refer to the maximum amount of memory that the application can address!
On the 80386 and later it is also possible to use operand-size and address-size prefixes in combination with 16-bit protected mode to address up to 4 GB of RAM.
(The maximum amount of memory that our application can use is determined by the segment-size entries inside the GDT/LDT descriptors, or by the default segment size of 64 KB.)
The only difference between the 32-bit and the 16-bit address modes is the meaning and usage of those operand-size and address-size prefixes.
[use16]
If we want to use 32-bit operands/addresses in the 16-bit address mode, we have to add those prefixes to our opcode. Without those prefixes we can use only 16-bit operands/addresses.
[use32]
In the 32-bit address mode the situation is diametrically opposite: if we want to use 32-bit operands/addresses, we leave those prefixes out of our opcode, and only if we want to use 16-bit operands/addresses do we add them.
If we use these size directives above (or similar notation) carefully, our assembler will do this job for us.
Operand size prefix in 16-bit mode
Dirk