MS Access 32-bit or 64-bit - VBA

I have created an Access 2016, 32-bit application that is about to be deployed. Every user will access the application through an RDP connection to a Windows Server 2012 R2 server (I doubt it will ever be installed on any other platform). As far as each user is concerned there will be nothing on the server except the Access database (using an Access runtime license), and each machine will be pretty much locked down.
All of the libraries used in the Access application are available in both 32-bit and 64-bit formats, and the back-end databases are usually small (typically between 10MB and 200MB). That said, the compiled Access front-end is about 18MB and runs some rather complex functions/calculations (but nothing on a scientific level).
So finally, my question. Would there be any real benefit in moving to a 64-bit version (in either the short or long-term)?

Short answer: No. The 64-bit version doesn't give you any useful advantages, but you will have some additional compatibility problems.

I would say this answer strongly depends on how much RAM your application will need, because a 32-bit process is limited to roughly 2 GB of address space.
So this is not really a question of benefits or performance, but of not running into RAM issues if the amount of data increases.
See also Choose between the 64-bit or 32-bit version of Office.
Besides that I see no advantages in choosing 64 bit.

Related

Why does the amount of memory available to x86 applications fluctuate in vb.net? [duplicate]

What is the maximum amount of memory one can achieve in .NET managed code? Does it depend on the actual architecture (32/64 bits)?
There is no hard, exact figure for .NET code.
If you run on 32-bit Windows, your process can address up to 2 GB (3 GB if the /3GB switch is used on Windows Server 2003).
If you run a 64 bit process on a 64 bit box your process can address up to 8 TB of address space, if that much RAM is present.
This is not the whole story, however, since the CLR takes some overhead for each process. At the same time, .NET will try to allocate new memory in chunks, and if the address space is fragmented, that might mean that you cannot allocate more memory, even though some is available.
In C# 2.0 and 3.0 there is also a 2 GB limit on the size of a single object in managed code.
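To make that per-object cap concrete, here is a minimal sketch (assuming an older CLR, or .NET 4.5+ without the gcAllowVeryLargeObjects config switch enabled): a single allocation near 2 GB fails even when the process has plenty of free address space.

    using System;

    class SingleObjectLimit
    {
        static void Main()
        {
            try
            {
                // int.MaxValue bytes is just under 2 GB; with array
                // overhead this trips the per-object cap, even in a
                // 64-bit process with memory to spare.
                byte[] huge = new byte[int.MaxValue];
                Console.WriteLine("Allocated {0} bytes", huge.LongLength);
            }
            catch (OutOfMemoryException)
            {
                Console.WriteLine("Hit the ~2 GB single-object limit.");
            }
        }
    }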
The amount of memory your .NET process can address depends both on whether it is running on a 32/64 bit machine and whether or not it is running as a CPU agnostic or CPU specific process.
By default a .NET process is CPU agnostic so it will run with the process type that is natural to the version of Windows. In 64 bit it will be a 64 bit process, and in 32 bit it will be a 32 bit process. You can force a .NET process though to target a particular CPU and say make it run as a 32 bit process on a 64 bit machine.
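As a quick illustration (a sketch for .NET 4.0 or later; earlier versions can infer the same from IntPtr.Size alone), you can check at runtime which way a CPU-agnostic assembly was actually loaded:

    using System;

    class BitnessCheck
    {
        static void Main()
        {
            // IntPtr.Size is 4 in a 32-bit process and 8 in a 64-bit one.
            Console.WriteLine("Pointer size:   {0} bytes", IntPtr.Size);
            Console.WriteLine("64-bit process: {0}", Environment.Is64BitProcess);
            Console.WriteLine("64-bit OS:      {0}", Environment.Is64BitOperatingSystem);
        }
    }

To pin a process to a particular CPU type you would compile with the x86 (or x64) platform target, or flip the flag on an existing assembly with the corflags tool.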
If you exclude the large address aware setting, the breakdown is as follows:
32 bit process can address 2GB
64 bit process can address 8TB
Here is a link to the full breakdown of addressable space based on the various options Windows provides.
http://msdn.microsoft.com/en-us/library/aa366778.aspx
For 64 bit Windows the virtual memory size is 16 TB divided equally between user and kernel mode, so user processes can address 8 TB (8192 GB). That is less than the entire 16 EB space addressable by 64 bits, but it is still a whole lot more than what we're used to with 32 bits.
I have recently been doing extensive profiling around memory limits in .NET in a 32-bit process. We all get bombarded by the idea that we can allocate up to 2 GB (2^31 bytes) in a .NET application, but unfortunately this is not true :(. The application process has that much address space to use, and the operating system does a great job managing it for us; however, .NET itself seems to have its own overhead, which accounts for approximately 600-800MB in typical real-world applications that push the memory limit. This means that as soon as you allocate an array of integers that takes about 1.4GB, you should expect an OutOfMemoryException.
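One rough way to reproduce this kind of measurement is to keep allocating modest chunks until the runtime gives up (a sketch only; the figure you get depends on fragmentation, the GC, and whatever else the process has loaded):

    using System;
    using System.Collections.Generic;

    class MemoryProbe
    {
        static void Main()
        {
            // Compile with the x86 platform target to see the 32-bit
            // wall; in a 64-bit process this will chew through physical
            // memory and the page file instead.
            const int chunkMB = 64;
            var chunks = new List<byte[]>();
            try
            {
                // Hold on to every chunk so the GC cannot reclaim them.
                while (true)
                {
                    chunks.Add(new byte[chunkMB * 1024 * 1024]);
                }
            }
            catch (OutOfMemoryException)
            {
                Console.WriteLine("Gave up after ~{0} MB", chunks.Count * chunkMB);
            }
        }
    }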
Obviously in 64bit, this limit occurs way later (let's chat in 5 years :)), but the general size of everything in memory also grows (I am finding it's ~1.7 to ~2 times) because of the increased word size.
What I know for sure is that the operating system's virtual memory definitely does NOT give you virtually endless allocation space within one process. It is only there so that the full 2 GB is addressable by all the (many) applications running at one time.
I hope this insight helps somewhat.
I originally answered something related here (I am still a newbie, so I am not sure how I am supposed to do these links):
Is there a memory limit for a single .NET process
The .NET runtime can allocate all the free memory available for user-mode programs in its host. Mind that it doesn't mean that all of that memory will be dedicated to your program, as some (relatively small) portions will be dedicated to internal CLR data structures.
In 32-bit systems, assuming a setup with 4GB or more (even if PAE is enabled), you should be able to get at the very most roughly 2GB allocated to your application. On 64-bit systems you should be able to get 1TB. For more information concerning Windows memory limits, please review this page.
Every figure mentioned there has to be divided by 2, as Windows reserves the upper half of the address space for code running in kernel mode (ring 0).
Also, please mind that whenever the limit for a 32-bit system exceeds 4GB, use of PAE is implied, and thus you still can't really exceed the 2GB limit unless the OS supports 4GT, in which case you can reach up to 3GB.
Yes, in a 32-bit environment you are limited to a 4GB address space, but Windows claims about half of it. On a 64-bit architecture it is, well, a lot bigger. I believe it's 4G * 4G (that is, 2^64 bytes, or 16 EB).
And on the Compact Framework it is usually in the order of a few hundred MB.
I think the other answers are quite naive; in the real world, after 2GB of memory consumption your application will behave really badly. In my experience, GUIs generally become massively clunky and unusable under heavy memory consumption.
That was my experience; obviously the actual cause can be that objects grow too big, so all operations on those objects take too much time.
The following blog post has detailed findings on x86 and x64 max memory. It also has a small tool (source available) which allows easy testing of the different memory options:
http://www.guylangston.net/blog/Article/MaxMemory

Reducing the impact on diskspace when loading new software on a dev machine.

TL;DR
A noob wants to set up a dev machine/workspace on old hardware running Windows 10 and load up 5+ software programs with a file size and disk impact similar to Visual Studio. He wants to reduce the impact these programs have on his already resource-scarce laptop. Buying new hardware is the last resort; what is a viable workaround?
I have a laptop that I use for school and I am looking into using it as a development workspace (Visual Studio, SSMS, .NET, JetBrains, GitHub Desktop, Infragistics Studio and the works). However, I also don't want these programs to slow down my regular student workflow (Word, Excel, browser) and take up resources. Additionally, some of the development programs I intend to only test-drive during their trial period, so I don't want them to stick around in my file system. A lot of what these programs do overlaps, so eventually I will be removing the ones that are not a good fit for what I am doing (training for web development).
My area of concern is that memory usage per Task Manager floats around 50% and disk hits 99% on a regular basis. My goal is to reduce the impact of loading even more software onto my computer. It currently has the basic office programs for school, but I think the cause of the bloat is that it is a 4-year-old computer (Lenovo IdeaPad Z370: Intel Core i5-2410M dual-core, 4GB DDR3-1333 RAM, 500GB 5400RPM HDD), which may not be the most optimal hardware to run Windows 10 on.
To address this problem, could I just load my development programs onto an external hard drive and then connect it to the laptop only when I am in "developer workflow"?
I've done some initial research, and this solution is said to be non-viable because programs vary in portability. If this is the case, could you propose alternatives, such as loading the programs into a VM and connecting to it when I need them? What are other possible solutions to my resource problem?
I have a dropbox account and a onedrive account and a $25 Azure Credit provided by the school which I have at my disposal. Solution should be cost-effective. Goal is to squeeze the last ounce of value of current hardware before upgrading.
Thanks in Advance! #noob
Hello all, I found what I was looking for!
Azure has a "Developer Ready" VM image. The VM comes with Visual Studio and other helpful tools preloaded. However, you need an MSDN subscription and a Windows 10 Professional product key. I had neither, so I went with another option: a VM with SQL Server preloaded. From there I was able to load up all the demoware and tools as well as SSMS. I can now access my tools through RDP from work, home, school, or any other MS machine. Best of all, I don't need to buy new hardware, and the pay-per-minute billing keeps the price within my allotted Azure credits.
TL;DR
Free VM to tap into my dev space and develop from anywhere

What is the memory footprint for .NET Framework Compact Edition?

What is the memory footprint for .NET Framework Compact Edition?
Thanks.
According to this Wikipedia page, it's about 12MB.
But then again, this page says it'll run in 128KB to 1MB.
My guess is that it's going to vary based on how much memory you have available and it'll swap pieces in and out of memory depending on circumstances. Quoting from the second link:
Random access memory (RAM) is used to store dynamic data structures and JIT-compiled code. The .NET Compact Framework uses available RAM, up to a limit specified by the device, to cache generated code and data structures and then frees the memory when appropriate.
The common language runtime uses a code-pitching technique to free blocks of JIT-compiled code at run time when memory is low. This enables larger programs to run on RAM-constrained systems with minimal performance penalty.
Although this article is not about the compact framework (it's about the micro version), it shows a comparison between the Micro and Compact frameworks, noting that the .NET Compact Framework has a memory footprint of 12 MB.

Is it possible, by any stable method, to enable ReadyBoost on Windows Server 2008? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 4 years ago.
I know the standard answer is No. However, hear out the reasons for wanting it, and then we'll consider whether it is possible to achieve the same effect as ReadyBoost, either by enabling (and installing) ReadyBoost or by using third-party software.
Reasons for using Windows Server 2008 as a development environment on a laptop:
64-Bit, so you get the full use of 4GB RAM.
SharePoint developer, so you can run SharePoint locally and debug successfully.
Hyper-V, so you get hardware virtualisation of test environments and the ability to demo full solutions stored in Hyper-V on the road
So all of that equals: Windows Server 2008 (64) on a laptop.
Now, because we are running Hyper-V, we require a large volume of disk space. This means we are using a 5,000 rpm 250GB HDD.
So we are on a laptop, we cannot use a solid-state drive, and we only have 4GB of RAM and the throughput of a laptop motherboard rather than a server one... all of which means we are not flying... this thing isn't a sluggard, but it's not zippy either.
Windows Server 2008 is based on the same code base as Vista. Vista features ReadyBoost, which enables USB 2 flash devices to be used as a weak cache for system files, which visibly increases the performance of Vista. As the codebases are similar, it should be possible for ReadyBoost to work on WS2008; however, Microsoft has not shipped or enabled ReadyBoost in WS2008.
Given that we are running WS2008 on a laptop as a development environment, how can we achieve the performance gains of ReadyBoost through the use of flash devices in Windows Server 2008?
For the answer to be accepted it must outline an end-to-end process for achieving the performance gain.
Answers of 'No' will not be accepted as I understand some third party tools achieve some of the functionality, but I haven't seen a full end-to-end description of how to get going with them.
With virtual machines, the answer to "do you really need so much memory" is a resounding YES. Trying to run 4-6 virtual machines, each configured with 512MB or more, really stresses the system.
The ability to use ANYTHING as additional virtual memory is key.
Is everything that's installed 64-bit?
Do you have hardware virtualization capabilities, and is it turned on in the BIOS?
Have you enabled SuperFetch?
Turn off Desktop Experience.
And last but not least, have a look at this article and see if it gives you any pointers.
To add: it doesn't look like there is a reasonable way of using ReadyBoost on WS2008.
OK, so this isn't quite ReadyBoost, but the end result should be quite similar. Here is a YouTube video you can follow on how to do this on Vista - WS2008 should be no different.
http://www.youtube.com/watch?v=A0bNFvCgQ9w
Also, you may want to upgrade the hard drive on your laptop:
Recommended: ST9500420ASG 500GB 7200RPM 16MB SATA w/ G-Shock Sensor.

Which RDBMS should I use?

I have developed a high-speed transactional server for transferring data over the internet, so I do not need to rely upon a database implementation like MySQL to provide this. That opens up the question of which SQL database to use.
I really like SQLite, but I am not convinced it is industrial strength yet. What I do like is how lightweight it is on resources.
I loathed MySQL 8 years ago, but now it obviously IS industrial strength, and my partners use it, so it is the obvious choice on the server side. If I use it, I will just be connecting through "localhost" to the installed server (Windows service). My concern is about the memory usage.
I DO NOT load the result set into memory, but I notice about 6MB for the first connection. I am hoping subsequent connections do not take an additional 6MB each!
If I use the libmysqld.dll embedded library, does each new connection load a new instance of the embedded client/server code into memory? We assume so, since each process will have its own in-process memory...
Regardless, the manual states that when using the libmysqld embedded server, the memory benefits are essentially lost when retrieving results row by row, because "memory usage incrementally increases with each row retrieved until mysql_free_result() is called."
http://dev.mysql.com/doc/refman/5.1/en/mysql-use-result.html
This means I must use the installed service. But is this as fast as the embedded server?
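For reference, going through the installed service over localhost should still let me stream rows one at a time rather than buffer the whole result set on the client. A hedged sketch using MySQL Connector/NET (the connection string and table here are hypothetical):

    using System;
    using MySql.Data.MySqlClient; // MySQL Connector/NET

    class StreamRows
    {
        static void Main()
        {
            // Hypothetical credentials for the local MySQL Windows service.
            var connStr = "Server=localhost;Database=mydb;Uid=user;Pwd=pass;";
            using (var conn = new MySqlConnection(connStr))
            {
                conn.Open();
                using (var cmd = new MySqlCommand("SELECT id, name FROM items", conn))
                using (var reader = cmd.ExecuteReader())
                {
                    // The data reader pulls rows incrementally, so client
                    // memory stays flat regardless of result-set size.
                    while (reader.Read())
                    {
                        Console.WriteLine("{0}: {1}", reader.GetInt32(0), reader.GetString(1));
                    }
                }
            }
        }
    }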
Are there any other low-cost flavors that have high reliability?
SQLite is used in more applications than any other DB. (Citation required).
There are some issues with MySQL, like the fact that it doesn't respect foreign key integrity constraints.
I'm currently a fan of PostgreSQL, which is also freely available (and, if you read MySQL's licensing, PostgreSQL actually turns out to have a more amenable license for commercial use). It seems to be higher performance than SQLite, which probably has more to do with it running on an SMP machine and making use of different threads. It also seems to be quite solid.
Sorry to be pedantic, but the title should really be "Which RDBMS?" - the way it's phrased makes about as much sense as "Which Java?" or "Which Internet?"...