Will more CPU cache help compilation/development in Visual Studio 2008? - hardware

I'm thinking of getting a new laptop to replace my current machine. I notice a lot of machines have the P8xx and T9xxx Intel Core 2 Duo. The T9xxx carry a price premium, but I believe they have 6 MB of cache compared to 3 MB in the P8xx. Will this help with compilation times or anything else? Or should I invest the premium in more RAM rather than the extra cache?
I do a lot of web work in Visual Studio 2008, plus some C++/MFC. I just want to balance my budget around my needs without overkill. Thanks.

Usually extra cache is not as helpful as more CPU cores (which allow parallel builds, provided your dependency tree isn't strictly sequential) or a faster CPU itself - but the result still varies with the actual project you work on.

I don't know whether more cache will help; it can't hurt, I imagine. There are a couple of things that helped my Visual Studio performance.
Put as much RAM in your system as possible. RAM is cheap, so you should max out your machine.
Go to your power options and make sure your CPU is running at full speed. For instance, on my machine with Vista installed, switching the power options from "Balanced" to "High performance" roughly doubled the speed of compiles.

Related

IntelliJ IDEA: Improving performance when it becomes incredibly slow at times

I have these settings:
-server
-Xms2048m
-Xmx8096m
-XX:MaxPermSize=2048m
-XX:ReservedCodeCacheSize=2048m
-XX:+UseConcMarkSweepGC
-XX:SoftRefLRUPolicyMSPerMB=512
-ea
-Dsun.io.useCanonCaches=false
-Djava.net.preferIPv4Stack=true
-XX:+HeapDumpOnOutOfMemoryError
-Dawt.useSystemAAFontSettings=lcd
Yes, they are maxed out.
I have also lowered idea.max.intellisense.filesize from 2500 to:
idea.max.intellisense.filesize=500
I am working on a Java project that mostly behaves fine, although in some Java classes the IDE is slow at times, for example when just editing a String.
However, today I am touching some HTML, CSS and JavaScript files and it is just getting slower and slower.
CPU usage is not increasing considerably; it is just slow.
I am in debug mode most of the time, but I don't have automatic build on save.
What other parameters can I increase/decrease to make it run more smoothly?
Right now it is not able to provide me with any help.
I have 24 GB of RAM and an i7-4810MQ, so it's a pretty powerful laptop.
According to this JetBrains blog post, you can often double the performance of IDEA by fixing various NTFS-related disk issues:
If you are running a Windows machine with NTFS disks, there is a good chance to double the performance of IntelliJ IDEA by optimizing the MFT tables, disk folder structure and Windows paging file.
We have used the Diskeeper 2007 Pro Trial version tool to carry out the following procedure. You may, of course, repeat this with your favorite defragmenter, provided it supports equivalent functionality.
Free about 25% space on the drive you are optimizing.
Turn off any real-time antivirus protection and reboot your system.
Defragment files.
Defragment the MFT (do a Frag Shield, if you are using Diskeeper). Note that this is quite a lengthy process which also requires your machine to reboot several times.
Defragment the folder structure (perform the Directory consolidation).
Defragment the Windows paging file.
The above optimizations have a positive impact not only on IntelliJ IDEA, but on general system performance as well.
You could open VisualVM, YourKit or another profiler and see what exactly is slow.
Or just report a performance problem.
VisualVM alone will tell you whether the CPU time is being spent on garbage collection or on normal work.
A large heap provides a considerable benefit only when garbage collection causes lags or eats too much CPU. Also, if you enable the memory indicator via Settings | Show Memory Indicator, you will see how much of the heap is occupied and when GC clears it.
BTW you absolutely need an SSD.

Basic virtualization questions

Excuse my lack of knowledge, but I am really new to the virtualization world and have a few questions.
I work for a small charity who specialise in providing basic IT training. We have recently acquired a few Dell PowerEdge 2650 servers and Dell desktops, and we wish to offer XP, Windows 7, Mac and Ubuntu training. I am looking at setting up a virtual environment so that we can have a standard image for each OS (I currently use image files, but it takes approximately 25 minutes to build each machine, and multi-boot is not an option as the new machines have 20 GB disks).
The servers are all dual-processor and we can purchase more memory (I need to justify the cost).
What are the memory requirements for the host?
How many VMs can I run per server?
Can I run multiple instances of the same VM?
Thanks in advance for your knowledge.
Darryn
You might be able to get away with a multi-boot option on those 20 GB disks; each OS will probably take no more than ten gigabytes for a minimal install, so two OSes per machine isn't terrible. (Incidentally, look around for a group like FreeGeek in your area -- hard drives in the 120-500 GB range ought to be cheap.)
That said, virtualization might be just what you need, if you have a handful of pretty powerful machines.
I think allowing one to two gigabytes of host memory for every guest VM you want to run would be sensible. In my experience, an Ubuntu image I gave 1024 MB ran very quickly, but I didn't push it very far. Running Firefox or OpenOffice inside the VM would probably demand more memory fairly quickly; Chrome seemed snappy.
So, if you've got 12 gigabytes of RAM, you might be able to host between four and twenty virtual machines simultaneously, depending on what your guests are doing.
As for disk space, if you use QEMU's -snapshot option, you ought to be able to save disk space. Each user could boot the same underlying disk image, but their own modifications would go into the 'snapshot' file. (I have no experience trying to do long-term system maintenance with this option, so it could be that all twenty of your users need to store service pack 2 contents when they upgrade in the future; I'd be scared of trying to modify the shared disk image once you've got snapshots of it running. Perhaps having everyone store 'personal documents' and the like in CIFS shares would make a ton of sense.)
The biggest hurdle will probably be Mac; because the Apple terms of service forbid running OS X on non-Apple hardware, you'll have to have some Apple machines around to run VirtualBox.

Creating a heater application

This might seem weird, but I'm interested in turning my computer into an electric heater, that is, writing an application that heats up my PC, and I need some help.
I have already made an application that runs infinite loops on the GPU (using a little shader) and on the CPU cores. However, I'm interested in getting the RAM going too, as well as the various output ports. For the RAM heating, should I just allocate a big buffer and start randomly reading and writing to it from all 8 cores?
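Something like the following minimal sketch is what I have in mind for the RAM part (untested; it assumes a C++11 compiler, and the buffer size and thread count are just placeholders):
// Untested sketch: spawn one worker per core and have each randomly
// read-modify-write a large private buffer forever. Sizes are placeholders.
#include <cstddef>
#include <random>
#include <thread>
#include <vector>

int main() {
    const std::size_t bytesPerThread = 256u * 1024u * 1024u;  // 256 MB each
    const unsigned workers = 8;                                // one per core
    std::vector<std::thread> pool;
    for (unsigned i = 0; i < workers; ++i) {
        pool.emplace_back([bytesPerThread] {
            std::vector<unsigned char> buffer(bytesPerThread);
            std::mt19937_64 rng(std::random_device{}());
            std::uniform_int_distribution<std::size_t> pick(0, bytesPerThread - 1);
            for (;;) {                                         // kill the process to stop
                std::size_t j = pick(rng);
                buffer[j] = static_cast<unsigned char>(buffer[j] + 1);
            }
        });
    }
    for (auto &t : pool) t.join();                             // never returns
}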
And what about keeping the CD-ROM, floppy drive and so on busy; how do I do that?
How about a heater with a purpose? Just run World Community Grid and create tons of heat while making your computer do valuable computations for science. It runs the processors wide open, is stable, and isn't just wasting cycles.
Have a look at How to stress test a computer. If you're interested in making your own, try searching for open-source stress-test software that you could modify to your liking.
Use FurMark together with LinX/Prime95. Max out your settings. Make sure you have a strong enough PSU.
There's a torture test option for CPU & RAM in Prime95 that looks like what you want. As for the GPU, there is FurMark, which achieves the same kind of stress.
The heat from the other components will likely not be relevant (unless you have something really specific like a PhysX card) if you stress your CPU and GPU enough, IMHO.

How To Simulate Lower CPU Processor Machines For Browser Testing

We have some users on machines with less powerful CPUs, and they're encountering slow response times in our web application. Is there any way for me to test by simulating lower CPU speeds?
For example, my machine runs at 2.3 GHz; can I lower it to 1.6 GHz or less so that I can test with that?
BTW, our customers are using Windows, and I have to simulate the low computing power with Internet Explorer as the browser.
On most new CPUs the multiplier can easily be lowered (Intel: SpeedStep, AMD: PowerNow!); this is normally used to save power. With RMClock you can manually adjust the multiplier, and thus lower the frequency and make your PC slower. I use this tool myself, so I can tell you that it works.
http://cpu.rightmark.org/products/rmclock.shtml
The Bochs emulator (pronounced "box") allows you to set an instructions-per-second directive. It's probably the slowest emulator out there as it is, though...
Create some virtual machines. You can use Virtual PC or VirtualBox; both are free.
I would recommend starting something in the background that eats up all your processor cycles, e.g. a program that finds prime numbers or something similar.
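For example, a naive prime counter like the untested sketch below (plain C++; the progress interval is arbitrary) will keep one core pegged; start one copy per core you want to saturate:
// Untested sketch: a naive prime counter that simply burns CPU cycles.
// Run one instance per core you want to keep busy.
#include <cstdint>
#include <iostream>

static bool isPrime(std::uint64_t n) {
    if (n < 2) return false;
    for (std::uint64_t d = 2; d * d <= n; ++d)
        if (n % d == 0) return false;
    return true;
}

int main() {
    std::uint64_t count = 0;
    for (std::uint64_t n = 2; ; ++n) {           // loops forever; kill it to stop
        if (isPrime(n)) ++count;
        if (n % 1000000 == 0)
            std::cout << count << " primes found below " << n << "\n";
    }
}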
Another option in addition to those above is to boot Windows in a lower-resource configuration. Go to the Start menu, select Run and type MSCONFIG. On the Boot tab, click Advanced Options and limit the memory and number of processors. It's not as robust as the approaches above, but it does give you another option.
Lowering the CPU clock doesn't always give the expected results.
Newer CPUs feature architectural improvements that make them more efficient at an equivalent clock speed than older chips. Incidentally, because of this, virtual machines are also a bad way of testing performance for "older" tech.
Your best bet is to simply buy a couple of older machines, with RAM (type and amount), processor, motherboard chipset, hard drive, and video card similar to your clients' hardware. All of these feed into the total performance of the machine itself.
I bring the other components up because changing just one of them can have an impact even on browser performance. A prime example is memory. If your clients are constrained to something like 512 MB of RAM, their machines could be doing a lot of hard-drive access for virtual-memory swapping, even when just running the browser. In that situation, downgrading the clock speed of your processor while still retaining your (assumed) 2 GB of RAM would still not perform anywhere near the same, even if everything else were equal.
Isak Savo's answer works, but it can be a bit finicky, as the modern tpl is going to try to limit CPU load as much as possible. When I tested it out, it was hard (though possible with some experimentation) to consistently get the kinds of CPU usage I wanted.
Then I remembered http://www.cpukiller.com/, which does this already. Highly recommended. As an aside, I found this utility while playing old 90s games on modern machines; back then frame rates were pegged to the CPU clock, which made those games run far too fast on modern computers. Great utility.
Another big difference between high-performance and low-performance CPUs is the number of cores available. This can realistically differ by a factor of 4, way more than the difference in clock frequency you're likely to encounter.
You can solve this by setting the thread affinity. Even IE6 will use 13 threads just to show google.com. That means it will benefit from a multi-core CPU. But if you set the thread affinity to one core only, all 13 IE threads will have to share that one core.
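If you would rather script that than click around in Task Manager, a minimal untested sketch using the Win32 SetProcessAffinityMask call could look like this (the PID is a hypothetical placeholder you would look up yourself):
// Untested sketch: pin an already-running process (e.g. the browser) to CPU 0
// so all of its threads share a single core. The PID below is a placeholder.
#include <windows.h>
#include <iostream>

int main() {
    DWORD pid = 1234;                      // hypothetical browser process ID
    HANDLE process = OpenProcess(PROCESS_SET_INFORMATION | PROCESS_QUERY_INFORMATION,
                                 FALSE, pid);
    if (!process) { std::cerr << "OpenProcess failed\n"; return 1; }
    // Affinity mask 0x1 = first logical processor only.
    if (!SetProcessAffinityMask(process, 0x1))
        std::cerr << "SetProcessAffinityMask failed\n";
    CloseHandle(process);
    return 0;
}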
I understand that this question is pretty old, but here are some recipes I personally use (not only for web development):
BES. I'm getting some weird results while using it.
Go to Control Panel\All Control Panel Items\Power Options\Edit Plan Settings\Change Advanced Power Settings, then go to the "Processor" section and set its maximum state to 5% (or whatever you like). This works only if your processor supports dynamic multiplier changes and the ACPI driver is installed correctly.
Run Task Manager and set the processor affinity to a single core (or whatever number of cores you want) for your browser's (or any other) process. Not a best practice for browsers, because JavaScript implementations are usually single-threaded, but, as far as I can see, modern browsers actually DO use multiple cores.
There are a few different methods to accomplish this.
If you're using VirtualBox, go into the Settings for the VM you want to slow the CPU speed for. Go to System > Processor, then set the Execution Cap. The percentage controls how slow it will go: lower values are slower relative to the regular speed. In practice, I've noticed the results to be choppy, although it does technically work.
It is also possible to set the CPU speed for the whole system. In the Windows 10 Settings app, go to System > Power & Sleep. Then click Additional Power Settings on the right hand side. Go to Change Plan Settings for the currently selected plan, then click Change Advanced Power Plan Settings. Scroll down to Processor Power Management and set the Maximum Processor State. Again, this is a percentage. Although this does work, I find that in practice, it doesn't have a big impact even when the percentage is set very low.
If you're dealing with a videogame that uses DirectX or OpenGL and doesn't have a framerate cap, another common method is to force Vsync on in your graphics driver settings. This will usually slow the rendering to about 60 FPS which may be enough to play at a reasonable rate. However, it will only work for applications using 3D hardware rendering specifically.
Finally: if you'd rather not use a VM, and don't want to change a system global setting, but would rather simulate an old CPU for one specific process only, then I have my own program to do that called Old CPU Simulator.
The main brain of the operation is a command line tool written in C++, but there is also a GUI wrapper written in C#. The GUI requires .NET Framework 4.0. The default settings should be fine in most cases - just select the CPU you'd like to simulate under Target Rate, then hit New and browse for the program you'd like to run.
https://github.com/tomysshadow/OldCPUSimulator (click the Releases tab on the right for binaries.)
The concept is to suspend and resume the process at a precise rate; because this happens so quickly, the process will appear to just be running slowly. For example, by suspending a process for 3 milliseconds, then resuming it for 1 millisecond, it will appear to be running at 25% speed. By controlling the ratio of time suspended vs. time resumed, it is possible to simulate different speeds. This is completely API agnostic (it doesn't hook DirectX, OpenGL, etc.; it'll work with a command-line program if you want).
Old CPU Simulator does not ask for a percentage, but rather, the clock speed to simulate (which it calls the Target Rate.) It then automatically determines, based on your CPU's real clock speed, the percentage to use. Although clock speed is not the only factor that has improved computer performance over time (there are also SSDs, faster GPUs, more RAM, multithreaded performance, etc.) it's a good enough approximation to get fairly consistent results across machines given the same Target Rate. It also supports other options that may help with consistency, such as setting the process affinity to one.
It implements three different methods of suspending and resuming a process and will use the best available: NtSuspendProcess, NtQuerySystemInformation, or Toolhelp Snapshots. It also uses timeBeginPeriod and timeEndPeriod to achieve high precision timing without busy looping. Note that this is not an emulator; the binary still runs natively. If you like, you can view the source to see how it's implemented - it's not a large project. On my machine, Old CPU Simulator uses less than 1% CPU and less than 1 MB of memory, so the program itself is quite efficient (unlike running intensive programs to intentionally slow the CPU.)
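For illustration only, a stripped-down, untested sketch of that suspend/resume idea might look like the following. It is not the Old CPU Simulator source, just a toy built around the NtSuspendProcess/NtResumeProcess calls mentioned above; the PID and timings are placeholders:
// Illustrative sketch only (not the Old CPU Simulator source): throttle a target
// process by alternately suspending and resuming it.
#include <windows.h>
#include <iostream>

typedef LONG (WINAPI *NtProcFn)(HANDLE);   // NtSuspendProcess / NtResumeProcess

int main() {
    DWORD pid = 1234;                      // hypothetical target process ID
    DWORD suspendedMs = 3, resumedMs = 1;  // 1 / (3 + 1) = roughly 25% apparent speed
    HMODULE ntdll = GetModuleHandleW(L"ntdll.dll");
    NtProcFn ntSuspend = (NtProcFn)GetProcAddress(ntdll, "NtSuspendProcess");
    NtProcFn ntResume  = (NtProcFn)GetProcAddress(ntdll, "NtResumeProcess");
    HANDLE target = OpenProcess(PROCESS_SUSPEND_RESUME, FALSE, pid);
    if (!ntSuspend || !ntResume || !target) { std::cerr << "setup failed\n"; return 1; }
    timeBeginPeriod(1);                    // finer Sleep() granularity; link winmm.lib
    for (;;) {                             // run until this throttler is killed
        ntSuspend(target);
        Sleep(suspendedMs);
        ntResume(target);
        Sleep(resumedMs);
    }
}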

PostgreSQL recompile needed after upgrading to a quad-core CPU?

I recently upgraded my server running CentOS 5.0 to a quad-core CPU from a dual-core CPU. Do I need a recompile to make use of the added cores? PostgreSQL was installed by compiling from source.
EDIT: The upgrade was from an Intel Xeon 5130 to an Intel Xeon 5345.
No, you will not need to recompile for PostgreSQL to take advantage of the additional cores.
What will happen is that the Linux scheduler will now be able to run two or more (up to four) PostgreSQL processes at the same time; basically, they work in parallel rather than having to wait on each other for a slice of CPU time. This means you can process data faster, since four different queries can now be handled at the same time instead of just the two you had previously.
PostgreSQL requires no further tuning to take advantage of multiple cores/physical CPUs; it is handled entirely by the OS. You have basically improved your performance for the cost of a new CPU.
If you are looking for information on tuning your PostgreSQL installation, take a look at this post on tuning PostgreSQL on a dedicated server.
Since you now have more processes able to run at the same time, you may also want to consider upgrading the amount of RAM you have, depending on what is currently installed: the more of the database that can be held in memory, the faster all of the transactions and queries will be!
If it's the same architecture, I don't think a recompile should be needed.
If it's a different architecture (e.g. x86 vs. x86_64/amd64), then you will have to recompile.
No, the multiprocessing is handled dynamically.
Presumably both the old and new chips are x86_64. No recompile is necessary; however, some tuning of the database and/or application might be needed to fully use those extra cores.