1.90 GHz with Turbo Boost up to 3 GHz - good enough to run VM's [closed] - windows-8

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
http://www.sony.co.uk/product/vn-duo/svd1121z9e
I'm about to buy the above laptop as a desktop replacement, but I want to know whether its processor is good enough to run VMs via Hyper-V on Windows 8.
I'm not sure what "Turbo Boost up to 3 GHz" means.
Any input gratefully received.

It's an odd choice for a desktop replacement, since it isn't designed to be one.
The first thing to know is that it's a U-series CPU, which means it uses considerably less power. That comes at a cost: a lower base CPU frequency. Yes, it can run up to 3.0 GHz, but again, better performance costs energy.
While it's rated for 6.5 hours of battery life, the actual battery life when Turbo Boosting to 3.0 GHz will be considerably shorter.
As for your other question, what is Turbo Boost? It's a technology that raises your CPU's clock speed when extra performance is needed, much like a smart power controller that gives the CPU a push on demand.
Back to virtualization. This CPU has two cores, and I recommend giving a virtual machine at least two cores, which would leave no spare core for the host system; that's a pretty bad idea. Also, 8 GB of memory may not be enough for a more memory-hungry virtual machine.
For a desktop replacement, I would recommend a quad-core laptop, especially if you need to run a more demanding virtual machine.
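The core-budgeting rule of thumb above (leave at least one core for the host) can be sketched in a few lines. This is purely illustrative; `spare_host_cores` is a hypothetical helper, and Hyper-V itself is configured through its own management tools, not like this:

```python
import os

def spare_host_cores(planned_vcpus: int) -> int:
    """Logical cores left for the host OS after reserving
    `planned_vcpus` for a virtual machine (illustrative only)."""
    total = os.cpu_count() or 1  # logical cores the OS reports
    return total - planned_vcpus

# On the dual-core laptop above, giving a VM two vCPUs would leave
# spare_host_cores(2) == 0 cores for the host OS itself.
```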
One big thing I almost forgot: the screen size. 11" is really small, and despite the high resolution, your eyes may get sore over time. I enjoyed programming on my much less powerful desktop with a 21" display more than on my 15" laptop, simply for the bigger screen. Unless you're going to pair it with an external display, I would suggest considering at least a 13" model.

Same model, same batch size - (why) is GPU always faster than CPU? [duplicate]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
The community reviewed whether to reopen this question 2 months ago and left it closed:
Original close reason(s) were not resolved
How are GPUs so much faster than CPUs? I've read articles about how GPUs are much faster at breaking passwords than CPUs. If that's the case, why can't CPUs be designed the same way as GPUs to match their speed?
GPUs get their speed at a cost. A single GPU core actually runs much slower than a single CPU core; for example, the Fermi GTX 580 has a core clock of 772 MHz. You wouldn't want your CPU to have such a low core clock nowadays...
The GPU, however, has several cores (up to 16), each operating in 32-wide SIMD mode. That brings roughly 500 operations executed in parallel. Common CPUs have up to 4 or 8 cores and can operate in 4-wide SIMD, which gives far less parallelism.
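The back-of-envelope arithmetic behind those figures looks like this (theoretical peak only; it ignores memory bandwidth, caches, and instruction mix, so real-world ratios will differ):

```python
def peak_ops_per_second(cores: int, simd_width: int, clock_hz: float) -> float:
    """Theoretical peak throughput: cores x SIMD lanes x cycles per second."""
    return cores * simd_width * clock_hz

gpu = peak_ops_per_second(16, 32, 772e6)  # Fermi-class GPU: 512 lanes at 772 MHz
cpu = peak_ops_per_second(4, 4, 3.0e9)    # quad-core CPU: 16 lanes at 3 GHz

# Even with the far lower clock, the GPU's raw peak is about 8x the CPU's
print(gpu / cpu)
```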
Certain types of algorithms (graphics processing, linear algebra, video encoding, etc...) can be easily parallelized on such a huge number of cores. Breaking passwords falls into that category.
Other algorithms, however, are really hard to parallelize, and there is ongoing research in this area. Those algorithms would perform really badly if they were run on a GPU.
The CPU companies are now trying to approach GPU-level parallelism without sacrificing the ability to run single-threaded programs, but the task is not an easy one. The Larrabee project (currently abandoned) is a good example of the problems: Intel has been working on it for years, yet it is still not available on the market.
GPUs are designed with one goal in mind: process graphics really fast. Since this is the only concern they have, there have been some specialized optimizations in place that allow for certain calculations to go a LOT faster than they would in a traditional processor.
In the case of password cracking (or the molecular dynamic "folding at home" project) what has happened is that programmers have found ways of leveraging these optimized processes to do things like crunch passwords at a faster rate.
Your standard CPU has to handle many more kinds of calculation and processing than a graphics processor does, so it can't be optimized in the same manner.

Dualcore vs Quadcore for Development [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
Recently I have been thinking about upgrading my current development laptop to a newer machine. I have always worked under a Linux/Windows dual boot and do development on both.
My current development platform includes Java (Eclipse), Ruby/RoR (Gvim/Atom), Blender (learning), Erlang, ANSI C (VS/gcc), Android Studio for Android development, and VirtualBox running Windows for the Microsoft Office suite, C# development, and MSSQL development. Sometimes I also need to debug under VirtualBox Windows by running Eclipse. Natively on Linux I use MySQL/PostgreSQL for development and testing. I'm interested in exploring 3D and game programming as well.
Occasionally I play some 3D games on Windows, such as Modern Warfare 4, BF4, etc.
Now, for the new year, I'm thinking of upgrading to a MacBook Pro, but I'm undecided whether to look at dual-core or stick with quad-core. Is there any benefit to a quad-core for development purposes?
I googled and found the links below, but they are from 2007:
http://blog.codinghorror.com/choosing-dual-or-quad-core/
http://blog.codinghorror.com/quad-core-desktops-and-diminishing-returns/
I understand that utilizing multiple cores is mostly the responsibility of the software or the OS, and that software is easier to update to make use of that extra firepower.
So is a quad-core CPU still worthwhile for a development machine as of 2015/2016? I've already settled on 16 GB of RAM, but not on the CPU.
If you are looking for an upgrade, I would recommend first looking for a laptop that comes with an SSD, because disk I/O is the typical performance bottleneck.
As for whether you should go for two or four cores... I personally think it doesn't matter much, because not every piece of software is written to fully utilize all CPU cores. It really depends on how the software is implemented: if it is a multi-threaded or multi-process program, you will benefit heavily; otherwise you probably won't see much of a difference. The speed of each CPU core, though, will definitely make a difference.
I see you are also into games programming and some serious FPS gaming like BF4, so you will definitely want a powerful quad-core chip and an excellent graphics card. If it were just pure web/server development (not games), a good dual core would do.
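The multi-process point above is easy to demonstrate: a CPU-bound job only finishes faster on a quad-core if the work is actually split across workers. A minimal sketch using Python's standard library (the speedup you actually observe depends entirely on how many cores the machine has):

```python
import math
from multiprocessing import Pool

def busy(n: int) -> float:
    # Pure CPU-bound work: no I/O, so extra cores are the only way to go faster
    return sum(math.sqrt(i) for i in range(n))

if __name__ == "__main__":
    chunks = [250_000] * 4            # four independent pieces of work
    with Pool(processes=4) as pool:   # one worker process per core
        results = pool.map(busy, chunks)
    # A single-core machine would run these chunks one after another instead
    print(len(results))
```

A single-threaded editor or build step, by contrast, would use exactly one of those cores no matter how many you buy.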

Does extra RAM improve performance when running VMs? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 7 years ago.
I am running a Linux VM on Windows 7 and experience extreme slowness when using the VM. Is this because of a lack of memory? I thought VMs leveraged primarily compute cores and drive space (HDD/SSD).
I am looking at getting a new Surface device and need to know whether memory is critical to running VMs. Unfortunately VMware's knowledge base hasn't been very helpful: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1008360. Please advise.
There are four main bottlenecks with respect to VMs in my experience:
CPU
Memory
Disk IO Throughput
Network throughput
CPU
CPUs become a bottleneck and cause performance issues when your VMs are constantly busy computing. I/O (input/output) usually doesn't factor into this; it is more about computation time on the processor for your applications. You will need more CPU if you are doing tasks like:
computing large numbers
video/photo editing
video games
Memory
Memory is a very common bottleneck, as each machine needs a set amount to host its OS; Windows usually needs more for this than Linux and the like. You will also need more if you are running applications that store large amounts of data in memory, like:
some databases
video playback
video/photo editing
video games
Disk Throughput
While disk storage space is becoming incredibly cheap, there is still a finite amount of throughput (the amount of data the disk can send/receive at once) available. You will notice more lag here if you are running a low-RPM disk such as a 5400 RPM drive. If you are experiencing lag (especially during boot), your best bang for the buck is usually a solid-state drive.
Network Throughput
If your VMs reach out to the network or handle a lot of network requests (like a server), you may notice some lag, but this usually won't affect the other factors, so the lag would typically show up only in page load times. When this was a problem for me, I invested in a NIC (network interface card) with four gigabit interfaces. It ran me about $250 a year ago, and it has allowed my servers to keep up with a couple of medium-traffic websites. I believe any lag my users experience is usually on my ISP's end (only 25 Mb service is available in my area).
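A crude way to check whether the disk throughput bottleneck above applies to you is to time a sequential write. Dedicated tools like fio or dd give far better numbers; this standard-library sketch (sizes are arbitrary choices) just shows the idea:

```python
import os
import tempfile
import time

def write_throughput_mb_s(total_mb: int = 64, block_kb: int = 1024) -> float:
    """Rough sequential write speed of the temp filesystem, in MB/s."""
    block = b"\0" * (block_kb * 1024)
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        with os.fdopen(fd, "wb") as f:
            for _ in range(total_mb * 1024 // block_kb):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())  # push data to the device, not just the page cache
        return total_mb / (time.perf_counter() - start)
    finally:
        os.remove(path)

print(round(write_throughput_mb_s(), 1), "MB/s")
```

A low-RPM spinning disk will typically report a far smaller number here than an SSD, which is exactly the gap you feel during boot.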

Why would you need to know about each processor in particular? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I'm curious to understand what could be the motivation behind the fine-grained detail of each virtual processor that the Windows 8 task manager seems to be focusing on.
Here's a screenshot (from here):
I know this setup could only exist in a non-standard, costly, important server environment (1TB RAM!), but what is the use of a heatmap? Or, setting processor affinity:
What I'm asking is: under what circumstances would a developer care that specific processor X is being used more than processor Y (rather than just knowing that a single non-multithreaded process is maxing out a core, which would be better shown as a process heatmap instead of a processor heatmap), or care whether a process uses this or that processor (something I can't expect a human to guess better than an auto-balancing algorithm)?
In most cases, it doesn't matter, and the heatmap does nothing more than look cool.
Big servers, though, are different. Some systems have a NUMA (Non-Uniform Memory Access) architecture, where some processor cores can access certain chunks of memory faster than other cores can. There, adjusting process affinity to keep a process on the cores with faster access to its memory might prove useful. Also, if a processor has per-core caches (as many do), there can be a performance cost when a thread jumps from one core to another. The Windows scheduler should do a good job of avoiding such switches, but I could imagine that some strange workloads might need you to force it.
These settings can also be useful if you want to limit the number of cores an application uses (say, to keep some other cores free for another dedicated task), or if you're running a stress test and trying to determine whether you have a bad CPU core. They can even work around BIOS/firmware bugs, such as the high-performance-timer bugs that plagued many multi-core CPUs a few years back.
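Affinity can also be set programmatically, not just from Task Manager. On Linux, Python exposes the scheduler calls directly (this sketch is Linux-only; on Windows you would go through the Win32 `SetProcessAffinityMask` API instead):

```python
import os

# Linux-only: ask which cores this process (0 = ourselves) may run on
allowed = os.sched_getaffinity(0)
print("may run on cores:", sorted(allowed))

# Pin the process to a single core, e.g. to keep a thread from hopping
# between per-core caches, or to reserve the other cores for another task
one_core = {min(allowed)}
os.sched_setaffinity(0, one_core)
assert os.sched_getaffinity(0) == one_core

# Restore the original mask
os.sched_setaffinity(0, allowed)
```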
I can't give you a good use case for this heat map (except that it looks super awesome), but I can tell you a sad story about how we used CPU affinity to fix something.
We were automating an older version of MS Office to do some batch processing of Word documents, and Word was occasionally crashing. After a while of troubleshooting and desperation, we tried setting the Word process's affinity to just one CPU to reduce concurrency and hence the likelihood of race conditions. It worked: Word stopped crashing.
One possible scenario would be a server that is running multiple VMs where each client is paying to have access to their VM.
The administrator may set the processor affinities so that each VM has guaranteed access to X number of cores (and would charge the client appropriately).
Now, suppose the administrator notices that the cores assigned to ABC Company Inc.'s VMs are registering highly on the heatmap. This would be a perfect opportunity to upsell ABC Company Inc. and get them to pay for more cores.
Both the administrator and ABC Company Inc. win: the administrator makes more money, and ABC Company Inc. gets better performance.
In this way, the heatmap can function as a decision support system that helps ABC Company Inc. decide whether their needs merit more cores, and helps the administrator target their advertising to the customers who can benefit.

Is a gaming machine better for software development? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 1 year ago.
Is a gaming machine better for software development?
NO.
CPU
For software development, you want lots of cores. For gaming, you want fast cores, but not necessarily many of them. This is slowly changing as newer games are written to take advantage of multicore CPUs, but in general most gaming machines focus on raw single-core speed. For example, in my case, I'm an RoR developer, and during development I run my editor, mongrel, solr, PostgreSQL, and memcached. Most of the time I also have a browser, a PDF editor, and iTunes open.
RAM
Most games will be fine with 2-3 GB of RAM.
For software development, especially web development where you will be running multiple servers, you'll want at least 4 GB, or even 8 GB, of RAM.
GPU
Top-of-the-line graphics cards for gaming can cost $500 or more. For software development, you can get away with the cheapest GPU you can get. The only aspect of the video card you'll want to concern yourself with is the capability to handle multiple large monitors.
It will actually be helpful if your development machine is so crippled (gaming-wise) that you can't play the games you like to play on that machine. No distractions! :)
I would say some aspects are the same between gaming machines and development machines, like large disks, a lot of memory, etc. So in that respect yes, a gaming machine would fit better than a low end desktop.
On the other hand, gaming machines tend to be tuned towards raw performance rather than robustness. A development machine often does not need a state-of-the-art graphics card, nor does it want RAID 0 to speed up the disk: if one disk fails, you lose all your work, so RAID 1 would be much better. The same holds for memory; ECC RAM is a bit slower but adds robustness.
One gotcha with powerful development machines is that they do not represent the non-functional requirements of the target execution environment. If you are not sufficiently aware of this, your software will run slowly on a "normal" machine because it ran great on your supercomputer :-) One take on this is that development machines should always be a tad slower than the target machines, but that cuts into your development time. A better solution is to have slower machines in the test environment and a few slower machines in the development lab.
Some attributes of gaming machines can help developers, like having a good deal of memory, or a quad core processor (so you can, respectively, run VMs without hassle, and compile faster).
But a fast GPU won't do you much good, so there's no point in spending much money on it. Unless you plan on developing or playing games, of course.
Summing up: if you plan on using the PC for fun, get a reasonable GPU. If you don't, skip it and keep the rest just like you would. You won't regret it.
If you want to develop games, sure. I should know, I have experience on both.
Unless you're programming something graphics- or game-related, not necessarily; the video card will be underused otherwise. On the other hand, gaming machines tend towards the high end, making them ideal for many programming tasks.
I think so. The performance required for gaming will greatly help developers. The only overkill would be the graphics card, unless you use big rendering software, in which case plenty of RAM and a strong GPU are a must.
Good CPU, Lots of fast RAM, and a fast HD will do you lots of good.
What you'll need for software development is usually a machine with ample RAM, ample HDD space (and a fast HDD or set of HDDs to boot), a fast multi-core processor (very important if you're working with compiled languages, especially the likes of C++ which take a long time to compile compared to Java or C#) and preferably the ability to drive multiple monitors. For the latter, it's a case of the more the merrier as screen real estate is one of those things that you can never have enough of.
While a lot of this does indeed sound like the spec for a gaming machine due to its raw number crunching ability, the main difference is likely to be the graphics hardware. You don't need something that can render x million polygons per second on a single monitor if you're trying to drive 3x 24" monitors as 2D displays. In fact you probably don't want a usually rather noisy gamer spec video card that only shines when rendering 3D; you're more likely to get more out of a "pro" graphics card that can drive 4 monitors instead.
So yes, I'd think the spec is quite similar and there is a lot of overlap between the two but in the end a developer spec machine is not the same as a gaming rig.
A gaming machine without the fancy video card is, I think, more suitable for a programmer (you can put the video card money towards more RAM, for example).
Gaming machines are great for everything except your wallet ;-)
Programming WPF shader effects is one of those particular tasks where a gaming machine can actually let you do more while not working in game development. GPGPU work may also benefit from fast memory transfers and a fast GPU.