Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 7 years ago.
I am running a Linux VM on Windows 7 and experience extreme slowness when using the VM. Is this because of a lack of memory? I thought VMs primarily leveraged compute cores and drive space (HDD/SSD).
I am looking at getting a new Surface device and need to know whether memory is critical to running VMs. Unfortunately VMware hasn't been very helpful: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1008360. Please advise.
There are four main bottlenecks with respect to VMs in my experience:
CPU
Memory
Disk IO Throughput
Network throughput
CPU
CPUs will be taxed and cause performance issues if your VMs are constantly computing. I/O (input/output) usually doesn't factor into this; it is more about computation time on the processor for your applications. You will need more CPU if you are doing tasks like:
computing large numbers
video/photo editing
video games
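One way to see whether the CPU is the limiting factor is to time a purely computational task inside the guest and compare the result against the host. A minimal Python sketch (the workload and iteration count are just illustrative):

```python
import time

def cpu_bound_work(n: int) -> int:
    """Purely computational task (sum of squares): no I/O involved."""
    total = 0
    for i in range(n):
        total += i * i
    return total

start = time.perf_counter()
result = cpu_bound_work(2_000_000)
elapsed = time.perf_counter() - start
print(f"computed {result} in {elapsed:.3f}s")
```

If the same loop runs dramatically slower inside the VM than on the host, the CPU (or CPU contention with the host) is a likely bottleneck.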
Memory
Memory is a very common bottleneck, as each machine needs a set amount just to host its OS; Windows usually uses more for this than Linux and the like. Memory also becomes a bottleneck if you are running applications that store large amounts of data in memory, like:
some databases
video playback
video/photo editing
video games
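You can get a rough feel for how much memory a workload holds live using Python's built-in tracemalloc; the allocation below is just a stand-in for a real database cache or decoded video frames:

```python
import tracemalloc

tracemalloc.start()

# Stand-in for an application holding a large dataset in memory
# (e.g. a database cache or decoded video frames): ~10 MiB of 1 KiB blocks.
data = [bytes(1024) for _ in range(10_000)]

current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"current: {current / 2**20:.1f} MiB, peak: {peak / 2**20:.1f} MiB")
```

A VM needs enough RAM for the guest OS plus the peak of whatever you run inside it; if the total exceeds what you assigned, the guest starts swapping and everything slows down.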
Disk Throughput
While disk storage space is becoming incredibly cheap, there is still a finite amount of throughput (the amount of data the disk can send/receive at once) available. You will notice more lag here if you are running a low-RPM disk such as a 5400 RPM drive. If you are experiencing lag (especially during boot), your best bang for your buck is usually a solid-state drive (SSD).
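One rough way to check whether disk throughput is your bottleneck is to time a sequential write from inside the VM. A hedged Python sketch (the sizes are arbitrary, and fsync is used so the OS page cache doesn't hide the disk's real speed):

```python
import os
import tempfile
import time

def write_throughput_mb_s(size_mb: int = 64, block_kb: int = 1024) -> float:
    """Measure sequential write throughput to a temp file, fsync'd so
    the OS page cache doesn't inflate the number."""
    block = b"\0" * (block_kb * 1024)
    blocks = (size_mb * 1024) // block_kb
    with tempfile.NamedTemporaryFile(delete=False) as f:
        start = time.perf_counter()
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())
        elapsed = time.perf_counter() - start
        path = f.name
    os.remove(path)
    return size_mb / elapsed

print(f"sequential write: {write_throughput_mb_s():.0f} MB/s")
```

Run it on the host and in the guest; if the guest number is far lower, virtual disk I/O is where your lag is coming from.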
Network Throughput
If your VMs are reaching out to the network or handling a lot of network requests (like a server), you may notice some lag, but this will not usually affect the other factors, so the lag would usually be noticed only in page load times. When this became a problem for me, I invested in a NIC (network interface card) with four gigabit interfaces. It ran me about $250 a year ago and has allowed my servers to keep up with a couple of medium-traffic websites. I believe any lag my users experience is usually on my ISP's end (only 25 Mb/s service is available in my area).
How are GPUs faster than CPUs? I've read articles about how GPUs are much faster at breaking passwords than CPUs. If that's the case, why can't CPUs be designed the same way as GPUs to match their speed?
GPUs get their speed at a cost. A single GPU core actually runs much slower than a single CPU core. For example, the Fermi-based GTX 580 has a core clock of 772 MHz. You wouldn't want your CPU with such a low core clock nowadays...
The GPU, however, has many cores (the GTX 580 has 16), each operating in 32-wide SIMD mode. That puts roughly 512 operations in flight in parallel. Common CPUs, however, have up to 4 or 8 cores and can operate in 4-wide SIMD, which gives much lower parallelism.
Certain types of algorithms (graphics processing, linear algebra, video encoding, etc.) can be easily parallelized across such a huge number of cores. Breaking passwords falls into that category.
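To see why password cracking parallelizes so well, note that the keyspace splits into chunks that never need to communicate. A toy Python sketch (SHA-256 over four lowercase letters; the thread pool only illustrates the partitioning, since a GPU would run thousands of such lanes truly in parallel):

```python
import hashlib
import itertools
import string
from concurrent.futures import ThreadPoolExecutor
from typing import Optional

def crack_chunk(target_hash: str, prefix: str, length: int) -> Optional[str]:
    """Exhaustively try every candidate starting with `prefix`."""
    alphabet = string.ascii_lowercase
    for tail in itertools.product(alphabet, repeat=length - len(prefix)):
        candidate = prefix + "".join(tail)
        if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
            return candidate
    return None

def crack(target_hash: str, length: int = 4) -> Optional[str]:
    # Split the keyspace by first letter: 26 independent chunks that
    # never need to talk to each other, i.e. "embarrassingly parallel".
    with ThreadPoolExecutor(max_workers=8) as pool:
        futures = [pool.submit(crack_chunk, target_hash, c, length)
                   for c in string.ascii_lowercase]
        for fut in futures:
            hit = fut.result()
            if hit is not None:
                return hit
    return None

target = hashlib.sha256(b"gpus").hexdigest()
print(crack(target))  # -> gpus
```

Each chunk is pure, independent computation, which is exactly the shape of work a wide SIMD machine excels at.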
Other algorithms, however, are really hard to parallelize, and there is ongoing research in this area. Those algorithms would perform really badly if they were run on a GPU.
The CPU companies are now trying to approach GPU-style parallelism without sacrificing the ability to run single-threaded programs, but the task is not an easy one. The Larrabee project (since abandoned) is a good example of the problems: Intel worked on it for years, but it never reached the market.
GPUs are designed with one goal in mind: process graphics really fast. Since this is the only concern they have, there have been some specialized optimizations in place that allow for certain calculations to go a LOT faster than they would in a traditional processor.
In the case of password cracking (or the Folding@home molecular dynamics project), what has happened is that programmers have found ways of leveraging these optimized pipelines to do things like crunch passwords at a much faster rate.
Your standard CPU has to handle a much wider variety of calculations and processing than a graphics processor does, so it can't be optimized in the same way.
I've learned that there are three ways to perform an I/O connection:

1. Programmed I/O (polling)
2. Interrupt-driven I/O
3. Direct memory access (DMA)

Now I need to relate this to how I/O addresses are actually accessed (isolated I/O vs. memory-mapped I/O):
DMA
Memory mapping does not affect the direct memory access (DMA) for a device, because, by definition, DMA is a memory-to-device communication method that bypasses the CPU.
This is all the information I have.
Now, what about interrupt-driven and programmed I/O: which addressing modes are used in those cases? Can a microcontroller support both addressing modes (isolated and memory-mapped), or only one? Am I understanding these topics correctly, or do I have misconceptions?
Port mapped vs memory mapped (Communication)
This is how the IO access is performed, i.e. how the CPU communicates with the device.
With port mapped IO the CPU uses special instructions (e.g. x86's in and out) to read/write from a device in a special IO address space you can't access with load/store instructions.
With memory mapped IO the CPU performs normal memory loads and stores to communicate with a device.
The latter is usually more granular and uniform when it comes to security permissions and code generation.
Polling vs Interrupt driven (Notification)
This is how notifications from the devices are received by the CPU.
With polling the CPU will repeatedly read a status register from the device and check if a completion bit (or equivalent) is set.
With interrupt driven notifications the device will raise an interrupt without the need for the CPU to do any periodic work.
Polling hogs the CPU but offers lower latency for some workloads.
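A polling loop can be sketched in a few lines. Here the device is a hypothetical software stand-in whose "status register" is just an attribute, but the shape of the loop is the same as on real hardware:

```python
import threading
import time

STATUS_READY = 0x01  # completion bit in the hypothetical status register

class FakeDevice:
    """Software stand-in for a device with a status and a data register."""
    def __init__(self):
        self.status = 0x00
        self.data = None

    def start_operation(self):
        def finish():
            time.sleep(0.05)           # pretend the I/O takes 50 ms
            self.data = 42             # fill the data register first...
            self.status |= STATUS_READY  # ...then set the completion bit
        threading.Thread(target=finish).start()

def poll(device: FakeDevice):
    device.start_operation()
    # Programmed I/O: burn CPU re-reading the status register
    # until the completion bit is set.
    while not device.status & STATUS_READY:
        pass
    return device.data

print(poll(FakeDevice()))  # -> 42
```

With interrupt-driven notification, the busy-wait loop disappears: the device signals the CPU, and the handler reads the data register only when there is something to read.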
DMA vs non-DMA (Transfer)
This is how the data is transferred from the device to the CPU.
With DMA the device will write directly into memory.
Without DMA the CPU will have to read the data repeatedly (either with port or memory mapped IO).
All three dimensions are independent of each other; you can combine them however you like (e.g. port mapped, interrupt driven, DMA).
Note, however, that the nomenclature is not consistent in the literature.
Also, different devices have different interfaces that may not need all of the three dimensions (e.g. a very simple output-only GPIO pin may have a single write-only register, so it makes no sense to talk about polling or DMA in this case).
I am running Ubuntu on my physical machine; and VirtualBox to run various virtual ones on top.
Most of the time, I was doing "straightforward" installations; but today I wanted to be "smart" and checked out the partitions that the Fedora or Ubuntu installers would create on my virtual disks.
And sure enough, when going with the defaults, some GB of my virtual disks will be used as "swap space".
Now I am wondering: assuming that I have plenty of physical memory (so I can assign 6 or 8 GB to a VM), is there any sense in dedicated swap space for a virtual machine?
This is answered at ServerFault:
TL;DR: use swap. 1. It avoids out-of-memory errors; 2. the guest OS is better at memory management than the host.
Ignoring the fact that people are dealing with OS-specific reasons, I have two reasons why it's a bad idea not to run with a swap partition/file.

1. If you have 1.5 GB of RAM allocated to a VM with no swap file/partition and it wants to use 1.5 GB + 1 MB, it will report an out-of-memory error. With swap space it will be able to swap data out of active memory and onto the disk.

2. The guest OS does a much better job of memory management than the host. This is why technology like memory ballooning exists: the host can make educated guesses about what memory isn't needed right now, but the guest knows at a much more intelligent level (this keeps OS memory from being swapped out, which could kill your performance).
Swap space is used to relieve your physical memory when it runs out. On modern machines with plenty of memory, the need depends on the type of applications you run. If you plan to run memory-intensive programs like video editors or high-end games, swap space is an asset. Otherwise, provided you have enough memory, you can safely skip it, though it is safer to have the fallback.
That depends on what programs you are running on your host system along with the virtual machine, or what programs you are running within the virtual machine.
The only upper bound on memory that software can consume is the total memory (physical or virtual) available to it. There are plenty of programs that require large amounts of memory when behaving normally, and plenty of circumstances that cause a program to consume large amounts of memory (e.g. loading of input files). There are also plenty of faulty programs that unintentionally consume large amounts of memory.
You can often get an idea by examining requirements or recommendations (e.g. memory and drive space) of the programs you run. Failing that, try it out.
http://www.sony.co.uk/product/vn-duo/svd1121z9e
I'm about to buy the above laptop as a desktop replacement, but I want to know if its processor is good enough to run VMs via Hyper-V on Windows 8.
I'm also not sure what "Turbo Boost up to 3 GHz" means.
Any input gratefully received.
It's a weird choice for a desktop replacement, since it's not designed to be one.
The first thing you want to know is that it's a U-series CPU, which means it uses considerably less power. That comes at a cost: a lower base CPU frequency. Yes, it can run up to 3.0 GHz, but again, better performance costs energy.
It's rated for 6.5 hours of battery life, so the actual battery life while Turbo Boosting to 3.0 GHz will be considerably lower.
As for your other question, what is Turbo Boost? Turbo Boost is a technology that raises your CPU's clock when extra performance is needed, much like a smart power control that gives your CPU a push on demand.
Back to virtualization: it has two cores, and I recommend giving a virtual machine at least two cores, which would leave no spare core for your host system; that's a pretty bad idea. Also, 8 GB of memory may not be enough for a more memory-demanding virtual machine.
For a desktop replacement, I recommend a quad-core laptop, especially if you need to run a more performance-demanding virtual machine.
Almost forgot one big thing: the screen size. 11" is really small, and despite the high resolution, your eyes may get sore over time. I enjoyed programming more on my much less powerful desktop with a 21" display than on my 15" laptop, just for the bigger screen. Unless you're going to pair it with an external display, I suggest you consider at least a 13" model.
I'm curious to understand the motivation behind the fine-grained, per-virtual-processor detail that the Windows 8 task manager seems to focus on.
Here's a screenshot (from here):
I know this setup could only exist in a non-standard, costly, important server environment (1TB RAM!), but what is the use of a heatmap? Or, setting processor affinity:
What I'm asking is: under what circumstances would a developer care that specific processor X is being used more than processor Y (instead of just knowing that a single non-multithreaded process is maxing out a core, which would be better shown as a per-process heatmap rather than a per-processor one), or care whether a process runs on this or that processor (which I can't expect a human to guess better than an auto-balancing algorithm)?
In most cases, it doesn't matter, and the heatmap does nothing more than look cool.
Big servers, though, are different. Some systems have a NUMA (Non-Uniform Memory Access) architecture, in which some processor cores can access certain regions of memory faster than other cores can. In these cases, adjusting process affinity to keep the process on the cores with faster access to its memory might prove useful. Also, if a processor has per-core caches (as many do), there can be a performance cost when a thread jumps from one core to another. The Windows scheduler should do a good job of avoiding such switches, but I could imagine some strange workloads where you might need to force it.
These settings could also be useful if you want to limit the number of cores an application is using (say to keep some other cores free for another dedicated task.) It might also be useful if you're running a stress test and you are trying to determine if you have a bad CPU core. It also could work around BIOS/firmware bugs such as the bugs related to high-performance timers that plagued many multi-core CPUs from a few years back.
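On Linux you can experiment with affinity programmatically via os.sched_setaffinity (this call is not available on macOS or Windows, where Task Manager or platform-specific APIs are used instead). A small sketch:

```python
import os

def pin_to_cores(pid: int, cores: set) -> set:
    """Pin process `pid` (0 means the calling process) to the given
    CPU cores and return the resulting affinity set. Linux-only."""
    os.sched_setaffinity(pid, cores)
    return os.sched_getaffinity(pid)

if hasattr(os, "sched_setaffinity"):
    all_cores = os.sched_getaffinity(0)   # cores we're currently allowed on
    first = min(all_cores)
    print("pinned to:", pin_to_cores(0, {first}))  # restrict to one core
    os.sched_setaffinity(0, all_cores)             # restore original set
else:
    print("CPU affinity control not available on this platform")
```

This is the programmatic equivalent of the Task Manager affinity dialog: useful for keeping a stress test on one suspect core, or for fencing an application off from cores reserved for another task.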
I can't give you a good use case for this heat map (except that it looks super awesome), but I can tell you a sad story about how we used CPU affinity to fix something.
We were automating an older version of MS Office to do batch processing of Word documents, and Word was occasionally crashing. After a while of troubleshooting and desperation, we tried setting the Word process's affinity to a single CPU to reduce concurrency and hence the likelihood of race conditions. It worked: Word stopped crashing.
One possible scenario would be a server that is running multiple VMs where each client is paying to have access to their VM.
The administrator may set the processor affinities so that each VM has guaranteed access to X number of cores (and would charge the client appropriately).
Now, suppose that the administrator notices that the cores assigned to ABC Company Inc.'s VMs are registering highly on the heatmap. This would be a perfect opportunity to upsell ABC Company Inc and get them to pay for more cores.
Both the administrator and ABC Company Inc win: the administrator makes more money, and ABC Company Inc experiences better performance.
In this way, the heatmap can function as a decision support system that helps ABC Company Inc decide whether their needs merit more cores, and helps the administrator target upgrade offers to the customers who can benefit.