Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 4 years ago.
I'm trying to measure the effects of CPU overcommitment on a KVM setup (on both host and guest). I can see that performance degrades as the number of vCPUs increases, but ideally I'd like a more objective metric (like CPU Ready in esxtop). Is there an equivalent to esxtop for KVM that provides a similar metric?
There is a fundamental difference between how you monitor VMs in KVM and how you monitor them with ESXi.
Since most people run KVM on Linux, I'm going to assume your underlying OS is Linux-based.
How to get CPU Ready-like functionality with KVM?
In htop, enable the detailed CPU metrics and watch the gu (guest) column, which shows how much CPU time the host is spending running guests.
Use virt-top, which reports the overall CPU usage (among other things) of each guest.
The oversubscription principles that apply to ESXi also apply to KVM. Although KVM does not co-schedule vCPUs (by default) the way ESXi does, you still do not want to exceed a 1:5 pCPU-to-vCPU ratio in KVM. Of course, this depends on how heavily you're utilizing the CPUs. You also do not want to give a guest more CPU cores than necessary; start with one core and move up.
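On a Linux guest, the closest analogue to CPU Ready is steal time (the %st column in top): time a vCPU was runnable but the hypervisor did not schedule it on a physical CPU. A minimal sketch that samples it from /proc/stat, assuming the usual Linux field layout for the aggregate cpu line:

```python
import time

# Field order of the aggregate "cpu" line in /proc/stat (see proc(5)).
NAMES = ["user", "nice", "system", "idle", "iowait",
         "irq", "softirq", "steal", "guest", "guest_nice"]

def parse_cpu_jiffies(stat_line):
    """Parse the aggregate 'cpu' line from /proc/stat into named fields."""
    values = [int(v) for v in stat_line.split()[1:len(NAMES) + 1]]
    return dict(zip(NAMES, values))

def steal_percent(before, after):
    """Percentage of elapsed CPU time stolen by the hypervisor between samples."""
    delta_total = sum(after.values()) - sum(before.values())
    delta_steal = after["steal"] - before["steal"]
    return 100.0 * delta_steal / delta_total if delta_total else 0.0

def sample_steal(interval=1.0):
    """Run this inside a KVM guest; requires a Linux /proc/stat."""
    with open("/proc/stat") as f:
        before = parse_cpu_jiffies(f.readline())
    time.sleep(interval)
    with open("/proc/stat") as f:
        after = parse_cpu_jiffies(f.readline())
    return steal_percent(before, after)
```

A sustained nonzero steal percentage inside the guest is a strong sign the host's physical CPUs are overcommitted, much like a high CPU Ready value in esxtop.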
Can I run PyTorch or TensorFlow on Windows on a GPU that is also acting as the system's graphics card (e.g. on a Ryzen 3600 CPU, which has no built-in graphics)? If so, is there any downside, or would I be better off getting a CPU with built-in graphics?
Yes, it is possible to run e.g. TensorFlow on the GPU while also using the GPU for your system. You do not need a second graphics card or an integrated GPU.
Keep in mind that your graphics card will share memory and processing power among all your programs. GPU-intensive work might slow down your system's frame rate, and the other way around. Also keep an eye on memory usage.
I had tensorflow_gpu running a multi-layer CNN while playing AAA games (e.g. GTA V) on a Ryzen 3600. It worked on my ancient NVIDIA GTX 260 (2 GB memory), but it crashed quite often because of the limited memory. After upgrading to a GTX 1080 with 8 GB it worked quite well. Needless to say, you can always fill up your GPU memory and crash, no matter how large the card.
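One practical mitigation on a shared GPU: by default TensorFlow grabs nearly all GPU memory up front, which can starve the desktop. A sketch that asks it to grow allocations on demand instead (the tf.config calls are the real TensorFlow 2.x API; the try/except just makes the sketch safe to run on a machine without TensorFlow installed):

```python
def enable_gpu_memory_growth():
    """Make TensorFlow allocate GPU memory on demand instead of all at once,
    leaving headroom for the desktop compositor and games on a shared GPU.
    Returns True if at least one GPU was configured, False otherwise."""
    try:
        import tensorflow as tf
    except ImportError:
        return False  # TensorFlow not installed; nothing to configure
    gpus = tf.config.list_physical_devices("GPU")
    for gpu in gpus:
        # Must be called before any op initializes the GPUs.
        tf.config.experimental.set_memory_growth(gpu, True)
    return bool(gpus)
```

Call this at the very top of your training script, before building any model, or TensorFlow will refuse to change the setting.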
Short question and hopefully a positive answer:
Is it possible to create a virtual CPU that consists of multiple real cores?
So let's say you have a 4x3.5 GHz CPU, can you create a vCPU that is 1x14 GHz?
Why do it?
If there is software that is heavily CPU-bound but can only use one thread, this would speed the program up.
I am not very advanced with hardware tech, but I guess there is no way to do that.
Thanks.
So let's say you have a 4x3.5 GHz CPU, can you create a vCPU that is 1x14 GHz?
No. As the expression goes -- nine women cannot make a baby in one month.
Each instruction executed by a virtual CPU can potentially be dependent on anything that previously happened on that CPU. There's no way to run an instruction (or a group of instructions) before all of the previous instructions have been completed. That leaves no room for another physical CPU to speed things up.
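The point can be made concrete with a loop-carried dependency. In the sketch below (a made-up iteration, chosen only for illustration), every step consumes the previous step's result, so a second core has nothing to start on until the first core finishes each step. Four cores could run four such chains side by side, but no single chain ever runs faster than one core:

```python
def dependent_chain(seed, steps):
    """Iterate a linear congruential step. Each iteration needs the
    previous result, so the chain cannot be split across cores."""
    x = seed
    for _ in range(steps):
        # Step k cannot begin before step k-1 has produced x.
        x = (x * 1103515245 + 12345) % (1 << 31)
    return x
```

This serial dependency, not raw clock speed, is what prevents a hypervisor from fusing four 3.5 GHz cores into one 14 GHz vCPU.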
I've managed to run around 5 VMs on my desktop at once before it froze and crashed. I'd like to be able to run 20 or more at once without having to use multiple computers.
Does anyone have any ideas on how I could accomplish this without breaking the bank too badly? Any tips would be greatly appreciated!
Thanks!
Virtual machines are virtual, not magic. If you have 20 virtual machines running, they still have to share the resources of your one physical desktop. You can probably run more virtual machines by allocating less memory to each, but the number you can operate is limited by the underlying memory and CPU resources of your computer.
First you have to find out where your bottleneck is. My best guess is memory. If your VM software supports it, try giving the virtual machines dynamic memory so that they don't hold on to memory they aren't actually using.
As mentioned in the other answers, virtual machines have to share the physical resources of your machine. Depending on how much money you are willing to spend, you could upgrade your RAM or your CPU. Depending on exactly what you intend to do on these virtual machines, you might be able to get away with allocating less RAM to each one. If each virtual machine runs a 32-bit OS, you could probably give each one 1 GB of RAM, give or take. For 20 virtual machines running at the same time, I would recommend 32 GB of RAM. If all of your VMs are 64-bit, you're going to need even more. A cheap CPU is definitely not going to cope well with that either; more cores will likely improve your performance significantly (but will be hard on your wallet).
I know this doesn't really sound like an "on a budget" solution, but aside from allocating a minuscule amount of RAM to each machine, there isn't much else you can do. The issue is that your hardware simply can't handle it, so you need better hardware.
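The RAM arithmetic above can be written as a back-of-the-envelope helper (the host-overhead figure is a placeholder I've assumed, not a measured number; tune it for your hypervisor):

```python
def required_ram_gb(n_vms, gb_per_vm, host_overhead_gb=4):
    """Rough total host RAM needed: the sum of guest allocations plus
    headroom for the host OS and the hypervisor itself."""
    return n_vms * gb_per_vm + host_overhead_gb

# 20 32-bit guests at 1 GB each already want ~24 GB on the host,
# before any per-VM hypervisor overhead is counted in.
```

Running the numbers this way before buying hardware tells you immediately whether the bottleneck is RAM or CPU cores.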
I am working on a project involving the following with my team:
1. GUI and a keyboard for user interaction.
2. Real-time processing and display.
3. SPI communication.
4. USB-based printing.
1, 2 and 3 are to be done in parallel.
Currently we are using a Raspberry Pi, but it is struggling to keep up with the job. Is there another embedded board that meets the above specs and costs less than $100?
Any suggestion would be highly appreciated.
PS: Do ask questions if I'm vague in my statements.
Your lack of real-time response probably has more to do with the fact that Linux is not a real-time OS than with the performance of the RPi. You can throw processing power at the problem if you like, but it still may not reliably solve your problem.
It is not possible to advise based on the little information you have provided; you'd need to define the real-time response requirements in terms of time and quantity of data to be processed.
While an RTOS might solve your real-time processing problems, it would leave you needing drivers for the USB printer and the display, plus a GUI implementation. These are readily available for Linux, but not so much for a typical low-cost RTOS, especially a USB printer driver, since the raster-image processing required is complex and resource-hungry; resources a typical Linux system will have.
If you have the necessary time and skill, you could port RTLinux to the RPi (or some other board capable of supporting Linux). It has a different scheduler from the standard time-sharing kernel and can be used to improve real-time response, but it is no substitute for a real RTOS when you need deterministic performance.
You may be better off keeping the RPi and connecting it to a stand-alone microcontroller that performs the hard real-time processing. There are a number of example projects connecting an Arduino to an RPi, for instance. The microcontroller's lower clock rate does not mean slower response, since its processor can be dedicated to the task and will not non-deterministically switch to some other task for long periods.
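In that split design, the MCU typically streams results to the Pi over UART using a simple line-based framing. The protocol below is entirely hypothetical (invented for illustration), but parsing it on the Linux side is straightforward and keeps the hard real-time work on the microcontroller:

```python
def parse_sample(line):
    """Parse one hypothetical '<channel>,<value>' line sent by the MCU,
    e.g. b'ADC0,512\n' as read from the serial port. The framing is an
    assumption for this sketch, not a standard protocol."""
    if isinstance(line, bytes):
        line = line.decode("ascii")
    channel, value = line.strip().split(",")
    return channel, int(value)
```

On the Pi, a loop reading lines from the serial device (e.g. via pyserial) would feed each line through this parser and hand the values to the GUI, with no hard deadlines on the Linux side.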
Try the BeagleBone Black. Its 1 GHz processor should be more than sufficient for your processing. It is also ARMv7; Ubuntu dropped support for ARMv6 (the Pi) a couple of months ago.
http://beagleboard.org/products/beaglebone%20black
As far as I understand it, embedded software is just software (that runs on a general purpose CPU) that has little if any user input or configuration. Embedded software powers IP routers, cars, computer mice, etc.
My question is:
When (roughly) was the historical moment when embedded software was first considered cost-effective for some applications (rather than an equivalent technical solution not involving embedded software)? Which applications, and why?
Detail: Obviously there is a tradeoff between the cost of a CPU fast enough to perform X in software versus the cost of designing hardware that performs X.
Embedded systems date from the Apollo moon landings; specifically, the Apollo Guidance Computer (AGC) is widely held to be one of the first examples of an embedded system.
Commercially, early microprocessors were being employed in products by the early 1970s, most famously the 4-bit Intel 4004 used in the Busicom 141-PF calculator. Bill Gates and Paul Allen saw the potential of embedded microprocessors early with their pre-Microsoft venture, the Traf-O-Data traffic survey counter.
So I would suggest around 1971/72, with the introduction of the Intel 4004 and the more powerful 8-bit 8008. Note that unlike the still more powerful Intel 8080, which inspired the first home-brew microcomputers and the MITS Altair, the 4004 and 8008 were barely suitable for use as a general-purpose "computer" as such, and therefore embedded computing systems pre-date general-purpose microcomputers.
I would also dispute your characterisation of what an embedded system is; if you were asking that question, see my answer to a similar one.