VMware Player VM - 1 core CPU limitation

I'm using a VM with VMware Player to write code and compile.
As my current program is huge, compilation takes a while (up to 5 minutes),
using 25% of my 4-core CPU on the host, i.e., 100% of one core.
It seems that the VM is limited to a single core.
Is there a way to increase the number of cores a VM can use?
I'd like to use 50% or 75% of my 4-core CPU.
Thanks

It sounds like you're limited by the number of parallel build tasks you can run, not by the VM's CPU configuration: by default, make runs a single step at a time. Try running several steps in parallel, e.g., make -j4 or the equivalent for your build system.
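For example, a minimal sketch assuming a Makefile-based project (the job counts are illustrative):
make -j4           # run up to 4 recipe steps in parallel
make -j"$(nproc)"  # or match the number of cores the VM exposes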
On a separate note, a VM may be more overhead for you than you might like; consider using Docker to host your development environment.
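As a hedged illustration of that suggestion (the gcc:13 image and mount paths are assumptions, not part of the original answer), a containerized build could look like:
docker run --rm -v "$PWD:/src" -w /src gcc:13 make -j4
A container sees all host cores by default, so the parallel build above is not limited to one core.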

Related

Docker Desktop Windows - Abysmal performance on an AMD system?

I've recently assembled a new AMD Desktop, to replace an older Dell Latitude E7540 laptop.
The AMD Desktop:
Ryzen 3 3100 @ 3.8GHz (4C/8T), 32GB DDR4 3600 CL17 RAM, Corsair MP600 Gen4 SSD
The DELL Laptop:
Dell Latitude E7540: Intel i7-5600U @ 2.6GHz (2C/4T), 16GB DDR3 1600 RAM, Samsung mSATA PM851
On the new AMD Desktop, when executing a docker build command, two situations occur:
The performance is dreadful: even when building a simple image, it takes a long time for the command to start, and after starting, it takes a very long time to complete (when it completes at all).
The build window crashes almost 50% of the time.
The benchmarks indicate that the new AMD Desktop is 3.5x faster at single core, and 6x faster at multicore.
As such, I was expecting a much better performance with the new AMD Desktop.
Unfortunately, that's not the case, and for the same Dockerfile (which generates a very big image):
The Dell starts faster
The Dell completes faster (10m vs 30m)
On the Dell, the build window never crashes.
The only difference between the two systems is that the old one is an Intel platform and the new one is an AMD Ryzen 3.
Environment Details are the same on both machines:
Windows Version: Windows 10 Ent. 19049
Docker Desktop Version: Docker 3.0.0
What can explain this abysmal performance on Docker-Desktop on the new AMD system?
After a few troubling days, I can confirm that the problem is not AMD-related.
The culprit is the antivirus: when it is on, it scans the files used by Docker, which causes all the problems I've described.
The Docker documentation explains how to exclude Docker-related files from antivirus scanning:
https://docs.docker.com/engine/security/antivirus/
When antivirus software scans files used by Docker, these files may be locked in a way that causes Docker commands to hang.
One way to reduce these problems is to add the Docker data directory (/var/lib/docker on Linux, %ProgramData%\docker on Windows Server, or $HOME/Library/Containers/com.docker.docker/ on Mac) to the antivirus’s exclusion list. However, this comes with the trade-off that viruses or malware in Docker images, writable layers of containers, or volumes are not detected. If you do choose to exclude Docker’s data directory from background virus scanning, you may want to schedule a recurring task that stops Docker, scans the data directory, and restarts Docker.
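For example, a minimal sketch for Windows Defender (cmdlets and paths for other antivirus products will differ, and the Docker Desktop data path shown is an assumption):
Add-MpPreference -ExclusionPath "C:\ProgramData\Docker"      # exclude Docker's data directory
Add-MpPreference -ExclusionProcess "com.docker.backend.exe"  # hypothetical Docker Desktop process name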

How long is a gem5 build with "gem5 scons build/X86/gem5.opt -j9" expected to take on a virtual machine?

First time working with gem5. According to gem5.org, the following build command should take about 15 minutes or so to complete: scons build/X86/gem5.opt -j9. But it's been more than an hour since the build started and it's not complete yet. Has anyone experienced the same issue? Is this normal? My machine has 8 cores, and I've allocated 16GB of memory to VMware, on which I'm running the build. Could it be a hardware problem, such as not enough memory?
So far I have started the build process from scratch a few times with the same results. I've also tried a different virtualization platform (VirtualBox), but it takes about the same amount of time to build.
Thanks!
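One hedged first step (my suggestion, not from the original post) is to confirm the guest actually sees the resources you allocated:
nproc    # CPU count visible inside the VM; -j9 oversubscribes anything fewer than ~8 cores
free -h  # total memory visible inside the VM; linking gem5.opt can use several GB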

MuleSoft On-Premise: Distributing CPU and Memory Between 2 Runtimes (both on the same system)

Here is my scenario: I have a Windows VM with two Mule runtimes installed on it (Mule1 and Mule2).
Now, if I have to distribute 60% of the VM's CPU to Mule1 and 40% to Mule2, how can it be done?
Is that even possible?
When you have more than one core or CPU, there is a concept called CPU affinity: the operating system provides tools for assigning specific cores to a process. However, I'm not aware of an out-of-the-box feature to assign or limit a percentage of CPU usage per process.
Linux:
You can use the taskset command to set which cores to assign to the mule process.
Example:
taskset -c 0,1 ./mule
Source: https://help.mulesoft.com/s/article/How-to-set-CPU-affinity-for-Mule-ESB-process
Windows:
In Task Manager, you can right-click the java.exe and wrapper-windows-x86-64.exe processes, select "Set affinity", and choose the processors.
This article shows PowerShell commands to do the same from the command line: https://help.mulesoft.com/s/article/How-to-set-CPU-affinity-for-a-Mule-ESB-process-in-Windows-as-a-Service
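A minimal PowerShell sketch of the same idea (the process name and affinity mask are illustrative assumptions):
$proc = (Get-Process -Name "java")[0]  # hypothetical: the Mule1 JVM process
$proc.ProcessorAffinity = 0x3          # bitmask 0b11 = pin to cores 0 and 1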
It's a completely different topic, but Docker allows something similar per container.
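For comparison, a hedged sketch of the Docker equivalent (image names are placeholders):
docker run --cpus="2.4" mule1-image  # 60% of a 4-core host
docker run --cpus="1.6" mule2-image  # 40% of a 4-core host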

Why are Ninja and MSBuild unable to utilize more than roughly 50% CPU?

I have a Lenovo Z51-70 laptop (Windows 10). By default it had 8GB RAM and an SSHD. When I used to compile large projects (20K C++ files), Task Manager always showed 90-100% CPU utilization. A week ago I upgraded the SSHD to an SSD and the 8GB RAM to 16GB to speed up compilation. But build time hasn't improved (it is almost the same), and Task Manager now always shows roughly 50% CPU utilization. Why is it unable to get anywhere near 90-100%? And why did the same build on the SSHD and 8GB RAM always use roughly 90-100% CPU? It is not specific to a particular build system: I have tried MSBuild and Ninja, and all build systems show the same CPU utilization. I have also compiled different projects to rule out anything project-specific.
Any thoughts?
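One hedged thing to check (my suggestion, not from the post): make sure the job count matches the logical core count, since an explicit lower value would cap utilization like this:
ninja -j 8                    # Ninja: allow 8 parallel compile jobs
msbuild MySolution.sln /m:8   # MSBuild: build up to 8 projects in parallel (solution name is a placeholder)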

How to ensure the stability of a PC?

I need to run an intensive CPU task that maxes all cores of my CPU to 100%.
After a few days of running this task, I find that the machine becomes unresponsive and I am no longer able to SSH into it. I then have to restart the machine and begin the task again. This task could take several weeks or even months to compute.
I'd like to find a way to run this task to completion.
I've tried running the task on Debian 8 and Ubuntu Server LTS. Both of these operating systems exhibited the same problem. I thought about running the task inside a virtual machine and using cron to snapshot it every hour, but this seems quite extreme and would incur overhead.
Why is it that my machine is unstable?
Could it be due to power fluctuations?
Should I try under-clocking the CPU?
Thanks
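A hedged starting point for diagnosing hangs like this (assuming a systemd-based Linux and the lm-sensors package):
journalctl -b -1 -p err  # kernel and service errors from the boot before the hang
sensors                  # check CPU temperatures for thermal throttling or instability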