Processing time increases as virtual host threads increase in Apache server - apache

I’m trying to run a YOLOv5 model on 12 cameras through Flask scripts. I have created 12 virtual hosts, each with a separate directory containing a separate Flask script for its camera.
Each virtual host uses a different port number, making them port-separated virtual hosts.
The system specifications are:
OS: Windows Server 2019
GPU: GeForce RTX 2080 Ti, 12 GB
CPU: Intel Xeon, 8 cores at 3.6 GHz
RAM: 32 GB
When I run only one camera through its virtual host, performance is really good, processing up to 34 frames per second. But as I add more cameras, the processing time increases; with 12 cameras, each virtual host processes only 4 frames per second.
Are the worker threads through which the virtual hosts are initialized interfering with each other?
I have increased Apache's thread limit through the WinNT MPM module to 12,000 threads to provide spare threads to handle the load.
CPU usage only reaches 35%, RAM usage 10 GB, and GPU memory usage 6 GB.
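For reference, each per-camera script is roughly of the following shape. This is only a minimal sketch of the setup described above (the camera source, port number, /detect route, and yolov5s weights are placeholder assumptions, not the actual code):

import cv2
import torch
from flask import Flask, jsonify

app = Flask(__name__)

# Load the YOLOv5 model once per process; each virtual host runs its own copy
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Camera source handled by this virtual host (index 0 is a placeholder)
camera = cv2.VideoCapture(0)

@app.route('/detect')
def detect():
    ok, frame = camera.read()
    if not ok:
        return jsonify(error='no frame from camera'), 503
    # OpenCV delivers BGR frames; YOLOv5 expects RGB
    results = model(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    return jsonify(results.pandas().xyxy[0].to_dict(orient='records'))

if __name__ == '__main__':
    # Each copy of the script listens on its own port behind its port-separated virtual host
    app.run(port=5001)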

Related

Number of processors assigned to Guest OSs (time slicing?)

VMWare Workstation Pro 14
Windows 10 v1809
4-core CPU (Core i7-6700, 3.4 GHz)
I am wondering how to assign processors to my guest OSs.
I am using these guest OSs:
CentOS 6.9
CentOS 6.9
Ubuntu 16.04 LTS
Windows 7
Windows 8.1
Windows 10
If I assign one processor to each of four guest OSs, that equals the physical number of processors (four), and the host OS also needs processors. Yet I have had no problem doing so.
Question: Are the processors assigned using time-slicing, so that a single CPU is assigned to two guest OSs in turn?
If so, I am thinking of increasing the number of processors assigned to some guest OSs (e.g. to 2).
There's a CPU scheduler in use. Basically, it lets a VM make use of any available CPU instead of being pinned to a specific CPU, where it could get stuck waiting for an extended period of time.
More information about the scheduler concept can be found in the following whitepaper: https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/vmware-vsphere-cpu-sched-performance-white-paper.pdf

Difference between a virtual machine process and a host OS process?

Suppose on my PC I have Ubuntu as the host OS. Now I install a hypervisor, say VirtualBox, and then deploy a CentOS and a Red Hat OS inside it as guest OSs.
Suppose CentOS and Red Hat each have 2 processes running and Ubuntu is running 3 processes. My questions are:
How many processes does Ubuntu have in total?
Is there any difference between guest OS and host OS processes?
If every guest OS runs as a process, will its processes get less CPU time compared to the other processes running on the host OS?
Please clear my doubts here.
Thank you.
Well, let me clear your doubts.
First of all, an OS does not have a fixed number of processes; what you allocate to a VM are cores or threads. Technically, you can define how many cores or threads you want your virtual machine to use, and that depends on the system configuration you have.
Secondly, the guest OS is what you have created inside the virtual machine, and the host OS is what your laptop or PC actually runs. The host OS uses the actual hardware, whereas the guest OS uses virtual hardware, such as the number of cores and the type and size of hard drive defined by the user when adding the virtual machine.
Third, as I mentioned earlier, the guest and host OS work with the configuration you set: if you assign a higher number of cores/threads when setting up your virtual machine, the guest OS will get more speed.
Ideally, virtual machines are used to test and build operating system functionality without affecting the underlying OS. You can think of it as your parents' house: you can live and grow there, but in the end you cannot escape the fact that their contribution is larger, so you cannot go beyond what they provide without leaving and making your own home.
Linux is a multi-threaded operating system. The host OS treats VirtualBox as just another process with its own threads. You can define the number of cores and the virtual hard disk size for the guest OS in VirtualBox.
Since VirtualBox runs in its own threads and the other operations of the host OS run in separate threads, the effect on processing speed is small. But I've observed big variances in processing speed on systems that have low memory: each thread needs a certain allocation of memory to run smoothly, so systems with more than 2 GB of RAM manage VirtualBox very well.
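To see this from the host side, here is a minimal sketch (assuming a Linux host with the psutil package installed; the 'VBox' name filter is an assumption, since the exact process names vary by VirtualBox version) that lists the VM processes the host sees, with their thread counts and resident memory:

import psutil

# Each running VM appears on the host as an ordinary process with many threads
for proc in psutil.process_iter(['name', 'num_threads', 'memory_info']):
    name = proc.info['name'] or ''
    if 'VBox' in name:
        mem = proc.info['memory_info']
        rss_mb = mem.rss / (1024 * 1024) if mem else 0
        print(f"{name}: {proc.info['num_threads']} threads, {rss_mb:.0f} MB resident")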

Installation of OpenStack on virtual machines (multi-node architecture)

Can I install OpenStack on 3 different virtual machines with the configurations listed below?
Controller Node: 1 processor, 4 GB memory, and 10 GB storage
Network Node: 1 processor, 4 GB memory, and 20 GB storage
Compute Node: 1 processor, 4 GB memory, and 30 GB storage
I want to know whether a physical machine with a virtualization-enabled processor is essential for an OpenStack deployment, or whether one can proceed with virtual machines only. I am asking this because almost all the documents I have read suggest using physical nodes.
I also want to know what difference it will make if I install on a virtual machine (assuming it is possible to install on a VM), and why I cannot install OpenStack on virtual machines (assuming it is not possible).
Please bear in mind that I don't want to install DevStack.
I guess one can install the controller and the Neutron node on VMs.
However, for the compute node you require a physical machine.
A simple configuration could be (as suggested in the OpenStack docs):
Controller Node: 1-2 CPUs, 8 GB RAM, 100 GB storage, 1 NIC
Neutron Node: 1-2 CPUs, 2 GB RAM, 50 GB storage, 3 NICs
Compute Node: 2-4 CPUs, 8 GB RAM, 100+ GB storage, 2 NICs
However, I guess (though I'm unsure) that if the compute node has CPU virtualisation enabled, then the compute node could also be a VM.
Could someone specify the implications of running these nodes on VMs compared to physical nodes?
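If it helps, a minimal Python sketch for checking whether CPU virtualisation is actually visible inside the prospective compute VM (assuming a Linux guest where /proc/cpuinfo is available):

# Look for the hardware virtualization CPU flags:
# 'vmx' = Intel VT-x, 'svm' = AMD-V. Without them, KVM is unavailable
# and the compute node falls back to plain QEMU emulation.
def has_vt_extensions(path='/proc/cpuinfo'):
    with open(path) as f:
        for line in f:
            if line.startswith('flags'):
                flags = line.split(':', 1)[1].split()
                return 'vmx' in flags or 'svm' in flags
    return False

if __name__ == '__main__':
    print('hardware virtualization visible:', has_vt_extensions())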

Find the CPU core on the host on which a virtual machine is running

I have OpenStack (KVM hypervisor) installed.
I have 32 cores in my host (/proc/stat gives me that info).
I can start a VM from the host, and I can also get the CPU utilisation of the VM; I get this by finding the PID of the virtual machine on the host.
However, what I am not able to figure out is which of the 32 cores each virtual machine is running on.
Is there any way to find this out?
Or is there any way to explicitly pin it to a particular CPU?
This answer on ask.openstack.org https://ask.openstack.org/en/question/1282/can-openstack-choose-the-physical-resources-to-boot-a-vm/ indicates that "OpenStack Compute with the libvirt driver has no ability to pin a VM to specific physical CPU."
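That said, at the host level (outside OpenStack) you can both inspect and pin the VM's qemu-kvm process once you have its PID, as described in the question. A minimal sketch, assuming a Linux host with the psutil package installed and a placeholder PID:

import psutil

vm_pid = 12345                    # placeholder: the qemu-kvm PID found on the host
vm = psutil.Process(vm_pid)

# Core the process was last scheduled on (it may move at the next reschedule)
print('last ran on core:', vm.cpu_num())

# Cores the process is currently allowed to run on
print('allowed cores:', vm.cpu_affinity())

# Explicitly pin the VM process to cores 4 and 5 (example values)
vm.cpu_affinity([4, 5])

libvirt's virsh vcpuinfo and virsh vcpupin commands expose the same information and pinning per vCPU, if you prefer to work through libvirt directly.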

VMware Player - swaps to disk more when more memory is allocated

Windows XP is the base OS. The laptop has 4 GB RAM and 2 × 2.2 GHz cores; it's about 3 years old.
I am using Windows 7 in VMware Player. If I allocate more than 1 GB of RAM to the Win7 machine in the VMware Player settings, it becomes very slow and continually swaps to disk.
I've turned off all the processor-intensive Win7 stuff:
http://www.computingunleashed.com/speed-up-windows-7-ultimate-guide-to.html
http://www.computingunleashed.com/list-of-services-in-windows-7-that-can.html
The base OS reports the Player as using only about 144 MB of RAM. Very weird.
I'm using 2 virtual disks: a 20 GB SCSI disk for C:\ and a 25 GB SCSI disk for data on F:\.
Problem: how do I tweak the Win7 VMware guest (i.e. running VS2010 and SQL Server 2008 R2) to work well on an older laptop? Or should I use something else?
The problem is that by default VMware Player backs the VM's memory with a file on disk.
Read this for more info and the fix:
http://communities.vmware.com/thread/46122
If you want to achieve this for all your VMs, you may simply append the following two lines:
prefvmx.minVmMemPct = 100
mainMem.useNamedFile = "false"
... inside the following VMware-wide configuration file:
C:/ProgramData/VMware/VMware Workstation/config.ini (or sometimes settings.ini)
The first line sets the percentage of configured VM memory that should fit into host memory, and the second (as already shown in the prior answer) disables the default file-based memory backing.
If you want to apply this to a specific VM only, so as not to alter the general VMware configuration, adding the following line to the VM's *.vmx file may be an alternative:
hard-disk.hostBuffer = "disabled"