How to enable VNC in a VM on KVM when the KVM server doesn't have a GUI?

Can someone tell me how to enable VNC for a VM created using virt-install on a KVM hypervisor?
My server doesn't have a GUI, so I run the following command to spin up a VM:
virt-install \
--name centos6 \
--ram 1024 \
--disk path=/var/lib/libvirt/images/centos6.img,bus=virtio,size=30 \
--vcpus 1 \
--os-type linux \
--os-variant rhel6 \
--network bridge=br0 \
--graphics none \
--location 'http://mirror.i3d.net/pub/centos/6/os/x86_64/' \
--extra-args 'console=ttyS0,115200n8 serial'
Now I want to install a GUI on the VM (centos6) and install VNC. Can someone tell me how to achieve that?
Thanks.

Found the answer at this link. Though it is implemented on Ubuntu, the same can be replicated for other distributions: https://www.howtoforge.com/tutorial/ubuntu-gnome-vnc-headless-server
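For the CentOS 6 guest above, here is a minimal sketch of the two usual approaches, assuming stock CentOS 6 package groups and the tigervnc-server package (verify the group and package names against your mirror):
# Option A (hypervisor side): have libvirt expose the guest display over VNC
# by creating the VM with --graphics vnc instead of --graphics none, e.g.:
#   --graphics vnc,listen=0.0.0.0
# then find the display number with: virsh vncdisplay centos6
# Option B (inside the guest): install a desktop plus a VNC server
yum groupinstall -y "Desktop" "X Window System" "Fonts"
yum install -y tigervnc-server
vncpasswd                         # set the VNC password for the current user
# declare a display in /etc/sysconfig/vncservers, e.g. VNCSERVERS="1:root"
service vncserver start
chkconfig vncserver on            # start the VNC server on boot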

Related

What is a no-nonsense way to create an Ubuntu virtual machine using virt-manager on the command line?

I tried various methods explained on the internet, but none seem to be working. Using a local ISO image gives one issue, and --location gives another.
Can we set up an IP using this command?
I am currently using this command:
sudo virt-install \
--name worker-2 \
--ram=4096 \
--disk size=100 \
--disk path=/opt/sciserver/vm/worker-2.qcow2,size=30,format=qcow2 \
--vcpus 2 \
--os-type linux \
--os-variant ubuntu20.04 \
--graphics none \
--location 'http://archive.ubuntu.com/ubuntu/dists/focal/main/installer-amd64/' \
--extra-args "console=tty0 console=ttyS0,115200n8"
and the error says:
"ERROR Error validating install location: Could not find an installable distribution at 'http://archive.ubuntu.com/ubuntu/dists/focal/main/installer-amd64/'
The location must be the root directory of an install tree.
See virt-install man page for various distro examples."
Your help will be much appreciated.
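One likely cause, judging by how Ubuntu publishes its installer for 20.04 (hedged; verify the URL exists on your mirror): the debian-installer tree for focal lives under a legacy-images subdirectory, so --location may need to point there instead:
--location 'http://archive.ubuntu.com/ubuntu/dists/focal/main/installer-amd64/current/legacy-images/'
As for setting an IP: virt-install itself does not assign guest IPs. With debian-installer you would typically pass preseed options through --extra-args (e.g. netcfg/get_ipaddress=, netcfg/disable_autoconfig=true); those are installer options, not virt-install flags.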

Which package provides virt-install CLI tool in Yocto?

Does anyone know which package provides the virt-install CLI tool in Yocto, or how to use the virt-install tool in a Yocto environment?
I have added the packages below to my .bb file, but I don't see the virt-install tool getting built:
packagegroup-core-boot \
qemu \
libvirt \
libvirt-libvirtd \
libvirt-virsh \
libvirt-python \
kernel-module-kvm \
kernel-module-kvm-intel \
kernel-module-kvm-amd \
Please help me.
Not 100% sure about Yocto, but I think virt-manager usually provides virt-install.
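If that holds, and assuming one of your layers (e.g. meta-virtualization) actually ships a virt-manager or virtinst recipe (the recipe name is an assumption, not something I can confirm for Yocto), you would add it to the same list in your .bb file:
virt-manager \
or append it to the image directly:
IMAGE_INSTALL:append = " virt-manager"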

RuntimeWarning: You're running the worker with superuser privileges: this is absolutely not recommended

I am using Django + Celery + Redis (celery==4.4.0). Locally it works fine, but when I dockerize it, I get the above error.
I am using the following commands to run it locally as well as inside the container:
**CMDs**
celery -A nrn worker -l info
docker run -d -p 6379:6379 redis
flower -A nrn --port=5555
Any help is highly appreciated.
**settings.py**
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_BROKER_URL = os.environ.get('redis', 'redis://127.0.0.1:6379/')
Take a look at the documentation. It's a warning, though, not an error (see the code). Running Celery under root is an error only when you allow pickle serialization, which is not enabled by default (see here).
However, it's still best practice to run Celery with lower privileges. In Docker (with a Debian-based image), I choose to run Celery under nobody:nogroup. I use this Dockerfile:
FROM python:3.6
ENV PYTHONDONTWRITEBYTECODE=1 \
PYTHONUNBUFFERED=1
WORKDIR /srv/celery
COPY ./app app
COPY ./requirements.txt /tmp/requirements.txt
COPY ./celery.sh celery.sh
RUN pip install --no-cache-dir \
-r /tmp/requirements.txt
VOLUME ["/var/log/celery", "/var/run/celery"]
CMD ["./celery.sh"]
where celery.sh looks as follows:
#!/usr/bin/env bash
# ensure the runtime and log directories exist and are writable by nobody
mkdir -p /var/run/celery /var/log/celery
chown -R nobody:nogroup /var/run/celery /var/log/celery
# drop privileges via --uid/--gid so the worker itself runs as nobody:nogroup
exec celery --app=app worker \
--loglevel=INFO --logfile=/var/log/celery/worker-example.log \
--statedb=/var/run/celery/worker-example#%h.state \
--hostname=worker-example#%h \
--queues=celery.example -O fair \
--uid=nobody --gid=nogroup
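For completeness, a hedged build-and-run sketch (image and container names here are made up; this assumes your settings read the broker URL from the redis environment variable, as in the question):
# build the worker image from the Dockerfile above
docker build -t celery-worker-example .
# run the broker, then the worker pointing at it
docker run -d --name redis -p 6379:6379 redis
docker run -d --name worker -e redis=redis://172.17.0.1:6379/ celery-worker-example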

Cannot run Mesos Containers with GPU tasks

I am running Mesos on Ubuntu and am trying to execute:
mesos-execute \
--master=$(cat /etc/mesos/zk) \
--name=gpu-test \
--docker_image=nvidia/cuda \
--command="nvidia-smi" \
--framework_capabilities="GPU_RESOURCES" \
--resources="gpus:1"
and it is failing with: sh: 1: nvidia-smi: not found
even though, when I run it without container support:
mesos-execute \
--master=$(cat /etc/mesos/zk) \
--name=gpu-test \
--command="nvidia-smi" \
--framework_capabilities="GPU_RESOURCES" \
--resources="gpus:1"
it has access to the GPU.
Plus, if I run it without container support but use the command
nvidia-docker run -it nvidia/cuda nvidia-smi
it works. So it seems the Mesos containerizer doesn't have access to the GPUs. But in the /etc/mesos-slave/ directory I set containerizers to mesos (and all the other flags required to run GPU commands). Plus, non-GPU-related commands are working fine.
This looks like a regression in 1.3.0. I downgraded to 1.2.1 on Ubuntu and can successfully use GPUs with Docker containers and the Mesos containerizer again.
sudo apt-get install mesos=1.2.1-2.0.1
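If you downgrade, you may also want to keep apt from upgrading Mesos back to 1.3.0:
sudo apt-mark hold mesos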
It looks like someone filed a related bug but there's been no activity:
https://issues.apache.org/jira/browse/MESOS-7730
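Independent of the regression, it is worth double-checking the agent flags. Per the Mesos GPU-support documentation, the Nvidia GPU isolator must be enabled for the Mesos containerizer, and the Docker image provider is needed when you pass --docker_image. A rough sketch (flag values as in the docs; check against your Mesos version):
mesos-slave \
--containerizers=mesos \
--image_providers=docker \
--isolation=filesystem/linux,docker/runtime,cgroups/devices,gpu/nvidia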

multiple KVM guests script using virt-install

I would like to install 3 KVM guests automatically using Kickstart.
I have no problem installing them manually using the virt-install command:
virt-install \
-n dal \
-r 2048 \
--vcpus=1 \
--os-variant=rhel6 \
--accelerate \
--network bridge:br1,model=virtio \
--disk path=/home/dal_internal,size=128 --force \
--location="/home/kvm.iso" \
--nographics \
--extra-args="ks=file:/dal_kick.cfg console=tty0 console=ttyS0,115200n8 serial" \
--initrd-inject=/opt/dal_kick.cfg \
--virt-type kvm
I have 3 scripts like the one above. I would like to install all 3 at the same time. How can I disable the console, or run it in the background?
Based on the virt-install man page:
http://www.tin.org/bin/man.cgi?section=1&topic=virt-install
--noautoconsole
Don't automatically try to connect to the guest console. The
default behaviour is to launch virt-viewer(1) to display the
graphical console, or to run the "virsh" "console" command to
display the text console. Use of this parameter will disable this
behaviour.
virt-install will connect to the console automatically. If you don't want that, simply add --noautoconsole to your command, like:
virt-install \
-n dal \
-r 2048 \
--vcpus=1 \
--quiet \
--noautoconsole \
...... other options
We faced the same problem, and in the end the only way we found was to launch each install in the background with &; a wrapper sketch follows the example below.
We also include the --quiet option (only print fatal error messages); it is not mandatory.
virt-install \
-n dal \
-r 2048 \
--vcpus=1 \
--quiet \
--os-variant=rhel6 \
--accelerate \
--network bridge:br1,model=virtio \
--disk path=/home/dal_internal,size=128 --force \
--location="/home/kvm.iso" \
--nographics \
--extra-args="ks=file:/dal_kick.cfg console=tty0 console=ttyS0,115200n8 serial" \
--initrd-inject=/opt/dal_kick.cfg \
--virt-type kvm &
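A wrapper that launches all three installs in parallel might look like this sketch (the per-guest script names are made up; substitute your three scripts):
#!/usr/bin/env bash
# start each install in the background, then wait for all of them to finish
for guest in dal dal2 dal3; do
./install_${guest}.sh &   # each script wraps a virt-install command like the one above
done
wait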
I know this is kind of old, but I wanted to share my thoughts.
I ran into the same problem, but due to the environment we work in, we need to use sudo with a password (compliance reasons). The solution I came up with was to use timeout instead of &. When we forked it right away, it would hang because the sudo prompt never appeared. So, using timeout with the example above (we actually ran timeout 10 sudo virt-install ...):
timeout 15 virt-install \
-n dal \
-r 2048 \
--vcpus=1 \
--quiet \
--os-variant=rhel6 \
--accelerate \
--network bridge:br1,model=virtio \
--disk path=/home/dal_internal,size=128 --force \
--location="/home/kvm.iso" \
--nographics \
--extra-args="ks=file:/dal_kick.cfg console=tty0 console=ttyS0,115200n8 serial" \
--initrd-inject=/opt/dal_kick.cfg \
--virt-type kvm
This allowed us to interact with the sudo prompt and send the password over, and then start the build. Killing the foreground virt-install after the timeout doesn't stop the guest build; it continues on, and so can your script.
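After the timeout expires, the guest build keeps running under libvirt, so you can verify it and reattach later with standard virsh commands:
virsh list --all     # the new domain should show as running
virsh console dal    # reattach to the text console if needed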