Unable to open X display when trying to run google-chrome on CentOS (RHEL 7.5) over SSH

I need to run Google Chrome remotely on a virtual machine over SSH. I do not want X forwarding; I want to utilize the GPU available on the VM. When I try running google-chrome I get the following error:
[19615:19615:0219/152933.751028:ERROR:browser_main_loop.cc(1512)] Unable to open X display.
I've tried setting my DISPLAY environment variable to various values:
export DISPLAY=localhost:0.0
export DISPLAY=127.0.0.1:0.0
export DISPLAY=:0.0
I've also tried replacing 0.0 in the above examples with different values.
I have ForwardX11 no in /etc/ssh/sshd_config
I also tried switching to the multi-user target:
systemctl isolate multi-user.target
When I run sudo lshw -C display I get the following output:
*-display
description: VGA compatible controller
product: Hyper-V virtual VGA
vendor: Microsoft Corporation
physical id: 8
bus info: pci#0000:00:08.0
version: 00
width: 32 bits
clock: 33MHz
capabilities: vga_controller bus_master rom
configuration: driver=hyperv_fb latency=0
resources: irq:11 memory:f8000000-fbffffff
*-display UNCLAIMED
description: VGA compatible controller
product: GM204GL [Tesla M60]
vendor: NVIDIA Corporation
physical id: 1
version: a1
width: 64 bits
clock: 33MHz
capabilities: pm msi pciexpress vga_controller bus_master cap_list
configuration: latency=0
resources: iomemory:f0-ef iomemory:f0-ef memory:41000000-41ffffff memory:fe0000000-fefffffff memory:ff0000000-ff1ffffff
I've tried to update my GPU drivers with:
wget https://www.nvidia.com/content/DriverDownload-March2009/confirmation.php?url=/tesla/375.66/nvidia-diag-driver-local-repo-rhel7-375.66-1.x86_64.rpm
yum -y install nvidia-diag-driver-local-repo-rhel7-375.66-1.x86_64.rpm
But after that I still see UNCLAIMED next to my NVIDIA GPU.
Any ideas?

You can try Xvfb; it does not require additional hardware.
Install Xvfb if you haven't installed it yet, then follow these steps.
sudo apt-get install -y xvfb
Dependencies to make "headless" chrome/selenium work:
sudo apt-get -y install xorg xvfb gtk2-engines-pixbuf
sudo apt-get -y install dbus-x11 xfonts-base xfonts-100dpi xfonts-75dpi xfonts-cyrillic xfonts-scalable
Optional but nifty: For capturing screenshots of Xvfb display:
sudo apt-get -y install imagemagick x11-apps
Make sure that Xvfb starts every time the box/VM is booted (see the systemd sketch below):
Xvfb -ac :99 -screen 0 1280x1024x16 &
export DISPLAY=:99
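If you want Xvfb to come up at boot rather than starting it by hand, one option is a small systemd unit. This is only a sketch, assuming systemd, an Xvfb binary at /usr/bin/Xvfb, and the same :99 display used above; the unit name is made up:
# /etc/systemd/system/xvfb.service (hypothetical unit)
[Unit]
Description=Virtual framebuffer X server on :99
After=network.target

[Service]
ExecStart=/usr/bin/Xvfb -ac :99 -screen 0 1280x1024x16
Restart=on-failure

[Install]
WantedBy=multi-user.target
Then enable it with sudo systemctl daemon-reload && sudo systemctl enable --now xvfb.service.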
Run Google Chrome
google-chrome

Okay guys, I found my problem after 2 hours of going crazy. My box was configured correctly. What you can NOT do is ssh from one box, to another box, to this box and expect X11 forwarding to play nicely. Without tearing apart the entire network, I found that if I shelled over from the MAIN box to this box (no double or triple ssh'ing), Chrome came right up as a regular user from the CLI. So it was the multiple shells from multiple boxes that left the display set to NOTHING! Setting the display manually only complicates the problem. Once I shelled directly over to this box from the main outside box, my display was set to 10:0, which is the first instance in my configuration. Don't make this mistake; you will waste valuable time.
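A quick sanity check of that situation, with placeholder hostnames, is to connect in a single hop with X11 forwarding and see what DISPLAY ends up as:
# run from the MAIN box, one hop only (user/target-box are placeholders)
ssh -X user@target-box
echo $DISPLAY   # should now be set, e.g. localhost:10.0
google-chrome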

FWIW, I ran into this when using SSH to log into a Selenium chrome node in a Docker compose stack. Chrome would launch if I invoked it as root with sudo -u seluser google-chrome, but not if I logged in as seluser. The trick turned out to be that root had DISPLAY set to :99:0, and seluser didn't have it set at all. If I set it explicitly (either from a seluser shell or from the docker compose exec command line) it worked.
# "selenium-chrome" is the service name; use whatever your service is called
$ docker-compose exec -u seluser \
    selenium-chrome \
    /bin/bash
seluser@c02cda62b751:/$ export DISPLAY=:99:0
seluser@c02cda62b751:/$ google-chrome http://app.test:3000/home
or
$ docker-compose exec -u seluser -e DISPLAY=:99:0 \
selenium-chrome \
google-chrome http://app.test:3000/home
That :99:0 value is undocumented, though, so if this isn't working, you might try checking root's DISPLAY value with:
docker-compose exec -u root selenium-chrome bash -c 'echo "${DISPLAY}"'
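If you'd rather not pass -e every time, a sketch of setting it once in the compose file (assuming the same service name and that :99:0 is right for your image) would be:
# docker-compose.yml (fragment)
services:
  selenium-chrome:
    image: selenium/node-chrome   # whatever image you are already using
    environment:
      - DISPLAY=:99:0
Variables set here end up in the container's environment, so docker-compose exec shells pick them up too.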

I faced the same issue with WSL and Ubuntu. I uninstalled/reset Ubuntu. After that, I executed the command below:
wsl --set-default-version 2
Then I installed Ubuntu again, and I didn't get the --no-sandbox issue or any other issue.
Hope this is useful for someone.
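For reference, the reset described above boils down to something like this from an elevated PowerShell/cmd prompt (the distro name Ubuntu is an assumption; check yours with wsl -l):
wsl --unregister Ubuntu        # removes the distro and its data
wsl --set-default-version 2
Then reinstall Ubuntu from the Microsoft Store and launch it once to set it up again.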

Related

Why doesn't htop show my docker-processes using wsl2

Building my container using Docker and WSL 2, I wanted to see what happens. Running htop in WSL only shows the CPU usage, but none of the processes running in my containers.
Searching for htop, docker and wsl2, the only thing I could find was this archived and unrelated Reddit thread: https://www.reddit.com/r/bashonubuntuonwindows/comments/dia2bw/htop_on_wsl2_doesnt_show_any_processes_while_ps/
Docker does not run in your default WSL distro, but in a special Docker WSL distro. Running wsl -l shows the installed distros:
Ubuntu (Standard)
docker-desktop
docker-desktop-data
Docker Desktop is based on Alpine, and you can run top right out of the box:
wsl -d docker-desktop top
If you want htop, you need to install it first:
wsl -d docker-desktop apk update
wsl -d docker-desktop apk add htop
Running
wsl -d docker-desktop htop
will now give you a nice overview of what is happening in your Docker containers.
I agree with @Morty.
The following command gives you the list of distros on Windows:
wsl -l
Then you can run either of the following commands:
wsl -d docker-desktop ps
wsl -d docker-desktop top

Why can I access my Apache default page ONLY when I go into my container's bash?

First of all, I would like to say that I'm new to Docker and everything around it.
I have been wanting to make a container with Apache, PHP and Firebird installed. So far, so good; everything seems to work and I can get my default page when I type my IP address and :8080 into my browser's address bar. I do so by first starting my container like this:
docker run -p 8080:80 -d apps
Where "apps" is the name of my container.
I have achieved this with my Dockerfile, which looks like this (it might be a bit messy, I'm still learning good practices!):
# Download of base image - ubuntu 20.04
FROM ubuntu:20.04
# Updating/upgrading
RUN apt-get update -y && apt-get upgrade -y
# Installing apache2, php and firebird with modules
RUN DEBIAN_FRONTEND="noninteractive" apt-get install apache2 php libapache2-mod-php -y && \
apt-get install php-curl php-gd php-intl php-json php-mbstring php-xml php-zip -y && \
DEBIAN_FRONTEND="noninteractive" apt-get install firebird3.0-server -y && apt-get install firebird->
# Start up apache in foreground by default
CMD /usr/sbin/apache2 -D FOREGROUND
ENTRYPOINT service apache2 restart && /bin/bash
# Expose apache
EXPOSE 80
Now, my idea was to export this container to another computer and try the same thing. I have followed a few tutorials and managed to import my container on the new machine. My problem is that somehow the command I previously used doesn't work; it shows me this error:
docker: Error response from daemon: No command specified.
See 'docker run --help'.
Which is odd, because it works just fine on the other machine. I also ran this command, WHICH WORKS:
docker run -i -t -p 8080:80 apps /bin/bash
This one works alright, but I don't want to have to access the bash every time I want my Apache page to load. I want my container to run without me having to get inside it, if that makes sense.
In my opinion, it probably comes from the fact that I only loaded the container, and not the image used to build it (maybe a bad practice? Couldn't find anything about it on Google).
Here is my setup, just in case:
On the first machine (which is the one where I created the image and the container):
Ubuntu 20.04 LTS
Apache/2.4.41
Docker 19.03.8
On the other machine, on which I'm trying to make my container work:
Ubuntu 18.04 LTS
Apache/2.4.29
Docker 19.03.6
Thank you for your patience and time !
apps is your Docker image; if you want to give a name to your container, you can specify --name in the run command, i.e.,
docker run --name container_name -p 8080:80 -d apps
You can use sudo docker save -o apps.tar apps to create a tar file of the image
Then change the permissions of the tar file: sudo chmod 777 apps.tar
Copy this tar file to the other system you want to try it on, then:
sudo docker load --input apps.tar
This will load the image; then you can use the previous command to start the container:
docker run -p 8080:80 -d apps
Where "apps" is the name of my container. <- This statement is incorrect and perhaps the misunderstood concept that leads you to the problem.
apps is the name of the image, not the name of the container. On the host on which you can run the container, you must have built that image from the Dockerfile that you shared using the command:
docker build -t apps .
Copy the Dockerfile to the host where you cannot run the container, build the image there as well, and try running the container again.
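A minimal sketch of that workflow, with hypothetical hostnames and paths:
# on the machine where the image already builds
scp Dockerfile user@other-machine:~/apps/

# on the other machine
cd ~/apps
docker build -t apps .
docker run -p 8080:80 -d apps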

Singularity container from conda environment

I want to build a container from my conda environment following this post. However, I get the following error: '/bin/sh: 1: cannot create ~/.bashrc: Directory nonexistent'. I am using a Vagrant VM to build my image and would be grateful for any help.
Editing the .bashrc, aside from failing, will not be helpful, as the shell loaded by Singularity is explicitly started with --norc. You want to use the $SINGULARITY_ENVIRONMENT variable in %post to have the values available.
Something along these lines:
%post
# You may need to install some pre-reqs your host system has installed outside of conda, e.g.
# apt update && apt install -y build-essential make zlib
ENV_NAME=$(head -1 environment.yml | cut -d' ' -f2)
echo ". /opt/conda/etc/profile.d/conda.sh" >> $SINGULARITY_ENVIRONMENT
echo "conda activate $ENV_NAME" >> $SINGULARITY_ENVIRONMENT
. /opt/conda/etc/profile.d/conda.sh
conda env create -f environment.yml -p /opt/conda/envs/$ENV_NAME
I listed a few libraries that you probably have installed on your current machine that might not be installed in the slim Docker image. You can install them via apt or conda, depending on your preference. If it does happen though, it'll be specific to your environment.yml and host OS, so you'll have to iterate until the build succeeds.
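For context, a minimal definition file around that %post section might look like the following. This is only a sketch, assuming the continuumio/miniconda3 base image (which keeps conda under /opt/conda) and an environment.yml next to the definition file; adjust names to your setup:
Bootstrap: docker
From: continuumio/miniconda3

%files
    environment.yml /environment.yml

%post
    # environment name taken from the first line of environment.yml ("name: ...")
    ENV_NAME=$(head -1 /environment.yml | cut -d' ' -f2)
    echo ". /opt/conda/etc/profile.d/conda.sh" >> $SINGULARITY_ENVIRONMENT
    echo "conda activate $ENV_NAME" >> $SINGULARITY_ENVIRONMENT
    . /opt/conda/etc/profile.d/conda.sh
    conda env create -f /environment.yml -p /opt/conda/envs/$ENV_NAME

%runscript
    exec "$@"
Build it with something like sudo singularity build myenv.sif myenv.def.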

Found unknown Linux distribution on /dev/sdb2: GRUB configuration for dual-booting Arch Linux and NetBSD-7.0

I have Arch Linux on /dev/sdb1 and NetBSD-7.0 on /dev/sdb2.
On Arch Linux, when I run sudo grub-mkconfig -o /boot/grub/grub.cfg, I get a message like Found unknown Linux distribution on /dev/sdb2, but when I reboot there is no GRUB option for that unknown Linux distribution, which I know is NetBSD-7.0.
How can I add NetBSD-7.0 to my GRUB menu?
There is a similar post, currently looking into it.
UPDATE: I mounted the NetBSD partition with sudo mount -t ufs -o ro,ufstype=ufs2 /dev/sdb2 /mnt/ (ufstype=44bsd did not work) and then ran grub-mkconfig -o /boot/grub/grub.cfg, but the issue persists.
UPDATE: Rebooted and pressed c to get the GRUB command line. The following commands booted NetBSD-7.0:
ls
I ran ls to see the correct names of the disks and partitions; /dev/sdb2 on Linux was (hd0,gpt2) in GRUB. Then I ran the following:
insmod ufs2
set root=(hd0,gpt2)
knetbsd /netbsd
boot
And NetBSD-7.0 booted.
To add a NetBSD option to the GRUB menu, I modified the file /etc/grub/40_custom on Arch Linux like below:
menuentry "NetBSD-7.0"{
insmod ufs2
set root=(hd0,gpt2)
knetbsd /netbsd
}
However, after modifying 40_custom as above, the NetBSD option does not appear in the GRUB menu. I don't know why.
Unless you have a typo, it looks like the 40_custom file is in the wrong directory. It should be located at /etc/grub.d/40_custom; notice the .d.
If your /boot is located on a separate partition, make sure that it is mounted with mount /boot before generating the grub.cfg. Otherwise your new grub.cfg won't be used.
Check which partition grub is loading the configuration from by running echo ${prefix} within the grub command line. It's possible that grub is loading the configuration from a partition that you don't expect.
After generating grub.cfg, verify that NetBSD was added to the config with grep -i netbsd /boot/grub/grub.cfg before rebooting, to save yourself some frustration.
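Putting those checks together (assuming /boot is on a separate partition; skip the mount if it isn't):
sudo mount /boot                             # only if /boot is separate
sudo grub-mkconfig -o /boot/grub/grub.cfg
grep -i netbsd /boot/grub/grub.cfg           # the menuentry should show up here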

libvirt and VirtualBox / Getting Started

I'm trying to get started with libvirt using VirtualBox as the virtualization solution. I installed everything, and VirtualBox itself runs when using its VBoxHeadless command.
However, libvirt fails to connect to VirtualBox:
# virsh -c vbox:///session
libvir: error : could not connect to vbox:///session
error: failed to connect to the hypervisor
I could not find any hints in the libvirt documentation that point to whether I have to make any domain-specific configuration before using virsh.
Does anyone have a hint? Or even better, maybe a tutorial that works through the way of using libvirt, virsh, or its APIs (my later goal) from the ground up.
If you are doing this on Ubuntu, then the problem is their libvirt package is built without VirtualBox support.
You can rebuild the package with support very easily. Something like:
apt-get source -d libvirt
sudo apt-get build-dep libvirt
dpkg-source -x libvirt*dsc
Go into the libvirt directory and edit debian/rules so that instead of --without-vbox it says --with-vbox. You can add an entry to the top of debian/changelog so the package is compiled as a different version (e.g., append ~local1 to the version).
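A sketch of those two edits with sed and dch (dch comes from the devscripts package, and dpkg-parsechangelog --show-field needs a reasonably recent dpkg; these tools are assumptions, you can of course edit the files by hand):
# run inside the unpacked libvirt source directory
sed -i 's/--without-vbox/--with-vbox/' debian/rules
dch --newversion "$(dpkg-parsechangelog --show-field Version)~local1" "Rebuild with VirtualBox support"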
dpkg-buildpackage -us -uc -b -rfakeroot
You'll get new .debs built in the directory above. Use dpkg -i to install the relevant ones (libvirt0, libvirt0-bin, and whatever else you want).
Double-check whether or not you have write access to /var/run/libvirt/libvirt-sock.
The socket file should have permissions similar to:
$ sudo ls -la /var/run/libvirt/libvirt-sock
srwxrwx--- 1 root libvirtd 0 2010-08-24 14:54 /var/run/libvirt/libvirt-sock
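If your user is not in that group, adding it is usually enough (the group is libvirtd in the listing above; on newer releases it may be called libvirt; log out and back in afterwards):
sudo usermod -aG libvirtd $USER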
It could also be helpful to increase libvirt's logging output by running this in your shell:
export LIBVIRT_DEBUG=1
There is an Ubuntu PPA for libvirt with VirtualBox support: https://launchpad.net/~cxl/+archive/ubuntu/libvirt
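Adding it would look roughly like this (the PPA name follows from the Launchpad URL; the package name varies by release, e.g. libvirt-bin on older Ubuntu, libvirt-daemon-system on newer ones):
sudo add-apt-repository ppa:cxl/libvirt
sudo apt-get update
sudo apt-get install libvirt-bin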