why "systemctl" not working in Ubuntu terminal on Windows? [duplicate] - windows-subsystem-for-linux

This question already has an answer here:
Enable Systemd in WSL 2
(1 answer)
Closed 2 months ago.
I need to reload the daemon using the systemctl command in the Ubuntu terminal on Windows 10. I have attached the error I received.
The error:
dos@yana:~$ systemctl
System has not been booted with systemd as init system (PID 1). Can't operate.

WSL doesn't implement systemd, so in Ubuntu you need to use the service command instead, for example service ssh start, or call the init script directly, such as /etc/init.d/ssh start/stop/restart.
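For example, a minimal sketch of managing the SSH service without systemd (assuming the openssh-server package is installed):
sudo service ssh status        # SysV-style wrapper, used instead of systemctl status ssh
sudo service ssh restart       # restart sshd without systemd
sudo /etc/init.d/ssh restart   # or call the init script directly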

I had this problem running WSL 2.
The solution was to start the Docker daemon manually:
$ sudo dockerd
Open another terminal and try:
$ docker ps -a
If after that you still have a permission problem, run:
$ sudo usermod -aG docker your-user
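Note that the group change only takes effect in a new login session; a quick way to pick it up without logging out (a sketch, assuming the docker group was just added to your user):
$ newgrp docker   # start a shell that includes the new group membership
$ docker ps -a    # should now work without sudo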

Related

Why doesn't htop show my docker-processes using wsl2

While building my container using Docker and WSL 2 I wanted to see what happens. Running htop in WSL only shows the CPU usage, but none of the processes running in my containers.
Searching for htop, docker and wsl2, the only thing I could find was this archived and unrelated Reddit thread: https://www.reddit.com/r/bashonubuntuonwindows/comments/dia2bw/htop_on_wsl2_doesnt_show_any_processes_while_ps/
Docker does not run in your default WSL distro, but in a special Docker WSL distro. Running wsl -l shows the installed distros:
Ubuntu (Default)
docker-desktop
docker-desktop-data
Docker Desktop is based on Alpine, so you can run top right out of the box:
wsl -d docker-desktop top
If you want htop, you need to install it first:
wsl -d docker-desktop apk update
wsl -d docker-desktop apk add htop
Running
wsl -d docker-desktop htop
will now give you a nice overview of what is happening in your Docker containers.
I agree with @Morty.
The following command gives you the list of installed WSL distros on Windows:
wsl -l
Then you can run either of the following commands:
wsl -d docker-desktop ps
wsl -d docker-desktop top
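As an alternative to entering the docker-desktop distro, docker top run from your default distro lists the processes inside a single container (a sketch; the container name web is just a placeholder):
docker ps        # find the container name or ID
docker top web   # processes inside the hypothetical container "web"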

Unable to open X display when trying to run google-chrome on Centos (Rhel 7.5)

I need to run Google Chrome remotely on a virtual machine using SSH. I do not want X forwarding - I want to utilize the GPU available on the VM. When I try running google-chrome I get the following error:
[19615:19615:0219/152933.751028:ERROR:browser_main_loop.cc(1512)] Unable to open X display.
I've tried setting my DISPLAY env variable to various values:
export DISPLAY=localhost:0.0
export DISPLAY=127.0.0.1:0.0
export DISPLAY=:0.0
I've also tried replacing 0.0 in the above examples with different values.
I have ForwardX11 no in /etc/ssh/sshd_config
I tried switching the target like this:
systemctl isolate multi-user.target
When I run sudo lshw -C display I get the following output:
*-display
description: VGA compatible controller
product: Hyper-V virtual VGA
vendor: Microsoft Corporation
physical id: 8
bus info: pci#0000:00:08.0
version: 00
width: 32 bits
clock: 33MHz
capabilities: vga_controller bus_master rom
configuration: driver=hyperv_fb latency=0
resources: irq:11 memory:f8000000-fbffffff
*-display UNCLAIMED
description: VGA compatible controller
product: GM204GL [Tesla M60]
vendor: NVIDIA Corporation
physical id: 1
version: a1
width: 64 bits
clock: 33MHz
capabilities: pm msi pciexpress vga_controller bus_master cap_list
configuration: latency=0
resources: iomemory:f0-ef iomemory:f0-ef memory:41000000-41ffffff memory:fe0000000-fefffffff memory:ff0000000-ff1ffffff
I've tried to update my GPU drivers by running:
wget https://www.nvidia.com/content/DriverDownload-March2009/confirmation.php?url=/tesla/375.66/nvidia-diag-driver-local-repo-rhel7-375.66-1.x86_64.rpm
yum -y install nvidia-diag-driver-local-repo-rhel7-375.66-1.x86_64.rpm
But after that I still see UNCLAIMED next to my NVIDIA GPU.
Any ideas?
You can try Xvfb. It does not require additional hardware.
Install Xvfb if you haven't installed it yet, then do the following steps.
sudo apt-get install -y xvfb
Dependencies to make "headless" chrome/selenium work:
sudo apt-get -y install xorg xvfb gtk2-engines-pixbuf
sudo apt-get -y install dbus-x11 xfonts-base xfonts-100dpi xfonts-75dpi xfonts-cyrillic xfonts-scalable
Optional but nifty: For capturing screenshots of Xvfb display:
sudo apt-get -y install imagemagick x11-apps
Make sure that Xvfb starts every time the box/VM is booted (one way to automate this is sketched after the Chrome command below):
Xvfb -ac :99 -screen 0 1280x1024x16 &
export DISPLAY=:99
Run Google Chrome
google-chrome
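One possible way to make Xvfb start on every boot is a small systemd unit; this is only a sketch, assuming the unit is saved as /etc/systemd/system/xvfb.service (hypothetical name) and uses display :99 with the same screen geometry as above:
# /etc/systemd/system/xvfb.service (hypothetical unit name and path)
[Unit]
Description=Virtual framebuffer X server on display :99
After=network.target
[Service]
ExecStart=/usr/bin/Xvfb -ac :99 -screen 0 1280x1024x16
Restart=on-failure
[Install]
WantedBy=multi-user.target
Enable it with sudo systemctl enable --now xvfb, and keep export DISPLAY=:99 in your shell profile so Chrome finds the display.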
Okay guys, I found my problem after 2 hours of going crazy. My box was configured correctly. What you can NOT do is ssh from one box, to another box, to this box and expect X11 forwarding to play nicely. Without tearing apart the entire network, I found that if I shelled over from the MAIN box to this box (no double or triple ssh'ing), Chrome comes right up as a regular user using the CLI. So it was a matter of multiple shells from multiple boxes that made the display say it was set to NOTHING! Setting the display manually only complicates the problems. Once I shelled directly over to this box from the main outside box, my display was set to 10:0, which is the first instance in my configuration. Don't make this mistake, you will waste valuable time.
FWIW, I ran into this when using SSH to log into a Selenium chrome node in a Docker compose stack. Chrome would launch if I invoked it as root with sudo -u seluser google-chrome, but not if I logged in as seluser. The trick turned out to be that root had DISPLAY set to :99:0, and seluser didn't have it set at all. If I set it explicitly (either from a seluser shell or from the docker compose exec command line) it worked.
$ docker-compose exec -u seluser \
selenium-chrome \ # or whatever your service is called
/bin/bash
seluser@c02cda62b751:/$ export DISPLAY=:99:0
seluser@c02cda62b751:/$ google-chrome http://app.test:3000/home
or
$ docker-compose exec -u seluser -e DISPLAY=:99:0 \
selenium-chrome \
google-chrome http://app.test:3000/home
That :99.0 is undocumented, though, so if this isn't working, you might try checking root's DISPLAY value with:
docker-compose exec -u root selenium-chrome bash -c 'echo "${DISPLAY}"'
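To make that setting persistent, you could also declare DISPLAY for the service in the compose file itself; a sketch, assuming the same service name and the DISPLAY value observed above:
# docker-compose.yml (sketch; service name as used above)
services:
  selenium-chrome:
    environment:
      - DISPLAY=:99:0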
I faced the same issue with WSL and Ubuntu. I uninstalled/reset Ubuntu. After that, I executed the command below:
wsl --set-default-version 2
Then I installed Ubuntu again and didn't get the --no-sandbox issue or any other issue.
Hope this is useful for someone.

apachectl command doesn't work from SSH

We have a Red Hat Enterprise Linux 7.0 server.
I installed Apache 2.4.6 (latest version) on this server.
When I check the version of Apache with the apachectl -v command in the server terminal, I get the expected result and it is OK.
But when I try the same command from a different machine using SSH Secure Shell, I get no result from apachectl -v.
What is the problem here? Is there an SSH setting regarding this command? We need to run apachectl -v over SSH from outside.
Thanks for your help.
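One thing worth checking is that a non-interactive SSH session often has a shorter PATH than a login shell, so the binary may simply not be found; a quick diagnostic sketch, assuming the stock RHEL location /usr/sbin/apachectl:
ssh user@server 'echo $PATH; which apachectl'   # compare with the PATH seen in the server terminal
ssh user@server '/usr/sbin/apachectl -v'        # call it by full path to rule out PATH issues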

OpenfireHome - Home not found

I have an XMPP server (openfire_3.9.3) running on Ubuntu 14.04.1 LTS.
I installed Openfire by following these steps:
1. $ sudo tar -zxvf openfire_x_x_x.tar.gz
2. $ sudo mv openfire /opt
Then I moved to the Openfire bin directory to start Openfire:
$ cd /opt/openfire/bin
$ sudo ./openfire start
Then during setup through the admin console I always get this error:
Home not found. Define system property "openfireHome" or create and add the openfire_init.xml file to the classpath
Where do I need to set openfireHome? Or how can I fix it?
Well, it seems your user account might have a permissions issue. Can you keep Openfire in your home directory, try to run it from there, and share the results?
For me, it's a permissions issue.
I'm using the server (Openfire 4.7.0, build e020f58) on my local computer (macOS Monterey 12.1 (21C52)).
My SOLUTION is:
sudo chmod -R 777 /usr/local/openfire
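If the problem really is that the home directory cannot be located (rather than permissions), the error message itself names the openfireHome system property; a sketch of passing it explicitly, assuming the tarball was unpacked to /opt/openfire and that startup.jar sits in the lib directory as in stock Openfire releases:
cd /opt/openfire/lib
sudo java -DopenfireHome=/opt/openfire -jar startup.jar   # assumed layout: lib/startup.jar under the unpacked directory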

Error with rabbit-mq server

I am trying to set up OpenStack on Ubuntu 12.04 using devstack. The error I am getting is:
Setting up rabbitmq-server (2.7.1-0ubuntu4) ...
Starting rabbitmq-server: FAILED - check /var/log/rabbitmq/startup_{log, _err}
rabbitmq-server.
invoke-rc.d: initscript rabbitmq-server, action "start" failed.
dpkg: error processing rabbitmq-server (--configure):
subprocess installed post-installation script returned error exit status 1
No apport report written because MaxReports is reached already
Errors were encountered while processing:
rabbitmq-server
E: Sub-process /usr/bin/dpkg returned an error code (1)
++ err_trap
++ local r=100
++ set +o xtrace
stack.sh failed
Any idea why I am getting this error?
I had this issue twice, when either the hostname or the IP address in the hosts file didn't match.
Therefore, check that you provide the correct IP address and hostname in the /etc/hosts file.
Run sudo cat /etc/hostname to see your hostname
Output:
yoursite
Run sudo nano /etc/hosts
File contains:
127.0.0.1 yoursite
As you can see from cat /etc/hostname, the hostname is the same as in /etc/hosts.
Run sudo rabbitmq-server start to start the rabbitmq-server
Try deleting the folder /var/lib/rabbitmq and re-running ./stack.sh
If that doesn't work either, run the following after stack.sh fails:
chown -R rabbitmq:rabbitmq /var/lib/rabbitmq
chown -R rabbitmq:rabbitmq /var/log/rabbitmq
service rabbitmq-server restart
and check the status of rabbitmq using "rabbitmqctl status"
A similar thing happened to me. RabbitMQ depends on being able to resolve the hostname; run this:
echo "127.0.0.1 $(hostname -s)" | sudo tee -a /etc/hosts
This works for me.
First edit the hosts file:
sudo vim /etc/hosts
and set
127.0.0.1 <hostname>
Then open the firewall (see the sketch after these commands) and enable the management plugin:
sudo rabbitmq-plugins enable rabbitmq_management
sudo service rabbitmq-server restart
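If a firewall is actually running and blocking the broker, the AMQP and management ports also need to be opened; a sketch, assuming the default ports and ufw on Ubuntu:
sudo ufw allow 5672/tcp    # AMQP port
sudo ufw allow 15672/tcp   # management UI enabled above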
In a clean environment this will not happen. You must have run devstack several times, and one of the runs failed but you didn't clean it up.
Run ps -ef | grep rabbitmq and kill all rabbitmq processes (see the sketch below); then it should be fine to run ./stack.sh.
It is highly recommended to run ./unstack.sh && ./clean.sh before ./stack.sh.
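A sketch of that cleanup sequence, assuming you are in the devstack directory:
ps -ef | grep [r]abbitmq     # list any leftover rabbitmq processes
sudo pkill -f rabbitmq       # kill them
./unstack.sh && ./clean.sh   # clean up the previous devstack run
./stack.sh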
Just to be sure, take a look at your local network interfaces:
ip addr
If there's no lo interface, then you should enable it:
ifconfig lo up
Then restart the server and see if it works now:
systemctl start rabbitmq-server
I had the same problem even though my /etc/hosts and DNS were OK. I suspect that the SysV init script was started too early, when the network was not yet ready. I rewrote the startup script as a systemd unit on CentOS 7.8 and it seems to work well now.
[Unit]
Description=RabbitMQ
Wants=network-online.target
After=network-online.target
[Service]
Type=simple
RuntimeDirectory=rabbitmq
PrivateTmp=true
Restart=on-failure
RestartSec=10
WorkingDirectory=/opt/data/rabbitmq/
User=rabbitmq
Group=rabbitmq
ExecStart=/opt/app/rabbitmq/default/sbin/rabbitmq-server
ExecStop=/opt/app/rabbitmq/default/sbin/rabbitmqctl stop
ExecStop=/bin/sh -c "while ps -p $MAINPID >/dev/null 2>&1; do sleep 1; done"
StandardOutput=journal
StandardError=inherit
[Install]
WantedBy=multi-user.target
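A possible way to install and activate the unit, assuming it is saved as /etc/systemd/system/rabbitmq-server.service:
sudo systemctl daemon-reload                  # pick up the new unit file
sudo systemctl enable --now rabbitmq-server   # start it now and on every boot
systemctl status rabbitmq-server              # verify it is running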