Run a script when starting Asterisk using service / systemd

I am facing a strange issue executing a script when Asterisk is started using "service asterisk start".
I can run the script from the dialplan using the System() application when I start Asterisk with asterisk -vvvvvvvvc, but when I start Asterisk using the service command (service asterisk start), Asterisk cannot execute the script, and I get an error like "Can not execute the command".
I installed Asterisk as the asterisk user; the setup was done with a FreePBX installation.
I tried various options, like changing the script's owner (chown asterisk:asterisk script) and running chmod 777 script, but it does not work.

Which OS version are you using?
If it is RHEL / CentOS 7, then you should use "systemctl start asterisk" instead.
If it is RHEL / CentOS 6, then you are running the command correctly. Note that this startup script launches an intermediate script, safe_asterisk, which in turn launches asterisk itself.
Check /var/log/messages and /var/log/asterisk/full - does anything interesting appear there at the moment of unsuccessful startup?
Could you also share the list of processes (ps -ef | grep asterisk) after you execute this startup script?
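For example, a quick way to collect those diagnostics right after a failing start (log paths and commands as mentioned above):
service asterisk start
tail -n 50 /var/log/messages /var/log/asterisk/full
ps -ef | grep asterisk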

How to create a Linux GUI app shortcut for WSL2 on Windows 10?

I have properly installed and set up WSL2. It works fine.
I also set up X11 forwarding and an X server (VcXsrv). I can launch GUI apps such as konsole, gvim, or even google-chrome from a bash shell.
Now I want to launch konsole by simply double-clicking a shortcut on the desktop, without launching a bash terminal first. How should I do it?
I tried running this in cmd:
> wsl /usr/bin/konsole
and it reports:
qt.qpa.xcb: could not connect to display
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, xcb.
I'm guessing it is because some X11 forwarding configuration was not properly set up, so I created a k.sh as follows:
#!/usr/bin/bash
export DISPLAY=$(cat /etc/resolv.conf | grep nameserver | awk '{print $2; exit;}'):0.0
export LIBGL_ALWAYS_INDIRECT=1
/usr/bin/konsole &
The first two lines are the X11 settings from my .bashrc; the last line launches konsole.
It works fine in a bash environment, but when I ran
wsl k.sh
from the Windows cmd environment, it silently quit without launching konsole.
I'm out of ideas. What should I do to directly launch konsole or other Linux GUI apps under Windows without having to get into bash?
Thanks in advance.
You are asking about two different command-lines, and while the failures in running them via the wsl command have the same root-cause, the underlying failures are likely slightly different.
In both cases, the wsl <command> invocation results in a non-login, non-interactive shell where the command simply "runs and exits".
Since the shell is non-login/non-interactive, your startup files (such as ~/.bashrc and ~/.bash_profile, among others) are not being processed.
When you run:
wsl /usr/bin/konsole
... the DISPLAY variable is not set, since, as you said, you normally set it in your ~/.bashrc.
Try using:
wsl -e bash -lic "/usr/bin/konsole"
That will force bash to run as a login (-l), interactive (-i) shell. The DISPLAY should be set correctly, and it should run konsole.
Note that the quotes probably aren't necessary in this case, but are useful for delineating the commands you are passing to bash. More complicated command-lines can be passed in via the quotes.
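For instance, a hypothetical compound command passed through the quotes (the cd target is just an illustration):
wsl -e bash -lic "cd ~/projects && /usr/bin/konsole"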
As for:
wsl k.sh
That's likely a similar problem. You are doing the right thing by setting DISPLAY in your script, but I notice that you aren't using a fully-qualified path to it. This would normally work, of course, if your script is in a directory on the $PATH.
But I'm guessing that you might add that directory to the $PATH in your startup config, which means (again) that it isn't being set in this non-login, non-interactive shell.
As before, try:
wsl -e bash -lic "k.sh"
You could also use a fully-qualified path, of course.
And, I'm fairly sure you are going to run into an issue with trying to put konsole in the background via the script. When WSL exits, and the bash shell process ends, the child konsole process will terminate as well.
You could get around this with a nohup in the script, but then you also need to redirect the stderr. It's probably easiest just to move the & from the script itself to the command-line. Change your k.sh to:
#!/usr/bin/bash
export DISPLAY=$(cat /etc/resolv.conf | grep nameserver | awk '{print $2; exit;}'):0.0
export LIBGL_ALWAYS_INDIRECT=1
/usr/bin/konsole
Then run it with:
wsl -e bash -lic "k.sh &"
Finally, a side note that when and if you can upgrade to Windows 11, it will automatically create Windows Start Menu entries for any Linux GUI app you install that creates a .desktop file. You can manually create .desktop files to have WSL create Start menu items for most applications.
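For reference, a minimal sketch of such a .desktop file (the name and paths are illustrative, not from the original question), typically placed in ~/.local/share/applications/ inside the distro:
[Desktop Entry]
Type=Application
Name=Konsole
Comment=KDE terminal emulator
Exec=/usr/bin/konsole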
For reference, in Windows 11 it's easier. To run a GUI application without a terminal window popping up, you just need to call wslg.exe instead of wsl.exe.
So, for example:
target: C:\Windows\System32\wslg.exe konsole
start in: C:\WINDOWS\system32
shortcut key: None
comment: Konsole
This tutorial shows how to install VcXsrv and edit .bashrc to ensure that the "DISPLAY env var is updated on every restart".
The DISPLAY env var needs to be set dynamically.
I've used it successfully with WSL2 on Windows10 Version 21H2 (OS build 19044.2130) to run Chrome, Edge, and thunar. I'm using the Ubuntu 20.04 Linux distro.
To edit .bashrc follow these instructions.
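For reference, the dynamic setting amounts to the same export used in k.sh above, placed in ~/.bashrc so it is recomputed on every shell start:
export DISPLAY=$(awk '/nameserver/ {print $2; exit}' /etc/resolv.conf):0.0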

Python cron job with Chrome not running in AWS EC2

I've been using an EC2 instance to run a python script with cron every day for a month or so. The script uses selenium.
Everything was working correctly until today, when my script did not run.
I have tried to run it manually, but it's not working either. The error message says:
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":"#ctl00_ctl00_moteurRapideOffre_ctl01_EngineCriteriaCollection_Contract > option:nth-child(5)"}
(Session info: headless chrome=90.0.4430.85)
However, the same script runs fine on my computer (i.e. on my MacBook, not on AWS EC2).
As the problem seems to come from Chrome, I uninstalled it on the EC2 instance using:
sudo yum remove google-chrome-stable
Then I reinstalled it using:
curl https://intoli.com/install-google-chrome.sh | bash
sudo mv /usr/bin/google-chrome-stable /usr/bin/google-chrome
google-chrome --version && which google-chrome
If I try to run Chrome on the EC2 instance using /usr/bin/google-chrome, it does not work and displays the following error message:
ERROR:browser_main_loop.cc(1386)] Unable to open X display.
I don't know if it was working before, as I have never used it this way. But it seems to be a problem.
I have seen on the web that this might come from the fact that there is no screen, and that I should use a package named xvfb. I tried to install it with the following command:
sudo yum install xorg-x11-server-Xvfb
I believe the package was installed correctly, but things are not working any better.
To sum up, I think the problem in my python code is linked to Google Chrome not working correctly, and this might be linked to xvfb. But I am not sure at all; it is just what I have tried so far.
Could you please help me? Thanks!
You can simply set up your cron job like this; it runs every 30 minutes:
*/30 * * * * export DISPLAY=:0 && <whatever you want to run>
If this does not work and google-chrome or firefox is not found, simply run the command below in your shell (bash, fish, zsh, etc.) to get your PATH:
echo $PATH
Whatever that command prints, copy it and paste it as a PATH line above your cron job, like this:
PATH=<output of echo $PATH>
*/30 * * * * export DISPLAY=:0 && <your selenium script>
You can remove the export DISPLAY=:0 part if you want to run this in the background or make your driver headless.
The reason for doing this is that you might have installed the browser from snap or another separate source, so its path is not on cron's default PATH.
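For example, a hypothetical crontab combining both fixes (the PATH value and the script path are placeholders; use your own echo $PATH output and script location):
PATH=/usr/local/bin:/usr/bin:/bin
*/30 * * * * export DISPLAY=:0 && /home/ec2-user/run_scraper.sh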

Execute a shell command outside of a sandbox while in a sandbox

I'm using Singularity to run python in an environment deprived of python. I'm also running a mysql instance as explained by Iowa State University (running an instance of mysql, and closing it when done).
For clarity: I'm using a bash script to open mysql, then do what I have to do (a python script), and close mysql, and it works fine. But python's only way to stop when an error occurs is sys.exit([value]), and this not only stops the python script but also the bash script that ran it. This makes it impossible for me to manage the errors and close the mysql instance if the python script exits.
My question is: is there a way for me to execute a 'singularity instance stop mysql' while being inside the python sandbox? Something to tell Singularity "hey, this command here must be run on the host!"?
I keep searching but can't find anything.
I only tried to execute it with subprocess like any other command, but it returned an error message because I don't have this instance inside the python sandbox. I don't even have singularity in this sandbox.
For any clarification, just ask me; I'm trying to be clear, but I'm pretty sure it's not.
Thanks a lot!
Generally speaking, it would be a big security issue if a process could be initiated from inside a container (docker or singularity) but run in the host OS's namespace.
If the bash script is exiting on the python failure, it sounds like you're using set -e or #!/bin/bash -e. This causes the script to abort if any command returns non-zero. It's commonly recommended for safer processing, but can cause problems like this at times. To bypass that for the python step you can modify your script:
# start mysql, do some stuff
set +e  # disable abort on non-zero return
python my_script.py
set -e  # re-enable abort on non-zero
# shut down mysql, do other stuff
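A variant sketch, if you also want to propagate python's exit status after mysql has been shut down (singularity instance stop mysql is the command from the question; the rest of the wrapper is an assumption about your script):
# start mysql as before
set +e                          # tolerate a failing python step
python my_script.py
status=$?                       # remember its exit code
set -e
singularity instance stop mysql # clean up regardless of the outcome
exit "$status"                  # propagate python's exit status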

Crashplan on FreeNAS missing /var/lib/crashplan/.ui_info

So I have spent a few weeks on this problem now. I've been trying to get CrashPlan running on a headless FreeNAS server. I have found lots of tutorials on how to do this. However, the fact is that I'm missing the .ui_info file on my FreeNAS server after installing CrashPlan.
I have searched the whole file system to try and find the elusive .ui_info file.
I've tried creating it manually with information copied from a desktop PC, but that did not help my CrashPlan Pro app connect to the CrashPlan server service on FreeNAS.
INFO:
FreeNAS 9.3 STABLE
Crashplan 3.6.3_1 Plugin
The CrashPlan remote access behaviour changed several times during the last updates; however, with version 3.6.3_1 you should find the .ui_info file in
/var/lib/crashplan/.ui_info
Although the jail version is 3.6.3, it's possible that CrashPlan updated itself. Please check this with:
tail -f /usr/pbi/crashplan-amd64/share/crashplan/log/service.log.0
In the end you want your Crashplan to update itself anyway. If the update process produces an error related to bash, please run:
pkg update
pkg install bash
ln -siv /usr/local/bin/bash /bin/bash
And restart crashplan while checking the log output with the tail -f command from above:
service crashplan restart
If you finally reach a recent version (>4.4.1), it's time to remotely connect to CrashPlan.
The only change necessary on the server, for the easiest method (no SSH tunnel), is the serviceHost tag in /usr/pbi/crashplan-amd64/share/crashplan/conf/my.service.xml:
<serviceUIConfig>
<serviceHost>0.0.0.0</serviceHost>
Either do this every time you want to connect (because the token will change after every CrashPlan restart), or use my script from here (for OS X): https://gist.github.com/Phlogi/8654e353786ed1cf0858
Copy /var/lib/crashplan/.ui_info to the correct place on your desktop machine and edit the IP address at the end (to your server's address), for example:
4339,7f1d655f-*****,192.168.1.20
That's it: you can start CrashPlan on your remote machine and it will connect properly; no other changes are necessary. Recent CrashPlan versions (>4.4.1) will actually use the IP address from .ui_info.
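For example, a hedged one-liner for rewriting the third (IP) field of the client-side copy, assuming the port,token,IP format shown above and a server at 192.168.1.20:
awk -F, -v OFS=, '{$3="192.168.1.20"; print}' .ui_info > .ui_info.new && mv .ui_info.new .ui_info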
Install the JRE. You will need to add --no-check-certificate to the JRE wget line in the install.sh file.

Jenkins SSH remote process is getting killed as soon as the Jenkins SSH plugin returns back

Jenkins version: 1.574
I created a simple job which performs the following:
Using "Execute shell script on remote host using SSH" as one of the BUILD steps, I'm just calling a shell script. This shell script performs stop and start operations on Tomcat to restart an application on the target machine.
I have a valid username, password, port defined for the target SSH server in Jenkins Global settings.
I observed this behavior: when I run the Jenkins job and call the restart script (which gets the application name as parameter $1), it works fine, but as soon as the "Execute shell script on remote host using SSH" step completes, the new process dies on the remote/target application server.
If I run the script from the target/remote server itself, everything works fine and the new process/PID stays alive indefinitely. Running the same script from Jenkins reports no errors and appears to work, but the new process dies as soon as the SSH step above completes and control returns to the next BUILD step in the Jenkins job, or the Jenkins job finishes.
I saw a few posts/blogs and tried setting BUILD_ID=dontKillMe in the Jenkins job (in various places, i.e. Prepare Environment variables and also using Inject Environment variables...). When a given build is complete, I can see that its Environment Variables do show BUILD_ID=dontKillMe (instead of the default timestamp value).
I tried putting nohup before calling the restart script, i.e.,
nohup restart_tomcat.sh "${app}"
I also tried:
BUILD_ID=dontKillMe nohup restart_tomcat.sh "${app}"
This gives no error and creates a nohup.out file on the remote server (which I'm not worried about, since restart_tomcat.sh creates its own LOG file, which I cat after the script completes; the cat runs in another "Execute shell script on remote host using SSH" build step and successfully shows the log file created by the restart script).
I don't know what I'm missing at this point, but as soon as the restart_tomcat.sh step is complete, the new PID/process on the remote/target server dies.
How can I fix this?
I've been through this myself.
On my first iteration, before I knew about Jenkins ProcessTreeKiller, I ended up just daemonizing Tomcat. The Apache Tomcat documentation includes a section on running as a daemon.
You can also try disabling the ProcessTreeKiller for your whole Jenkins instance, if it's relatively small (read the first link for information).
The BUILD_ID=dontKillMe should be passed to the shell, and therefore it should be in your command line, not in Jenkins global configuration or job parameters.
BUILD_ID=dontKillMe restart_tomcat.sh "${app}" should have worked without problems.
You can also try nohup restart_tomcat.sh "${app}" & with the & at the end.
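A hedged combination of both suggestions, with all standard streams redirected so the SSH session isn't held open by the remote process (the redirections mirror the setsid answer below; your script's own LOG file still captures the useful output):
BUILD_ID=dontKillMe nohup restart_tomcat.sh "${app}" < /dev/null > /dev/null 2>&1 &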
My solution (it worked after trying everything else) on Ubuntu 14.04 (Trusty Tahr) on Amazon EC2, Jenkins 1.601:
Exec command: (setsid COMMAND < /dev/null > /dev/null 2>&1 &);
Exec in PTY: DISABLED
// Example COMMAND=socat TCP4-LISTEN:1337,fork TCP4:127.0.0.1:1338
I created this Transfer as my last one:
#!/bin/ksh
export BUILD_ID=dontKillMe
I added the lines above to the start of my script and the issue was resolved.