Set host IP in run parameters in IntelliJ with WSL2

I have a run configuration set up for a server tool. I want to run it in a WSL2 environment from an IntelliJ run task. This works great, but I need to manually set the Windows host IP whenever I restart WSL2. To get the host IP I want to use this grep command:
grep -o -P "(?<=nameserver )[0-9\.]+" /etc/resolv.conf
I played with the configuration and tried something along those lines. It didn't work, because the grep command didn't get executed; it worked as expected when I used it in the console. Trying the same thing with an environment variable didn't succeed either.
I saw that it is possible to set up a "before run task". Maybe it is possible to do it with this option?
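One possible shape for that before-run step (a sketch only; the HOST_IP variable name and the env-file path are assumptions, not IntelliJ settings):
#!/bin/bash
# Sketch: resolve the Windows host IP from inside WSL2 and write it to a
# file the run configuration could read; names and paths are assumed.
HOST_IP=$(grep -o -P "(?<=nameserver )[0-9\.]+" /etc/resolv.conf)
echo "HOST_IP=$HOST_IP" > "$HOME/.host_ip.env"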

Related

Apache Airflow command not found with SSHOperator

I am trying to use the SSHOperator to SSH into a remote machine and run an external application through the command line. I have set up the SSH connection via the admin page.
This section of code is used to define the commands and the SSH connection to the external machine.
sshHook = SSHHook(ssh_conn_id='remote_comp')
command_1 ="""
cd /files/232-065/Rans
bash run.sh
"""
where 'run.sh' is the shell script:
#!/bin/sh
starccm+ -batch run_export.java Rans_Model.sim
Which simply runs the commercial software starccm+ with some options I have specified.
This section defines the task:
inlet_profile = SSHOperator(
    task_id='inlet_profile',
    ssh_hook=sshHook,
    command=command_1
)
I have confirmed the SSH connection works by giving a simple 'ls' command and checking the output.
The error that I get is:
bash run.sh, error: run.sh: line 2: starccm+: command not found
The command in 'run.sh' works when I am logged into the machine (it does not require a GUI). This makes me think there is a problem with the SSH session, and that it is not the same as the one Apache Airflow logs into, but I am not sure how to solve this problem.
Does anyone have any experience with this?
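One quick way to see what the non-interactive SSH session actually finds (a diagnostic sketch, not from the thread; user and host are placeholders):
ssh user@remote_comp 'echo $PATH; command -v starccm+ || echo "starccm+ not on PATH"'
If this prints the fallback message, the PATH seen by the SSHOperator session does not include the starccm+ directory.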
There is no issue with the SSH connection (at least judging from the error message). The issue is with the starccm+ installation path.
Please check the installation path of starccm+.
Check whether the installation path is part of the $PATH environment variable:
$ echo $PATH
If it is not, either install it in a standard location like /bin or /usr/bin (provided those are included in $PATH), or export the installed directory into the PATH variable like this:
$ export PATH=$PATH:/<absolute_path>
It is not ideal, but if you struggle with setting the PATH variable, you can run starccm+ by giving its full path, like:
/directory/where/star/is/installed/starccm+ -batch run_export.java Rans_Model.sim
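Alternatively, run.sh itself could extend PATH before calling the solver; the install directory below is an assumption, so adjust it to the real one:
#!/bin/sh
# Assumed install location; replace with the actual starccm+ directory.
export PATH="$PATH:/opt/starccm/bin"
starccm+ -batch run_export.java Rans_Model.sim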

Proper way to automatically start and expose ssh when running my app container

I have containers with Python apps and I need them to automatically start and expose SSH when running them. I know it's against Docker's best practices, but right now I don't have any other solution. I'd be interested to know the best way to automatically run an additional service in a Docker container anyway.
Since Docker will only start one process, installing sshd isn't enough. There are apparently multiple options to deal with it:
use a process manager like Monit or Supervisor
use the ENTRYPOINT option
append a command (service sshd start, for instance) at the end of /etc/bash.bashrc
Option 1 seems overkill to me. Also, I suppose I'll have to run the container with a CMD calling the process manager instead of bash or my Python app: not exactly what I want.
I don't know how to use Option 2 for such a case. Should I write a custom script that starts sshd and then runs the provided command, if any? What should this script look like?
Option 3 is very straightforward but quite dirty. Also, it won't work if I run the container with a command other than /bin/bash.
What's the best solution, and how do I set it up?
You mention that option 1 seems like overkill. Why is it overkill? Supervisor is very simple to configure and will do basically what you want.
First, write a Supervisor config file that starts your Python app and sshd:
[supervisord]
nodaemon=true
[program:sshd]
command=/usr/sbin/sshd -D
[program:pythonapp]
command=/path/to/python myapp.py -x args etc etc
Call that file supervisord.conf and commit it somewhere in your repo. In your Dockerfile, copy that file into the container as one of the build steps, expose the ports for SSH and your app (if needed), and set the CMD to start supervisord:
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 22 80
CMD ["/usr/bin/supervisord"]
This is clean and easy to understand. It's how I run multiple processes in a container when needed. It is even suggested in the Docker docs as a nice solution.
If you don't want to use a process manager, you can wrap your actual container command in a shell script that starts ssh with sudo service ssh start and then executes your actual command:
#!/bin/bash
# start sshd as a background daemon, then run the app in the foreground
sudo service ssh start
python myapp.py -x args blah blah
This starts ssh as a daemon, and then your Python app starts up after it.
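A variant of Option 2 that keeps CMD flexible (a sketch, not from this answer) is an ENTRYPOINT script that starts sshd and then execs whatever command the container was given:
#!/bin/bash
# entrypoint.sh: start sshd in the background, then replace this shell
# with the container's command so it runs as PID 1 and receives signals.
service ssh start
exec "$@"
And in the Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["python", "myapp.py"]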
Yes, Supervisord can be configured to run multiple processes in a container. If you want to use openssh-server, configure Supervisor like this in supervisord.conf:
[supervisord]
nodaemon=true
[program:sshd]
command=/usr/sbin/sshd -D
Add the supervisord.conf file to the Docker image by updating the Dockerfile:
RUN apt update && apt install -y supervisor openssh-server
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 22
CMD ["/usr/bin/supervisord"]
Reference: Gotechnies

I am getting error: ssh exit status 1

In a Jenkins post-build action I configured "Execute shell script on remote host using ssh".
SSH site 10.32.25.66, command:
cd $HOME/appsadm/bin; ./ims-carte-stop
and then I modified it to:
cd /HOME/appsadm/bin; ./ims-carte-stop.*
I tried both of these commands and the build is successful, but I can see in the Jenkins console output afterwards that my script is not executed; I get an ssh exit status 1 error.
In WinSCP, my script (ims-carte-stop) is in this location: home/appsadm/bin.
Please tell me if I am doing anything wrong.
My intention is to stop my server from Jenkins automatically whenever the build succeeds.
This may be a typo in your question, but:
You said your ims-carte-stop script is in:
/home/appsadm/bin
whereas your script is doing:
cd $HOME/appsadm/bin
or
cd /HOME/appsadm/bin
Looking at the paths, I am going to assume you are using a UNIX-flavoured OS (Linux, BSD, OSX).
UNIX paths are case sensitive. Your script should be calling:
cd /home/appsadm/bin
Note that the word "home" is all lowercase, not capitals. Also, the $ prefix makes HOME a variable, which I don't think you want here.
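Putting that together, the remote command would be (using && so the script is not run from the wrong directory if the cd fails):
cd /home/appsadm/bin && ./ims-carte-stop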

Local environment not set in MPI process

When I run mpiexec on a few computers, some of them don't automatically load their local environments: they don't seem to run their .bashrc or .bash_profile files. When I ssh into these troublesome computers, everything is fine (the environment is all there). What else could be missing?
If I run
mpiexec -np 1 --host remotehost printenv
I get a very small result. However, if I do the following
ssh remotehost
printenv
I get a much larger and more comprehensive result. What is the difference between these two?
MPI jobs run in non-interactive shells, which do not load .bashrc. Rather than having each job load its own .bashrc, it is usually better to set the environment variables in the call to mpiexec. MPICH passes all environment variables from the launching process by default, but with Open MPI you need to forward variables explicitly with the -x option.
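For example, mirroring the command from the question (a sketch; PATH and LD_LIBRARY_PATH are just common variables to forward with Open MPI):
mpiexec -np 1 --host remotehost -x PATH -x LD_LIBRARY_PATH printenv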

How to run a script file remotely using SSH

I want to run a script remotely. But the system doesn't recognize the path. It complains: "no such file or directory". Am I using it right?
ssh kev@server1 `./test/foo.sh`
You can do:
ssh user@host 'bash -s' < /path/script.sh
Backticks run the command in the local shell and put the results on the command line. So what you're saying is "execute ./test/foo.sh locally, then pass the output as if I'd typed it on the command line here".
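If the script takes arguments, they can be passed after -- so bash assigns them to $1, $2, and so on (a small sketch; the path and arguments are placeholders):
ssh user@host 'bash -s -- arg1 arg2' < /path/script.sh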
Try the following command, and make sure that that's the path from your home directory on the remote computer to your script.
ssh kev@server1 './test/foo.sh'
Also, the script has to be on the remote computer. What this does is essentially log you into the remote computer with the listed command as your shell. You can't run a local script on a remote computer like this (unless there's some fun trick I don't know).
If you want to execute a local script remotely without saving that script remotely you can do it like this:
cat local_script.sh | ssh user@remotehost 'bash -'
It works like a charm for me. I do this even from Windows to Linux, given that you have MSYS installed on your Windows computer.
I don't know if it's possible to run it just like that.
I usually first copy it with scp and then log in to run it.
scp foo.sh user@host:~
ssh user@host
./foo.sh
I was able to invoke a shell script using this command:
ssh ${serverhost} "./sh/checkScript.ksh"
Of course, checkScript.ksh must exist in the $HOME/sh directory.
Make the script executable by the user "kev" and then try running it through the command:
ssh kev@server1 './test/foo.sh'