When I start tmux, I get a failure when trying to configure powerline. I have set an environment variable with this:
export POWERLINE_CONFIG_COMMAND=`which powerline-config`
My ~/.tmux.conf contains the following:
if-shell "POWERLINE_CONFIG_COMMAND" \
run-shell "$POWERLINE_CONFIG_COMMAND tmux setup"
The error I get is:
unknown command: /path/to/powerline-config
I can run the config command manually after tmux starts with this:
$POWERLINE_CONFIG_COMMAND tmux setup
I don't understand why tmux can't run the command during the startup when it can run just fine afterwards.
I don't understand how you get that error. You should not get any message, and nothing should work.
if-shell "POWERLINE_CONFIG_COMMAND" \
run-shell "$POWERLINE_CONFIG_COMMAND tmux setup"
will fail, because POWERLINE_CONFIG_COMMAND is not a command. Your if-shell should have a $ in front of POWERLINE_CONFIG_COMMAND.
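To illustrate the difference in a plain shell (the path is the placeholder from the question):
POWERLINE_CONFIG_COMMAND                  # tries to run the literal name as a command and fails
echo "$POWERLINE_CONFIG_COMMAND"          # prints /path/to/powerline-config
"$POWERLINE_CONFIG_COMMAND" tmux setup    # runs the actual binary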
Let's assume that was a typo, and it's correct in your actual .conf. Then, the problem is that run-shell runs against tmux, the way it'd run if you typed <prefix>: in your tmux session.
tmux $POWERLINE_CONFIG_COMMAND tmux setup is not a valid command.
You could instead do
run-shell 'send-keys "$POWERLINE_CONFIG_COMMAND tmux setup" Enter'
if you want it to run in a single pane.
I'm trying to run a command on a remote host via libssh2 as wrapped by the ssh2 Rust crate.
So I would like to run the command cargo build, but when I try to run it via libssh, I get the error:
cargo: command not found
However, when I ssh into the server manually from the command line everything works fine.
I have also noticed that $PATH is different when running ssh from the command line versus via libssh:
for instance, when I echo $PATH,
ssh gives me:
/home/<user>/.cargo/bin:/usr/share/swift/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bi
while libssh gives me:
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
So it looks like what's happening is that the modifications made to $PATH inside .bashrc and .profile are not making it in when running via libssh.
I also get the same behavior if I run /bin/bash -c "echo ${PATH}"
Why would this be the case, and is there any way to get the same behavior in both these cases?
Please take a look at that question.
TL;DR A login shell first reads /etc/profile and then ~/.bash_profile. A non-login shell reads from /etc/bash.bashrc and then ~/.bashrc.
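A possible workaround, sketched here only as the command string you would pass to your ssh library's exec call (the cargo path comes from the $PATH shown in the question; adjust as needed):
bash -lc 'cargo build'                      # start a login shell so /etc/profile and ~/.profile are read
~/.cargo/bin/cargo build                    # or bypass PATH entirely with the full path
PATH="$HOME/.cargo/bin:$PATH" cargo build   # or prepend the directory for just this command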
Updated: added the missing docker attach.
Hi, I am trying to run a Docker container with -dti, but I cannot access it with the terminal set to dumb. Is there a way to change this? (It is currently set to xterm, even though my ssh client is dumb.)
example:
create the container
docker run -dti --name test -v /my-folder alpine /bin/ash
docker attach test
apk --update add nodejs
cd /my-folder
npm install -g gulp
The output of the last command always contains ANSI escape characters that move the cursor.
I have tried "export TERM=dumb" inside the running container, but it does not work.
Is there a way to "run" this using a dumb terminal?
I am running this from a script on another computer, via (dumb) ssh.
Using -t is what sets this (see https://docs.docker.com/engine/reference/run/#env-environment-variables); however, removing it affects the command prompt (the prompt is not shown).
Possible solution 1: remove the -t and keep the -i. To see whether a command has completed, echo out a known token (ENDENDEND), i.e.:
docker run -di --name test -v /my-folder alpine /bin/ash
docker attach test
apk --update add nodejs;echo ENDENDEND
cd /my-folder;echo ENDENDEND
npm install -g gulp;echo ENDENDEND
Not pretty, but it works (there are no escape characters in the results).
Possible solution 2: use the journal. Docker can log to the Linux journal, and this can be collected as commands are executed in the container. (I have yet to fully test this one out; however, the log seems to be a nicer record of what happened.)
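A rough sketch of what solution 2 could look like, using the journald log driver (names reused from above; like the note says, not verified end to end):
docker run -di --log-driver=journald --name test -v /my-folder alpine /bin/ash
docker attach test
apk --update add nodejs
# then, from another shell on the host, follow what the container logged
journalctl -f CONTAINER_NAME=test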
Update:
Yep, -t is the problem.
However, if you want to see the entire process while running a command, this way may be better:
docker run -di --name test -v /my-folder alpine /bin/ash
docker exec -it test /bin/ash
Finally, you need to kill the container after all jobs have finished.
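For instance (container name from above):
docker stop test && docker rm test
# or, in one step, force-remove it
docker rm -f test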
Note that docker run -d means "Run container in background and print container ID", not "start the container as a daemon".
I was hitting this issue running Docker on OS X; I had to do two things to stop the terminal ANSI escape sequences:
remove the "t" option from the docker run command (from docker run -it ... to docker run -i ...)
force bash or sh (not the default zsh) when running the command from a script file on OS X, as in the sketch below
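A minimal sketch of such a script, reusing the container and commands from above and forcing sh via the shebang:
#!/bin/sh
# no -t, so no pseudo-TTY and no escape sequences in the output
docker run -di --name test -v /my-folder alpine /bin/ash
docker exec -i test apk --update add nodejs
docker exec -i test npm install -g gulp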
Also:
the escape sequences were not always visible on the terminal
even so, they still usually caused content corruption, even with sed brought to bear
they always showed up in my editor
I am trying to run a script on AIX to execute another script on a remote server. In addition to running the remote script, I need to send the stdout to /dev/null. The same command works fine on another server, but when I run it on the current server it hangs. Any advice?
su - test -c "rsh testserver /scripts/testme" 2>&1 >/dev/null
In your comment you write that a menu is presented when the user logs in.
Let's say this is done in the .profile file, using echoes and a read command.
When a menu is presented, the read command in the menu code will not be skipped by redirecting the output. The menu still waits for your input and the su command seems to hang.
Can you change your .profile or .bashrc so that it skips presenting the menu when it is called via an su command? When the profile runs during startup, you can look at the return code of tty. When you use the su command from the command line, you have to look for another solution.
When your root shell is ksh, you can try the following:
if [[ "$(ps -fp $$)" != *"-ksh -c "* ]]; then
echo "Now I should call the Menu"
fi
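For the tty check mentioned above (the startup case), a sketch that should work in ksh or sh:
# tty -s exits 0 only when stdin is a terminal, so the menu is skipped
# when the profile runs without one (e.g. from a boot or cron script)
if tty -s; then
    echo "Now I should call the Menu"
fi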
I want to run a few shell commands every time I SSH to a server via PuTTY. I'm connecting to a production web server managed by someone else, and I don't want to store my own scripts there.
I see the option Connection > SSH > Remote Command, but if I put my initialization commands there, after starting the session, it closes immediately after the commands execute. How can I run the Remote Command, and then keep the session open so I can continue using it?
The SSH session closes (and PuTTY with it) as soon as the command finishes. By default the "command" is a shell. As you have overridden this default "command" and yet you want to run the shell nevertheless, you have to explicitly execute the shell yourself:
my-command ; /bin/bash
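If the initialization is more than one command, they can be chained in the same Remote Command field before the shell; the commands here are only illustrative:
cd /var/www && ls -lt | head ; /bin/bash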
See also Executing a specific command on the server.
One option is to set up your PuTTY remote command like this:
ls > dir.ls & /bin/bash
In this example, the command you want to run is "ls > dir.ls", which creates the file dir.ls containing the directory listing.
As you want to leave the shell open, you add the additional command "/bin/bash", or any other shell of your choice.
I'm attempting to set up a new compute cluster and am currently experiencing errors when using the qsub command in SGE. Here's a simple experiment that shows the problem:
test.sh
#!/usr/bin/zsh
test="hello"
echo "${test}"
test.sh.eXX
test=hello: Command not found.
test: Undefined variable.
test.sh.oXX
Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.
If I run the script on the head node (sh test.sh), the output is correct. I submit the job to SGE by typing "qsub test.sh".
If I submit the exact same script job in the same way on an established compute cluster like HPC, it works perfectly as expected. What setting could be causing this problem?
Thanks for any help on this matter.
Most likely the queues on your cluster are set to posix_compliant mode with a default shell of /bin/csh. The posix_compliant setting means your #! line is ignored. You can either change the queues to unix_behavior or specify the required shell using qsub's -S option.
#$ -S /bin/sh
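For reference, a sketch of both routes (the queue name all.q is only an example):
# inspect the queue's current shell settings
qconf -sq all.q | grep -E 'shell|shell_start_mode'
# or request the shell explicitly at submission time
qsub -S /bin/sh test.sh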