How can I cat back exact formatting regardless of shell? - ssh

While trying to write a script, I found an interesting issue with cat today. If I do the following at the command line, everything works properly:
var=$(ssh user@server "cat /directory/myfile.sh")
echo $var > ~/newfile.sh
This works and I have a script file with all the proper formatting and can run it. However, if I do the EXACT same thing in a script:
#!/bin/sh
var=$(ssh user@server "cat /directory/myfile.sh")
echo $var > ~/newfile.sh
The file is mangled with carriage returns and weird formatting.
Does anyone know why this is happening? My goal is to ultimately cat a script from a server and run it locally on my machine.
EDIT
I now know that this is happening because of my invoking #!/bin/sh in my shell script. The command line works because I'm using zsh and it is preserving the formatting.
Is there a way to cat back the results regardless of the shell?

As you seem to have figured out, word splitting is off by default in zsh, but on in sh, bash, etc. You can prevent word splitting in all shells by quoting the variable:
echo "$var" > ~/newfile.sh
Note that echo appends a newline to its output by default, which you can suppress (on most echo implementations and builtins) with -n.
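If you don't need the contents in a variable at all, a simpler sketch (same user@server and paths as above) sidesteps word splitting entirely by redirecting ssh's output straight to the file:
ssh user@server "cat /directory/myfile.sh" > ~/newfile.sh
chmod +x ~/newfile.sh
Since no unquoted expansion ever happens, this behaves the same in sh, bash, and zsh.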

Related

Exit code from docker-compose breaking while loop

I've got a case: there's a WordPress project where I'm supposed to create a script that updates plugins and commits the source changes to a separate branch. While doing this I ran into a strange issue.
Input variable:
akimset,4.0.3
all-in-one-wp-migration,6.71
What I wanted to do was iterate over each line of this variable:
while read -r line; do
echo $line
done <<< "$variable"
and this piece of code worked perfectly fine, but when I added the docker-compose logic everything started to act weirdly:
while read -r line; do
docker-compose run backend echo $line
done <<< "$variable"
now only the first line was executed, and after it the script exited with 0 and stopped iterating. I found a workaround with:
echo $variable > file.tmp
for line in $(cat file.tmp); do
docker-compose run backend echo $line
done
and that works perfectly fine and iterates over each line. Now my question is: why? Zsh and shell scripting can be a bit mysterious, and running into edge cases like this one isn't anything new for me, but I'm wondering why a successfully executed command broke the input stream.
The problem with this
while read -r line; do
docker-compose run backend echo $line
done <<< "$variable"
is that docker-compose allocates a pseudo-TTY. After the first execution of docker-compose run (the first loop iteration), it attaches to the terminal and uses up the remaining lines as input.
You have to pass the -T parameter to the docker-compose run command to stop it from allocating a pseudo-TTY. A working version is then:
while read -r line; do
docker-compose run -T backend echo $line
done <<< "$variable"
Update
The above solution is for Docker version 18 and docker-compose version 1.17. For newer versions the -T parameter does not work, but you can try:
-d instead of -T to run the container in background mode, BUT then you will not see stdout in the terminal.
If you have docker-compose v1.25.0, add the parameter stdin_open: false to the service in your docker-compose.yml.
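A minimal docker-compose.yml sketch with that parameter set (service and image names are hypothetical):
version: "3"
services:
  backend:
    image: example/backend:latest
    stdin_open: false   # keep STDIN closed so runs don't consume the loop's input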
I was able to solve the same problem by using a different loop:
for line in $(echo "$variable")
do
docker-compose run backend echo $line
done
I ran into a nearly identical problem about a year ago, though the shell was bash (the command/problem was also slightly different, but it applied to your issue). I ended up writing the script in zsh.
I'm not certain what's going on, but it's not actually the exit code (you can confirm by running the following):
variable=$'akimset,4.0.3\nall-in-one-wp-migration,6.71'
while read line; do docker-compose run backend print "$line"; print "$?"; done <<<($variable)
... which yielded ...
(akimset,4.0.3
0
(I'm not at all sure where the ( came from and perhaps solving that would answer why this problem happens)
Working Script
for line in "${(f)variable}"; do
docker-compose run backend echo "$line"
done
The (f) flag tells zsh to split on newlines; the "${(f)variable}" is in quotes so that any blank lines aren't lost. If the content includes escape sequences that you don't want converted to their corresponding values (something that I often need when reading file contents from a variable), make the flags (fV).

No such file or directory from sh script

Looking for the origin of this error message:
Processing: +([^_]).flv
date: +([^_]).flv: No such file or directory
I started getting this at some point in the last few months (can't say when as I wasn't logging my cron output. I know, I know!).
When I originally wrote this, it worked ok for at least two months. I'm wondering if there was an sh update that broke it?
The script runs via crontab and gets all .flv files in the current directory without an underscore and processes each one. It then checks the modified date for files that have been created in the last 24 hours and runs the yamdi meta tag injector for .flv files.
It seems to me like it's not recognizing the pattern as a pattern and is looking for it as an actual file. If I run this script from an ssh shell it works OK; it's only when running via cron that it gives this error.
shopt -s extglob
now=$(date +"%s")
for f in +([^_]).flv; do
echo "Processing: $f"
age=$(date -r "$f" +"%s")
calc=$(((now-age) / 60 / 60))
if(( calc < 24 )); then
echo "$f age=$calc"
yamdi -i "$f" -o "$f".seek
rm "$f"
cp "$f".seek "$f"
touch -d @"$age" "$f"
fi
done
This is most likely a problem of the wrong shell being used; make sure your script's first line represents the right shell:
#!/bin/bash
for bash, or whatever shell you wrote this for. (shopt and extglob patterns are bash features: if cron hands the script to plain sh, the pattern is never expanded and gets passed to date literally, which matches the error you see.) You might also want to check the environment variables that cron sets (a very common problem: one assumes everything is set up correctly, but the environment cron offers to the scripts it executes is different).
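For example, a crontab entry that changes into the working directory and runs the script under bash explicitly, logging its output (all paths here are hypothetical):
0 * * * * cd /var/www/videos && /bin/bash /home/user/process_flv.sh >> /var/log/process_flv.log 2>&1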

Unable to run a postgresql script from bash

I am learning the shell language. I have created a shell script whose function is to log into the DB and run a .sql file. Following are the contents of the script:
#!/bin/bash
set -x
echo "Login to postgres user for autoqa_rpt_production"
$DB_PATH -U $POSTGRESS_USER $Auto_rpt_production$TARGET_DB -p $TARGET_PORT
echo "Running SQL Dump - auto_qa_db_sync"
\i auto_qa_db_sync.sql
After running the above script, I get the following error
./autoqa_script.sh: 39: ./autoqa_script.sh: /i: not found
Following one article, I tried reversing the slash but it didn't work.
I don't understand why this is happening, because when I run the sql file manually, it works properly. Can anyone help?
#!/bin/bash
set -x
echo "Login to postgres user for autoqa_rpt_production and run script"
$DB_PATH -U $POSTGRESS_USER $Auto_rpt_production$TARGET_DB -p $TARGET_PORT -f auto_qa_db_sync.sql
The lines you put in a shell script are (more or less, let's say so for now) equivalent to what you would type at the Bash prompt (the one ending with '$', or '#' if you're root). When you execute a script (a list of commands), each command is run after the previous one terminates.
What you wanted to do is run the client and issue a "\i auto_qa_db_sync.sql" command inside it.
What you did was run the client and then, after the client terminated, issue that command in Bash.
You should read about Bash pipelines - they are the way to run programs and feed text into their input. Following your original idea for solving the problem, you'd write something like:
echo '\i auto_qa_db_sync.sql' | $DB_PATH -U $POSTGRESS_USER $Auto_rpt_production$TARGET_DB -p $TARGET_PORT
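Another way to feed the client (a sketch, assuming the client behind $DB_PATH is psql or compatible) is a here-document; the quoted delimiter keeps the shell from touching the backslash:
$DB_PATH -U $POSTGRESS_USER $Auto_rpt_production$TARGET_DB -p $TARGET_PORT <<'SQL'
\i auto_qa_db_sync.sql
SQL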
Hope that helps you understand.

Redirect stderr to stdout in C shell

When I run the following command in csh, I get nothing, but it works in bash.
Is there any equivalent in csh which can redirect the standard error to standard out?
somecommand 2>&1
The csh shell has never been known for its extensive ability to manipulate file handles in the redirection process.
You can redirect both standard output and error to a file with:
xxx >& filename
but that's not quite what you were after, redirecting standard error to the current standard output.
However, if your underlying operating system exposes the standard output of a process in the file system (as Linux does with /dev/stdout), you can use that method as follows:
xxx >& /dev/stdout
This will force both standard output and standard error to go to the same place as the current standard output, effectively what you have with the bash redirection, 2>&1.
Just keep in mind this isn't a csh feature. If you run on an operating system that doesn't expose standard output as a file, you can't use this method.
However, there is another method. You can combine the two streams into one if you send it to a pipeline with |&, then all you need to do is find a pipeline component that writes its standard input to its standard output. In case you're unaware of such a thing, that's exactly what cat does if you don't give it any arguments. Hence, you can achieve your ends in this specific case with:
xxx |& cat
Of course, there's also nothing stopping you from running bash (assuming it's on the system somewhere) within a csh script to give you the added capabilities. Then you can use the rich redirections of that shell for the more complex cases where csh may struggle.
Let's explore this in more detail. First, create an executable echo_err that will write a string to stderr:
#include <stdio.h>
int main (int argc, char *argv[]) {
fprintf (stderr, "stderr (%s)\n", (argc > 1) ? argv[1] : "?");
return 0;
}
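To build and sanity-check it (assuming a standard gcc toolchain):
gcc -o echo_err echo_err.c
./echo_err hello 2>/dev/null    # prints nothing: the message went to stderr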
Then a control script test.csh which will show it in action:
#!/usr/bin/csh
ps -ef ; echo ; echo $$ ; echo
echo 'stdout (csh)'
./echo_err csh
bash -c "( echo 'stdout (bash)' ; ./echo_err bash ) 2>&1"
The echo of the PID and ps are simply so you can ensure it's csh running this script. When you run this script with:
./test.csh >test.out 2>test.err
(the initial redirection is set up by bash before csh starts running the script), and examine the out/err files, you see:
test.out:
UID PID PPID TTY STIME COMMAND
pax 5708 5364 cons0 11:31:14 /usr/bin/ps
pax 5364 7364 cons0 11:31:13 /usr/bin/tcsh
pax 7364 1 cons0 10:44:30 /usr/bin/bash
5364
stdout (csh)
stdout (bash)
stderr (bash)
test.err:
stderr (csh)
You can see there that the test.csh process is running in the C shell, and that calling bash from within there gives you the full bash power of redirection.
The 2>&1 in the bash command quite easily lets you redirect standard error to the current standard output (as desired) without prior knowledge of where standard output is currently going.
I object to the above answer and provide my own. csh DOES have this capability and here is how it's done:
xxx |& some_exec # will pipe merged output to your some_exec
or
xxx |& cat > filename
or if you just want it to merge streams (to stdout) and not redirect to a file or some_exec:
xxx |& tee /dev/null
As paxdiablo said, you can use >& to redirect both stdout and stderr. However, if you want them separated, you can use the following:
(command > stdoutfile) >& stderrfile
...as indicated the above will redirect stdout to stdoutfile and stderr to stderrfile.
xxx >& filename
Or do this to see everything on the screen and have it go to your file:
xxx |& tee ./logfile
What about just
xxx >& /dev/stdout
???
I think this is the correct answer for csh.
xxx >/dev/stderr
Note most csh are really tcsh in modern environments:
rmockler> ls -latr /usr/bin/csh
lrwxrwxrwx 1 root root 9 2011-05-03 13:40 /usr/bin/csh -> /bin/tcsh
Using a backtick-embedded statement to demonstrate this:
echo "`echo 'standard out1'` `echo 'error out1' >/dev/stderr` `echo 'standard out2'`" | tee -a /tmp/test.txt ; cat /tmp/test.txt
if this works for you please bump up to 1. The other suggestions don't work for my csh environment.

Opening multiple shells with tcsh script

Currently working with kde3.5
Here is what I would eventually like to do to help my workflow:
Have a script that:
Opens multiple konsole shells
Renames each shell
This is what I have so far:
#!/bin/tcsh -fv
set KPID = `ps -ef | grep konsole | grep -v grep | awk '{print $2}' | tr "\n" " "`
dcop konsole-$KPID konsole newSession
The dcop command works just fine on the command line (substituting the variable for the actual pid), but when I run it through the script, it gives an 'object not accessible' error. No other errors present.
I've made sure permissions are ok (777) and even added sudo with it, but no luck.
As for the second part, again I have it working on the command line:
dcop $KONSOLE_DCOP_SESSION renameSession "name"
This however only works for the active (working) shell, and I am not sure how to get it to do the same for the others. I have not put this part in the script yet, as I am still working on the first part. Any suggestions would be great.
Thanks.
If it's a script, it doesn't need to be tcsh; see http://www.grymoire.com/Unix/CshTop10.txt
But if you want to pass $KPID into your script, use $1 in your script (argument #1), and call it with:
script $KPID
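A sketch of how that fits together (the script name newsession.csh is hypothetical):
#!/bin/tcsh -f
# argument #1 is the konsole PID
dcop konsole-$1 konsole newSession
called as:
./newsession.csh $KPID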