Got this command: cd /some/dir; /usr/local/bin/git log --diff-filter=A --follow --format=%aI -- /some/dir/file | tail -1
I want to get the output from it.
Tried this:
my $proc2 = run 'cd', $dirname, ';', '/usr/local/bin/git', 'log', '--diff-filter=A', '--follow', '--format=%aI', '--', $output_file, '|', 'tail', '-1', :out, :err;
It produced no output.
Tried this:
my $proc2 = run </usr/local/bin/git -C>, $dirname, <log --diff-filter=A --follow --format=%aI -->, $output_file, <| tail -1>, :out, :err;
Git throws an error:
fatal: --follow requires exactly one pathspec
The same git command runs fine when run directly from the command line, and I've confirmed both $dirname and $output_file are correct. git log --help didn't shed any light on this for me.
UPDATE: If I take off the | tail -1 bit, I get output from the command in Raku (a date). I also discovered that if I take the pipe out when running on the command line, the output gets piped into more. I'm not knowledgeable enough about bash and how it might interact with Raku's run command to know for sure what's going on.
run doesn't invoke a shell, so shell syntax like ; and | is passed along to the program as ordinary arguments. You need to run a separate proc for piping:
my $p = run «git -C "$dirname" log --diff-filter=A --format=%aI», :out, :err;
my $p2 = run <tail -1>, :in($p.out), :out;
put .out.slurp: :close with $p2;
Also, you don't need tail in this case; you can do:
put .out.lines(:close).tail with $p
I have a CI stage with the following command, which has to be executed remotely; it checks whether the mentioned file exists and, if so, creates a backup of it.
script: |
  ssh ${USER}@${HOST} '([ -f "${PATH}/test_1.txt" ] && cp -v "${PATH}/test_1.txt" ${PATH}/test_1_$CI_COMMIT_TIMESTAMP.txt)'
The issue is that this job always fails, whether the file exists or not, with the following output:
ssh user@hostname '([ -f /etc/file/path/test_1.txt ] && cp -v /etc/file/path/test_1.txt /etc/file/path/test_1_$CI_COMMIT_TIMESTAMP.txt)'
Cleaning up project directory and file based variables
ERROR: Job failed: exit status 1
Running the same command manually works just fine. So, how can I make sure that this job succeeds as long as the command logic executes successfully, and only fails in case of genuine failures?
The job only sees the exit status that ssh returns, and ssh returns the exit status of the remote command. Since [ -f ... ] && cp ... evaluates to 1 whenever the file doesn't exist, the job fails even though nothing actually went wrong. You can force an instruction to always succeed by appending || true to it.
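If you'd rather not mask everything with || true, a minimal sketch using an explicit if/else keeps genuine failures visible (here ${DIR} is a hypothetical stand-in for the asker's ${PATH}, which is worth renaming anyway because PATH is special to the shell):
ssh "${USER}@${HOST}" "if [ -f '${DIR}/test_1.txt' ]; then
  # the exit status of cp is preserved, so real copy errors still fail the job
  cp -v '${DIR}/test_1.txt' '${DIR}/test_1_${CI_COMMIT_TIMESTAMP}.txt'
else
  # a missing file is expected here, so report it and exit 0
  echo 'test_1.txt not found, nothing to back up'
fi"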
However, if you want to see and save the output of your remote instruction, you can do something like this:
ssh user@host command 2>&1 | tee ssh-session.log
I've got this case: there's a WordPress project where I'm supposed to create a script that updates plugins and commits the source changes to a separate branch. While doing this I ran into a strange issue.
Input variable:
akimset,4.0.3
all-in-one-wp-migration,6.71
What I wanted to do was iterate over each line of this variable:
while read -r line; do
  echo $line
done <<< "$variable"
and this piece of code worked perfectly fine, but when I added the docker-compose logic, everything started to act weirdly:
while read -r line; do
  docker-compose run backend echo $line
done <<< "$variable"
Now only one line was executed, after which the script exited with 0 and stopped iterating. I found a workaround:
echo $variable > file.tmp
for line in $(cat file.tmp); do
  docker-compose run backend echo $line
done
and that works perfectly fine, iterating over each line. Now my question is: why? Zsh and shell scripting can be a bit mysterious, and running into edge cases like this one isn't anything new for me, but I'm wondering why a successfully executed command broke the input stream.
The problem with this
while read -r line; do
  docker-compose run backend echo $line
done <<< "$variable"
is that docker-compose run allocates a pseudo-TTY and attaches it to your script's standard input. During the first execution of docker-compose run (the first loop iteration), it reads the remaining lines of the here-string as its own input, so there is nothing left for read on the next pass.
You have to pass the -T parameter to docker-compose run so that it does not allocate a pseudo-TTY. A working version is:
while read -r line; do
  docker-compose run -T backend echo "$line"
done <<< "$variable"
Update
The above solution is for Docker version 18 and docker-compose version 1.17. For newer versions the -T parameter does not work, but you can try:
-d instead of -T, to run the container in background mode, BUT you will not see stdout in the terminal.
If you have docker-compose v1.25.0, add the parameter stdin_open: false to the service in your docker-compose.yml.
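A belt-and-braces workaround (a sketch, not from the answers above) is to also point the container's stdin at /dev/null, so it cannot swallow the rest of the here-string regardless of the docker-compose version:
while read -r line; do
  # with stdin redirected to /dev/null, the container cannot consume
  # the remaining lines of the here-string
  docker-compose run -T backend echo "$line" < /dev/null
done <<< "$variable"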
I was able to solve the same problem by using a different loop (note that $variable holds the data itself, so it is echoed rather than read with cat):
for line in $(echo "$variable")
do
  docker-compose run backend echo "$line"
done
I ran into a nearly identical problem about a year ago, though the shell was bash (the command and the problem were also slightly different, but the same issue applied). I ended up writing the script in zsh.
I'm not certain what's going on, but it's not actually the exit code (you can confirm by running the following):
variable=$'akimset,4.0.3\nall-in-one-wp-migration,6.71'
while read line; do docker-compose run backend print "$line"; print "$?"; done <<<($variable)
... which yielded ...
(akimset,4.0.3
0
(I'm not at all sure where the ( came from and perhaps solving that would answer why this problem happens)
Working Script
for line in "${(f)variable}"; do
docker-compose run backend echo "$line"
done
The (f) flag tells zsh to split on newlines; the "${(f)variable}" is in quotes so that any blank lines aren't lost. If you're going to include escape sequences that you want to not be converted to the corresponding values (something that I often need when reading file contents from a variable), make the flags (fV).
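For example, a quick sketch of the split in action, using the sample data from the question:
variable=$'akimset,4.0.3\nall-in-one-wp-migration,6.71'
for line in "${(f)variable}"; do
  print -r -- "$line"   # each record prints on its own line
done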
I'm using this inside a script:
VAR=$(grep -c mac myfile.tmp)
echo $VAR
The result is 0 when I run the script, but if I run the command on the command line it returns the real value, which is 1.
Does anyone know what the problem is?
I'm using a library that generates a whole ton of output to stderr (it is ROOT's Minuit2 minimizer, which is known for having no way to suppress its output directly in code). I'm running batch jobs through the LSF system, and the error output files are so big that they exceed my disk quota. Erk.
When I run locally on a shell, I do:
python main.py 2> >( grep -v Minuit2 2>&1 )
to suppress the output, as is done here.
This works great, but unfortunately I can't seem to get that or any variation of it to work when running on LSF. I think this is due to LSF not spawning the necessary subshell, but it's not clear.
I run on batch by passing LSF a submit script. The relevant line is:
python main.py $INPUT_FILE
which works great, aside from the aforementioned problem of gigantic error files.
When I try changing that line to
python main.py $INPUT_FILE 2> >( grep -v Minuit2 2>&1 )
I end up with
./singleSubmit.sh: line 16: syntax error near unexpected token `>'
./singleSubmit.sh: line 16: `python $MAIN $1 2> >( grep -v Minuit2 2>&1 )'
in the error log file.
Any idea how I could accomplish what I want, or why this is not working?
Thanks a ton!
The syntax you're using works in bash, not in csh/tcsh. Try changing the first line of your submission script to
#!/bin/bash
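If changing the interpreter isn't an option, the same filtering can be written without process substitution; a POSIX-sh sketch (not part of the original answer) using a spare file descriptor:
# fd 3 temporarily holds the real stdout; stderr goes through the pipe,
# gets filtered by grep, and is sent back to stderr
{ python main.py "$INPUT_FILE" 2>&1 1>&3 | grep -v Minuit2 1>&2; } 3>&1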
I am learning shell scripting. I have created a shell script whose function is to log into the DB and run a .sql file. Following are the contents of the script:
#!/bin/bash
set -x
echo "Login to postgres user for autoqa_rpt_production"
$DB_PATH -U $POSTGRESS_USER $Auto_rpt_production$TARGET_DB -p $TARGET_PORT
echo "Running SQL Dump - auto_qa_db_sync"
\i auto_qa_db_sync.sql
After running the above script, I get the following error
./autoqa_script.sh: 39: ./autoqa_script.sh: /i: not found
Following one article, I tried reversing the slash, but it didn't work.
I don't understand why this is happening, because when I run the sql file manually, it works properly. Can anyone help?
#!/bin/bash
set -x
echo "Login to postgres user for autoqa_rpt_production and run script"
$DB_PATH -U $POSTGRESS_USER $Auto_rpt_production$TARGET_DB -p $TARGET_PORT -f auto_qa_db_sync.sql
The lines you put in a shell script are (more or less; let's say so for now) equivalent to what you would type right at the Bash prompt (the one ending with '$', or '#' if you're root). When you execute a script (a list of commands), each command runs after the previous one terminates.
What you wanted to do was run the client and issue a \i auto_qa_db_sync.sql command inside it.
What you did was run the client and then, after the client terminated, issue that command in Bash.
You should read about Bash pipelines; they are the way to run a program and feed text into its standard input. Following your original idea for solving the problem, you'd write something like:
echo '\i auto_qa_db_sync.sql' | $DB_PATH -U $POSTGRESS_USER $Auto_rpt_production$TARGET_DB -p $TARGET_PORT
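Equivalently (a variation on the same idea, not from the answer itself), a here-document avoids the echo and scales to several psql commands:
# the quoted SQL delimiter keeps Bash from expanding anything in the body
$DB_PATH -U $POSTGRESS_USER $Auto_rpt_production$TARGET_DB -p $TARGET_PORT <<'SQL'
\i auto_qa_db_sync.sql
SQL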
Hope that helps.