CMake execute_process giving no output

I am trying to make CMake execute
ls -d /usr/local/gasnet/include/*-conduit | rev | cut -d/ -f1 | rev | cut -d- -f1 | paste -d " " - - - -
and save the result into a variable CONDUITS using the following snippet. However, this defines CONDUITS as an empty string. What is going wrong?
EXECUTE_PROCESS (COMMAND "ls -d" "$ENV{GASNET_HOME}/include/*-conduit"
                 COMMAND rev
                 COMMAND "cut -d/ -f1"
                 COMMAND rev
                 COMMAND "cut -d- -f1"
                 COMMAND "paste -d" "\" \" - - - -"
                 OUTPUT_VARIABLE CONDUITS
                 COMMAND_ECHO STDOUT)
This is the output from COMMAND_ECHO STDOUT, so it seems the ls command itself already produces empty output.
'ls -d' '/usr/local/gasnet/include/*-conduit'
'rev'
'cut -d/ -f1'
'rev'
'cut -d- -f1'
'paste -d' '" " - - - -'
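
A note on what is going on: execute_process runs each COMMAND directly, without a shell, so COMMAND "ls -d" looks for a program literally named ls -d, and the *-conduit glob is never expanded (the same applies to the quoted cut and paste invocations). One possible shell-free sketch in plain CMake that collects the conduit names into a space-separated string, assuming GASNET_HOME is set in the environment:
# Glob the conduit directories ourselves instead of relying on a shell.
file(GLOB _conduit_dirs "$ENV{GASNET_HOME}/include/*-conduit")
set(_conduit_list "")
foreach(_dir IN LISTS _conduit_dirs)
  get_filename_component(_name "${_dir}" NAME)        # e.g. "smp-conduit"
  string(REPLACE "-conduit" "" _conduit "${_name}")   # e.g. "smp"
  list(APPEND _conduit_list "${_conduit}")
endforeach()
string(REPLACE ";" " " CONDUITS "${_conduit_list}")   # space-separated string
message(STATUS "CONDUITS: ${CONDUITS}")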

String manipulation in .gitlab-ci variables

I'm trying to set up my CI file with some variables. I'm able to generate a variable like so:
...
variables:
  TARGET_PROJECT_DIR: "${CI_PROJECT_NAME}.git"
However, I don't seem to be able to do this:
...
variables:
  PROJECT_PROTOCOL_RELATIVE_URL: "${CI_PROJECT_URL//https:\/\/}.git"
If I run that in bash, I get the expected output, which is gitlab.com/my/repo/url.git, with the 'https://' removed and '.git' appended.
My workaround has just been to export it in the 'script' section, but it feels a lot neater to add this to the variables section, since this is part of a template that is being inherited by the actual jobs. Is it possible?
There are several more useful variables defined in the GitLab CI environment.
CI_PROJECT_PATH gives you the <namespace>/<project name> (or just <project name> if you have no extra namespace) string and
CI_SERVER_HOST gives you the server name, so you could do:
variables:
  PROJECT_PROTOCOL_RELATIVE_URL: ${CI_SERVER_HOST}/${CI_PROJECT_PATH}.git
I have similar setups (also without quotes).
I'm not sure if that will work for you, since my runners and my server are under my control and I don't run pipelines with external projects.
But you can get all available variables displayed in the job log by running a job like this:
stages:
  - env

show-env:
  stage: env
  script:
    - env
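If the full environment is too noisy, a variant with a hypothetical job name that lists only GitLab's own CI_ variables could look like this:
show-ci-env:
  stage: env
  script:
    - env | grep '^CI_' | sort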
Also always helpful is https://docs.gitlab.com/ee/ci/variables/predefined_variables.html
After looking around for similar challenges I found your unanswered question. Here are my suggestions:
stages:
  - todo

todo-job:
  stage: todo
  only:
    - master
  script:
    # your question / example
    - echo ${CI_PROJECT_URL}
    - echo ${CI_PROJECT_URL:8:100}.git
    # Because you have the word manipulation in the title, here are some more examples:
    # Return the substring between the two '_'
    - INPUT="someletters_12345_moreleters.ext"
    - SUBSTRING=`expr match "$INPUT" '.*_\([[:digit:]]*\)_.*' `
    - echo $SUBSTRING
    # Store a substring in a new variable and create an output
    - b=${INPUT:12:5}
    - echo $b
    # Substring using grep with regex (more readable)
    - your_number=$(echo "someletters_12345_moreleters.ext" | grep -E -o '[0-9]{5}')
    - echo $your_number
    # Substring using a variable and grep with regex (more readable)
    - your_number=$(echo "$INPUT" | grep -E -o '[0-9]{5}')
    - echo $your_number
    # Split a string and return a part using 'cut'
    - your_id=$(echo "Release V14_TEST-42" | cut -d "_" -f2 )
    - echo $your_id
    # Split the string of a variable and return a part using 'cut'
    - VAR="Release V14_TEST-42"
    - your_number=$(echo "$VAR" | cut -d "_" -f2 )
    - echo $your_number
The GitLab job output looks like:
$ echo ${CI_PROJECT_URL}
https://gitlab.com/XXXXXXXXXX/gitlab_related_projects/test
$ echo ${CI_PROJECT_URL:8:100}.git
gitlab.com/XXXXXXXXXX/gitlab_related_projects/test.git
$ INPUT="someletters_12345_moreleters.ext"
$ SUBSTRING=`expr match "$INPUT" '.*_\([[:digit:]]*\)_.*' `
$ echo $SUBSTRING
12345
$ b=${INPUT:12:5}
$ echo $b
12345
$ your_number=$(echo "someletters_12345_moreleters.ext" | grep -E -o '[0-9]{5}')
$ echo $your_number
12345
$ your_number=$(echo "$INPUT" | grep -E -o '[0-9]{5}')
$ echo $your_number
12345
$ your_number=$(echo "Release V14_TEST-42" | cut -d "_" -f2 )
$ echo $your_number
TEST-42
$ VAR="Release V14_TEST-42"
$ your_number=$(echo "$VAR" | cut -d "_" -f2 )
$ echo $your_number
TEST-42
Cleaning up project directory and file based variables
00:01
Job succeeded
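Since the variables section apparently does not apply bash-style substitutions (as the question shows), the workaround mentioned in the question could live in the inherited template itself. A rough sketch, where the template name .url-template and the job name my-job are hypothetical:
.url-template:
  before_script:
    - export PROJECT_PROTOCOL_RELATIVE_URL="${CI_PROJECT_URL#https://}.git"

my-job:
  extends: .url-template
  script:
    - echo "${PROJECT_PROTOCOL_RELATIVE_URL}"
Variables exported in before_script are visible to the job's script, since both run in the same shell.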

awk command not working with kubectl exec

From outside container:
$ kubectl exec -it ui-gateway-0 -- bash -c "ps -ef | grep entities_api_svc | head -1"
root 14 9 0 10:34 ? 00:00:02 /svc/bin/entities_api_svc
$ kubectl exec -it ui-gateway-0 -- bash -c "ps -ef | grep entities_api_svc | head -1 | awk '{print $2}'"
root 14 9 0 10:34 ? 00:00:02 /svc/bin/entities_api_svc
From inside container:
[root@ui-gateway-0 /]# ps -ef | grep entities_api_svc | head -1 | awk '{print $2}'
14
I find it easier to use single quotes on the sh/bash command argument so it is closer to what you would type in the shell:
kubectl exec -it ui-gateway-0 -- \
bash -c 'ps -ef | grep entities_api_svc | head -1 | awk "{print \$2}"'
This means the awk program uses double quotes, which requires the shell variable marker $ to be escaped.
In the original command, the shell running kubectl replaced $2 with a zero-length string, so awk saw only {print }, which prints the whole line.
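An alternative sketch that keeps the double quotes from the original command and simply escapes the dollar sign so it survives the local shell:
# The backslash stops the local shell from expanding $2;
# the awk inside the container still receives {print $2}.
kubectl exec -it ui-gateway-0 -- \
  bash -c "ps -ef | grep entities_api_svc | head -1 | awk '{print \$2}'"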
Multiple levels of nesting
Nested shell escaping gets very obscure very quickly and is hard to debug:
$ printf '%q\n' 'echo "single nested $var" | awk "print $2"'
echo\ \"single\ nested\ \$var\"\ \|\ awk\ \"print\ \$2\"
$ printf '%q\n' "$(printf '%q\n' 'echo "double nested $var" | awk "print $2"')"
echo\\\ \\\"double\\\ nested\\\ \\\$var\\\"\\\ \\\|\\\ awk\\\ \\\"print\\\ \\\$2\\\"
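One way to keep this manageable, sketched here under the assumption that the local shell is bash and that user@host stands in for the outer hop, is to write the innermost pipeline once and let printf %q add the extra layer of escaping:
# The pipeline is written once, with its own quoting.
cmd='ps -ef | grep entities_api_svc | head -1 | awk "{print \$2}"'
# printf %q escapes cmd so it survives the remote login shell intact.
ssh user@host "$(printf 'kubectl exec ui-gateway-0 -- bash -c %q' "$cmd")"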
If you add a file grep-entities.sh in the container:
#!/bin/bash
set -uex -o pipefail
ps -ef | grep entities_api_svc | head -1 | awk '{print $2}'
then you don't need to worry about escaping:
pid=$(sshpass -p "password" ssh vm@10.10.0.1 kubectl exec ui-gateway-0 -- /grep-entities.sh)
Also, pgrep does the script's job for you:
kubectl exec ui-gateway-0 -- pgrep entities_api_svc
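If only the first (oldest) matching PID is wanted, mirroring the head -1 above, a small sketch:
# -o selects the oldest matching process, similar to head -1 above.
pid=$(kubectl exec ui-gateway-0 -- pgrep -o entities_api_svc)
echo "$pid"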

grep pattern match with line number

I want to get the line number of the matched pattern, but with the condition that the matching line must contain digits.
If I use
grep -ri -n "package $i " . | grep -P '\d'
then I get the line numbers of the matching lines, but I also get 'package' lines without any digits:
The output below shows me line number 71 for 'package ca-certificates', but there are also four more lines for glusterfs that I don't need, as they don't have any digits in them.
for i in $(awk '{print $1}' ~/Version-pkgs)
do
    grep -ri -n "package $i " . | grep -P '\d'
done
sh search-version-pkgs.sh
./core.pkglist:71:package ca-certificates 2017.2.14 65.0.1.el6_9 arch noarch
./dev.pkglist:1343:package glusterfs-devel \
./dev.pkglist:1346:package glusterfs-api-devel \
./dev.pkglist:1346:package glusterfs-api-devel \
./dev.pkglist:1346:package glusterfs-api-devel \
./dev.pkglist:1343:package glusterfs-devel \
./core.pkglist:234:package initscripts 9.03.58 1.0.3.el6_9.2prerel7.6.0.0.0_88.51.0 arch ${bestArch}
./core.pkglist:397:package nspr 4.13.1 1.el6
./dev.pkglist:859:package nspr-devel \
./dev.pkglist:859:package nspr-devel \
./core.pkglist:401:package nss 3.28.4 4.0.1.el6_9 arch ${bestArch}
Running the script below gives me only the exact pattern matches, but I don't get their line numbers:
for i in $(awk '{print $1}' ~/Version-pkgs)
do
    egrep -ri "package $i " . | grep -P '\d'
done
sh search-version-pkgs.sh
./core.pkglist:package ca-certificates 2017.2.14 65.0.1.el6_9 arch noarch
./core.pkglist:package initscripts 9.03.58 1.0.3.el6_9.2prerel7.6.0.0.0_88.51.0 arch ${bestArch}
./core.pkglist:package nspr 4.13.1 1.el6
./core.pkglist:package nss 3.28.4 4.0.1.el6_9 arch ${bestArch}
./core.pkglist:package nss-util 3.28.4 1.el6_9 arch ${bestArch}
./core.pkglist:package tzdata 2018e 3.el6 arch noarch
How can I get the output with the line number along with the pattern match, as file:lineno:package pkgname digits?
for i in $(cut -f1 ~/Version-pkgs)
do
    grep -rin "package $i.*[0-9]" .
done
No need to use grep twice: in the first attempt, grep -P '\d' also matched the digits of the line number added by -n, which is why the glusterfs lines slipped through.
One-liner:
grep -rinf <(sed -E 's,([^ ]*).*,package \1.*[0-9],' ~/Version-pkgs) .
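To see what the process substitution feeds to grep -f, the sed part can be run on its own; assuming ~/Version-pkgs lists package names such as ca-certificates in its first column, it prints one pattern per package:
sed -E 's,([^ ]*).*,package \1.*[0-9],' ~/Version-pkgs
# package ca-certificates.*[0-9]
# package initscripts.*[0-9]
# package nspr.*[0-9]
# ...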

How to paste values of specific columns of a file into another command?

I want to use the fastacmd to extract specific regions of fasta sequences.
To do that I need to pass the name of the fasta file with -d, the name of the sequence with -s, and the positions of the region to extract with -L. For example:
fastacmd -d OAP11402.1.fa -s OAP11402.1 -L 50,100
But the problem is that I have hundreds of files (each file has one sequence with the same name as the file), and the position info for each sequence to extract is in a protein database (info_sequences.txt). So I want to make a loop that pastes the file name, the sequence name and the positions from info_sequences.txt into the fastacmd call.
info_sequences.txt looks like this:
File           seq_id       position_start  position_end
OAP11402.1.fa  OAP11402.1   50              100
OAP15774.1.fa  OAP15774.1   75              200
OAP10214.1.fa  OAP10214.1   33              310
I think that awk could help, but I'm struggling with how to paste the info into the fastacmd call.
source <(
  awk 'NR > 1 {
    printf "echo fastacmd -d %s -s %s -L %d,%d\n", $1, $2, $3, $4
  }' info_sequences.txt
)
The awk command spits out all the commands.
Then the source <( ... ) evaluates the commands in your current shell.
Same advice as Cyrus: if the output looks OK, remove the echo.
Or, do it all in awk:
awk 'NR > 1 {
  cmd = "echo fastacmd -d " $1 " -s " $2 " -L " $3 "," $4
  system(cmd)
}' info_sequences.txt
awk 'NR>1 {print "-d",$1,"-s",$2,"-L",$3","$4}' info_sequences.txt | xargs -I {} echo fastacmd {}
Output:
fastacmd -d OAP11402.1.fa -s OAP11402.1 -L 50,100
fastacmd -d OAP15774.1.fa -s OAP15774.1 -L 75,200
fastacmd -d OAP10214.1.fa -s OAP10214.1 -L 33,310
If everything looks okay, remove the echo.
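Another small sketch in plain bash rather than awk, which skips the header line and reads the four columns directly (again, drop the echo once the printed commands look right):
# Skip the header, then read File, seq_id, position_start and position_end.
tail -n +2 info_sequences.txt | while read -r file seq start end; do
    echo fastacmd -d "$file" -s "$seq" -L "$start,$end"
done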

How to execute a remote command over ssh?

I am trying to connect to a remote server via ssh and execute commands there.
But given the situation, I can only execute a single command at a time.
For example
ssh -i ~/auth/aws.pem ubuntu@server "echo 1"
It works very well, but I have a problem with the following cases.
Case 1:
ssh -i ~/auth/aws.pem ubuntu@server "cd /"
ssh -i ~/auth/aws.pem ubuntu@server "ls"
Case 2:
ssh -i ~/auth/aws.pem ubuntu@server "export a=1"
ssh -i ~/auth/aws.pem ubuntu@server "echo $a"
The session is not maintained.
Of course, you can use "cd /; ls"
but I can only execute one command at a time.
...
Reflecting the comments, I developed a bash script:
function cmd()
{
    local command_delete="$@"
    if [ -f /tmp/variables.current ]; then
        set -a
        source /tmp/variables.current
        set +a
        cd "$PWD"
    fi
    if [ ! -f /tmp/variables.before ]; then
        comm -3 <(declare | sort) <(declare -f | sort) > /tmp/variables.before
    fi
    echo $command_delete > /tmp/export_command.sh
    source /tmp/export_command.sh
    comm -3 <(declare | sort) <(declare -f | sort) > /tmp/variables.after
    diff /tmp/variables.before /tmp/variables.after \
        | sed -ne 's/^> //p' \
        | sed '/^OLDPWD/ d' \
        | sed '/^PWD/ d' \
        | sed '/^_/ d' \
        | sed '/^PPID/ d' \
        | sed '/^BASH/ d' \
        | sed '/^SSH/ d' \
        | sed '/^SHELLOPTS/ d' \
        | sed '/^XDG_SESSION_ID/ d' \
        | sed '/^FUNCNAME/ d' \
        | sed '/^command_delete/ d' \
        > /tmp/variables.current
    echo "PWD=$(pwd)" >> /tmp/variables.current
}
ssh -i ~/auth/aws.pem ubuntu@server "cmd cd /"
ssh -i ~/auth/aws.pem ubuntu@server "cmd ls"
What is a better solution?
$ cat <<'EOF' | ssh user@server
export a=1
echo "${a}"
EOF
Pseudo-terminal will not be allocated because stdin is not a terminal.
user@server's password:
1
In this way you send all the commands to ssh as a single script, so you can put in any number of commands. Please note the quoted 'EOF' delimiter: with the single quotes, the here-document is not expanded by the local shell, so ${a} is evaluated on the remote side.
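A closely related sketch using the key and host from the question, piping the same here-document into an explicit remote shell:
# 'bash -s' reads the here-document as a script on the remote side;
# giving ssh an explicit remote command also avoids the
# pseudo-terminal warning shown above.
ssh -i ~/auth/aws.pem ubuntu@server 'bash -s' <<'EOF'
cd /
export a=1
echo "${a}"
ls
EOF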