AWK invoking sh instead of csh - awk

I'm sure there is a much easier way to do this, so I'm all ears.
sort -nrk 7 my_list.tsv | tail -n 1 | awk '{print("setenv INPUT_DIR `pwd`/"$1)}'
The first column of my .tsv contains filenames (sorted) that I'm trying to set as an environment variable in csh. I want to prepend the full path too. I thought this would work, but...
sh: setenv: command not found
Even though I'm in csh. Can I get the awk system function to use csh/tcsh?
J

'prepend the full path too'? Do you mean
echo "fileName" | awk '{printf("setenv INPUT_DIR '`pwd`'/%s\n", $1)}'
output
setenv INPUT_DIR /home/shellter/fileName
I've replaced your sort ... | tail ... as that doesn't seem to be your core question.
Also note that the single quotes prevent the back-quote command substitution from being processed. You turn off the single quotes, get your cmd-sub value, then turn the single quotes back on again.
If this is not what you mean, please edit your question and replace sort ... tail ... as above with a simple echo "string", AND include the expected output. It will also help to include the output you are currently getting.
To answer the question in your headline: awk's system() function hands its command to /bin/sh (it goes through the C library's system(3) call), regardless of what's in $SHELL, so csh syntax such as setenv will never work there. If you need csh to execute the command, pipe awk's output into csh, or eval it in the calling shell.
When I run your code, I don't get any indication that the shell was executed. I get
setenv INPUT_DIR `pwd`/file
but I'm running the code under ksh. I don't have a csh to test with. But for the given case, it shouldn't matter.
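If the eventual goal is to have INPUT_DIR set in your calling csh, keep in mind that awk (like any child process) cannot modify its parent's environment; a minimal sketch (assuming tcsh, which exports PWD, and the my_list.tsv layout described above) is to have the shell itself eval awk's output:
# csh/tcsh: awk prints the setenv command, eval runs it in this shell.
# ENVIRON["PWD"] avoids nested backquotes, which csh handles poorly.
eval `sort -nrk 7 my_list.tsv | tail -n 1 | awk '{print "setenv INPUT_DIR " ENVIRON["PWD"] "/" $1}'`
echo $INPUT_DIR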
IHTH

Related

Script to display only comments from /etc/services file

I need to write a bash script that takes a service name as a parameter and displays only the comment that follows the hash symbol in /etc/services, but I have no idea how to cut out only the comment part.
The "it works" solution for me is just:
grep "^$1" /etc/services | awk '{print $3,$4 ...
but I don't think this is a good one
I'm searching for something like:
[find the service] -> print only the part from # till the end of the line
I'm still learning so any solution with explanation or just a hint will be very helpful for me.
Chances are this is what you're looking for:
awk -v svc="$1" '($1==svc) && sub(/[^#]+#/,"")' /etc/services
but without sample input/output it's a guess.
The above will work using any awk in any shell on every Unix box.
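For example, given a typical ssh entry (the exact comment text varies by system), $1==svc selects the line, sub() deletes everything up to and including the first #, and the true condition triggers the default print:
# Hypothetical /etc/services line:
#   ssh    22/tcp    # The Secure Shell (SSH) Protocol
$ awk -v svc="ssh" '($1==svc) && sub(/[^#]+#/,"")' /etc/services
 The Secure Shell (SSH) Protocol
(The space after the # survives; add sub(/^ */,"") if you want it trimmed.)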
Try this:
SERVICE_NAME=linuxconf; grep -Po "^$SERVICE_NAME.*# \K.*$" /etc/services
-P tells grep to use perl regex.
-o trims the output so that it only includes the regex match.
\K tells the regex engine to exclude the previously matched part of the string from the match, i.e. only the part after \K will be present in the final match.
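A worked example, with a hypothetical entry linuxconf 98/tcp # linuxconf config port in /etc/services:
$ SERVICE_NAME=linuxconf; grep -Po "^$SERVICE_NAME.*# \K.*$" /etc/services
linuxconf config port
Note that -P is a GNU grep extension and may be missing on other greps.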

How to place quote marks in an Ansible task with grep, awk, sed

My task searches the CMD column for the config, to find out the directory of the application config and also the PID.
---
- hosts: all
  pre_tasks:
    - name: Check if process is running
      become: yes
      shell: 'ps -e --format="pid cmd" | grep process.cfg | sed -e "s/[[:space:]]\+/ /g" | grep -v color'
      register: proces_out
The output looks like this after the command runs:
32423 /var/local/bin/application -c /var/local/etc/process.cfg
But I think Ansible has trouble with 2 greps in 1 command. I need them both, because if I don't use the inverted "grep -v color", this annoying thing appears: "grep --color=auto". Then I can't cut out the PID that I need in another task (which kills the process), because the real process is on the second line.
My second idea was to use awk, which I think would be the best tool for this case, but if I use double quotation marks in the --format parameter and in the sed command, and single quotation marks around the awk parameters, they don't want to cooperate. Even if I keep them balanced, they interfere with each other.
AWK idea:
shell: 'ps -e --format="pid cmd" | grep process.cfg | sed -e "s/[[:space:]]\+/ /g"| awk 'FNR == 2''
I want to ask for a hint: what would be the best way to avoid the quoting clash, and still be able to use the output afterwards in variables, like this:
## PID
{{ proces_out.stdout.split(' ')[0] }}
## application
{{ proces_out.stdout.split(' ')[1] }}
## config
{{ proces_out.stdout.split(' ')[3] }}
But I think Ansible has trouble with 2 greps in 1 command
That is for sure not true
if I don't use the inverted "grep -v color", this annoying thing appears: "grep --color=auto". Then I can't cut out the PID that I need in another task (which kills the process), because the real process is on the second line.
You are running into the classic case of the grep process matching its own regex, as will happen in a lot of "simple" cases. What you want is a regex that matches your string but does not match itself. In that example above it would be:
shell: 'ps -e --format="pid cmd" | grep process[.]cfg | sed -e "s/[[:space:]]\+/ /g"'
because process[.]cfg matches process.cfg but does not match process[.]cfg. I also fixed your regex, because in a regex the . means "any character", which doesn't appear to be what you really wanted.
With regard to that --color bit, you can likely side-step that nonsense by using the full path to grep, which will cause bash to really execute the binary, versus some alias that uses --color=auto; I actually wouldn't have expected the colors to show up in an Ansible run, because it's not the right $TERM, but systems are weird.
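A quick way to see the self-match effect, as a sketch you can run both ways and compare:
ps -e --format="pid cmd" | grep process.cfg       # may list this grep itself
ps -e --format="pid cmd" | grep 'process[.]cfg'   # won't: the text "process[.]cfg" doesn't match the regex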
Thank you Matthew for that solution, but I found a different option to avoid the unnecessary output.
The syntax is almost the same, but I added an additional field to --format: ppid, the parent process ID. In most cases I believe the parent process will have number 1 in the output, which helps to sort it the way I want.
It looks like this:
shell: >
  ps -e --format="ppid pid cmd" |
  grep process.cfg |
  sed -e "s/[[:space:]]\+/ /g"
register: output_process
And the output looks like this:
1 54345 /var/local/bin/application -c /var/local/etc/process.cfg
6435 6577 grep --color=auto process.cfg
Now it's easy; we can use Ansible to pick it apart:
- name: Kill process
  become: yes
  shell: "kill {{ output_process.stdout_lines[0].split(' ')[2] }}"
What does it do? It selects line 0 (the first line), splits it on spaces, and takes index 2. The output has a leading space before ppid, so the split yields an empty first element and the PID lands at index 2.
Thank you again for your solution Matthew; it might be helpful in another case.

Retain backslashes with while read loop in multiple shells

I have the following code:
#!/bin/sh
while read line; do
    printf "%s\n" "$line"
done < input.txt
input.txt has the following lines:
one\two
eight\nine
The output is as follows:
onetwo
eightnine
The "standard" solutions to retain the slashes would be to use read -r.
However, I have the following limitations:
must run under #!/bin/sh for reasons of portability/POSIX compliance
not all systems will support the -r switch to read under /bin/sh
the input file format cannot be changed
Therefore, I am looking for another way to retain the backslash after reading in the line. I have come up with one working solution: use sed to replace the \ with some other value (e.g. ||) in a temporary file (thus working around my last requirement above), then, after reading each line, use sed again to transform it back. Like so:
#!/bin/sh
sed -e 's/\\/||/g' input.txt > tempfile.txt
while read line; do
    printf "%s\n" "$line" | sed -e 's/||/\\/g'
done < tempfile.txt
I'm thinking there has to be a more "graceful" way of doing this.
Some ideas:
1) Use command substitution to store this in a variable instead of a file. Problem: I'm not sure command substitution will be portable here either, and my attempts at using a variable instead of a file were unsuccessful. Regardless, file or variable, the base solution is really the same (two substitutions).
2) Use IFS somehow? I've investigated a little, but I'm not sure it can help with this issue.
3) ???
What are some better ways to handle this given my constraints?
Thanks
Your constraints seem a little strict. Here's a piece of code I jotted down (I'm not too sure how valuable your while loop is for the other stuff you would like to do, so I removed it just for ease). I don't guarantee this code to be robust. But anyway, the logic should give you hints in the direction you may wish to proceed. (temp.dat is the input file.)
#!/bin/sh
var1="$(cut -d\\ -f1 temp.dat)"
var2="$(cut -d\\ -f2 temp.dat)"
iter=1
set -- $var2
for x in $var1; do
    if [ "$iter" -eq 1 ]; then
        echo $x "\\" $1
    else
        echo $x "\\" $2
    fi
    iter=$((iter+1))
done
As Larry Wall once said, writing a portable shell is easier than writing a portable shell script.
perl -lne 'print $_' input.txt
The simplest possible Perl script is simpler still, but I imagine you'll want to do something with $_ before printing it.
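For completeness, the sed round-trip from the question can also lose the temporary file by piping straight into the loop; a minimal sketch, assuming no input line ever contains the || placeholder:
#!/bin/sh
# Escape backslashes before read sees them, restore them afterwards.
sed -e 's/\\/||/g' input.txt |
while read line; do
    printf "%s\n" "$line" | sed -e 's/||/\\/g'
done
(In some shells the loop then runs in a subshell, which is fine for printing but matters if you set variables inside it.)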

awk: setting environment variables directly from within an awk script

First post here, but I've been a lurker for ages. I have googled for ages but can't find what I want (many ambiguous topic subjects which don't deliver what the title suggests...). Not new to awk or scripting, just a little rusty :)
I'm trying to write an awk script which will set shell env values as it runs, for another bash script to pick up and use later on. I cannot simply use stdout from awk to report the value I want set (i.e. export whatever=`awk cmd here`), as stdout is already directed to a 'results file' which the awk script is creating (plus I have more than one variable to export in the final code anyway).
As an example test script, to demo my issue:
echo $MYSCRIPT_RESULT # returns nothing, not currently set
echo | awk -f scriptfile.awk # do whatever, setting MYSCRIPT_RESULT as we go
echo $MYSCRIPT_RESULT # desired: returns the env value set in scriptfile.awk
Within scriptfile.awk, I have tried (without success):
1) building and executing an ad hoc string directly:
{
    cmdline="export MYSCRIPT_RESULT=1"
    cmdline
}
2) using the system function:
{
    cmdline="export MYSCRIPT_RESULT=1"
    system(cmdline)
}
... but these do not work. I suspect that these 2 commands create a subshell within the shell awk is executing from, and do what I ask (proven by touching files as a test), but once the "cmd"/system calls have completed, the subshell dies, unfortunately taking whatever I have set with it, so my env changes don't stick from "the caller of awk"'s perspective.
So my question is: how do you actually set env variables within awk directly, so that a calling process can access those values after awk execution has completed? Is it actually possible?
Other than the ad hoc/system ways above, which I have proven fail for me, I cannot see how this could be done (other than writing these values to a 'random' file somewhere, to be picked up and read by the calling script, which IMO is a little dirty anyway). Hence, help!
All ideas/suggestions/comments welcomed!
You cannot change the environment of your parent process. If
MYSCRIPT_RESULT=$(awk stuff)
is unacceptable, what you are asking cannot be done.
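That said, the "stdout is already taken by the results file" objection from the question can be dodged: write the results file from inside awk with print > "file", which frees stdout for the assignments. A minimal sketch with hypothetical file names:
# The real output goes to results.txt from inside awk;
# stdout carries only the variable assignments (only eval output you trust).
eval "$(awk '{ print > "results.txt" }
             END { print "MYSCRIPT_RESULT=1"; print "OTHER_RESULT=2" }' input.txt)"
echo "$MYSCRIPT_RESULT $OTHER_RESULT"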
You can also use something like what is described at
Set variable in current shell from awk
unset var
var=99
declare $( echo "foobar" | awk '/foo/ {tmp="17"} END {print "var="tmp}' )
echo "var=$var"
var=17
The awk END clause is essential: otherwise, if there are no matches to the pattern, awk prints nothing, and declare, called with no arguments, just dumps the current environment to stdout and doesn't change the content of your variable.
Multiple values can be set by separating them with spaces.
declare a=1 b=2
echo -e "a=$a\nb=$b"
NOTE: declare is bash-only; for other shells, use eval with the same syntax.
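For a POSIX sh without declare, the eval version of the same example would look like this (a sketch, with the same trust caveat as any eval):
eval $( echo "foobar" | awk '/foo/ {tmp="17"} END {print "var="tmp}' )
echo "var=$var"    # prints var=17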
You can do this, but it's a bit of a kludge. Since awk does not allow redirection to a file descriptor, you can use a fifo or a regular file:
$ mkfifo fifo
$ echo MYSCRIPT_RESULT=1 | awk '{ print > "fifo" }' &
$ IFS== read var value < fifo
$ eval export $var=$value
It's not really necessary to split the var and value; you could just as easily have awk print the "export" and just eval the output directly.
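That direct variant would look something like this (a sketch, without the fifo):
$ eval "$(echo MYSCRIPT_RESULT=1 | awk '{ print "export " $0 }')"
$ echo "$MYSCRIPT_RESULT"
1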
I found a good answer: encapsulate everything in a subshell!
The command declare works as below:
# Creates 3 variables
declare var1=1 var2=2 var3=3
ex1:
# Exactly the same as above (everything is in BEGIN, so awk doesn't sit waiting for input)
$(awk 'BEGIN{var="declare "; var=var"var1=1 var2=2 var3=3"; print var}')
I found some really interesting uses for this technique. In the next example I have several partitions with labels. I create variables using the labels as variable names and the device names as variable values.
ex2:
#Partition data
lsblk -o NAME,LABEL
NAME LABEL
sda
├─sda1
├─sda2
├─sda5 System
├─sda6 Data
└─sda7 Arch
# Creates a subshell to execute the text
$(
  # Pipe lsblk to awk; the awk program below:
  #  - BEGIN: initiates the variable with the text for the declare command
  #  - /Data|Arch/: filters devices with labels Arch or Data
  #  - concatenates txt with itself plus the variable name ($2) and value;
  #    substr eliminates the special characters before the device name
  #  - END: prints the text, and the subshell executes it as a command
  lsblk -o NAME,LABEL |
  awk 'BEGIN{txt="declare "} /Data|Arch/{txt=txt $2"="substr($1,3)" "} END{print txt}'
)
The end result of this is 2 variables: Data with value sda6 and Arch with value sda7.
The same example in a single line:
$(lsblk -o NAME,LABEL | awk 'BEGIN{txt="declare "}/Data|Arch/{txt=txt$2"="substr($1,3)" "}END{print txt}')
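Afterwards the variables are available in the current shell, e.g. with the partitions above:
$ echo "$Data $Arch"
sda6 sda7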

Need help executing SQL from a shell script and using the result set

I currently have a request to build a shell script to get some data from a table using SQL (Oracle). The query I'm running returns a number of rows. Is there a way to use something like a result set?
Currently, I'm redirecting it to a file, but I'm not able to reuse the data for further processing.
Edit: Thanks for the reply, Gene. The result file looks like:
UNIX_PID 37165
----------
PARTNER_ID prad
--------------------------------------------------------------------------------
XML_FILE
--------------------------------------------------------------------------------
/mnt/publish/gbl/backup/pradeep1/27241-20090722/kumarelec2.xml
pradeep1
/mnt/soar_publish/gbl/backup/pradeep1/11089-20090723/dataonly.xml
UNIX_PID 27654
----------
PARTNER_ID swam
--------------------------------------------------------------------------------
XML_FILE
--------------------------------------------------------------------------------
smariswam2
/mnt/publish/gbl/backup/smariswam2/10235-20090929/swam2.xml
There are multiple rows like this. My requirement is to do this using only a shell script.
I need to take each of the PIDs and check if the process is running, which I can take care of.
My question is: how do I check each PID, so I can loop through and get the corresponding partner_id and xml_file name? Since it is a file, how can I get the exact corresponding values?
Your question is pretty short on specifics (a sample of the file to which you've redirected your query output would be helpful, as well as some idea of what you actually want to do with the data), but as a general approach, once you have your query results in a file, why not use the power of your scripting language of choice (ruby and perl are both good choices) to parse the file and act on each row?
Here is one suggested approach. It wasn't clear from the sample you posted, so I am assuming that this is actually what your sample file looks like:
UNIX_PID 37165 PARTNER_ID prad XML_FILE /mnt/publish/gbl/backup/pradeep1/27241-20090722/kumarelec2.xml pradeep1 /mnt/soar_publish/gbl/backup/pradeep1/11089-20090723/dataonly.xml
UNIX_PID 27654 PARTNER_ID swam XML_FILE smariswam2 /mnt/publish/gbl/backup/smariswam2/10235-20090929/swam2.xml
I am also assuming that:
There is a line-feed at the end of the last line of your file.
The columns are separated by a single space.
Here is a suggested bash script (not optimal, I'm sure, but functional):
#! /bin/bash
cat myOutputData.txt |
while read line;
do
    myPID=`echo $line | awk '{print $2}'`
    isRunning=`ps -p $myPID | grep $myPID`
    if [ -n "$isRunning" ]
    then
        echo "PARTNER_ID `echo $line | awk '{print $4}'`"
        echo "XML_FILE `echo $line | awk '{print $6}'`"
    fi
done
The script iterates through every line (row) of the input file. It uses awk to extract column 2 (the PID), and then does a check (using ps -p) to see if the process is running. If it is, it uses awk again to pull out and echo two fields from the file (PARTNER ID and XML FILE). You should be able to adapt the script further to suit your needs. Read up on awk if you want to use different column delimiters or do additional text processing.
Things get a little more tricky if the output file contains one row for each data element (as you indicated). A good approach here is to use a simple state mechanism within the script and "remember" whether or not the most recently seen PID is running. If it is, then any data elements that appear before the next PID should be printed out. Here is a commented script to do just that with a file of the format you provided. Note that you must have a line-feed at the end of the last line of input data or the last line will be dropped.
#! /bin/bash
cat myOutputData.txt |
while read line;
do
    # Extract the first (myKey) and second (myValue) words from the input line
    myKey=`echo $line | awk '{print $1}'`
    myValue=`echo $line | awk '{print $2}'`
    # Take action based on the type of line this is
    case "$myKey" in
        "UNIX_PID")
            # Determine whether the specified PID is running
            isRunning=`ps -p $myValue | grep $myValue`
            ;;
        "PARTNER_ID")
            # Print the specified partner ID if the PID is running
            if [ -n "$isRunning" ]
            then
                echo "PARTNER_ID $myValue"
            fi
            ;;
        *)
            # Check to see if this line represents a file name, and print it
            # if the PID is running
            inputLineLength=${#line}
            if (( $inputLineLength > 0 )) && [ "$line" != "XML_FILE" ] && [ -n "$isRunning" ]
            then
                isHyphens=`expr "$line" : -`
                if [ "$isHyphens" -ne "1" ]
                then
                    echo "XML_FILE $line"
                fi
            fi
            ;;
    esac
done
I think that we are well into custom software development territory now so I will leave it at that. You should have enough here to customize the script to your liking. Good luck!
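For what it's worth, the same state machine is fairly compact in pure awk; a sketch against the multi-line format above, untested, and it shells out to ps once per UNIX_PID line:
awk '
    $1 == "UNIX_PID"   { running = (system("ps -p " $2 " > /dev/null 2>&1") == 0); next }
    $1 == "PARTNER_ID" { if (running) print "PARTNER_ID " $2; next }
    /^-+$/ || $0 == "XML_FILE" || NF == 0 { next }    # skip separators and headers
    running            { print "XML_FILE " $0 }
' myOutputData.txt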