I am trying to get only the first and third columns of the following output in the Linux terminal. How can I do this?
My actual output:
akamai-1576314300-xhf78 0/1 Completed 0 5d4h
akamai-1576400700-6m84q 0/1 Completed 0 4d4h
Output I need after using awk:
akamai-1576314300-xhf78 Completed
akamai-1576400700-6m84q Completed
I am using kubectl get pods | awk '{print $1 print $3}'
but it is not working...
This is what you are looking for:
kubectl get pods | awk '{ if ($3 == "Completed") { print $1 " " $3 }}'
Hope it helps!
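For reference, the original attempt fails because print $1 print $3 is not valid awk syntax; the fields passed to print must be separated by commas. If you simply want the first and third columns regardless of status, a minimal variant (same kubectl output assumed) would be:
kubectl get pods | awk '{print $1, $3}'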
Edit (to create an array of values):
IFS=$'\n' read -d '' -a myResults <<< "$( kubectl get pods | awk NF | awk '{ if ($3 == "Completed") { print $1 " " $3 }}' )"
And then :
$ echo "${myResults[1]}"
akamai-1576400700-6m84q Completed
$ echo "${myResults[0]}"
akamai-1576314300-xhf78 Completed
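To process every entry, you can also loop over the whole array; a quick usage sketch:
for r in "${myResults[@]}"; do echo "$r"; done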
In a Linux shell script, I've got the following awk command that serves other purposes and renames the file.
cat $edifile | awk -F\| '
{ OFS = "|"
print $0
} ' | tr -d "\012" > $newname.hl7
While this is happening, I'd like to grab the 5th field of the MSH segment and save it for later use in the script. Is this possible?
If not, how could I do it earlier or later in the script?
Example of the segment.
MSH|^~\&|business1|business2|/u/tmp/TR0049-GE-1.b64|routing|201811302126||ORU^R01|20181130212105810|D|2.3
What I want to do is retrieve the path and file name in MSH 5 and concatenate it to the end of the new file.
I've used this to capture the data, but no luck. If fpth is getting set, there is no evidence of it, and I don't have the right syntax for an echo within the awk block.
cat $edifile | awk -F\| '
{ OFS = "|"
{fpth=$(5)}
print $0
} ' | tr -d "\012" > $newname.hl7
Any suggestions?
Thank you!
Try
filename=`awk -F'|' '{print $5}' $edifile | head -1`
You can skip piping through head if the file has only a single line.
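Once captured, the value can be reused later in the script. A sketch, assuming $newname is set as in your snippet and that "concatenate it to the end of the new file" means appending the captured path to the file's contents:
filename=$(awk -F'|' '{print $5}' "$edifile" | head -1)
cat "$edifile" | tr -d "\012" > "$newname.hl7"
printf '%s' "$filename" >> "$newname.hl7"
If instead the value should become part of the new file's name, the same $filename variable can be used when building $newname.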
First of all, it must be mentioned that the awk stage in your first piece of code has no effect:
$ cat $edifile | awk -F\| ' { OFS = "|"; print $0 }' | tr -d "\012" > $newname.hl7
This is totally equivalent to
$ cat $edifile | tr -d "\012" > $newname.hl7
because OFS is only applied to rebuild $0 when you assign to a field.
Example:
$ echo "a|b|c" | awk -F\| '{OFS="/"; print $0}'
a|b|c
$ echo "a|b|c" | awk -F\| '{OFS="/"; $1=$1; print $0}'
a/b/c
I understand that you have an HL7 file containing a single line that starts with the string "MSH". From this line you want to store the 5th field; this is achieved in the following way:
fpth=$(awk -v outputfile="${newname}.hl7" '
BEGIN{FS="|"; ORS="" }
($1 == "MSH"){ print $5 }
{ print $0 > outputfile }' $edifile)
I have set ORS to an empty string, as it is equivalent to tr -d "\012". The above will work very nicely if you only have a single MSH segment in your file.
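If the file could contain more than one MSH segment, one option (a sketch of the same approach) is to capture only the first occurrence:
fpth=$(awk -v outputfile="${newname}.hl7" '
BEGIN{ FS="|"; ORS="" }
($1 == "MSH") && !seen { print $5; seen=1 }
{ print $0 > outputfile }' $edifile)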
Let's say I have the following text file:
$ cat file1.txt
MarkerName Allele1 Allele2 Freq1 FreqSE P-value Chr Pos
rs2326918 a g 0.8510 0.0001 0.5255 6 130881784
rs2439906 c g 0.0316 0.0039 0.8997 10 6870306
rs10760160 a c 0.5289 0.0191 0.8107 9 123043147
rs977590 a g 0.9354 0.0023 0.8757 7 34415290
rs17278013 t g 0.7498 0.0067 0.3595 14 24783304
rs7852050 a g 0.8814 0.0006 0.7671 9 9151167
rs7323548 a g 0.0432 0.0032 0.4555 13 112320879
rs12364336 a g 0.8720 0.0015 0.4542 11 99515186
rs12562373 a g 0.7548 0.0020 0.6151 1 164634379
Here is an awk command which prints MarkerName if Pos >= 11000000
$ awk '{ if($8 >= 11000000) { print $1 }}' file1.txt
This command outputs the following:
MarkerName
rs2326918
rs10760160
rs977590
rs17278013
rs7323548
rs12364336
rs12562373
Question: I would like to feed this into a grep statement to parse another text file, textfile2.txt. Somehow, one pipes the output from the previous awk command into grep AWKOUTPUT textfile2.txt
I would like each row of the awk command above to be grepped against textfile2.txt, i.e.
grep "rs2326918" textfile2.txt
## and then
grep "rs10760160" textfile2.txt
### and then
...
Naturally, I would save all resulting rows from textfile2.txt into a final file, i.e.
$ awk '{ if($8 >= 11000000) { print $1 }}' file1.txt | grep PIPE_OUTPUT_BY_ROW textfile2.txt > final.txt
How does one grep from a pipe line by line?
EDIT: To clarify, the one constraint I have is that file1.txt is actually the output of a previous pipe. (I'm trying to simplify the question somewhat.) How would that change the answer?
awk + grep solution:
grep -f <(awk '$8 >= 11000000{ print $1 }' file1.txt) textfile2.txt > final.txt
-f file - obtain patterns from file, one per line
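Note that, as the sample output above shows, the header line (MarkerName) also passes the $8 >= 11000000 test, because comparing the non-numeric Pos falls back to a string comparison. If that is unwanted, skipping the first line should do it:
grep -f <(awk 'FNR > 1 && $8 >= 11000000 { print $1 }' file1.txt) textfile2.txt > final.txt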
You can use bash to do this:
bash-3.1$ echo "rs2326918" > filename2.txt
bash-3.1$ (for i in `awk '{ if($8 >= 11000000) { print $1 }}' file1.txt |
grep -v MarkerName`; do grep $i filename2.txt; done) > final.txt
bash-3.1$ cat final.txt
rs2326918
Alternatively,
bash-3.1$ cat file1.txt | (for i in `awk '{ if($8 >= 11000000) { print $1 }}' |
grep -v MarkerName`; do grep $i filename2.txt; done) > final.txt
The -v switch tells grep to invert its usual behavior and print all lines that do not match the pattern.
Using awk alone can also do this for you:
$ awk 'NR>1 && NR==FNR {if ($8 >= 11000000) a[$1]++; next} \
  { for(i in a){if($0~i) print}}' file1.txt textfile2.txt > final.txt
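Regarding the edit: if file1.txt is itself the output of a previous pipe, the pattern list can be fed to grep on standard input instead of a file. With GNU grep, -f - reads the patterns from stdin; a sketch, where previous_command stands for whatever produced that output:
previous_command | awk 'FNR > 1 && $8 >= 11000000 { print $1 }' | grep -f - textfile2.txt > final.txt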
I have these command lines:
grep -e "[0-9] ERROR" /home/aa/lab/utb/cic/nova-all.log | awk '{ print $6 }' | awk -F'-' '{print $3""$2""$1}' | cut -c 1-4,7-8 > part1date.txt
grep -e "[0-9] ERROR" /home/aa/lab/utb/cic/nova-all.log | awk '{ print $3" "$4" "$5" "$9 }' > part1rest.txt
grep -e "[0-9] ERROR" /home/aa/lab/utb/cic/nova-all.log | awk '{ s = ""; for (i = 15; i <= NF; i++) s = s $i " "; print s}' > part1end.txt
paste -d ' ' part1date.txt part1rest.txt part1end.txt > temp.txt
rm part1*
cat temp.txt
The first three commands each save their output in a text file.
Then I merge the columns of these files into one file to show the output.
Can someone help me do the same in one command line, without saving intermediate text files?
This pipeline is used to change output lines like:
sep 10 11:13:55 node-20 nova-scheduler 2014-10-12 10:36:55.675 3817 ERROR nova.scheduler....
to this format:
ddmmyy hh:mm:ss node-xx PROCESS LOGLEVEL MESSAGE
that is, it reorders the columns and reformats the date.
awk '/[0-9] ERROR/{gsub("-","",$6);$2=$6;$6=$9;for(i=0;++i<=NF;)$i=i<6?$(i+1):$(i+9);NF-=9;print}' file
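For readability, here is the same one-liner spread out with comments (behavior unchanged):
awk '/[0-9] ERROR/ {
    gsub("-", "", $6)                     # 2014-10-12 -> 20141012
    $2 = $6                               # stash the date in $2 so the shift below makes it column 1
    $6 = $9                               # stash the log level in $6 so the shift makes it column 5
    for (i = 0; ++i <= NF;)               # new $1-$5 = old $2-$6 (date, time, node, process, level);
        $i = i < 6 ? $(i + 1) : $(i + 9)  # new $6 onward = old $15 onward (the message)
    NF -= 9                               # drop the leftover trailing fields
    print
}' file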
I have been trying to parse a PayPal email and insert the resulting info into a database of mine. I have most of the code working, but I cannot get a variable to insert into my awk code to create the SQL insert query.
if [ -f email-data.txt ]; then {
grep -e "Transaction ID:" -e "Receipt No: " email-data.txt \
>> ../temp
cat ../temp \
| awk 'NR == 1 {printf("%s\t",$NF)} NR == 2 {printf("%s\n",$NF)}' \
>> ../temp1
awk '{print $1}' $email-data.txt \
| grep # \
| grep -v \( \
| grep -v href \
>> ../address
email_addr=$(cat ../address)
echo $email_addr
cat ../temp1 \
| awk '{print "INSERT INTO users (email,paid,paypal_tran,CCReceipt) VALUES"; print "(\x27"($email_addr)"\x27,'1',\x27"$2"\x27,\x27"$3"\x27);"}' \
> /home/linux014/opt/post-new-member.sql
The output looks like the following
INSERT INTO users (email,paid,paypal_tran,CCReceipt) VALUES('9MU013922L4775929 9MU013922L4775929',1,'9MU013922L4775929','');
Should look like
INSERT INTO users (email,paid,paypal_tran,CCReceipt) VALUES('dogcat#gmail.com',1,'9MU013922L4775929','1234-2345-3456-4567');
(Names changed to protect the innocent)
The trial data I am using is set out below
Apr 18, 2014 10:46:17 GMT-04:00 | Transaction ID: 9MU013922L4775929
You received a payment of $50.00 USD from Dog Cat (dogcat#gmail.com)
Buyer:
Dog Cat
dogcat#gmail.com
Purchase Details
Receipt No: 1234-2345-3456-4567
I cannot figure out why email_addr is not being inserted properly.
You are referencing a shell variable inside awk. The right way to do that is to create an awk variable using the -v option.
For example, say $email is your shell variable, then
... | awk -v awkvar="$email" '{do something with awkvar}' ...
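A minimal illustration of the -v mechanism, using a hypothetical value:
email="dogcat@example.com"
awk -v email="$email" 'BEGIN { print "email is " email }'
# prints: email is dogcat@example.com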
However, having said that, here is how I would try and parse the text file:
awk '
/Transaction ID:/ { tran = $NF }
/Receipt No:/ { receipt = $NF }
$1 ~ /#/ { email = $1 }
END {
print "INSERT INTO users (email,paid,paypal_tran,CCReceipt) VALUES";
print "("q email q","1","q tran q","q receipt q");"
}' q="'" data.txt
Output:
INSERT INTO users (email,paid,paypal_tran,CCReceipt) VALUES
('dogcat#gmail.com',1,'9MU013922L4775929','1234-2345-3456-4567');
I have an awk command to extract information from mount points (see the accepted answer in How to extract NFS information from mount on Linux and Solaris?):
awk -F'[: ]' '{if(/^\//)print $3,$4,$1;else print $1,$2,$4}'
I would like to include a dig lookup in this awk command to lookup the IP of hostnames. Unfortunately, the mount command sometimes include an IP and sometimes a hostname. I tried the following, but it has an unwanted newline, unwanted return code and does not work if there is an IP address:
For hostnames
echo "example.com:/remote/export on /local/mountpoint otherstuff" | awk -F'[: ]' '{if(/^\//)print system("dig +short " $3),$4,$1;else print system("dig +short " $1),$2,$4}'
Returns
93.184.216.119
0 /remote/export /local/mountpoint
For IPs
echo "93.184.216.119:/remote/export on /local/mountpoint otherstuff" | awk -F'[: ]' '{if(/^\//)print system("dig +short " $3),$4,$1;else print system("dig +short " $1),$2,$4}'
Returns
0 /remote/export /local/mountpoint
I would like to retrieve the following in both cases
93.184.216.119 /remote/export /local/mountpoint
Update:
It seems that some versions of dig return the IP when an IP is provided as query and others return nothing.
Solution:
Based on the accepted answer I used the following adapted awk command:
awk -F'[: ]' '{if(/^\//) { system("dig +short "$3" | grep . || echo "$3" | tr -d \"\n\""); print "",$4,$1 } else { system("dig +short "$1" | grep . || echo "$1" | tr -d \"\n\"");print "",$2,$4 };}'
The additional grep . || echo "$3" ensures that the input IP/hostname is returned if dig outputs nothing.
The system command in awk executes a command and returns its exit status. Consider this:
$ awk 'END { print "today is " system("date") " and sunny" }' < /dev/null
Tue Jan 7 20:19:28 CET 2014
today is 0 and sunny
The date command outputs the date and a newline. When run from awk, the same thing happens. In this example, system() finishes (and prints the date) before print outputs its own text, so first we see the line with the date, and on the next line our text containing the return value 0 of system.
To get what we want, we need to split this into multiple commands, and we don't need the return value of system:
$ awk 'END { printf "today is "; system("date | tr -d \"\n\""); print " and sunny" }' < /dev/null
today is Tue Jan 7 20:24:01 CET 2014 and sunny
To prevent the newline after date, we piped its output to tr -d "\n".
Long story short, change from this:
print system(...), $2, $4
to this:
system(... | tr -d \"\n\"); print "", $2, $4
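Applied to the original mount-parsing command, that change might look like this (a sketch along the lines of the asker's adapted solution above, without the grep . || echo fallback for the plain-IP case):
echo "example.com:/remote/export on /local/mountpoint otherstuff" |
awk -F'[: ]' '{
    if (/^\//) { system("dig +short " $3 " | tr -d \"\n\""); print "", $4, $1 }
    else       { system("dig +short " $1 " | tr -d \"\n\""); print "", $2, $4 }
}'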