How to pipe awk output to netcat?

I want to transfer the first field of a file. I am using awk to pull the first field and then send it using netcat, but I don't get anything on the other side. I am using the following command:
awk -F, '{print $1}' sample.csv | netcat -lk 9999
Any hints would be much appreciated.
Regards,
Laeeq

I ran into this same problem when piping awk output to netcat. It turns out awk buffers its output heavily.
The output can be flushed after each line with the fflush() function. The following works for me:
awk -F, '{print $1; fflush()}' sample.csv | netcat -lk 9999
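To see the buffering difference without netcat, you can try a slow producer (a sketch; the for-loop merely stands in for a trickling data source). Without fflush(), nothing appears until awk's buffer fills or awk exits; with it, each first field comes through immediately:

```shell
# Emit one CSV line per second; with fflush() each first field is
# pushed through the pipe as soon as it is printed.
(for i in 1 2 3; do echo "field$i,rest"; sleep 1; done) |
awk -F, '{print $1; fflush()}'
```

With the fflush() call removed, the same pipeline typically prints all three fields at once at the end.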

Related

awk print several substring

I would like to be able to print several substrings via awk.
Here is an example of what I usually do:
awk '{print substr($0,index($0,string),10)}' test.txt > result.txt
This lets me print the 10 letters following the occurrence of my string.
But the result is only the first substring, instead of the several I expected.
Here is an example using the string "ATGC":
test.txt
ATGCATATAAATGCTTTTTTTTT
result.txt
ATGCATATAA
instead of
ATGCATATAA
ATGCTTTTTT
What do I have to add?
I'm sure the answer is easy for you guys!
Thank you for your help.
If you have gawk (gnu awk), you can make use of FPAT:
awk -v FPAT='ATGC.{6}' '{for(i=1;i<=NF;i++)print $i}' file
With your example:
$ awk -v FPAT='ATGC.{6}' '{for(i=1;i<=NF;i++)print $i}' <<<"ATGCATATAAATGCTTTTTTTTT"
ATGCATATAA
ATGCTTTTTT
awk '{print substr($0,1,10),RS substr($0,length-12,10)}' file
ATGCATATAA
ATGCTTTTTT
(Note this hard-codes offsets taken from the sample line, so it only works for input of this exact shape.)

isolate similar data from stream

We parse data of the following format -
35953539535393 BG |..|...|REF_DATA^1^Y^|...|...|
35953539535393 B |..|...|REF_DATA_IND^1^B^|...|...|
We need to print the unique values of REF_DATA* appearing in the file, using a script.
So, the output for the above data would be:
REF_DATA^1^Y^
REF_DATA_IND^1^B^
How do we achieve this as a one-liner using grep, sed or awk?
This might work for you (GNU sed & sort):
sed '/\n/!s/[^|]*REF_DATA[^|]*/\n&\n/;/^[^|]*REF_DATA/P;D' file | sort -u
Surround the wanted strings with newlines, print only those strings on their own lines, and sort the result, keeping unique values.
Could you please try the following and let me know if this helps you.
awk 'match($0,/REF_DATA[^|]*/){val=substr($0,RSTART,RLENGTH);if(!array[val]++){print val}}' Input_file
Here is a non-one-liner form of the solution too:
awk '
match($0,/REF_DATA[^|]*/){
  val=substr($0,RSTART,RLENGTH)
  if(!array[val]++){
    print val
  }
}' Input_file
Assuming you have GNU grep:
command_to_produce_data | grep -oP '(?<=[|])REF_DATA.+?(?=[|])' | sort -u
awk -F\| '{print $4}' file
REF_DATA^1^Y^
REF_DATA_IND^1^B^
(Note this assumes the REF_DATA* value is always the fourth |-separated field, and it does not remove duplicates.)
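For reference, feeding the question's two sample lines to the match()-based one-liner shows each REF_DATA* token printed exactly once (the array acts as a seen-set):

```shell
# Feed the sample data to the match()-based filter; the seen[] array
# drops repeats so each distinct token prints once, in input order.
printf '%s\n' \
  '35953539535393 BG |..|...|REF_DATA^1^Y^|...|...|' \
  '35953539535393 B |..|...|REF_DATA_IND^1^B^|...|...|' |
awk 'match($0,/REF_DATA[^|]*/){v=substr($0,RSTART,RLENGTH); if(!seen[v]++) print v}'
```

Unlike the sed | sort -u pipeline, this preserves the order in which the values first appear.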

Multiple awk print in single command

Here are the two commands we need to execute. They can be combined on one line with ; or |, but is there another way to do this with a single awk command?
The commands below are currently executed twice; is it possible to have one command with multiple awk print statements, as in the example command tried?
isi_classic snapshot usage | tail -n 1 | awk '{printf "\t\t\tSnapshot USED =%.1f%%\n", $4}'
Snapshot USED =0.6%
isi_classic snapshot usage | tail -n -1 | awk '{ print "\t\t\tSnapshot USED:" $1}'
Snapshot USED=3.2T
Example command tried:
isi_classic snapshot usage | tail -n 1 | awk '{printf "\t\t\tSnapshot USED =%.1f%%\n", $4}'; awk '{ print "\t\t\tSnapshot USED:" $1}'
Snapshot USED =0.6%
Snapshot USED=3.2T
You can definitely use a one-line command to do it:
isi_classic snapshot usage | awk -v OFS='\t\t\t' 'END{printf "%sSnapshot USED =%.1f%%\n%sSnapshot USED:%s\n",OFS,$4,OFS,$1}'
Brief explanation:
There is no need for tail; awk 'END{}' can do the same thing, because the last line's fields are still available there.
You can combine your printf and print commands into one.
It is more readable to define '\t\t\t' once as OFS and substitute it into the format string.
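A minimal illustration of why tail isn't needed: in a POSIX-conformant awk, $0 and the fields still hold the last input record inside the END block (some very old awks differ):

```shell
# In END, $1/$2 still refer to the last record read, so tail -n 1 is
# redundant; awk itself can report on the final line.
printf 'a 1\nb 2\nlast 3\n' | awk 'END{printf "%s:%s\n", $1, $2}'
# prints: last:3
```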

find pattern from log file using awk

[QFJ Timer]:2014-07-02 06:19:09,030:bla.all.com.bla.bla.ppp.xxx.abcsedf:
I would like to extract the date and time.
The date is no problem:
cat bla.log | awk -F: '{print $2}' | awk '{print $1}'
Now the issue is with the time.
If I do cat bla.log | awk '{print $3}' I get:
06:19:09,030:bla.all.com.bla.bla.ppp.xxx.abcsedf:
which means I need another grep here, right?
But I made many tries, also using 'FS', and couldn't get only the time.
Can someone please advise?
Thank you.
In the GNU version of awk FS can be a regexp:
echo "[QFJ Timer]:2014-07-02 06:19:09,030:bla.all.com.bla.bla.ppp.xxx.abcsedf:" |
awk -vFS=":|," '{ print $2":"$3":"$4;}'
which spits out
2014-07-02 06:19:09
Your left separator is ':' and the right one is ','; unfortunately, hours, minutes and seconds are also separated by your left separator. That is solved by re-joining $2, $3 and $4 with ':'. It is a quick and dirty solution, but it isn't going to be very robust.
You could use sed for this purpose,
$ echo '[QFJ Timer]:2014-07-02 06:19:09,030:bla.all.com.bla.bla.ppp.xxx.abcsedf:' | sed 's/^[^:]*:\([^,]*\).*/\1/g'
2014-07-02 06:19:09
cat bla.log | awk -F":" '{print $2":"$3":"$4}' | awk -F"," '{print $1}'
Which gets you:
2014-07-02 06:19:09
You can use grep, since it is meant for that:
grep -o '[0-9]\{4\}\(-[0-9]\{2\}\)\{2\}\(\( \|:\)[0-9]\{2\}\)\{3\}' log.file
or, a little bit simpler, egrep:
egrep -o '[0-9]{4}(-[0-9]{2}){2}(( |:)[0-9]{2}){3}' log.file

getting awk to filter endless pipes

With "cat /dev/tty0" I can read a continuous stream of input characters. The incoming "telegrams" are delimited by a newline character.
Now I want to filter this with awk, but I can't figure out how to get awk to start its analysis. It seems to wait for end of file; I see no output on stdout.
So this works, showing me the first word of each line:
cat /dev/tty0 > myfile (cancel sometime with Ctrl-C)
cat myfile | awk '{printf "%s\n",$1}'
But this does not; it shows nothing:
cat /dev/tty0 | awk '{printf "%s\n",$1}'
Any ideas?
Achim
This may be due to buffering. I can't seem to reproduce your results, but try this (stdbuf is part of coreutils):
stdbuf -i0 -o0 -e0 cat /dev/tty0 | awk '{printf "%s\n",$1}'
Or this (unbuffer is part of expect):
unbuffer cat /dev/tty0 | awk '{printf "%s\n",$1}'
Or this:
awk '{printf "%s\n",$1}' < /dev/tty0
You could try using tail -f.
tail -f /dev/tty0 | awk '{printf "%s\n",$1}'
tail has other options which may be of use, such as --pid=nnn, which stops the tail when a given process dies. Capital -F may also work, depending on your flavour.
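One more caveat, assuming you later pipe awk's output onward (not shown in the question): when awk's stdout is a pipe rather than a terminal, awk block-buffers too, so flush each record explicitly just as in the netcat question at the top. The tee stage and log filename here are hypothetical:

```shell
# Hypothetical downstream stage (tee): without fflush(), awk would
# hold its output in a block buffer once its stdout is a pipe.
tail -f /dev/tty0 | awk '{printf "%s\n", $1; fflush()}' | tee words.log
```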