Piping commands into awk script - awk

cat sample.log.txt|
grep INext-DROP-DEFLT|
sed -e 's/^.*SRC=//' -e 's/ .*DPT=/:/' -e 's/ .*$//' |
sort|
uniq|
ip.awk
I'm trying to feed the output of this pipeline into an awk script, but I don't know how to pipe into it. ip.awk is the script I want to send the data to.
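For the final stage to work, ip.awk has to be runnable. Two common ways, sketched below with a stand-in one-line ip.awk (the real script's contents are unknown): pass it to `awk -f`, or give it a shebang and execute permission.

```shell
# Stand-in ip.awk; the real script's body is unknown
cat > ip.awk <<'EOF'
{ print "ip:", $0 }
EOF

# Option 1: invoke the script via awk -f as the last pipeline stage
printf '1.2.3.4:80\n' | awk -f ip.awk

# Option 2: give the script a shebang and execute it directly
printf '#!/usr/bin/awk -f\n{ print "ip:", $0 }\n' > ip2.awk
chmod +x ip2.awk
printf '1.2.3.4:80\n' | ./ip2.awk
```

With option 2 the original pipeline works as written, as `... | uniq | ./ip.awk` (the `./` is needed unless the script is on `$PATH`).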

Related

Running "ip | grep | awk" within a sed replacement

Problem Set (Raspberry Pi OS):
I have a file example.conf that contains a line IPv4addr=XXXXX. I am attempting to change this to the IP generated by the command
ipTest=$(ip --brief a show | grep eth0 | awk '{ print $3 }')
I want to automate this file change during a script install.sh, the line I am attempting is:
IPtest=$(ip --brief a show | grep eth0 | awk '{ print $3 }')
sudo sed -e "/IPv4addr/s/[^=]*$/$IPtest/" example.conf
Returns error:
sed: -e expression #1, char 32: unknown option to `s'
A simple replacement value works, such as SimpleTest='Works'.
Any thoughts? I am open to other solutions as well; however, I am not an experienced Linux user, so I am using the tools I know from other problem sets.
$IPtest contains the / character; try something like this:
IPtest=$(ip --brief a show | grep eth0 | awk '{ print $3 }')
sudo sed -e '/IPv4addr/s#[^=]*$#'"$IPtest"'#' example.conf
You can shorten your command and let awk do grep's job at the same time:
IPtest=$(ip --brief a s | awk '/eth0/{print $3}')
Using sed grouping and backreferencing:
sed -i.bak "s|\([^=]*.\).*|\1$IPtest|" example.conf
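A minimal reproduction of the failure and the fix, with a made-up address (any value containing `/` triggers it):

```shell
val="192.168.1.5/24"                  # assumed shape of the awk output
printf 'IPv4addr=XXXXX\n' > example.conf

# With '/' as the s-command delimiter, the '/' inside $val would end
# the command early; using '#' as the delimiter avoids the clash.
sed -e '/IPv4addr/s#[^=]*$#'"$val"'#' example.conf
# IPv4addr=192.168.1.5/24
```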

Print paragraph if it contains a string stored in a variable (blank lines separate paragraphs)

I am trying to isolate the header of a mail in the /var/spool/mail/mysuser file.
Print a paragraph if it contains AAA (blank lines separate paragraphs)
sed is working when searching with the string "AAA"
$ sed -e '/./{H;$!d;}' -e 'x;/AAA/!d;' /var/spool/mail/mysuser
When using a variable it does not work:
$ MyVar="AAA"
$ sed -e '/./{H;$!d;}' -e 'x;/$MyVar/!d;' /var/spool/mail/mysuser
=> No output, as the single quotes prevent the expansion of the variable.
Trying with double quotes:
$ sed -e "/./{H;$!d;}" -e "x;/$MyVar/!d; /var/spool/mail/mysuser
sed: -e expression #2, char 27: extra characters after command
Actually, the first expression is also not working with double quotes:
$ sed -e "/./{H;$!d;}" -e 'x;/AAA/!d;" /var/spool/mail/mysuser
sed -e "/./{H;$!d;}" -e "x;/AAA/date;" /var/spool/mail/mysuser
sed: -e expression #2, char 9: extra characters after command
I am also considering awk, without success so far.
Any advice?
Should be trivial with awk:
$ awk -v RS= '/AAA/' file
with a variable, a little more is needed:
$ awk -v RS= -v var='AAA' '$0~var'
or if it's defined elsewhere
$ awk -v RS= -v var="$variable_holding_value" '$0~var'
That is happening because of the single quotes. You need to go out of the single quotes to enable interpolation:
sed -e '/./{H;$!d;}' -e 'x;/'$MyVar'/!d;' /var/spool/mail/mysuser
or, better, put the variable in double quotes:
sed -e '/./{H;$!d;}' -e 'x;/'"$MyVar"'/!d;' /var/spool/mail/mysuser
Thanks to karakfa
It works with:
MyVar="AAA"
awk -v RS= -v X="$MyVar" '$0~X' file
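Put together as a self-contained demo (the sample mail file is invented for illustration):

```shell
cat > mail.txt <<'EOF'
From: bob
Subject: AAA report

unrelated body text
more body text
EOF

MyVar="AAA"
# RS= switches awk to paragraph mode: blank-line-separated records
awk -v RS= -v X="$MyVar" '$0 ~ X' mail.txt
# From: bob
# Subject: AAA report
```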

A script to change file names

I am new to awk and shell programming. I have a bunch of files named file_0001.dat, file_0002.dat, ..., file_1000.dat. I want to rename them so that the number after file_ is four times the number in the old name. So I want to change
file_0001.dat to file_0004.dat
file_0002.dat to file_0008.dat
and so on.
Can anyone suggest a simple script to do it? I have tried the following, but without any success.
#!/bin/bash
a=$(echo $1 sed -e 's:file_::g' -e 's:.dat::g')
b=$(echo "${a}*4" | bc)
shuf file_${a}.dat > file_${b}.dat
This script will do the trick for you:
#!/bin/bash
for i in $(ls -r *.dat); do
    a=$(echo "$i" | sed -e 's/file_//g' -e 's/\.dat//g')
    almost_b=$(bc -l <<< "$a*4")
    b=$(printf "%04d" "$almost_b")
    rename "s/$a/$b/g" "$i"
done
Files before:
file_0001.dat file_0002.dat
Files after first execution:
file_0004.dat file_0008.dat
Files after second execution:
file_0016.dat file_0032.dat
Here's a pure bash way of doing it (without bc, rename or sed).
#!/bin/bash
for i in $(ls -r *.dat); do
prefix="${i%%_*}_"
oldnum="${i//[^0-9]/}"
newnum="$(printf "%04d" $(( 10#$oldnum * 4 )))"
mv "$i" "${prefix}${newnum}.dat"
done
To test it you can do
mkdir tmp && cd $_
touch file_{0001..1000}.dat
(paste code into convert.sh)
chmod +x convert.sh
./convert.sh
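One detail in the pure-bash version worth calling out: the `10#` prefix forces base-10 arithmetic. Without it, bash reads numbers with leading zeros as octal, so `0008` would be an invalid-octal error rather than 8:

```shell
n="0008"
echo $(( 10#$n * 4 ))   # prints 32
# echo $(( n * 4 ))     # error: "value too great for base"
```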
Using bash/sed/find:
files=$(find -name 'file_*.dat' | sort -r)
for file in $files; do
n=$(sed 's/[^_]*_0*\([^.]*\).*/\1/' <<< "$file")
let n*=4
nfile=$(printf "file_%04d.dat" "$n")
mv "$file" "$nfile"
done
ls -r1 | awk -F '[_.]' '{printf "%s %s_%04d.%s\n", $0, $1, 4*$2, $3}' | xargs -n2 mv
ls -r1 lists the files in reverse order, to avoid naming conflicts
the awk part generates the old and new filename pair; for example, file_0002.dat becomes file_0002.dat file_0008.dat
xargs -n2 passes two arguments at a time to mv
This might work for you:
paste <(seq -f'mv file_%04g.dat' 1000) <(seq -f'file_%04g.dat' 4 4 4000) |
sort -r |
sh
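To see what this feeds the shell, drop the final `| sh` (limited here to two files). The `sort -r` renames the highest numbers first, so a new name never collides with a file that has not been renamed yet:

```shell
paste <(seq -f'mv file_%04g.dat' 2) <(seq -f'file_%04g.dat' 4 4 8) | sort -r
# Output (columns are tab-separated):
#   mv file_0002.dat  file_0008.dat
#   mv file_0001.dat  file_0004.dat
```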
This can help:
#!/bin/bash
for i in `cat /path/to/requestedfiles |grep -o '[0-9]*'`; do
count=`bc -l <<< "$i*4"`
echo $count
done

Setting one field at a time?

Trying to turn some butchered data into bar-delimited, unbutchered data...
Here's some sample data:
asd1276vdjs12897364vsk Tue Apr 2 08:19:12 2013 [pid 3] [words] FAIL UPLOAD: Client "00.005.006.006", "/0801NSJH.bbf", 0.00Kbyte/sec
into
asd1276vdjs12897364vsk|Tue Apr 2 08:19:12 2013|[pid 3]|[words]|FAIL UPLOAD: Client "00.005.006.006"|"/0801NSJH.bbf"|0.00Kbyte/sec
The regexes are simple enough, but I don't know how to say first field = regex, second field = regex, etc.
This sed chain is functional but kind of hacky; I'd like to make it work in gawk.
sed 's/ Sun/|Sun/'
sed 's/ Mon/|Mon/'
sed 's/ Tue/|Tue/'
sed 's/ Wed/|Wed/'
sed 's/ Thu/|Thu/'
sed 's/ Fri/|Fri/'
sed 's/ Sat/|Sat/'
sed -e 's% \[%|\[%g' -e 's%\] %\]|%g' -e 's%, %|%g'
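The seven day-name substitutions can be collapsed into a single ERE alternation, still in sed:

```shell
echo 'asd1276vdjs12897364vsk Tue Apr 2 08:19:12 2013' |
sed -E 's/ (Sun|Mon|Tue|Wed|Thu|Fri|Sat)/|\1/'
# asd1276vdjs12897364vsk|Tue Apr 2 08:19:12 2013
```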
$ cat tst.awk
{ print gensub(/\
([^[:space:]]+)[[:space:]]+\
([^[]+)[[:space:]]+\
([[][^]]+[]])[[:space:]]+\
([[][^]]+[]])[[:space:]]+\
([^,]+),[[:space:]]+\
([^,]+),[[:space:]]+\
/,
"\\1|\\2|\\3|\\4|\\5|\\6|","")
}
$ awk -f tst.awk file
asd1276vdjs12897364vsk|Tue Apr 2 08:19:12 2013|[pid 3]|[words]|FAIL UPLOAD: Client "00.005.006.006"|"/0801NSJH.bbf"|0.00Kbyte/sec

piping to awk hangs

I am trying to pipe tshark output to awk. The tshark command works fine on its own, and when piped to other programs such as cat, it works fine (real time printing of output). However, when piped to awk, it hangs and nothing happens.
sudo tshark -i eth0 -l -f "tcp" -R 'http.request.method=="GET"' -T fields -e ip.src -e ip.dst -e
tcp.srcport -e tcp.dstport -e tcp.seq -e tcp.ack | awk '{printf("mz -A %s -B %s -tcp \"s=%s sp=%s
dp=%s\"\n", $2, $1, $5, $4, $3)}'
Here is a simpler version:
sudo tshark -i eth0 -f "tcp" -R 'http.request.method=="GET"' | awk '{print $0}'
And to compare, the following works fine (although is not very useful):
sudo tshark -i eth0 -f "tcp" -R 'http.request.method=="GET"' | cat
Thanks in advance.
I had the same problem.
I have found some partial "solutions" that are not completely portable.
Some of them suggest using the fflush() or flush() awk functions, or the -W interactive option:
http://mywiki.wooledge.org/BashFAQ/009
I tried both and neither works, so awk is not the appropriate command here.
A few of them suggest using gawk, but it doesn't do the trick for me either.
cut command has the same problem.
My solution: in my case I just needed to add --line-buffered to grep, without touching the awk command, but in your case I would try:
sed -u
with the proper regular expression. For example:
sed -u 's_\(.*\) \(.*\) \(.*\) DIFF: \(.*\)_\3 \4_'
This expression gives you the 3rd and 4th columns separated by a TAB (typed with Ctrl+V then TAB). The -u option gives you unbuffered output; there is also the -l option for line-buffered output.
I hope you find this answer useful, although it is late.
Per our previous messages in comments, maybe it will work to force closing the input and emitting a linefeed.
sudo tshark -i eth0 -f "tcp" -R 'http.request.method=="GET"' ...... \
| {
awk '{print $0}'
printf "\n"
}
Note, no pipe between awk and printf.
I hope this helps.
I found the solution here https://superuser.com/questions/742238/piping-tail-f-into-awk (by John1024).
It says:
"You don't see it in real time because, for purposes of efficiency, pipes are buffered. tail -f has to fill up the buffer, typically 4 kB, before the output is passed to awk."
The proposed solution is to use the "unbuffer" or "stdbuf -o0" commands to disable buffering. It worked for me like this:
stdbuf -o0 tshark -i ens192 -f "ip" | awk '{print $0}'
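For reference, the two buffering knobs in play: `stdbuf` changes the producer's stdout buffering, and `fflush()` makes awk push out its own output after each print. A sketch with `seq` standing in for tshark (which needs a live capture interface):

```shell
# stdbuf -oL line-buffers the producer; fflush() flushes awk per line
stdbuf -oL seq 3 | awk '{ print "got", $0; fflush() }'
# got 1
# got 2
# got 3
```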