awk capability / cut capability

I am using the following ssh command to get a list of ids. Now I want to
get only the ids greater than a given number, say "231219" in this case. How can I incorporate that?
I also have a local file "ids_ignore.txt"; any id we put in this list should be ignored by the command.
Can awk or cut do the above?
ssh -p 29418 company.com gerrit query --commit-message --files --current-patch-set \
status:open project:platform/code branch:master |
grep refs | cut -f4 -d'/'
OUTPUT:
231222
231221
231220
231219
230084
229092
228673
228635
227877
227759
226138
226118
225817
225815
225246
223554
223527
223452
223447
226137

... | awk '$1 > max' max=8888 | grep -v -F -f ids_ignore.txt
Or, if you want to do it all with awk:
... | awk 'NR==FNR{ no[$1]++ }
NR!=FNR && $1 > max && ! no[$1]' max=NNN ids_ignore.txt -
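For example, a minimal sketch of the awk-only variant against a few of the sample ids above (the threshold is the asker's 231219; the ignore file lists one id just for the demo):
printf '231220\n' > ids_ignore.txt
printf '231222\n231221\n231220\n231219\n229092\n' |
awk 'NR==FNR{ no[$1]++ }
NR!=FNR && $1 > max && ! no[$1]' max=231219 ids_ignore.txt -
# 231222
# 231221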

cut cannot do numeric comparisons on the input fields; it is just a simple field-extraction tool. awk, however, can do the work of both grep and cut:
ssh -p 29418 company.com gerrit ... |
awk -F/ -v min=231219 '
NR == FNR {ignore[$1]; next}
/refs/ && $4>min && !($4 in ignore) {print $4}
' ids_ignore.txt -
The trailing - at the end of the awk command is important: it tells awk to read from stdin after it has read ids_ignore.txt.
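As a rough sanity check, here is a sketch with made-up lines shaped the way the grep/cut pipeline implies (refs/changes/NN/<id>/N, so the id ends up in the 4th '/'-separated field); ids_ignore.txt is created only for the demo:
printf '231219\n' > ids_ignore.txt
printf '  ref: refs/changes/22/231222/1\n  ref: refs/changes/19/231219/3\n' |
awk -F/ -v min=231219 '
NR == FNR {ignore[$1]; next}
/refs/ && $4>min && !($4 in ignore) {print $4}
' ids_ignore.txt -
# 231222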


AWK print between two characters

When I try this command:
/usr/bin/curl -s sketch*.zip "https://www.sketch.com/downloads/mac/" |\
grep 'download.sketchapp.com/sketch-' | awk 'NR==1{print $3}'
The output is:
content="0;URL='https://download.sketchapp.com/sketch-68.2-102594.zip
what I am looking to get is:
68.2
Any help would be appreciated.
It seems you want to extract the number after your pattern, only from the first matching row. You can use one grep command:
... | grep -oPm1 '(?<=download.sketchapp.com/sketch-)[^-]+'
or, since it is the 3rd field of the first matching row of the curl output that you want, you can use one awk command (split the field on the hyphen into an array and print the middle element):
awk '/download.sketchapp.com\/sketch-/ && NR==1 {split($3,a,"-"); print a[2]; exit}'
Using sed:
/usr/bin/curl -s sketch*.zip "https://www.sketch.com/downloads/mac/" | \
sed -n 's!.*download.sketchapp.com/sketch-\([^-]*\).*!\1!p;' | \
head -1
head gets rid of multiple matches; the sed command extracts the non-hyphen characters after download.sketchapp.com/sketch-.
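As a quick sanity check, piping the sample line from the question into the grep variant (assuming a GNU grep built with PCRE support for -P):
echo "content=\"0;URL='https://download.sketchapp.com/sketch-68.2-102594.zip" |
grep -oPm1 '(?<=download.sketchapp.com/sketch-)[^-]+'
# 68.2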

Use awk to list files with spaces in them

I was doing some experimenting with awk last year. I wrote the following to modify git output:
git status -s | awk '{printf("\t%s: %s %s\n", FNR, $1, $2)}'
This outputs something like
1: M "_notes/Digital
2: M _notes/Perl.md
3: M "_notes/Tech
4: M _notes/vim.md
It works, but chokes when the filename contains a space, as in lines 1 and 3 of the example above. Note that I'm on macOS and these two files are surrounded by apostrophes:
'Digital Gardening.md'
'Tech Stuff.md'
How can I modify this so it will output files with spaces properly?
It seems you're just numbering the lines; you can simply do this instead:
$ git status -s | nl -s:
or, with awk
$ git status -s | awk '{print NR":",$0}'
With GNU awk 4:
git status -s | awk '{printf("\t%s: %s %s\n", FNR, $1, $2)}' FPAT="([^ ]+)|('[^']+')"
From its manual:
The value of FPAT should be a string that provides a regular expression. This regular expression describes the contents of each field.
In the above example, a field is either a run of non-space characters ([^ ]+) or a string quoted with ': ('[^']+').
The git command has a -z option to print a NUL byte instead of a newline after each entry. You can combine -z with RS='\0' in gnu-awk:
git status -sz | awk -v RS='\0' '{col1=$1; $1=""; printf "\t%s: %s%s\n", FNR, col1, $0}'
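A minimal sketch of the FPAT idea (GNU awk 4+ only; the status line below is made up to mimic a filename quoted with apostrophes):
echo "M 'Digital Gardening.md'" |
awk '{printf("\t%s: %s %s\n", FNR, $1, $2)}' FPAT="([^ ]+)|('[^']+')"
#     1: M 'Digital Gardening.md'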

Printing only part of next line after matching a pattern

I want to print the next line after a match.
My file content like this:
SSID:CoreFragment
Passphrase:WiFi1234
SSID:CoreFragment_5G
Passphrase:WiFi1234
SSID:Aleph_inCar
Passphrase:1234567890
As per my search, e.g. if I find WIFI-3 (the SSID), then I want to print 1234ABCD. I used this command to search for the SSID:
grep -oP '^SSID:\K.+' file_name
After this search I want to print the Passphrase of that particular match.
I'm working on Ubuntu 18.04
ssid=$(grep -oP '^SSID:\K.+' list_wifi.txt)
for ssid in $(sudo iwlist wlp2s0 scan | grep ESSID | cut -d '"' -f2)
do
if [ $ssid == $ssid_name ]; then
echo "SSID found...";
fi
done
I want to print the next line after the match.
Another awk:
$ awk -F: -v s="$ssid" '$0=="SSID:"s{c=NR+1} c==NR{print $2; exit}' file
1234ABCD
This will only print the value if it's on the next line.
awk -F: '/WIFI-3/{getline;print $2; exit}' file
1234ABCD
Robustly (won't fail due to partial matches, etc.) and idiomatically:
$ awk -F':' 'f{print $2; exit} ($1=="SSID") && ($2=="WIFI-3"){f=1}' file
1234ABCD
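To see what "partial matches" means here, a small made-up example where a longer SSID happens to contain the one being searched for:
printf 'SSID:WIFI-30\nPassphrase:WRONG\nSSID:WIFI-3\nPassphrase:1234ABCD\n' > demo.txt
awk -F: '/WIFI-3/{getline; print $2; exit}' demo.txt
# WRONG       (the regex also matches SSID:WIFI-30)
awk -F':' 'f{print $2; exit} ($1=="SSID") && ($2=="WIFI-3"){f=1}' demo.txt
# 1234ABCD    (exact string comparison on the SSID field)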
Please try the following:
ssid="WIFI-3"
passphrase=$(grep -A 1 "^SSID:$ssid" file_name | tail -n 1 | cut -d: -f2)
echo "$passphrase"
which yields:
1234ABCD
Since the code tags changed the look of the samples, adding this now.
var=$(awk '/SSID:[a-zA-Z]+-[0-9]+/{flag=1;next} flag{sub(/.*:/,"");value=$0;flag=""} END{print value}' Input_file)
echo "$var"
Could you please try the following.
awk '/Passphrase/ && match($0,/WIFI-3 Passphrase:[0-9a-zA-Z]+/){val=substr($0,RSTART,RLENGTH);sub(/.*:/,"",val);print val;val=""}' Input_file
Using Perl
$ export ssid="WIFI-3"
$ perl -0777 -lne ' /SSID:$ENV{ssid}\s*Passphrase:(\S+)/ and print $1 ' yash.txt
1234ABCD
$ export ssid="Aleph_inCar"
$ perl -0777 -lne ' /SSID:$ENV{ssid}\s*Passphrase:(\S+)/ and print $1 ' yash.txt
1234567890
$
$ cat yash.txt
SSID:CoreFragment
Passphrase:WiFi1234
SSID:CoreFragment_5G
Passphrase:WiFi1234
SSID:Aleph_inCar
Passphrase:1234567890
SSID:WIFI-1
Passphrase:1234ABCD
SSID:WIFI-2
Passphrase:123456789
SSID:WIFI-3
Passphrase:1234ABCD
You can capture it in a variable as:
$ passphrase=$(perl -0777 -lne ' /SSID:$ENV{ssid}\s*Passphrase:(\S+)/ and print $1 ' yash.txt)
$ echo $passphrase
1234567890
$

Trying to print awk variable

I am not much of an awk user, but after some Googling I determined it would work best for what I am trying to do... the only problem is, I can't get it to work. I'm trying to print out the contents of sudoers while inserting the server name ($i) and a comma before each sudoers entry, since I'm directing the output to a .csv file.
egrep '^[aA-zZ]|^[%]' //$i/etc/sudoers | awk -v var="$i" '{print "$var," $0}' | tee -a $LOG
This is the output that I get:
$var,unixpvfn ALL = (root)NOPASSWD:/usr/bin/passwd
awk: no program given
Thanks in advance
egrep is superfluous here. Just awk:
awk -v var="$i" '/^[[:alpha:]%]/{print var","$0}' //"$i"/etc/sudoers | tee -a "$LOG"
Btw, you may also use sed:
sed "/^[[:alpha:]%]/s/^/${i},/" //"$i"/etc/sudoers | tee -a "$LOG"
You can save the grep and let awk do all the work:
awk -v svr="$i" '/^[aA-zZ%]/{print svr "," $0}' //$i/etc/sudoers | tee -a $LOG
If you put something between ".." in awk, it is a literal string and the variable won't be expanded. Also, don't put $ in front of a variable name: in awk, $ refers to a field (column), not to the variable you meant.
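A tiny sketch of the difference (the input line is hypothetical):
echo "alice bob" | awk -v who="carol" '{print "$who", who, $1}'
# $who carol alice
Inside the script, "$who" is just literal text, who is the awk variable passed with -v, and $1 is the first input field.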

Convert bash line to use in perl

How would I go about converting the following bash line into Perl? Could I run the system() command, or is there a better way? I'm looking for Perl to print out the accesses per day from my Apache access_log file.
In bash:
awk '{print $4}' /etc/httpd/logs/access_log | cut -d: -f1 | uniq -c
Prints the following:
632 [27/Apr/2014
156 [28/Apr/2014
awk '{print $4}' /etc/httpd/logs/access_log | cut -d: -f1 | uniq -c
perl -lane'
($val) = split /:/, $F[3]; # First colon-separated elem of the 4th field
++$c{$val}; # Increment number of occurrences of val
END { print for map { "$c{$_} $_" } keys %c } # Print results (in no particular order)
' access.log
Switches:
-l automatically appends a newline to the print statement.
-l also removes the newlines from lines read by -n (and -p).
-a splits the line on whitespace into the array @F.
-n loops over the lines of the input but does not print each line.
-e execute the given script body.
Your original command translated to a Perl one-liner:
perl -lane '($k) = $F[3] =~ /^(.*?):/; $h{$k}++ }{ print "$h{$_}\t$_" for keys %h' /etc/httpd/logs/access_log
You can collapse your whole pipeline into a single awk command, going from:
awk '{print $4}' /etc/httpd/logs/access_log | cut -d: -f1 | uniq -c
to
awk '{split($4,a,":");b[a[1]]++} END {for (i in b) print b[i],i}' /etc/httpd/logs/access_log
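As a rough check, a sketch with a couple of made-up log lines (real access_log lines carry the timestamp in field 4, which is all this relies on); note that for (i in b) makes no ordering guarantee:
printf '%s\n' \
  'host - - [27/Apr/2014:09:00:01 +0000] "GET / HTTP/1.1" 200 10' \
  'host - - [27/Apr/2014:10:00:02 +0000] "GET / HTTP/1.1" 200 10' \
  'host - - [28/Apr/2014:09:00:03 +0000] "GET / HTTP/1.1" 200 10' |
awk '{split($4,a,":");b[a[1]]++} END {for (i in b) print b[i],i}'
# 2 [27/Apr/2014
# 1 [28/Apr/2014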