Extract a value from a file in a bash script - awk

I have this content in my sample file "haproxy-monitoring.conf":
[[inputs.haproxy]]
servers = ["http://localhost:31330/haproxy?stats" ]
Can you please help me extract just the port number '31330' from the file haproxy-monitoring.conf in a bash script?

with sed
$ sed -rn '/servers/s/.*:([0-9]+).*/\1/p' file
or similarly with awk
$ awk '/servers/{print gensub(/.*:([0-9]+).*/,"\\1",1)}' file

awk -F'[:/]' '/servers/{print $5}' file
31330

Or something like
grep -Eo '[0-9]+' file

Questions that state what the output should be without explaining why leave you open to all sorts of answers unrelated to what you are really trying to do. I don't know if this is what you want or not, since you haven't told us:
$ tr -cd '0-9' < file
31330
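To sanity-check the sed variant above, here is a self-contained sketch (GNU sed assumed for -r; the sample file is recreated inline from the question):

```shell
# Recreate the sample file from the question
printf '%s\n' '[[inputs.haproxy]]' \
  'servers = ["http://localhost:31330/haproxy?stats" ]' > haproxy-monitoring.conf

# The greedy .* skips to the last ':' on the matching line, leaving only the port
port=$(sed -rn '/servers/s/.*:([0-9]+).*/\1/p' haproxy-monitoring.conf)
echo "$port"    # 31330
```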


Adding hashtag to files

I have an awk program, add_hashtag.awk:
BEGIN{printf("#")}1
and a bash program
for file in *.asc; do awk -f add_hashtag.awk "$file" > "$file"_in; done
that prepends a # to each file's output. It works; however, I would like the output files to keep their original names. When I run
for file in *.asc; do awk -f add_hashtag.awk "$file" > "$file"; done
I get files that contain only #.
How can I do that? Thank you.
Could you please try the following.
for file in *.asc; do awk -f add_hashtag.awk "$file" > "temp_file" && mv "temp_file" "$file"; done
I am going with an approach that writes the output to a temp_file and later renames it to the input file, so there is no danger of losing or truncating the actual input file. It will not rename temp_file to the actual input file unless the awk command succeeds (thanks to the &&).
With gawk version 4.1.0 or later, try (untested, since no samples were given):
awk -i inplace -f add_hashtag.awk *.asc
Or, in case you want to edit the files in place while also keeping a backup:
awk -i inplace -v INPLACE_SUFFIX=.backup -f add_hashtag.awk *.asc
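To see the temp-file approach end to end, here is a self-contained sketch (sample.asc and its contents are made up for the demo; add_hashtag.awk is recreated from the question):

```shell
# Recreate the question's awk program and one sample input file
printf 'BEGIN{printf("#")}1\n' > add_hashtag.awk
printf 'line1\nline2\n' > sample.asc

# Write to temp_file first; only replace the original if awk succeeded
for file in *.asc; do
  awk -f add_hashtag.awk "$file" > temp_file && mv temp_file "$file"
done

head -n 1 sample.asc    # #line1
```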

bzip2: Input file file.txt has 1 other link

When calling
bzip2 file.txt
I get this error message
bzip2: Input file file.txt has 1 other link
I'm using OSX, but I think this problem is not specific to OSX, so I'm asking here.
I solved it using the force flag: -f
Don't know why.
My solution was to copy the file:
cp file.txt tmp
rm file.txt
mv tmp file.txt
bzip2 file.txt
But perhaps someone could explain it anyway?
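For what it's worth, the likely explanation: bzip2 deletes its input file after compressing it, and when another hard link exists the uncompressed data would live on under the other name, so by default it refuses and makes you opt in with -f. A sketch to reproduce it (file names are arbitrary):

```shell
echo 'hello' > file.txt
ln file.txt other-link.txt     # file.txt now has 1 other (hard) link

bzip2 file.txt || true         # warns "has 1 other link" and leaves the file alone
bzip2 -f file.txt              # -f overrides the check and compresses anyway

ls file.txt.bz2                # the compressed file
cat other-link.txt             # hello -- the other link keeps the uncompressed data
```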

AWK to process compressed files and printing original (compressed) file names

I would like to process multiple .gz files with gawk.
I was thinking of decompressing and passing it to gawk on the fly
but I have an additional requirement to also store/print the original file name in the output.
The thing is, there are hundreds of rather large .gz files to process.
I'm looking for anomalies (~0.001% of rows) and want to print out the list of found inconsistencies ALONG with the file name and the row number that contained each one.
If I could have all the files decompressed I would simply use FILENAME variable to get this.
Because of large quantity and size of those files I can't decompress them upfront.
Any ideas how to pass the filename (in addition to the gzip stdout) to gawk to produce the required output?
Assuming you are looping over all the files and piping their decompression directly into awk, something like the following will work.
for file in *.gz; do
gunzip -c "$file" | awk -v origname="$file" '.... {print origname " whatever"}'
done
Edit: To use a list of filenames from some source other than a direct glob, something like the following can be used.
$ ls *.awk
a.awk e.awk
$ while IFS= read -d '' filename; do
echo "$filename";
done < <(find . -name \*.awk -printf '%P\0')
e.awk
a.awk
Using xargs instead of the above loop will, I believe, require the body of the command to be in a pre-written script file, which can then be called with xargs and the filename.
This uses a combination of xargs and sh (to be able to pipe between two commands, gzip and awk):
find *.gz -print0 | xargs -0 -I fname sh -c 'gzip -dc fname | gawk -v origfile="fname" -f printbadrowsonly.awk >> baddata.txt'
I'm wondering if there's any bad practice with the above approach…
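One nitpick with the approach above: the filename is spliced into the shell string via -I fname, so a name containing quotes or $ could break (or even inject into) the command. A safer sketch passes each name as a positional argument instead; flag.awk and the 'bad' marker are stand-ins invented for this demo in place of printbadrowsonly.awk:

```shell
# Create two small .gz files and a stand-in anomaly-detection script
printf 'ok\nbad\n' | gzip > a.gz
printf 'ok\nok\n'  | gzip > b.gz
printf '/bad/{print origfile ":" FNR ": " $0}\n' > flag.awk

# $1 is the filename xargs hands to each sh invocation; nothing is spliced into the string
find . -name '*.gz' -print0 |
  xargs -0 -n 1 sh -c 'gzip -dc "$1" | awk -v origfile="$1" -f flag.awk' sh > report.txt

cat report.txt    # ./a.gz:2: bad
```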

Hardcode input file

Given this script
#!/bin/awk -f
{
print $1
}
It can be called like so
foo.awk foo.txt
However, I would like the script to always read foo.txt, so it can be called without the input file, like this:
foo.awk
#!/bin/awk -f
BEGIN {ARGV[ARGC++] = "foo.txt"}
{print $1}
This will add foo.txt to the end of the arguments list, as if you had put it there on the command line. This has the added bonus of allowing you to extend your script to do more than just print, without having to put everything in the BEGIN block.
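A quick self-contained check of the ARGV approach (the foo.txt contents are made up; awk -f is used here so it runs regardless of where awk is installed):

```shell
printf 'alpha beta\ngamma delta\n' > foo.txt

# The BEGIN block appends foo.txt to the argument list before input is read
cat > foo.awk <<'EOF'
BEGIN { ARGV[ARGC++] = "foo.txt" }
{ print $1 }
EOF

awk -f foo.awk > out.txt    # no input file named on the command line
cat out.txt                 # alpha, then gamma
```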
I would use a shell wrapper for this:
#!/bin/bash
foo.awk foo.txt
ok, do this trick maybe? (cheat?)
#!/bin/sh
awk '{print $1}' foo.txt
you could name it as foo.awk
and then
chmod +x foo.awk
Now try ./foo.awk in the same directory.
EDIT
#!/bin/awk -f
BEGIN { while ((getline < "/path/to/foo.txt") > 0) print $1 }
Strange requirement; why doesn't a bash wrapper fit your needs? You did tag the question with shell.
Anyway, the above script should be what you need. (Note the > 0 check: a bare getline loop would spin forever if the file can't be read, since getline returns -1 on error.)

Execute a command in the reverse order of ids present in a file

I am running the following command using awk on file.txt. Currently it runs the command on the ids present in file.txt from top to bottom. I want the command to run in reverse order over the ids in file.txt. Any inputs on how we can do this?
git command $(awk '{print $1}' file.txt)
file.txt contains:
97a65fd1d1b3b8055edef75e060738fed8b31d3
fb8df67ceff40b4fc078ced31110d7a42e407f16
a0631ce8a9a10391ac4dc377cd79d1adf1f3f3e2
.....
If you aren't bound to using awk, then tail with the -r (for reverse) argument will do the trick on BSD/macOS; on GNU systems, tac does the same job...
myFile.txt
97a65fd1d1b3b8055edef75e060738fed8b31d3
fb8df67ceff40b4fc078ced31110d7a42e407f16
a0631ce8a9a10391ac4dc377cd79d1adf1f3f3e2
Now to print it in reverse...
$ tail -r myFile.txt
a0631ce8a9a10391ac4dc377cd79d1adf1f3f3e2
fb8df67ceff40b4fc078ced31110d7a42e407f16
97a65fd1d1b3b8055edef75e060738fed8b31d3
EDIT:
To output this to a file simply redirect it out...
$ tail -r myFile.txt > newFile.txt
EDIT:
Want to write to the same file? No problem!
tail -r myFile.txt > temp.txt; cat temp.txt > myFile.txt; rm temp.txt;
When I redirected tail -r to the same file it came back blank, because the shell truncates the output file before tail ever reads it; this workaround avoids that issue by writing to a temporary "buffer" file.
To reverse the lines in a file using awk, use
awk '{a[i++]=$0} END {for (j=i-1; j>=0;) print a[j--] }' file
Use $1 instead of $0 above to operate on the first field only instead of the whole line.
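A quick check of the reversal (the three ids are placeholders for the hashes in the question):

```shell
printf 'id1\nid2\nid3\n' > file.txt

# Buffer every line in an array, then print it back out in reverse order
awk '{a[i++]=$0} END {for (j=i-1; j>=0;) print a[j--]}' file.txt > reversed.txt
cat reversed.txt    # id3, id2, id1
```

With that, the original invocation becomes git command $(awk '{a[i++]=$0} END {for (j=i-1; j>=0;) print a[j--]}' file.txt), which feeds the ids bottom to top.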