Adding a hashtag to files - awk

I have an awk program add_hashtag.awk
BEGIN{printf("#")}1
and a bash program
for file in *.asc; do awk -f add_hashtag.awk "$file" > "$file"_in; done
that adds a hashtag to each file. It works; however, I would like the output files to keep the same names as the input files. When I run
for file in *.asc; do awk -f add_hashtag.awk "$file" > "$file"; done
I get files containing only #.
How can I do that? Thank you.

Could you please try the following.
for file in *.asc; do awk -f add_hashtag.awk "$file" > "temp_file" && mv "temp_file" "$file"; done
I am going with an approach that writes the output to a temp_file and only then renames it to the Input_file, so there is no danger of losing or truncating the actual Input_file. The temp_file will not be renamed to the actual Input_file unless the awk command succeeds (that is what the && is for). This is also why your original loop left files containing only #: the shell truncates "$file" before awk ever gets to read it.
With GNU awk version 4.1.0 or later, try the following (not tested, since no samples were given):
awk -i inplace -f add_hashtag.awk *.asc
Or, in case you want to edit the files in place while also keeping a backup of each:
awk -i inplace -v INPLACE_SUFFIX=.backup -f add_hashtag.awk *.asc
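For completeness, here is a minimal sketch of the temp-file approach inside your loop, assuming mktemp is available; the temporary file is created next to the original and is cleaned up if awk fails:
for file in *.asc; do
    tmp=$(mktemp "$file.XXXXXX") || exit 1      # unique temp file next to the original (assumes mktemp is available)
    awk -f add_hashtag.awk "$file" > "$tmp" && mv "$tmp" "$file" || rm -f "$tmp"
done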

awk ignore case isn't working

I am using this code to get IP entries from a hosts file, ignoring case, and it doesn't seem to work on AIX.
Input file:
172.23.1.230 enboprtpapzp04.digjam.com enboprtpapzp04
#172.23.0.33 enboprtpapzp04.digjam.com enboprt enboprtpapzp04
172.23.1.230 enboprtpapzp04.fixture.com enboprtpap enboprtpapzp04
awk -v client="$client" 'BEGIN {IGNORECASE = 1}{k=0; for (i=1;i<=NF;i++){if ($i==client){print $1}; k++}}' file
See the output below
client=ENBOPRTPAPZP04
awk -v client="$client" 'BEGIN {IGNORECASE = 1}{k=0; for (i=1;i<=NF;i++){if ($i==client){print $1}; k++}}' file
Nothing comes up
Expected output:
grep -i ENBOPRTPAPZP04 /etc/hosts | awk '{print $1}' | grep -v "^#"
172.23.1.230
172.23.1.230
It works here:
$ awk -v client="$client" 'BEGIN{IGNORECASE = 1} $2==client && /^[^#]/{print $1}' your_hosts
172.23.1.230
172.23.1.230
Are you sure you are using GNU awk? If not, you could:
$ awk -v client="$client" 'tolower($2)==tolower(client) && /^[^#]/{print $1}' your_hosts
In light of the recent edits to the question and the mention of the loop in the comments, I'll add this:
$ awk -v client="$client" '{for(i=1;i<=NF;i++) if(tolower($i)==tolower(client) && $1!~/^#/)print $1}' your_new_hosts
172.23.1.230
172.23.1.230
Also, check @EdMorton's last comment below for a non-looping version (a possible sketch of one follows after the next snippet).
The check for /^#/ could be moved out of the action block into the pattern (condition) part:
$ awk ... '!/^#/ {for(i=1;i<=NF;i++) if(tolower($i)==tolower(client)) print $1}' your_new_hosts
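For reference, one possible non-looping sketch along those lines (my own guess, not necessarily what that comment contained, and it assumes the client name has no regex metacharacters); it again relies on GNU awk's IGNORECASE:
$ awk -v client="$client" 'BEGIN{IGNORECASE=1} !/^#/ && $0 ~ "(^|[[:space:]])" client "([[:space:]]|$)" {print $1}' your_new_hosts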

Extract a value from a file in a bash script

I have this content in my sample file "haproxy-monitoring.conf":
[[inputs.haproxy]]
servers = ["http://localhost:31330/haproxy?stats" ]
Can you please help me extract just the port number '31330' from haproxy-monitoring.conf in a bash script?
With sed:
$ sed -rn '/servers/s/.*:([0-9]+).*/\1/p' file
or similarly with awk (gensub is specific to GNU awk):
$ awk '/servers/{print gensub(/.*:([0-9]+).*/,"\\1",1)}' file
awk -F'[:/]' '{print $5}' file
31330
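For reference, -F'[:/]' makes every ':' and '/' a field separator, so on the sample line the port lands in $5 (fields 2 and 3 are the empty strings between "http:" and the two slashes); this relies on the URL having exactly that shape. A quick way to see the split, if you want to verify it against your real file:
$ awk -F'[:/]' '/servers/{for (i=1; i<=NF; i++) print i": "$i}' haproxy-monitoring.conf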
Or something like
grep -Eo '[0-9]+' file
Questions that state what the output should be without explaining why leave you open to all sorts of answers unrelated to what you are really trying to do. I don't know if this is what you want or not, since you haven't told us:
$ tr -cd '0-9' < file
31330

Awk replacement pieces size limit

Trying to find a single word and replace it with the contents of a file. It works on macOS, but not under Linux.
Here is the awk that fails under Linux:
awk -v var="${blah}" '{sub(/%WORD%/,var)}1' file.xml
(file.xml is 122 lines, 4.7K)
Error is:
awk: program limit exceeded: replacement pieces size=255
The same file.xml under macOS, using a slightly different awk invocation, works fine:
awk -v var="${blah//$'\n'/\\n}" '{sub(/%WORD%/,var)}1'
Recompiling awk is not an option. This is Ubuntu 12.04, 32-bit.
You could use sed
FILE=`cat Filename`
sed "s/WORD/${FILE}/g" file.xml > newfile.xml
Turns out that good old 'replace' outperforms awk in this use case; who would have thought?
replace -v "%WORD%" "$blah" -- file.xml
Using GNU Awk version 4 and the readfile extension:
gawk -f a.awk file.xml
where a.awk is:
#load "readfile"
BEGIN{
var = readfile("blah")
if (var == "" && ERRNO != "")
print("problem reading file", ERRNO) > "/dev/stderr"
}
{
sub(/%WORD%/,var)
print
}
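If GNU awk 4 and the readfile extension are not available, here is a rough portable sketch (assuming, as above, that the replacement text lives in a file called blah); it splices the text in with index()/substr() instead of sub(), which may also sidestep the replacement-size limit reported above:
awk '
BEGIN {
    # slurp the replacement text; join lines with newlines, no trailing newline
    while ((getline line < "blah") > 0)
        var = (var == "" ? line : var "\n" line)
    close("blah")
}
{
    # splice var in by hand at the first %WORD% on the line (6 = length of "%WORD%")
    if ((i = index($0, "%WORD%")) > 0)
        print substr($0, 1, i - 1) var substr($0, i + 6)
    else
        print
}' file.xml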

Hardcode input file

Given this script
#!/bin/awk -f
{
print $1
}
It can be called like so
foo.awk foo.txt
However, I would like the script to always read foo.txt, so I would like to modify it so that it can be called without the input file, like this:
foo.awk
#!/bin/awk -f
BEGIN {ARGV[ARGC++] = "foo.txt"}
{print $1}
This will add foo.txt to the end of the arguments list, as if you had put it there on the command line. This has the added bonus of allowing you to extend your script to do more than just print, without having to put everything in the BEGIN block.
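A small variant on that (my own tweak, not part of the original answer): only append foo.txt when no file was given, so you can still override the input on the command line:
#!/bin/awk -f
BEGIN { if (ARGC == 1) ARGV[ARGC++] = "foo.txt" }   # default to foo.txt only when no file was passed
{ print $1 }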
I would use a shell wrapper for this:
#!/bin/bash
foo.awk foo.txt
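A variant of the wrapper (my own sketch; /path/to/foo.awk is a placeholder path) that also forwards any extra command-line arguments straight through to awk:
#!/bin/bash
exec awk -f /path/to/foo.awk foo.txt "$@"   # any extra arguments are passed on to awk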
OK, maybe try this trick (a bit of a cheat?):
#!/bin/sh
awk '{print $1}' foo.txt
You could name it foo.awk and then
chmod +x foo.awk
Now run ./foo.awk in the same directory.
EDIT
#!/bin/awk -f
BEGIN { while ((getline < "/path/to/foo.txt") > 0) print $1 }
Strange requirement; why doesn't a bash wrapper fit your needs? You did tag the question with shell.
Anyway, the above script should be what you need.

Execute a command in the reverse order of IDs present in a file

I am running the following command using awk on file.txt. Currently it runs the command on the IDs present in file.txt from top to bottom; I want the command to run on those IDs in reverse order. Any input on how to do this?
git command $(awk '{print $1}' file.txt)
file.txt contains:
97a65fd1d1b3b8055edef75e060738fed8b31d3
fb8df67ceff40b4fc078ced31110d7a42e407f16
a0631ce8a9a10391ac4dc377cd79d1adf1f3f3e2
.....
If you aren't bound to using awk then tail with the -r (for reverse) argument will do the trick...
myFile.txt
97a65fd1d1b3b8055edef75e060738fed8b31d3
fb8df67ceff40b4fc078ced31110d7a42e407f16
a0631ce8a9a10391ac4dc377cd79d1adf1f3f3e2
Now to print it in reverse...
$ tail -r myFile.txt
a0631ce8a9a10391ac4dc377cd79d1adf1f3f3e2
fb8df67ceff40b4fc078ced31110d7a42e407f16
97a65fd1d1b3b8055edef75e060738fed8b31d3
EDIT:
To output this to a file simply redirect it out...
$ tail -r myFile.txt > newFile.txt
EDIT:
Want to write to the same file? No problem!
tail -r myFile.txt > temp.txt; cat temp.txt > myFile.txt; rm temp.txt;
When I redirected tail -r back into the same file, it came back blank: the shell truncates the output file before tail gets to read it. This workaround avoids that issue by writing to a temporary "buffer" file first.
To reverse the lines in a file using awk, use
awk '{a[i++]=$0} END {for (j=i-1; j>=0;) print a[j--] }' file
Use $1 instead of $0 above to operate on the first field only instead of the whole line.
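Tying that back to the command in the question (git command is the placeholder used there), a minimal sketch would be:
git command $(awk '{a[i++]=$1} END {for (j=i-1; j>=0;) print a[j--]}' file.txt)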