Adding a character vertically in a column - awk

Input:
dog
fish
elephant
...
Output:
dog     |
fish    |
elephant|
...     |
I want to add a "|" as the 9th character of every row.

You should first space-pad the lines to the max line width (e.g. 8 chars, as you say).
Then you can use
sed 's/./&|/8' <padded.txt >output.txt
which appends the "|" after the 8th character, making it the 9th character of each line.
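Put together, a minimal sketch of the pad-then-mark pipeline (the sample input and the fixed width of 8 are assumptions taken from the example above):

```shell
# Pad every line to 8 characters with awk, then let sed append "|"
# after the 8th character, so "|" lands in column 9.
printf 'dog\nfish\nelephant\n' |
awk '{printf "%-8s\n", $0}' |
sed 's/./&|/8'
```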

Hard-coding the output field width:
$ awk '{printf "%-*s|\n",8,$0}' file
dog     |
fish    |
elephant|
...     |
or specifying the output field width as an argument:
$ awk -v wid=8 '{printf "%-*s|\n",wid,$0}' file
dog     |
fish    |
elephant|
...     |
or dynamically determining the output field width from the input field widths:
$ awk 'NR==FNR{lgth=length($0); wid=(lgth > wid ? lgth : wid); next} {printf "%-*s|\n",wid,$0}' file file
dog     |
fish    |
elephant|
...     |

If you need to further process the records, it might be a good idea to actually make the $0 9 chars wide:
$ awk '{$0=$0 sprintf("%" 9-length() "s","|")}1' file
Output:
dog     |
fish    |
elephant|
...     |

Related

AWK removes my spaces

I have data like
     31 text text t text ?::"!!/
      2 te text 32 +ěščřžý
     43 te www ##
It is output from uniq -c
I need to get something like
text text t text ?::"!!/
te text 32 +ěščřžý
te www ##
I tried to use something like
a=$1;
$1=""
$0=substr($0, 2);
printf $0;
print "";
But it removes my spaces and I get something like
text text t text ?::"!!/
te text 32 +ěščřžý
te www ##
And I need to save the number too.
Does anyone know how to do it?
I guess you want to remove the leading digits from each line; sed will be simpler for this task:
sed -E 's/^[0-9]+ //' file
awk rebuilds $0 and normalizes its whitespace when you assign to a field (like $1="") with the default FS. You can avoid that with sub in awk if there is more processing to do.
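For example, a sketch that keeps the count and leaves the rest of the line's spacing intact (the sample input is an assumption, ASCII stand-ins for the question's data):

```shell
# Stash the count from $1 (reading a field does not rebuild $0),
# then strip it with sub(), which edits $0 in place and so does not
# trigger awk's field re-splitting; internal spacing survives.
printf '      2 te  text 32 +x\n' |
awk '{n=$1; sub(/^ *[0-9]+ /, ""); print n ": " $0}'
```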
Try this one:
$ echo "31 text text t text" |awk '{gsub($1FS$2,$2);print}'
text text t text
You can also try
$ echo "31 text text t text" |awk '{gsub(/^[0-9]+/,"");print}'
text text t text
But in this case you will have a leading space in front of each line.
$ seq 5 | uniq -c
      1 1
      1 2
      1 3
      1 4
      1 5
$ seq 5 | uniq -c | awk '{sub(/^ *[^ ]+ +/,"")}1'
1
2
3
4
5
$ seq 5 | uniq -c | sed 's/^ *[^ ]* *//'
1
2
3
4
5

Need to rearrange and sum columns in a Solaris command

I have below data named atp.csv file
Date_Time,M_ID,N_ID,Status,Desc,AMount,Type
2015-01-05 00:00:00 076,1941321748,BD9010423590206,200,Transaction Successful,2000,PRETOP
2015-01-05 00:00:00 077,1941323504,BD9010423590207,351,Transaction Successful,5000,PRETOP
2015-01-05 00:00:00 078,1941321743,BD9010423590205,200,Transaction Successful,1500,PRETOP
2015-01-05 00:00:00 391,1941323498,BD9010500000003,200,Transaction Successful,1000,PRETOP
I want to count by status using the command below:
cat atp.csv|awk -F',' '{print $4}'|sort|uniq -c
The output is like below:
3 200
1 351
But I want output like below, and I also want to sum the amount column by status:
200,3,4500
351,1,5000
That is, status first and then the count value. Please help.
AWK has associative arrays.
% cat atp.csv | awk -F, 'NR>1 {n[$4]+=1;s[$4]+=$6;} END {for (k in n) { print k "," n[k] "," s[k]; }}' | sort
200,3,4500
351,1,5000
In the above:
The first line (record) is skipped with NR>1.
n[k] is the number of occurrences of key k (so we add 1), and s[k] is the running sum of the values in field 6 (so we add $6).
Finally, after all records are processed (END), we iterate over the associative arrays by key (for (k in n) { ... }) and print each key with the values in n and s associated with it.
You can try this awk version also
awk -F',' '{print $4,",", a[$4]+=$6}' FileName | sort -r | uniq -cw 6 | sort -r
Output :
3 200 , 4500
1 351 , 5000
Another Way:
awk -F',' '{print $4,",", a[$4]+=$6}' FileName | sort -r | uniq -cw 6 |sort -r | sed 's/\([^ ]\+\).\([^ ]\+\).../\2,\1,/'
All in (g)awk
awk -F, 'NR>1{a[$4]++;b[$4]+=$6}
END{n=asorti(a,c);for(i=1;i<=n;i++)print c[i]","a[c[i]]","b[c[i]]}' file
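asorti() is gawk-only, which a stock Solaris awk may not have. A portable sketch that produces the same sorted output with POSIX tools, piping the unsorted END loop through sort (the sample file recreates the question's atp.csv):

```shell
# Sample of the question's atp.csv
cat > atp.csv <<'EOF'
Date_Time,M_ID,N_ID,Status,Desc,AMount,Type
2015-01-05 00:00:00 076,1941321748,BD9010423590206,200,Transaction Successful,2000,PRETOP
2015-01-05 00:00:00 077,1941323504,BD9010423590207,351,Transaction Successful,5000,PRETOP
2015-01-05 00:00:00 078,1941321743,BD9010423590205,200,Transaction Successful,1500,PRETOP
2015-01-05 00:00:00 391,1941323498,BD9010500000003,200,Transaction Successful,1000,PRETOP
EOF

# POSIX awk: count and sum per status, then sort numerically on the status key
awk -F, 'NR>1 {n[$4]++; s[$4]+=$6}
         END  {for (k in n) print k "," n[k] "," s[k]}' atp.csv |
sort -t, -k1,1n
```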

Exclude characters using awk

I am trying to find a way to exclude numbers in a file when I cat it, but I only want to exclude the numbers in print $1, and I want to keep the number that is in front of the word. I have something that I thought might work, but it is not quite giving me what I want. I have also shown an example of what the file looks like. The file is separated by pipes.
cat files | awk -F '|' ' {print $1 "\t" $2}' |sed 's/0123456789//g'
input:
b1ark45 | dog23 | brown
m2eow66| cat24 |yellow
h3iss67 | snake57 | green
Output
b1ark dog23
m2eow cat24
h3iss snake57
try this:
awk -F'|' -v OFS='|' '{gsub(/[0-9]/,"",$1)}7' file
the output of your example would be:
bark | dog23 | brown
meow| cat24 |yellow
hiss | snake57 | green
EDIT
this outputs col1 (without trailing numbers and spaces) and col2, separated by a <tab>:
kent$ echo "b1ark45 | dog23 | brown
m2eow66| cat24 |yellow
h3iss67 | snake57 | green"|awk -F'|' -v OFS='\t' '{gsub(/[0-9]*\s*$/,"",$1);print $1,$2}'
b1ark dog23
m2eow cat24
h3iss snake57
This might work for you (GNU sed):
sed -r 's/[0-9]*\s*\|\s*(\S*).*/ \1/' file

How do I print a range of data in awk?

I am reviewing my access_logs with a statment like:
cat access_log | grep 16/Sep/2012:17 | awk '{print $12 $13 $14 $15 $16}' | sort | uniq -c | sort -n | tail -40
The purpose is to see the user agent of anyone that has been hitting my server for the last hour, sorted by number of hits. My server has unusual activity, so I want to stop any unwanted spiders/etc.
But instead of the part awk '{print $12 $13 $14 $15 $16}', I would much prefer something like awk '{print $12-through-end-of-line}' so that I could see the whole user agent, which is a different length for each one.
Is there a way to do this with awk?
Not extremely elegant, but this works:
grep 16/Sep/2012:17 access_log | awk '{for (i=12;i<=NF;++i) printf "%s ",$i;print ""}'
It has the side effect of condensing multiple spaces between fields down to one, and putting an extra one at the end of the line, though, which probably isn't critical.
I've never found one; in situations like this, I use cut (assuming I don't need awk's flexible handling of field separation):
# Assuming tab-separated fields, cut's default
grep 16/Sep/2012:17 access_log | cut -f12- | sort | uniq -c | sort -n | tail -40
# For space-separated fields (single spaces, not arbitrary amounts of whitespace)
grep 16/Sep/2012:17 access_log | cut -d' ' -f12- | sort | uniq -c | sort -n | tail -40
(Clarification: I've never found a good way. I've used #twalberg's for-loop when necessary, but prefer using cut if possible.)
$ echo somefields:; cat somefields ; echo from-to.awk: ; \
cat from-to.awk ; echo ;awk -f from-to.awk somefields
somefields:
a b c d e f g h i j k l m n o p q r s t u v w x y z
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21
from-to.awk:
{ for (i=12; i<=NF; i++) { printf "%s ", $i }; print "" }
l m n o p q r s t u v w x y z
12 13 14 15 16 17 18 19 20 21
from man awk:
NF The number of fields in the current input record.
So you basically loop through fields (separated by spaces) from 12 to the last one.
why not
#!/bin/bash
awk "/$1/"'{for (i=12;i<=NF;i++) printf("%s ", $i) ;printf "\n"}' log | sort | uniq -c | sort -n | tail -40
in a script file.
Then you can call it like
myMonitor.sh 16/Sep/2012:17
I don't have a way to test this right now. Apologies for any formatting/syntax errors.
Hopefully you get the idea.
IHTH
awk '/16\/Sep\/2012:17/{for(i=1;i<12;i++){$i="";}print}' access_log | sort | uniq -c | sort -n | tail -40

Number of fields returned by awk

Is there a way to get awk to return the number of fields that met a field-separator criteria? Say, for instance, my file contains
a b c d
so, awk --field-separator=" " | <something> should return 4
The NF variable is set to the total number of fields in the input record. So:
echo "a b c d" | awk --field-separator=" " "{ print NF }"
will display
4
Note, however, that:
echo -e "a b c d\na b" | awk --field-separator=" " "{ print NF }"
will display:
4
2
Hope this helps, and happy awking
NF gives the number of fields for a given record:
[]$ echo "a b c d" | gawk '{print NF}'
4
If you would like to know the set of all the numbers of fields in a multiline content you can run:
X | awk '{print NF}' | sort -n | uniq
where X is a command that writes content to standard output: cat, echo, etc. Example:
With file.txt:
a b
b c
c d
e t a
e u
The command cat file.txt | awk '{print NF}' | sort -n | uniq will print:
2
3
And with file2.txt:
a b
b c
c d
e u
The command cat file2.txt | awk '{print NF}' | sort -n | uniq will print:
2
awk(1) on FreeBSD does not recognize --field-separator. Use -v instead:
echo "a b c d" | awk -v FS=" " "{ print NF }"
It is a portable, POSIX way to define the field separator.
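The short -F option is equally portable (it is POSIX shorthand for -v FS=...) and works with any single-character separator; a sketch counting comma-separated fields (the sample line is an assumption):

```shell
# -F sets FS just like -v FS=...; here the separator is a comma
echo 'a,b,c,d' | awk -F, '{ print NF }'
# prints 4
```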