https://www.baeldung.com/linux/remove-last-n-lines-of-file
awk -v n=3 'NR==FNR{total=NR;next} FNR==total-n+1{exit} 1' input.txt input.txt
01 is my line number. Keep me please!
02 is my line number. Keep me please!
03 is my line number. Keep me please!
04 is my line number. Keep me please!
05 is my line number. Keep me please!
06 is my line number. Keep me please!
07 is my line number. Keep me please!
Here is a way to remove the last n lines, but it is not done in place: the file is read twice, and it handles only one file at a time.
How can I remove the last n lines of many files in place, opening each file only once, with a single gawk command and without using any other external commands?
With your shown samples, please try the following awk code. It uses no external utilities, as the question requests, relying on awk's END block.
awk -v n="3" '
{
  total = FNR        # remember how many lines we have seen so far
  lines[FNR] = $0    # buffer every line in memory
}
END{
  till = total - n   # stop n lines short of the end
  for (i=1; i<=till; i++) {
    print lines[i]
  }
}
' Input_file
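For example, assuming the seven sample lines from the question are saved as input.txt and n=3, this should keep the first four lines:
$ awk -v n=3 '{total=FNR; lines[FNR]=$0} END{for(i=1;i<=total-n;i++) print lines[i]}' input.txt
01 is my line number. Keep me please!
02 is my line number. Keep me please!
03 is my line number. Keep me please!
04 is my line number. Keep me please!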
A single-pass awk solution that requires neither arrays nor gawk (unless your file is over 500 MB, in which case it might be slightly slower):
rm -f file.txt
jot -c 30 51 > file.txt
gcat -n file.txt | rs -t -c$'\n' -C'#' 0 5 | column -s'#' -t
1 3 7 9 13 ? 19 E 25 K
2 4 8 : 14 # 20 F 26 L
3 5 9 ; 15 A 21 G 27 M
4 6 10 < 16 B 22 H 28 N
5 7 11 = 17 C 23 I 29 O
6 8 12 > 18 D 24 J 30 P
mawk -v __='file.txt' -v N='13' 'BEGIN {
   OFS = FS = RS               # each line becomes a field
   RS = "^$"                   # slurp the whole file as one record
   getline < (__); close(__)   # read the file into $0
   # bump N if the file ends with a newline (empty last field),
   # drop the last N fields, and write the result back over the file
   print $!(NF -= NF < (N += _ == $NF) ? NF : N) > (__) }'
gcat -n file.txt | rs -t -c$'\n' -C'#' 6 | column -s'#' -t ;
1 3 7 9 13 ?
2 4 8 : 14 #
3 5 9 ; 15 A
4 6 10 < 16 B
5 7 11 = 17 C
6 8 12 >
Speed is hardly a concern:
A 115K-row, 198 MB file took 0.254 secs:
rows = 115567. | UTF8 chars = 133793410. | bytes = 207390680.
( mawk2 -v __="${fn1}" -v N='13' ; )
0.04s user 0.20s system 94% cpu 0.254 total
rows = 115554. | UTF8 chars = 133779254. | bytes = 207370006.
A 5.98-million-row, 988 MB file took 1.44 secs:
rows = 5983333. | UTF8 chars = 969069988. | bytes = 1036334374.
( mawk2 -v __="${fn1}" -v N='13' ; )
0.33s user 1.07s system 97% cpu 1.435 total
rows = 5983320. | UTF8 chars = 969068062. | bytes = 1036332426.
Another way to do it, using GNU awk's BEGINFILE and ENDFILE special patterns:
{ lines[++numLines] = $0 }
BEGINFILE { fname = FILENAME }
ENDFILE   { prt() }
function prt(   lineNr, maxLines) {
    close(fname)          # finished reading this file
    printf "" > fname     # truncate it before rewriting
    maxLines = numLines - n
    for ( lineNr=1; lineNr<=maxLines; lineNr++ ) {
        print lines[lineNr] > fname
    }
    close(fname)
    numLines = 0          # reset the buffer for the next file
}
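Since BEGINFILE and ENDFILE are gawk-only, this must run under gawk. Assuming the script above is saved as trim.awk (an illustrative name), many files can be rewritten in one invocation:
$ gawk -v n=3 -f trim.awk file1.txt file2.txt file3.txt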
I find that this is the most succinct solution to the problem: with ORS emptied, storing $0 RT preserves each line's original record terminator, including a possibly missing newline on the last line.
$ gawk -i inplace -v n=3 -v ORS= -e '{ lines[FNR] = $0 RT }
  ENDFILE {
    for (i=1; i<=FNR-n; ++i) {
      print lines[i]
    }
  }' -- file{1..3}.txt
I have a problem. This is my script:
#!/bin/bash
for index in {1..100} # I run this on 100 files, which is why I use a for loop
do
sort -k2,2 -k1,1 eq9_x4_$index.ndx |
uniq -c |
uniq -f2 -c |
awk '
($1==1 && $2==4) {inner+=6}
($1==2 && $2==1) {inner+=3; outer+=3}
($1==2 && $2==2) {inner+=2; outer+=4}
($1==3 && $2==1) {inner+=1; outer+=5}
($1==4 && $2==1) {outer+=6}
END{print inner, outer}' >> inner_outer_water_bridges_x4.txt
done
It counts water bridges and prints the sums (inner and outer).
This is part of my output file. Instead of this
9 15
2 16
8 10
4 14
6
5 25
2 10
6
I want to have this
9 15
2 16
0 0
8 10
4 14
0 6
5 25
2 10
6 0
How can I do this? Is there a good solution in awk?
Try the following, which uses the ternary operator. I couldn't test it, since only code samples were provided here.
#!/bin/bash
for index in {1..100} # I run this on 100 files, which is why I use a for loop
do
sort -k2,2 -k1,1 eq9_x4_$index.ndx |
uniq -c |
uniq -f2 -c |
awk '
($1==1 && $2==4) {inner+=6}
($1==2 && $2==1) {inner+=3; outer+=3}
($1==2 && $2==2) {inner+=2; outer+=4}
($1==3 && $2==1) {inner+=1; outer+=5}
($1==4 && $2==1) {outer+=6}
END{print (inner?inner:0), (outer?outer:0)}' >> inner_outer_water_bridges_x4.txt
done
If you are dealing solely with integers, you might harness printf the following way:
END{printf "%d %d\n", inner, outer}
(tested in GNU Awk 5.0.1)
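This works because an uninitialized awk variable is numerically 0, so %d prints 0 even when a sum was never incremented; a quick sanity check:
$ awk 'BEGIN{printf "%d %d\n", inner, outer}'
0 0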
I have the below data in a file named atp.csv:
Date_Time,M_ID,N_ID,Status,Desc,AMount,Type
2015-01-05 00:00:00 076,1941321748,BD9010423590206,200,Transaction Successful,2000,PRETOP
2015-01-05 00:00:00 077,1941323504,BD9010423590207,351,Transaction Successful,5000,PRETOP
2015-01-05 00:00:00 078,1941321743,BD9010423590205,200,Transaction Successful,1500,PRETOP
2015-01-05 00:00:00 391,1941323498,BD9010500000003,200,Transaction Successful,1000,PRETOP
I want to count by status using the below command:
cat atp.csv|awk -F',' '{print $4}'|sort|uniq -c
The output is like below:
3 200
1 351
But I want output like below, and I also want to sum the amount column per status:
200,3,4500
351,1,5000
That is, the status comes first and then the count value. Please help.
AWK has associative arrays.
% cat atp.csv | awk -F, 'NR>1 {n[$4]+=1;s[$4]+=$6;} END {for (k in n) { print k "," n[k] "," s[k]; }}' | sort
200,3,4500
351,1,5000
In the above:
The first line (the header record) is skipped with NR>1.
n[k] is the number of occurrences of key k (so we add 1), and s[k] is the running sum of the values in field 6 (so we add $6).
Finally, after all records are processed (END), we iterate over the associative arrays by key (for (k in n) { ... }) and print each key together with its values in n and s.
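Note that for (k in n) visits keys in an unspecified order, which is why the output is piped through sort. In GNU awk you could instead request sorted traversal with PROCINFO["sorted_in"] (a gawk-only feature) and drop the external sort; a minimal sketch:
% gawk -F, 'NR>1 {n[$4]+=1; s[$4]+=$6}
  END {PROCINFO["sorted_in"] = "@ind_str_asc"   # iterate keys in string order
       for (k in n) print k "," n[k] "," s[k]}' atp.csv
200,3,4500
351,1,5000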
You can also try this awk version:
awk -F',' '{print $4,",", a[$4]+=$6}' FileName | sort -r | uniq -cw 6 | sort -r
Output:
3 200 , 4500
1 351 , 5000
Another way, post-processing with sed to swap the count and the status and insert commas:
awk -F',' '{print $4,",", a[$4]+=$6}' FileName | sort -r | uniq -cw 6 |sort -r | sed 's/\([^ ]\+\).\([^ ]\+\).../\2,\1,/'
All in (g)awk
awk -F, 'NR>1{a[$4]++;b[$4]+=$6}
END{n=asorti(a,c);for(i=1;i<=n;i++)print c[i]","a[c[i]]","b[c[i]]}' file
Is there a way to get awk to return the number of fields that meet a field-separator criterion? Say, for instance, my file contains
a b c d
so, awk --field-separator=" " | <something> should return 4
The NF variable is set to the total number of fields in the input record. So:
echo "a b c d" | awk --field-separator=" " "{ print NF }"
will display
4
Note, however, that:
echo -e "a b c d\na b" | awk --field-separator=" " "{ print NF }"
will display:
4
2
Hope this helps, and happy awking
NF gives the number of fields for a given record:
[]$ echo "a b c d" | gawk '{print NF}'
4
If you would like to know the set of field counts occurring in multiline content, you can run:
X | awk '{print NF}' | sort -n | uniq
where X is any command that writes content to standard output: cat, echo, etc. Example:
With file.txt:
a b
b c
c d
e t a
e u
The command cat file.txt | awk '{print NF}' | sort -n | uniq will print:
2
3
And with file2.txt:
a b
b c
c d
e u
The command cat file2.txt | awk '{print NF}' | sort -n | uniq will print:
2
awk(1) on FreeBSD does not recognize --field-separator. Use -v instead:
echo "a b c d" | awk -v FS=" " "{ print NF }"
It is a portable, POSIX way to define the field separator.
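The short -F option is equally portable (POSIX defines it) and does the same thing:
$ echo "a b c d" | awk -F' ' '{ print NF }'
4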
Too cumbersome:
awk '{print " "$4" "$5" "$6" "$7" "$8" "$9" "$10" "$11" "$12" "$13}' things
awk '{for(i=1;i<4;i++) $i="";print}' file
use cut
$ cut -f4-13 file
or, if you insist on awk and $13 is the last field:
$ awk '{$1=$2=$3="";print}' file
else
$ awk '{for(i=4;i<=13;i++)printf "%s ",$i;printf "\n"}' file
A solution that does not add extra leading or trailing whitespace:
awk '{ for(i=4; i<NF; i++) printf "%s",$i OFS; if(NF) printf "%s",$NF; printf ORS}'
### Example ###
$ echo '1 2 3 4 5 6 7' |
awk '{for(i=4;i<NF;i++)printf"%s",$i OFS;if(NF)printf"%s",$NF;printf ORS}' |
tr ' ' '-'
4-5-6-7
Sudo_O proposes an elegant improvement using the ternary expression (i==NF?ORS:OFS):
$ echo '1 2 3 4 5 6 7' |
awk '{ for(i=4; i<=NF; i++) printf "%s",$i (i==NF?ORS:OFS) }' |
tr ' ' '-'
4-5-6-7
EdMorton gives a solution preserving original whitespaces between fields:
$ echo '1 2 3 4 5 6 7' |
awk '{ sub(/([^ ]+ +){3}/,"") }1' |
tr ' ' '-'
4---5----6-7
BinaryZebra also provides two awesome solutions:
(these solutions even preserve the trailing spaces of the original string)
$ echo -e ' 1 2\t \t3 4 5 6 7 \t 8\t ' |
awk -v n=3 '{ for ( i=1; i<=n; i++) { sub("^["FS"]*[^"FS"]+["FS"]+","",$0);} } 1 ' |
sed 's/ /./g;s/\t/->/g;s/^/"/;s/$/"/'
"4...5...6.7.->.8->."
$ echo -e ' 1 2\t \t3 4 5 6 7 \t 8\t ' |
awk -v n=3 '{ print gensub("["FS"]*([^"FS"]+["FS"]+){"n"}","",1); }' |
sed 's/ /./g;s/\t/->/g;s/^/"/;s/$/"/'
"4...5...6.7.->.8->."
The solution given by larsr in the comments is almost correct:
$ echo '1 2 3 4 5 6 7' |
awk '{for (i=3;i<=NF;i++) $(i-2)=$i; NF=NF-2; print $0}' | tr ' ' '-'
3-4-5-6-7
This is the fixed and parametrized version of larsr's solution:
$ echo '1 2 3 4 5 6 7' |
awk '{for(i=n;i<=NF;i++)$(i-(n-1))=$i;NF=NF-(n-1);print $0}' n=4 | tr ' ' '-'
4-5-6-7
All other answers before Sep-2013 are nice but add extra spaces:
Example of answer adding extra leading spaces:
$ echo '1 2 3 4 5 6 7' |
awk '{$1=$2=$3=""}1' |
tr ' ' '-'
---4-5-6-7
Example of answer adding extra trailing space
$ echo '1 2 3 4 5 6 7' |
awk '{for(i=4;i<=13;i++)printf "%s ",$i;printf "\n"}' |
tr ' ' '-'
4-5-6-7-------
Try this:
awk '{ $1=""; $2=""; $3=""; print $0 }'
The correct way to do this is with an RE interval because it lets you simply state how many fields to skip, and retains inter-field spacing for the remaining fields.
e.g. to skip the first 3 fields without affecting spacing between remaining fields given the format of input we seem to be discussing in this question is simply:
$ echo '1 2 3 4 5 6' |
awk '{sub(/([^ ]+ +){3}/,"")}1'
4 5 6
If you want to accommodate leading whitespace and non-space blanks such as tabs, but again with the default FS, then it's:
$ echo ' 1 2 3 4 5 6' |
awk '{sub(/[[:space:]]*([^[:space:]]+[[:space:]]+){3}/,"")}1'
4 5 6
If you have an FS that's an RE you can't negate in a character set, you can convert it to a single char first (RS is ideal if it's a single char, since an RS cannot appear within a field; otherwise consider SUBSEP), then apply the RE interval substitution, then convert to the OFS. e.g. if chains of "."s separate the fields:
$ echo '1...2.3.4...5....6' |
awk -F'[.]+' '{gsub(FS,RS);sub("([^"RS"]+["RS"]+){3}","");gsub(RS,OFS)}1'
4 5 6
Obviously if OFS is a single char AND it can't appear in the input fields you can reduce that to:
$ echo '1...2.3.4...5....6' |
awk -F'[.]+' '{gsub(FS,OFS); sub("([^"OFS"]+["OFS"]+){3}","")}1'
4 5 6
Then you have the same issue as with all the loop-based solutions that reassign the fields - the FSs are converted to OFSs. If that's an issue, you need to look into GNU awk's patsplit() function.
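For the curious, a minimal sketch of the patsplit() route (GNU awk only): patsplit() captures both the fields and the separators between them, so the original separators can be reassembled untouched. The field pattern /[^.]+/ here matches the dot-separated example above:
$ echo '1...2.3.4...5....6' |
  gawk '{ n = patsplit($0, f, /[^.]+/, s)   # f[i] = field i, s[i] = separator after it
          out = ""
          for (i=4; i<=n; i++) out = out f[i] (i<n ? s[i] : "")
          print out }'
4...5....6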
Pretty much all the answers so far either add leading spaces, trailing spaces, or have some other separator issue. To print from the fourth field on, where the separator is whitespace and the output separator is a single space, using awk:
awk '{for(i=4;i<=NF;i++)printf "%s",$i (i==NF?ORS:OFS)}' file
To parametrize the starting field you could do:
awk '{for(i=n;i<=NF;i++)printf "%s",$i (i==NF?ORS:OFS)}' n=4 file
And also the ending field:
awk '{e=(m>NF?NF:m); for(i=n;i<=e;i++) printf "%s",$i (i==e?ORS:OFS)}' n=4 m=10 file
awk '{$1=$2=$3="";$0=$0;$1=$1}1'
Input
1 2 3 4 5 6 7
Output
4 5 6 7
echo 1 2 3 4 5| awk '{ for (i=3; i<=NF; i++) print $i }'
Another way to avoid using the print statement:
$ awk '{$1=$2=$3=""}sub("^"FS"*","")' file
In awk, when a condition is true, print is the default action.
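As a minimal illustration of that default action, the pattern 1 is always true, so this prints every input line unchanged:
$ printf 'a\nb\n' | awk '1'
a
b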
I can't believe nobody offered plain shell:
while read -r a b c d; do echo "$d"; done < file
(read assigns all remaining words to the last variable, so $d holds everything from the fourth field on.)
Options 1 to 3 have issues with multiple whitespace (but are simple).
That is the reason to develop options 4 and 5, which process runs of whitespace with no problem.
Of course, if options 4 or 5 are used with n=0, both will preserve any leading whitespace, since n=0 means nothing is stripped.
Option 1
A simple cut solution (works with single delimiters):
$ echo '1 2 3 4 5 6 7 8' | cut -d' ' -f4-
4 5 6 7 8
Option 2
Forcing awk to re-evaluate the record sometimes solves the problem of added leading spaces (works with some versions of awk):
$ echo '1 2 3 4 5 6 7 8' | awk '{ $1=$2=$3="";$0=$0;} NF=NF'
4 5 6 7 8
Option 3
Printing each field formatted with printf gives more control:
$ echo ' 1 2 3 4 5 6 7 8 ' |
awk -v n=3 '{ for (i=n+1; i<=NF; i++){printf("%s%s",$i,i==NF?RS:OFS);} }'
4 5 6 7 8
However, all the previous answers change every FS between the remaining fields to OFS. Let's build a couple of solutions that don't.
Option 4
A loop with sub to remove fields and delimiters is more portable, and doesn't trigger a change of FS to OFS:
$ echo ' 1 2 3 4 5 6 7 8 ' |
awk -v n=3 '{ for(i=1;i<=n;i++) { sub("^["FS"]*[^"FS"]+["FS"]+","",$0);} } 1 '
4 5 6 7 8
NOTE: The "^["FS"]*" is to accept an input with leading spaces.
Option 5
It is quite possible to build a solution that adds no extra leading or trailing whitespace and preserves existing whitespace, using the gensub function from GNU awk:
$ echo ' 1 2 3 4 5 6 7 8 ' |
awk -v n=3 '{ print gensub("["FS"]*([^"FS"]+["FS"]+){"n"}","",1); }'
4 5 6 7 8
It may also be used to split the line at a field count n, separating the first n fields from the rest:
$ echo ' 1 2 3 4 5 6 7 8 ' |
awk -v n=3 '{ a=gensub("["FS"]*([^"FS"]+["FS"]+){"n"}","",1);
b=gensub("^(.*)("a")","\\1",1);
print "|"a"|","!"b"!";
}'
|4 5 6 7 8 | ! 1 2 3 !
Of course, in that case the OFS is used to separate the two parts of the line, and any trailing whitespace in the fields is still printed.
Note 1: ["FS"]* is used to allow leading spaces in the input line.
cut has a --complement flag (GNU cut) that makes it easy (and fast) to delete columns. The resulting syntax is analogous to what you want to do, making the solution easier to read and understand. --complement also works when you would like to delete non-contiguous columns.
$ foo='1 2 3 %s 5 6 7'
$ echo "$foo" | cut --complement -d' ' -f1-3
%s 5 6 7
$
Perl solution which does not add leading or trailing whitespace:
perl -lane 'splice @F,0,3; print join " ",@F' file
The Perl @F autosplit array starts at index 0, while awk fields start with $1.
Perl solution for comma-delimited data:
perl -F, -lane 'splice @F,0,3; print join ",",@F' file
Python solution:
python -c "import sys;[sys.stdout.write(' '.join(line.split()[3:]) + '\n') for line in sys.stdin]" < file
For me the most compact and compliant solution to the request is
$ a='1 2\t \t3 4 5 6 7 \t 8\t ';
$ echo -e "$a" | awk -v n=3 '{while (i<n) {i++; sub($1 FS"*", "")}; print $0}'
And if you have more lines to process, as for instance in a file foo.txt, don't forget to reset i to 0:
$ awk -v n=3 '{i=0; while (i<n) {i++; sub($1 FS"*", "")}; print $0}' foo.txt
Thanks to this forum.
I was annoyed enough by the first highly upvoted but wrong answer to write a reply there; since the wrong answers here are marked as such, here is my bit. I do not like the proposed solutions, as I see no reason to make the answer so complex.
I have a log where, after an IP address in $5, there may or may not be more text. I need everything from the IP address to the end of the line whenever there is anything after $5. In my case this is actually within an awk program, not an awk one-liner, so awk must solve the problem. When I try to remove the first 4 fields using the old, nice-looking, and most upvoted but completely wrong answer:
echo " 7 27.10.16. Thu 11:57:18 37.244.182.218 one two three" | awk '{$1=$2=$3=$4=""; printf "[%s]\n", $0}'
it spits out a wrong and useless response (I added [] to demonstrate):
[ 37.244.182.218 one two three]
Instead, if columns are fixed width until the cut point and awk is needed, the correct and quite simple answer is:
echo " 7 27.10.16. Thu 11:57:18 37.244.182.218 one two three" | awk '{printf "[%s]\n", substr($0,28)}'
which produces the desired output:
[37.244.182.218 one two three]
I've found this other possibility; maybe it could be useful also:
awk 'BEGIN {OFS=ORS="\t"}; {for(i=1; i<14; i++) print $i " "; print $NF "\n"}' your_file
Note: this is meant for tabular data, printing columns $1 through $14.
Use cut:
cut -d <delimiter character> -f <number of first column>-<number of last column> <file name>
e.g.: if you have file1 containing car.is.nice.equal.bmw,
running cut -d . -f1-3 file1 will print car.is.nice
This isn't very far from some of the previous answers, but does solve a couple of issues:
cols.sh:
#!/bin/bash
awk -v s="$1" '{for(i=s; i<=NF;i++) printf "%-5s", $i; print "" }'
Which you can now call with an argument that will be the starting column:
$ echo "1 2 3 4 5 6 7 8 9 10 11 12 13 14" | ./cols.sh 3
3 4 5 6 7 8 9 10 11 12 13 14
Or:
$ echo "1 2 3 4 5 6 7 8 9 10 11 12 13 14" | ./cols.sh 7
7 8 9 10 11 12 13 14
This is 1-indexed; if you prefer zero-indexed, use i=s+1 instead.
Moreover, if you would like to take two arguments, one for the starting index and one for the ending index, change the file to:
#!/bin/bash
awk -v s="$1" -v e="$2" '{for(i=s; i<=e;i++) printf "%-5s", $i; print "" }'
For example:
$ echo "1 2 3 4 5 6 7 8 9 10 11 12 13 14" | ./cols.sh 7 9
7 8 9
The %-5s aligns the result as 5-character-wide columns; if this isn't enough, increase the number, or use "%s " (with a trailing space) instead if you don't care about alignment.
An awk printf-based solution that avoids the %-in-data problem, and is unique in that it outputs nothing (not even a newline) if there are fewer than 4 columns to print:
awk 'NF > 3 { for(i=4; i<NF; i++) printf("%s ", $(i)); print $(i) }'
Testing:
$ x='1 2 3 %s 4 5 6'
$ echo "$x" | awk 'NF > 3 { for(i=4; i<NF; i++) printf("%s ", $(i)); print $(i) }'
%s 4 5 6
$ x='1 2 3'
$ echo "$x" | awk 'NF > 3 { for(i=4; i<NF; i++) printf("%s ", $(i)); print $(i) }'
$ x='1 2 3 '
$ echo "$x" | awk 'NF > 3 { for(i=4; i<NF; i++) printf("%s ", $(i)); print $(i) }'
$