Delete third-to-last line of file using sed or awk

I have several text files with different numbers of lines, and I have to delete the third-to-last line in all of them. Here is a sample file:
bear
horse
window
potato
berry
cup
Expected result for this file:
bear
horse
window
berry
cup
Can we delete the third-to-last line of a file:
a. not based on any string/pattern, and
b. based only on the condition that it is the third-to-last line?
My problem is how to index a file starting from its last line. I have tried this, from another SO question, for the second-to-last line:
> sed -i 'N;$!P;D' output1.txt

1st solution: using tac + awk. Set the awk variable line to the line number, counted from the bottom, that you want to skip.
tac Input_file | awk -v line="3" 'line==FNR{next} 1' | tac
Explanation: tac reads Input_file in reverse (from last line to first) and passes its output to awk, which skips the record whose number FNR equals line (the line we want to drop); the pattern 1 prints every other line. A final tac restores the original order.
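As a quick sanity check on the question's sample (assumes GNU coreutils for tac; sample.txt is just a scratch file name used here):

```shell
# Build the sample file from the question, then drop line 3 from the bottom.
printf '%s\n' bear horse window potato berry cup > sample.txt
tac sample.txt | awk -v line="3" 'line==FNR{next} 1' | tac
```

This prints the six sample lines minus potato, i.e. the expected result.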
2nd solution: using awk + wc.
awk -v lines="$(wc -l < Input_file)" -v skipLine="3" 'FNR!=(lines-skipLine+1)' Input_file
Explanation: the wc -l command substitution stores the total number of lines of Input_file in the awk variable lines, and skipLine holds the line number (from the bottom) that we want to skip. The main program then prints every line whose number FNR is not equal to lines-skipLine+1.
3rd solution: per Ed Morton's comment, buffering the whole file in an array.
awk -v line=3 '{a[NR]=$0} END{for (i=1;i<=NR;i++) if (i != (NR-line+1)) print a[i]}' Input_file
Explanation: Adding detailed explanation for 3rd solution.
awk -v line=3 ' ##Starting awk program from here, setting awk variable line to 3(line which OP wants to skip from bottom)
{
a[NR]=$0 ##Creating array a with index of NR and value is current line.
}
END{ ##Starting END block of this program from here.
for(i=1;i<=NR;i++){ ##Starting for loop till value of NR here.
if(i != (NR-line+1)){ ##Checking condition if i is NOT equal to NR-line+1 (the 3rd line from the bottom) then do following.
print a[i] ##Printing a with index i here.
}
}
}
' Input_file ##Mentioning Input_file name here.
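A quick check of the single-pass variant on the same sample; note that, with line counted from the bottom, the top-based index of the line to drop is NR-line+1 once the whole file is in the array (sample.txt is a scratch name):

```shell
# Buffer all lines, then print every line except the 3rd from the bottom.
printf '%s\n' bear horse window potato berry cup > sample.txt
awk -v line=3 '{a[NR]=$0} END{for (i=1;i<=NR;i++) if (i != (NR-line+1)) print a[i]}' sample.txt
```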

With ed
ed -s ip.txt <<< $'$-2d\nw'
# thanks Shawn for a more portable solution
printf '%s\n' '$-2d' w | ed -s ip.txt
This will do in-place editing. $ refers to the last line, and you can give a negative offset relative to it, so $-2 refers to the third-to-last line. The w command then writes the changes back to the file.
See ed: Line addressing for more details.

This might work for you (GNU sed):
sed '1N;N;$!P;D' file
Open a window of 3 lines in the file then print/delete the first line of the window until the end of the file.
At the end of the file, do not print the first line in the window i.e. the 3rd line from the end of the file. Instead, delete it, and repeat the sed cycle. This will try to append a line after the end of file, which will cause sed to bail out, printing the remaining lines in the window.
A generic solution for n lines back (where n is 2 or more lines from the end of the file), is:
sed ':a;N;s/[^\n]*/&/3;Ta;$!P;D' file
Of course you could use:
tac file | sed 3d | tac
But then you would be reading the file 3 times.
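A quick run of the windowed version on the question's sample (GNU sed assumed; sample.txt is a scratch name):

```shell
# Keep a 3-line window; at EOF the window's first line is the 3rd-to-last,
# which is deleted instead of printed.
printf '%s\n' bear horse window potato berry cup > sample.txt
sed '1N;N;$!P;D' sample.txt
```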

To delete the 3rd-to-last line of a file, you can use head and tail:
{ head -n -3 file; tail -2 file; }
In case of a large input file, when performance matters, this is very fast, because it doesn't read and write line by line. Also, do not remove the semicolons or the spaces next to the braces; see command grouping.
Or use sed with tac:
tac file | sed '3d' | tac
Or use awk with tac:
tac file | awk 'NR!=3' | tac
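For example (head -n -3, meaning all but the last 3 lines, is a GNU extension; sample.txt is a scratch name):

```shell
# All but the last 3 lines, then the last 2 lines: the 3rd-to-last is gone.
printf '%s\n' bear horse window potato berry cup > sample.txt
{ head -n -3 sample.txt; tail -n 2 sample.txt; }
```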

Related

Line number of last occurrence of a pattern with awk

I am new to awk. I am trying to find a way to print the line number of the last match of my pattern.
I need to integrate that awk command into a Tcl script.
If someone has an answer, please let me know.
exec awk -v search=$var {$0~search{print NR; exit}} file_name
I am using this to print the line number of the first occurrence.
I would harness GNU AWK for this task as follows. Let file.txt content be
12
15
120
150
1200
1500
then
awk '$0~"12"{n=NR}END{print n}' file.txt
output
5
Explanation: I am looking for the last line containing 12 somewhere; whenever such a line is encountered I set the variable n to the current row number (NR), and when all lines are processed I print that value.
(tested in gawk 4.2.1)
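Reproducing the run end to end:

```shell
# Every line matching 12 overwrites n, so n ends up as the last match's NR.
printf '%s\n' 12 15 120 150 1200 1500 > file.txt
awk '$0~"12"{n=NR}END{print n}' file.txt
```

which prints 5, the line number of 1200.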
Or, without awk:
set fh [open file_name]
set lines [split [read $fh] \n]
close $fh
set line_nums [lmap idx [lsearch -all -regexp $lines with] {expr {$idx + 1}}]
set last_line_num [lindex $line_nums end]
With your shown samples, here is a tac + awk approach.
tac Input_file |
awk -v lines="$(wc -l < Input_file)" '/12/{print lines-FNR+1;exit}'
Explanation:
tac prints Input_file in reverse order, from bottom to top (so the very last match is seen first, letting awk exit early, as explained below).
Its output is piped to awk as input.
In awk, the variable lines holds the total number of lines of Input_file; when a line contains 12, the program prints lines-FNR+1 and exits, so the whole of Input_file need not be read.
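A runnable version of the above (file.txt is a scratch name; quoting the wc output is a small hardening added here):

```shell
# Reverse the file; the first match seen is the last match of the original,
# so we can exit immediately and convert FNR back to a top-based line number.
printf '%s\n' 12 15 120 150 1200 1500 > file.txt
tac file.txt | awk -v lines="$(wc -l < file.txt)" '/12/{print lines-FNR+1; exit}'
```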

How can I extract using sed or awk between newlines after a specific pattern?

I'd like to check whether there are alternatives to the sed, tail and head pipeline below (which I figured out myself) for printing the range of IPs under #Hiko in my hosts file.
I'm just curious and keen on learning more about bash, and hope to gain more knowledge from the community.
$ sed -n '/#Hiko/,/#Pico/p' /etc/hosts | tail -n +3 | head -n -2
/etc/hosts:
#Tito

192.168.1.21
192.168.1.119

#Hiko

192.168.1.243
192.168.1.125
192.168.1.94
192.168.1.24
192.168.1.242

#Pico

192.168.1.23
192.168.1.93
192.168.1.121
1st solution: with the shown samples, try the following. Written and tested in GNU awk.
awk -v RS= '/#Pico/{exit} /#Hiko/{found=1;next} found' Input_file
Explanation:
awk -v RS= ' ##Starting awk program from here.
/#Pico/{ ##Checking condition if line has #Pico then do following.
exit ##exiting from program.
}
/#Hiko/{ ##Checking condition if #Hiko is present in the record, then do following.
found=1 ##Setting found to 1 here.
next ##next will skip all further statements from here.
}
found ##Checking condition if found is SET then print the line.
' Input_file ##mentioning Input_file name here.
2nd solution: without setting RS, try the following.
awk '/#Pico/{exit} /#Hiko/{found=1;next} NF && found' Input_file
3rd solution: you could look for the record #Hiko, print the next record, and exit.
awk -v RS= '/#Hiko/{found=1;next} found{print;exit}' Input_file
NOTE: all the solutions above check whether the string #Hiko or #Pico is present anywhere in the record; if you want to match the exact string, change /#Hiko/ and /#Pico/ to /^#Hiko$/ and /^#Pico$/ respectively.
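To make the paragraph-mode logic concrete, here is a self-contained run (assuming the hosts file separates each #header and its IP block with blank lines, which is what RS= relies on; hosts.txt is a scratch name):

```shell
# Build a blank-line-separated copy of the sample, then print the record(s)
# between the #Hiko record and the #Pico record.
cat > hosts.txt <<'EOF'
#Tito

192.168.1.21
192.168.1.119

#Hiko

192.168.1.243
192.168.1.125
192.168.1.94
192.168.1.24
192.168.1.242

#Pico

192.168.1.23
192.168.1.93
192.168.1.121
EOF
awk -v RS= '/#Pico/{exit} /#Hiko/{found=1;next} found' hosts.txt
```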
With sed (checked with GNU sed, syntax might differ for other implementations)
$ sed -n '/#Hiko/{n; :a n; /^$/q; p; ba}' /etc/hosts
192.168.1.243
192.168.1.125
192.168.1.94
192.168.1.24
192.168.1.242
-n turn off automatic printing of pattern space
/#Hiko/ if line contains #Hiko
n get next line (assuming there's always an empty line)
:a label a
n get next line (using n will overwrite any previous content in the pattern space, so only single line content is present in this case)
/^$/q if the current line is empty, quit
p print the current line
ba branch to label a
You can use
awk -v RS= '/^#Hiko$/{getline;print;exit}' file
awk -v RS= '$0 == "#Hiko"{getline;print;exit}' file
Which means:
RS= - make awk read the file paragraph by paragraph
/^#Hiko$/ or $0 == "#Hiko" - finds a paragraph that equals #Hiko
{getline;print;exit} - gets the next paragraph, prints it and exits.
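Self-contained example of the getline variant (again assuming blank-line-separated blocks; hosts.txt is a scratch name):

```shell
# Find the record that is exactly "#Hiko"; getline then loads the whole next
# paragraph (the IP block) into $0, which is printed.
printf '%s\n' '#Tito' '' 192.168.1.21 '' '#Hiko' '' 192.168.1.243 192.168.1.242 '' '#Pico' '' 192.168.1.23 > hosts.txt
awk -v RS= '$0 == "#Hiko"{getline; print; exit}' hosts.txt
```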
You may use:
awk -v RS= 'p && NR == p + 1; $1 == "#Hiko" {p = NR}' /etc/hosts
192.168.1.243
192.168.1.125
192.168.1.94
192.168.1.24
192.168.1.242
This might work for you (GNU sed):
sed -n '/^#/h;G;/^[0-9].*\n#Hiko/P' file
Copy the header to the hold buffer.
Append the hold buffer to each line.
If the line begins with a digit and contains the required header, print the first line in the pattern space.

Replace a letter with another from the last word from the last two lines of a text file

How could I possibly replace a character with another, selecting the last word from the last two lines of a text file in shell, using only a single command? In my case, replacing every occurrence of a with E from the last word only.
Like, from a text file containing this:
tree;apple;another
mango.banana.half
monkey.shelf.karma
to this:
tree;apple;another
mango.banana.hElf
monkey.shelf.kErmE
I tried using sed -n 'tail -2 'mytext.txt' -r 's/[a]+/E/*$//' but it doesn't work (my error: sed expression #1, char 10: unknown option to 's).
Here is a tac + awk solution, based entirely on the OP's samples.
tac Input_file |
awk 'FNR<=2{if(/;/){FS=OFS=";"};if(/\./){FS=OFS="."};gsub(/a/,"E",$NF)} 1' |
tac
Output with shown samples is:
tree;apple;another
mango.banana.hElf
monkey.shelf.kErmE
NOTE: Change gsub to sub in case you want to substitute only very first occurrence of character a in last field.
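End-to-end on the question's sample (one subtlety: assigning FS only affects how the next record is split, so for the first reversed record gsub effectively runs on the whole line; that still works here because karma's line has its a's only in the last word):

```shell
printf '%s\n' 'tree;apple;another' 'mango.banana.half' 'monkey.shelf.karma' > mytext.txt
tac mytext.txt |
awk 'FNR<=2{if(/;/){FS=OFS=";"};if(/\./){FS=OFS="."};gsub(/a/,"E",$NF)} 1' |
tac
```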
This might work for you (GNU sed):
sed -E 'N;${:a;s/a([^a.]*)$/E\1/mg;ta};P;D' file
Open a two line window throughout the length of the file by using the N to append the next line to the previous and the P and D commands to print then delete the first of these. Thus at the end of the file, signified by the $ address the last two lines will be present in the pattern space.
Using the m multiline flag on the substitution command, as well as the g global flag and a loop between :a and ta, replace any a in the last word (delimited by .) by an E.
Thus the first pass of the substitution command will replace the a in half and the last a in karma. The next pass will match nothing in the penultimate line and replace the a in karmE. The third pass will match nothing, so the ta command will fail and the last two lines will be printed with the required changes.
If you want to use Sed, here's a solution:
tac input_file | sed -E '1,2{h;s/.*[^a-zA-Z]([a-zA-Z]+)/\1/;s/a/E/;x;s/(.*[^a-zA-Z]).*/\1/;G;s/\n//}' | tac
One tiny detail. In your question you say you want to replace a letter, but then you transform karma into kErmE, so which is it? If you meant to write kErma, then the command above will work; if you meant to write kErmE, then you have to change it just a bit: the s/a/E/ should become s/a/E/g.
With tac+perl
$ tac ip.txt | perl -pe 's/\w+\W*$/$&=~tr|a|E|r/e if $.<=2' | tac
tree;apple;another
mango.banana.hElf
monkey.shelf.kErmE
\w+\W*$ match last word in the line, \W* allows any possible trailing non-word characters to be matched as well. Change \w and \W accordingly if numbers and underscores shouldn't be considered as word characters - for ex: [a-zA-Z]+[^a-zA-Z]*$
$&=~tr|a|E|r change all a to E only for the matched portion
e flag to enable use of Perl code in replacement section instead of string
To do it in one command, you can slurp the entire input as a single string (assuming it fits in available memory):
perl -0777 -pe 's/\w+\W*$(?=(\n.*)?\n\z)/$&=~tr|a|E|r/gme'
Using GNU awk for split()'s 4th arg, since per the comments on another solution the field delimiter is any sequence of non-alphanumeric characters:
$ gawk '
BEGIN {
pc=2 # previous counter, ie how many are affected
}
{
for(i=pc;i>=1;i--) # buffer to p hash, a FIFO
if(i==pc && (i in p)) # when full, output
print p[i]
else if(i in p) # and keep filling
p[i+1]=p[i] # above could be done using mod also
p[1]=$0
}
END {
for(i=pc;i>=1;i--) {
n=split(p[i],t,/[^a-zA-Z0-9\r]+/,seps) # split on non alnum
gsub(/a/,"E",t[n]) # replace
for(j=1;j<=n;j++) {
p[i]=(j==1?"":p[i] seps[j-1]) t[j] # pack it up
}
print p[i] # output
}
}' file
Output:
tree;apple;another
mango.banana.hElf
monkey.shelf.kErmE
Would this help you? Using GNU awk:
$ cat file
tree;apple;another
mango.banana.half
monkey.shelf.karma
$ tac file | awk 'NR<=2{s=gensub(/(.*)([.;])(.*)$/,"\\3",1);gsub(/a/,"E",s); print gensub(/(.*)([.;])(.*)$/,"\\1\\2",1) s;next}1' | tac
tree;apple;another
mango.banana.hElf
monkey.shelf.kErmE
More readable version:
$ tac file | awk 'NR<=2{
s=gensub(/(.*)([.;])(.*)$/,"\\3",1);
gsub(/a/,"E",s);
print gensub(/(.*)([.;])(.*)$/,"\\1\\2",1) s;
next
}1' | tac
With GNU awk you can set FS with the two separators, then gsub for the replacement in $3, the third field, if NR>1
awk -v FS=";|[.]" 'NR>1 {gsub("a", "E",$3)}1' OFS="." file
tree;apple;another
mango.banana.hElf
monkey.shelf.kErmE
With GNU awk for the 3rd arg to match() and gensub():
$ awk -v n=2 '
NR>n { print p[NR%n] }
{ p[NR%n] = $0 }
END {
for (i=0; i<n; i++) {
match(p[i],/(.*[^[:alnum:]])(.*)/,a)
print a[1] gensub(/a/,"E","g",a[2])
}
}
' file
tree;apple;another
mango.banana.hElf
monkey.shelf.kErmE
or with any awk:
awk -v n=2 '
NR>n { print p[NR%n] }
{ p[NR%n] = $0 }
END {
for (i=0; i<n; i++) {
match(p[i],/.*[^[:alnum:]]/)
lastWord = substr(p[i],1+RLENGTH)
gsub(/a/,"E",lastWord )
print substr(p[i],1,RLENGTH) lastWord
}
}
' file
If you want to do it for the last 50 lines of a file instead of the last 2 lines just change -v n=2 to -v n=50.
The above assumes there are at least n lines in your input.
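Here is the portable version run end to end on the sample:

```shell
# p[] is a rolling buffer of the last n lines; older lines are printed
# untouched, and the buffered ones get the last-word substitution in END.
printf '%s\n' 'tree;apple;another' 'mango.banana.half' 'monkey.shelf.karma' > file
awk -v n=2 '
NR>n { print p[NR%n] }
{ p[NR%n] = $0 }
END {
  for (i=0; i<n; i++) {
    match(p[i],/.*[^[:alnum:]]/)
    lastWord = substr(p[i],1+RLENGTH)
    gsub(/a/,"E",lastWord)
    print substr(p[i],1,RLENGTH) lastWord
  }
}
' file
```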
You can make sed repeatedly change a into E in the last word by using a label:
tac mytext.txt| sed -r ':a; 1,2s/a(\w*)$/E\1/; ta' | tac

How can I merge every block of 3 lines together while ignoring smaller runs of consecutive lines?

I have a text file like the following, which contains blocks of text separated by blank lines:
AAAAAAAAAAAAA
BBBBBBBBBBBBB
CCCCCCCCCCCCC
DDDDDDDDDDDDD
EEEEEEEEEEEEE
FFFFFFFFFFFFF
GGGGGGGGGGGGG

HHHHHHHHHHHHH
IIIIIIIIIIIII
JJJJJJJJJJJJJ
KKKKKKKKKKKKK

LLLLLLLLLLLLL
MMMMMMMMMMMMM
NNNNNNNNNNNNN
OOOOOOOOOOOOO
PPPPPPPPPPPPP
QQQQQQQQQQQQQ
RRRRRRRRRRRRR
SSSSSSSSSSSSS
TTTTTTTTTTTTT
UUUUUUUUUUUUU

VVVVVVVVVVVVV
WWWWWWWWWWWWW
XXXXXXXXXXXXX
YYYYYYYYYYYYY
ZZZZZZZZZZZZZ
1111111111111
I would like to merge every 3 consecutive lines within a block onto one line, starting from the first line of the block, leaving any remainder of fewer than 3 lines on its own.
The characters and lengths of the lines always differ (I have made the lines the same size in the example so it doesn't look too ugly).
So the output would be:
AAAAAAAAAAAAA BBBBBBBBBBBBB CCCCCCCCCCCCC
DDDDDDDDDDDDD EEEEEEEEEEEEE FFFFFFFFFFFFF
GGGGGGGGGGGGG

HHHHHHHHHHHHH IIIIIIIIIIIII JJJJJJJJJJJJJ
KKKKKKKKKKKKK

LLLLLLLLLLLLL MMMMMMMMMMMMM NNNNNNNNNNNNN
OOOOOOOOOOOOO PPPPPPPPPPPPP QQQQQQQQQQQQQ
RRRRRRRRRRRRR SSSSSSSSSSSSS TTTTTTTTTTTTT
UUUUUUUUUUUUU

VVVVVVVVVVVVV WWWWWWWWWWWWW XXXXXXXXXXXXX
YYYYYYYYYYYYY ZZZZZZZZZZZZZ 1111111111111
I have tried to use
xargs -n3
However I'm not sure how to keep the leftover lines separate. How can I achieve this?
With GNU awk for gensub():
$ awk -v RS= -v ORS='\n\n' '{$1=$1; print gensub(/(([^ ]+ ){2}[^ ]+) /,"\\1\n","g")}' file
AAAAAAAAAAAAA BBBBBBBBBBBBB CCCCCCCCCCCCC
DDDDDDDDDDDDD EEEEEEEEEEEEE FFFFFFFFFFFFF
GGGGGGGGGGGGG

HHHHHHHHHHHHH IIIIIIIIIIIII JJJJJJJJJJJJJ
KKKKKKKKKKKKK

LLLLLLLLLLLLL MMMMMMMMMMMMM NNNNNNNNNNNNN
OOOOOOOOOOOOO PPPPPPPPPPPPP QQQQQQQQQQQQQ
RRRRRRRRRRRRR SSSSSSSSSSSSS TTTTTTTTTTTTT
UUUUUUUUUUUUU

VVVVVVVVVVVVV WWWWWWWWWWWWW XXXXXXXXXXXXX
YYYYYYYYYYYYY ZZZZZZZZZZZZZ 1111111111111
In awk:
$ awk -v FS="\n" -v RS="" '{for(i=1;i<=NF;i+=3)print $i,$(i+1),$(i+2);print ""}' file
Output:
AAAAAAAAAAAAA BBBBBBBBBBBBB CCCCCCCCCCCCC
DDDDDDDDDDDDD EEEEEEEEEEEEE FFFFFFFFFFFFF
GGGGGGGGGGGGG

HHHHHHHHHHHHH IIIIIIIIIIIII JJJJJJJJJJJJJ
...
Update Version that won't leave trailing space:
$ awk -v FS="\n" -v RS="" '{for(i=1;i<=NF;i++)printf "%s%s",$i,(i%3==0||i==NF?ORS:OFS);print ""}' file
Please see discussion on some features in the comments. Thanks to the commentators for the constructive feedback.
Here is a different approach which will always work:
awk '(NF==0){print rec ORS; rec="";c=0; next}
{rec = rec (c ? (c%3==0 ? ORS : OFS) : "") $0; c++ }
END {print rec}' file
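A shortened, self-contained run (a 7-line block and a 4-line block, separated by a blank line, standing in for the question's data; blocks.txt is a scratch name):

```shell
# c counts lines within the current block: a space joins lines within a
# group of 3, a newline starts a new group, and a blank record flushes rec.
printf '%s\n' AAA BBB CCC DDD EEE FFF GGG '' HHH III JJJ KKK > blocks.txt
awk '(NF==0){print rec ORS; rec="";c=0; next}
     {rec = rec (c ? (c%3==0 ? ORS : OFS) : "") $0; c++ }
     END {print rec}' blocks.txt
```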
This might work for you (GNU sed):
sed '/\S/{N;/\n\s*$/b;N;//b;s/\n/ /g}' file
If the current line is not empty, append the next line.
If the appended line is not empty, append the next line.
If that line is also not empty, replace the newlines by spaces.
In all other cases print the line(s) as is.
An alternative that is more programmatic:
sed ':a;N;s/\n/&/2;Ta;/^\s*$/M{P;D};s/\n/ /g' file

Join lines into one line using awk

I have a file with the following records
ABC
BCD
CDE
EFG
I would like to convert this into
'ABC','BCD','CDE','EFG'
I attempted to attack this problem using Awk in the following way:
awk '/START/{if (x)print x;x="";next}{x=(!x)?$0:x","$0;}END{print x;}'
but I obtained something other than what I expected:
ABC,BCD,CDE,EFG
Are there any suggestions on how we can achieve this?
Try the following:
awk -v s1="'" 'BEGIN{OFS=","} {val=val?val OFS s1 $0 s1:s1 $0 s1} END{print val}' Input_file
Output will be as follows.
'ABC','BCD','CDE','EFG'
With GNU awk for multi-char RS:
$ awk -v RS='\n$' -F'\n' -v OFS="','" -v q="'" '{$1=$1; print q $0 q}' file
'ABC','BCD','CDE','EFG'
There are many ways of achieving this:
with pipes:
sed "s/.*/'&'/" <file> | paste -sd,
awk '{print "\047"$0"\047"}' <file> | paste -sd,
remark: we do not make use of tr here as this would lead to an extra , at the end.
reading the full file into memory:
sed ':a;N;$!ba;s/\n/'"','"'/g;s/.*/'"'&'"'/g' <file> #POSIX
sed -z 's/^\|\n$/'"'"'/g;s/\n/'"','"'/g;' <file> #GNU
and the solution of @EdMorton
without reading the full file into memory:
awk '{printf (NR>1?",":"")"\047"$0"\047"}' <file>
and some random other attempts:
awk '(NR-1){s=s","}{s=s"\047"$0"\047"}END{print s}' <file>
awk 'BEGIN{printf s="\047";ORS=s","s}(NR>1){print t}{t=$0}END{ORS=s;print t}' <file>
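For instance, the streaming printf variant can be checked like this (the trailing echo just adds the final newline that the awk printf omits):

```shell
# The comma is printed *before* every record except the first, so there is
# no trailing separator to clean up.
printf '%s\n' ABC BCD CDE EFG > file
awk '{printf (NR>1?",":"")"\047"$0"\047"}' file; echo
```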
So what is going on with the OP's attempts?
Writing down the OP's awk line, we have
/START/{if (x)print x;x="";next}
{x=(!x)?$0:x","$0;}
END{print x;}
What does this do? Let us analyze step by step:
/START/{if (x)print x;x="";next}:: this reads: if the current record/line contains the string START, then do
if (x) print x:: if x is not an empty string, print the value of x
x="":: set x to an empty string
next:: skip to the next record/line
In this code block, the OP probably assumed that /START/ means do this at the very beginning. In awk, however, that is written as BEGIN, and since at the beginning all variables are empty strings or zero, the if statement here never fires. This block could be replaced by:
BEGIN{x=""}
But again, this is not needed and thus one can remove it:
{x=(!x)?$0:x","$0;}:: concatenate the strings with the delimiter. This is good, especially the use of the ternary operator. Sadly the delimiter is set to , and not ',', which in awk is best written as \047,\047. So the line should read:
{x=(!x)?$0:x"\047,\047"$0;}
This line can be written more compactly if you realize that x could be an empty string. For an empty string, x=$0 is equivalent to x=x $0, and all you want to do is add a separator, which may or may not be an empty string. So you can write this as
{x= x ((!x)?"":"\047,\047") $0}
or inverting the logic to get rid of some more characters:
{x=x(x?"\047,\047":"")$0}
one could even write
{x=x(x?"\047,\047":x)$0}
but this is not optimal, as it reads the value of x a second time. However, this form leads to the final optimization (per @EdMorton's comment):
{x=(x?x"\047,\047":"")$0}
This is better as it removes an extra concatenation operator.
END{print x}:: here the OP prints the result. This, however, misses the single quotes at the beginning and end of the string, so they should be added:
END{print "\047" x "\047"}
So the corrected version of the OP's code would read:
awk '{x=(x?x"\047,\047":"")$0}END{print "\047" x "\047"}'
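Which can be verified end to end:

```shell
printf '%s\n' ABC BCD CDE EFG > file
awk '{x=(x?x"\047,\047":"")$0}END{print "\047" x "\047"}' file
```

prints 'ABC','BCD','CDE','EFG'.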
Another awk + paste combination:
awk '{printf fmt,$1}' fmt="'%s'\n" file | paste -sd, -
'ABC','BCD','CDE','EFG'