With sed or awk, move line matching pattern to bottom of file

I have a similar problem. I need to move a line in /etc/sudoers to the end of the file.
The line I want to move:
#includedir /etc/sudoers.d
I have tried with a variable
#creates variable value
templine=$(cat /etc/sudoers | grep "#includedir /etc/sudoers.d")
#delete value
sed '/"${templine}"/d' /etc/sudoers
#write value to the bottom of the file
cat ${templine} >> /etc/sudoers
I'm not getting any errors, but not the result I am looking for either.
Any suggestions?
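For what it's worth, the attempt above fails for three separate reasons: the single quotes stop ${templine} from expanding inside the sed pattern, sed without -i only writes to stdout and never modifies the file, and cat expects a filename rather than text (echo or printf is needed to output a variable's contents). A corrected sketch of the same approach (a sketch only; test it on a copy, and prefer visudo when changing sudoers for real):
templine=$(grep '^#includedir /etc/sudoers.d' /etc/sudoers)   # capture the line (no need for cat | grep)
sed -i '\|^#includedir /etc/sudoers\.d|d' /etc/sudoers        # -i edits in place; pattern written out literally to avoid quoting pitfalls
printf '%s\n' "$templine" >> /etc/sudoers                     # append the saved text itself, not a file named after it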

With awk:
awk '$0=="#includedir /etc/sudoers.d"{lastline=$0;next}{print $0}END{print lastline}' /etc/sudoers
That says:
If the line ($0) is "#includedir /etc/sudoers.d", set the variable lastline to this line's value and skip to the next line (next).
If we are still here (the line did not match), print the line ({print $0}).
Once every line in the file has been processed (END), print whatever is in the lastline variable.
Example:
$ cat test.txt
hi
this
is
#includedir /etc/sudoers.d
a
test
$ awk '$0=="#includedir /etc/sudoers.d"{lastline=$0;next}{print $0}END{print lastline}' test.txt
hi
this
is
a
test
#includedir /etc/sudoers.d

You could do the whole thing with sed:
sed -e '/#includedir .etc.sudoers.d/ { h; $p; d; }' -e '$G' /etc/sudoers
The matching line is copied into the hold space (h) and deleted (d); on the last line ($), G appends the hold space, so the saved line comes out at the bottom. The $p covers the case where the matching line is already the last line.

This might work for you (GNU sed):
sed -n '/regexp/H;//!p;$x;$s/.//p' file
This removes line(s) containing a specified regexp and appends them to the end of the file.
To only move the first line that matches the regexp, use:
sed -n '/regexp/{h;$p;$b;:a;n;p;$!ba;x};p' file
This uses a loop to read/print the remainder of the file and then append the matched line.
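For example, if the pattern occurs twice, only the first occurrence should be moved to the end while the second stays in place (a quick constructed check, not part of the original answer):
$ printf 'a\nX\nb\nX\nc\n' | sed -n '/X/{h;$p;$b;:a;n;p;$!ba;x};p'
a
b
X
c
X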

If you have multiple entries which you want to move to the end of the file, you can do the following:
awk '/regex/{a[++c]=$0;next}1;END{for(i=1;i<=c;++i) print a[i]}' file
or
sed -n '/regex/!{p;ba};H;:a;${x;s/.//;p}' file
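Both one-liners behave the same way: non-matching lines keep their order and every match is collected at the bottom. For example (a made-up input, with regex replaced by move), here is the awk version:
$ printf 'keep1\nmove A\nkeep2\nmove B\n' | awk '/move/{a[++c]=$0;next}1;END{for(i=1;i<=c;++i) print a[i]}'
keep1
keep2
move A
move B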

Related

How to print only the lines which have a blank line before and after

How can I print only the lines which have blank line(s) before and after them?
I'm trying various awk and grep combinations but am somehow unable to get it.
tuv0657

tuv2330

tuv2133 Unable to get the ssh connection
tuv1988 Unable to get the ssh connection

tuv1049

tuv1683 Unable to get the ssh connection

tuv2101
Desired:
tuv0657
tuv1049
tuv2330
tuv2101
What I tried:
I tried the below but did not get the results:
$ awk '{if ($2=="") print $0}' file
$ grep -E --line-number --with-filename '^$'
With your shown samples, could you please try the following:
awk -v RS="" 'NF==1' Input_file
tuv0657
tuv2330
tuv1049
tuv2101
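For reference, RS="" switches awk into paragraph mode: records are separated by blank lines, so NF==1 keeps only the records made of a single field, i.e. a lone line with blank lines around it. A quick constructed check (made-up data, not yours):
$ printf 'aaa\n\nbbb\nccc ddd\n\neee\n' | awk -v RS="" 'NF==1'
aaa
eee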
Based on your example data, try the following awk solution:
awk '$1 !="" && $2 == "" { print }' file
Where the first space separated field is not blank and the second field is blank, print the line.
This might work for you (GNU sed):
sed -nE '/^\S+$/p' file
Print the current line if it only contains the first field.

Check if all multiple strings exist in one line

I have a file that has this info:
IRE_DRO_Fabric_A drogesx0112_IRE_DRO_A_ISIL03_091_871
IRE_DRO_Fabric_A drogesx0112_IRE_DRO_A_NETAPP_7890_2D5_1D8
IRE_DRO_Fabric_A drogesx0112_SAN_A
IRE_DRO_Fabric_B drogesx0112_IRE_DRO_B_ISIL03_081_873
IRE_DRO_Fabric_B drogesx0112_IRE_DRO_B_NETAPP_7890_9D3_2D8
IRE_DRO_Fabric_B drogesx0112_SAN_B
and wanted to check if multiple strings are found per line. I tried these commands but they're not working. Not sure if it's possible for the current text type?
grep 'drogesx0112.*ISIL03_091_871\|ISIL03_091_871.*drogesx0112' file << tried this but not working
grep 'drogesx0112' file | grep 'ISIL03_091_871' << tried this but not working
Looking for this output (I'm actually looking for string1 (drogesx0112) and string2 (ISIL03_091_871)):
>grep 'drogesx0112.*ISIL03_091_871\|ISIL03_091_871.*drogesx0112' file # command
>IRE_DRO_Fabric_A drogesx0112_IRE_DRO_A_ISIL03_091_871 < output
So basically I want to check whether drogesx0112 and ISIL03_091_871 are both present in a single line of the file.
Simple awk
$ awk ' /drogesx0112/ && /ISIL03_091_871/ ' gafm.txt
IRE_DRO_Fabric_A drogesx0112_IRE_DRO_A_ISIL03_091_871
$
Simple Perl
$ perl -ne ' print if /drogesx0112/ and /ISIL03_091_871/ ' gafm.txt
IRE_DRO_Fabric_A drogesx0112_IRE_DRO_A_ISIL03_091_871
$
If you are not looking for any particular order and simply want to check whether both strings are present in a single line, then try the following.
awk '/drogesx0112/ && /ISIL03_091_871/' Input_file
In case you are looking for the strings in a specific order:
If your line has drogesx0112 first and then ISIL03_091_871, try the following.
awk '/drogesx0112.*ISIL03_091_871/' Input_file
If your line has ISIL03_091_871 first and then drogesx0112, try the following.
awk '/ISIL03_091_871.*drogesx0112/' Input_file
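To see the difference, here is a small constructed comparison (made-up lines, not from your file): the ordered pattern only matches when drogesx0112 comes first, while the && form matches either order.
$ printf 'one drogesx0112 then ISIL03_091_871\ntwo ISIL03_091_871 then drogesx0112\n' | awk '/drogesx0112.*ISIL03_091_871/'
one drogesx0112 then ISIL03_091_871
$ printf 'one drogesx0112 then ISIL03_091_871\ntwo ISIL03_091_871 then drogesx0112\n' | awk '/drogesx0112/ && /ISIL03_091_871/'
one drogesx0112 then ISIL03_091_871
two ISIL03_091_871 then drogesx0112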
This might work for you (GNU sed):
sed '/drogesx0112/!d;/ISIL03_091_871/!d' file
Delete the current line if it does not contain drogesx0112, and likewise delete it if it does not contain ISIL03_091_871.
Another way:
sed -n '/drogesx0112/{/ISIL03_091_871/p}' file
A third:
sed '/drogesx0112/{/ISIL03_091_871/p};d' file
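For what it's worth, all three variants should print the same single line on the sample data (assuming it is saved as file); for example:
$ sed '/drogesx0112/!d;/ISIL03_091_871/!d' file
IRE_DRO_Fabric_A drogesx0112_IRE_DRO_A_ISIL03_091_871
The other two one-liners should give identical output.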

How can I print only lines that are immediately preceded by an empty line in a file using sed?

I have a text file with the following structure:
bla1
bla2

bla3
bla4

bla5
So you can see that some lines of text are preceded by an empty line.
I understand that sed has the concept of two buffers, a pattern space buffer and a hold space buffer, so I'm guessing these need to come in to play here, but I'm unclear how to specify them to accomplish what I need.
In my contrived example above, I'd expect to see the following lines outputted:
bla3
bla5
sed is for doing s/old/new on individual lines, that is all. Any time you start talking about buffers or doing anything related to multi-line comparisons, you're using the wrong tool.
You could do this with awk:
$ awk -v RS= -F'\n' 'NR>1{print $1}' file
bla3
bla5
but it would fail to print the first non-empty line if the first line(s) in the file were empty. So this may be what you want if you want lines of all space chars considered to be empty lines:
$ awk 'NF && !p{print} {p=NF}' file
bla3
bla5
and this otherwise:
$ awk '($0!="") && (p==""){print} {p=$0}' file
bla3
bla5
All of the above will work even if there are multiple empty lines preceding any given non-empty line.
To see the difference between the 3 approaches (which you won't see given the sample input in the question):
PS1> printf '\nfoo\n \nbar\n\netc\n' | cat -E
$
foo$
 $
bar$
$
etc$
PS1> printf '\nfoo\n \nbar\n\netc\n' | awk -v RS= -F'\n' 'NR>1{print $1}'
etc
PS1> printf '\nfoo\n \nbar\n\netc\n' | awk 'NF && !p{print} {p=NF}'
foo
bar
etc
PS1> printf '\nfoo\n \nbar\n\netc\n' | awk '($0!="") && (p==""){print} {p=$0}'
foo
etc
You can use the hold buffer easily to print the line before the blank like this:
sed -n -e '/^$/{x; p;}' -e h input
But I don't see an easy way to use that for your use case. Instead of using the hold buffer, you could do:
sed -n -e '/^$/ba' -e d -e :a -e n -e p input
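For illustration, on the question's sample (assuming it is saved as file), the hold-buffer version prints the line before each blank line, while the branching version prints the line after it:
$ sed -n -e '/^$/{x; p;}' -e h file
bla2
bla4
$ sed -n -e '/^$/ba' -e d -e :a -e n -e p file
bla3
bla5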
But I would do this with awk.
awk 'NR!=1{print $1}' RS= FS=\\n input-file
awk 'p;{p=/^$/}' file
The above command does the following for each line:
if p is 1, print the line;
if the line is empty, set p to 1.
if lines consisting of one or more spaces are also considered empty:
awk 'p;{p=!NF}' file
to print non-empty lines each coming right after an empty line, you can use this:
awk 'p*!(p=/^$/)' file
if p is 1 and this line is not empty (1*!(0) = 1*1 = 1), print this line;
otherwise (1*!(1) = 1*0 = 0, 0*anything = 0), don't print anything.
Note that this one may not work with all awks; a portable version would look like:
awk 'p*(/./);{p=/^$/}' file
if lines consisting of one or more spaces are also considered empty:
awk 'p*NF;{p=!NF}' file
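On the question's sample (saved as file), these should all produce the expected two lines; for instance:
$ awk 'p;{p=/^$/}' file
bla3
bla5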
If sed/awk is not mandatory, you can do it with grep:
grep -A 1 '^$' input.txt | grep -v -E '^$|--'
The first grep prints every empty line plus the one line after it (-A 1); the second grep drops the empty lines and the -- group separators that grep inserts between non-adjacent matches.
You can use sed to match a range of lines and do sub-matches inside the matches, like so:
# - use the "-n" option to omit printing of lines
# - match lines between a blank line (/^$/) and a non-blank one (/^./),
#   then print only the line that contains at least a character,
#   i.e., the non-blank line.
sed -ne '
  /^$/,/^./ {
    /^./ { p; }
  }' input.txt
Tested with GNU sed, with your data in file 'a':
$ sed -nE '/^$/{N;s/\n(.+)/\1/p}' a
bla3
bla5
Add the -i option (before -n) to do real in-place editing.

Why does awk not filter the first column in the first line of my files?

I've got a file with the following records:
depots/import/HDN1YYAA_15102018.txt;1;CAB001
depots/import/HDN1YYAA_20102018.txt;2;CLI001
depots/import/HDN1YYAA_20102018.txt;32;CLI001
depots/import/HDN1YYAA_25102018.txt;1;CAB001
depots/import/HDN1YYAA_50102018.txt;1;CAB001
depots/import/HDN1YYAA_65102018.txt;1;CAB001
depots/import/HDN1YYAA_80102018.txt;2;CLI001
depots/import/HDN1YYAA_93102018.txt;2;CLI001
When I execute the following awk one-liner:
cat lignes_en_erreur.txt | awk 'FS=";"{ if(NR==1){print $1}}END {}'
the output is not what I expected:
depots/import/HDN1YYAA_15102018.txt;1;CAB001
While I am supposed to get only the first column.
If I run it through all the records:
cat lignes_en_erreur.txt | awk 'FS=";"{ if(NR>0){print $1}}END {}'
then it starts filtering only from the second line onward, and I get the following output:
depots/import/HDN1YYAA_15102018.txt;1;CAB001
depots/import/HDN1YYAA_20102018.txt
depots/import/HDN1YYAA_20102018.txt
depots/import/HDN1YYAA_25102018.txt
depots/import/HDN1YYAA_50102018.txt
depots/import/HDN1YYAA_65102018.txt
depots/import/HDN1YYAA_80102018.txt
depots/import/HDN1YYAA_93102018.txt
Does anybody know why awk is skipping the first line only?
I tried deleting the first record but the behaviour is the same: it still skips the first line.
First, a note on why this happens: when FS is assigned inside the script body (as the pattern FS=";"), the assignment only runs while a record is being processed, after that record has already been split using the previous FS. So line 1 is still split on the default whitespace (making $1 the whole line), and the ; separator only takes effect from line 2 onwards. The program should therefore be
awk 'BEGIN{FS=";"}{ if(NR==1){print $1}}END {}' filename
You can omit the END block if it is empty:
awk 'BEGIN{FS=";"}{ if(NR==1){print $1}}' filename
You can use the -F command line argument to set the field delimiter:
awk -F';' '{if(NR==1){print $1}}' filename
Furthermore, awk programs consist of a sequence of CONDITION [{ACTIONS}] elements, so you can omit the if:
awk -F';' 'NR==1 {print $1}' filename
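A small constructed example of why the placement matters (not the question's data): when FS is assigned inside a rule, the current record has already been split with the previous separator, so only later records see the new one.
$ printf 'a;b\nc;d\n' | awk 'FS=";" {print $1}'
a;b
c
$ printf 'a;b\nc;d\n' | awk -F';' '{print $1}'
a
c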
You need to specify the delimiter either in a BEGIN block or as a command-line option:
awk 'BEGIN{FS=";"}{ if(NR==1){print $1}}'
awk -F ';' '{ if(NR==1){print $1}}'
cut might be better suited here. For all lines:
$ cut -d';' -f1 file
to skip the first line
$ sed 1d file | cut -d';' -f1
to get the first line only
$ sed 1q file | cut -d';' -f1
however at this point it's better to switch to awk
if you have a large file and are only interested in the first line, it's better to exit early:
$ awk -F';' '{print $1; exit}' file

Inserting characters into specified fields in large files

Here is my question. I have several hundred files that are too large to edit using the vi editor. I'm looking for a possible awk or sed command to manipulate my files. Bit of a rookie. I have a simplified file:
001|1|3|053412|16|1234|||
001|21|4|123618|15|88|||
The files were created with the fourth field in the wrong format.
The fourth field should be 05:34:12, reflecting HH:MM:SS. The time values are correct; I just need to insert the : in the appropriate places.
How do I insert the colons after the second and fourth characters of the fourth field? I cannot do it by character count since the values before and after the fourth field are variable.
gawk to the rescue!
$ awk -F\| -v OFS=\| '{$4=gensub(/(..)(..)(..)/,"\\1:\\2:\\3","g",$4)}1' file
001|1|3|05:34:12|16|1234|||
001|21|4|12:36:18|15|88|||
otherwise you can do the same with substr($4,1,2)":"...
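A possible completion of that substr() hint (my sketch, not karakfa's exact code); it avoids gensub() and should work in any POSIX awk:
$ awk -F'|' -v OFS='|' '{$4=substr($4,1,2)":"substr($4,3,2)":"substr($4,5,2)}1' file
001|1|3|05:34:12|16|1234|||
001|21|4|12:36:18|15|88|||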
With GNU awk for gensub() and inplace editing
awk -i inplace 'BEGIN{FS=OFS="|"} {$4=gensub(/(..)(..)/,"\\1:\\2:",1,$4)} 1' *
Similarly with GNU sed for EREs and inplace editing:
sed -i -E 's/(([^|]*\|){3}..)(..)/\1:\3:/' *
e.g.:
$ awk 'BEGIN{FS=OFS="|"} {$4=gensub(/(..)(..)/,"\\1:\\2:",1,$4)} 1' file
001|1|3|05:34:12|16|1234|||
001|21|4|12:36:18|15|88|||
$ sed -E 's/(([^|]*\|){3}..)(..)/\1:\3:/' file
001|1|3|05:34:12|16|1234|||
001|21|4|12:36:18|15|88|||
With sed:
$ sed -r 's/([^|]*\|[^|]*\|[^|]*\|)([0-9]{2})([0-9]{2})([0-9]{2})/\1\2:\3:\4/' file
001|1|3|05:34:12|16|1234|||
001|21|4|12:36:18|15|88|||
This might work for you (GNU sed):
sed -r 's/^(([^|]*\|){3})(..)(..)/\1\3:\4:/' file
Use back references to group the first three fields and the following two pairs. Then format the 4th field as required.
Try this awk!
awk -F"|" -v OFS="|" '{r=split($4,T,"");for(i=2;i<=r;i+=2){if(i!=r)T[i]=T[i]":"}tmp="";for(i=1;i<=r;i++){tmp=tmp T[i]}$4=tmp;}1' file
001|1|3|05:34:12|16|1234|||
001|21|4|12:36:18|15|88|||
Longer, with explanation:
BEGIN{
    FS=OFS="|";                          # Field separator and output field separator
}
{
    tmp="";
    r=split($4,time_field,"");           # Chunk field into pieces
    for(i=2;i<=r;i+=2)                   # Loop two by two
    {
        if(i!=r)
        {
            time_field[i]=time_field[i]":";   # Add ":"
        }
    }
    for(i=1;i<=r;i++)                    # Loop over again to rebuild
    {
        tmp=tmp time_field[i];
    }
    $4=tmp;                              # Rebuild field
    print
}
How you could use it in bash: save it as whatever.awk and run it over your list of files:
while IFS='' read -r file
do
    awk -f whatever.awk "$file" > "${file}.new"   # write one output file per input instead of overwriting a single out_file
done < list_of_files_to_edit.txt
If you want to edit the files in place, you can add the -i option to Kenavoz's sed command: sed -ri ...