Replacing values in a YAML file using sed / awk

I have the below YAML file
Test: '5.3.4.7'
Cloudmode: 'Azure'
I want to replace the value of Test, i.e. change '5.3.4.7' to '6.3.4.7'.
Below is what I have got. It replaces the value, but the result is not displayed on a separate line. I also don't want to hardcode the "5.3.4.7" value. Any advice, please?
sed -i -e \'s/5.3.4.7/\'${version}\'\\n/\' -e $\'s/cloudmode/\\\n cloudmode/g\' defaults.yaml

Could you please try the following tac + awk solution, written and tested with the shown samples in GNU awk. Since the OP mentioned that yq couldn't be used, I am adding this solution.
There is an awk variable named new_version which holds the new version needed in the output; you can later change the version number there as per your need without editing the main block of code.
tac Input_file |
awk -v s1="'" -v new_version="6.3.4.7" '
/Cloudmode.*Azure/{
found=1
print
next
}
found{
$NF=s1 new_version s1
found=""
}
1
' | tac
In case you are happy with the above and want to save the output into Input_file itself, then use the following.
tac Input_file |
awk -v s1="'" -v new_version="6.3.4.7" '
/Cloudmode.*Azure/{
found=1
print
next
}
found{
$NF=s1 new_version s1
found=""
}
1
' | tac > temp && mv temp Input_file
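As an additional sketch of my own (assuming GNU sed and that the key Test: starts its line), you can anchor the substitution on the key name, so the old version number is never hardcoded:

```shell
version="6.3.4.7"

# hypothetical sample file matching the question's defaults.yaml
printf "Test: '5.3.4.7'\nCloudmode: 'Azure'\n" > defaults.yaml

# replace whatever sits between the quotes after "Test: " with $version
sed -E "s/^(Test: ').*(')$/\1${version}\2/" defaults.yaml
```

Add -i once the output looks right, to edit defaults.yaml in place.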


add a line between matching pattern - unix

I want to insert "123" below the madguy-xyz- line in "module xyz".
There are multiple modules having similar lines, but I want to add it only in "module xyz".
module abc
njkenjkfvsfd
madguy-xyz-mafdvnskjfvn
enfvjkesn
endmodule
module xyz
njkenjkfvsfd
madguy-xyz-mafdvnskjfvn
enfvjkesn
endmodule
This is the code I tried, but it doesn't work:
sed -i "/module xyz/,/^endmodule/{/madguy-xyz-/a 123}" <file_name>
This is the error I got:
sed: -e expression #1, char 0: unmatched `{'
This might work for you (GNU sed):
sed '/module xyz/{:a;n;/madguy-xyz-/!ba;p;s/\S.*/123/}' file
For a line containing module xyz, continue printing lines until one containing madguy-xyz-.
Print this line too and then replace it with 123.
Another alternative solution:
sed '/module/h;G;/madguy-xyz.*\nmodule xyz/{P;s/\S.*/123/};P;d' file
Store any module line in the hold space.
Append the module line to each line.
If the first line contains madguy-xyz- and the second module xyz, print the first then substitute the second for 123.
Print the first line and delete the whole.
With your shown samples, please try the following.
awk '1; /^endmodule$/{found=""};/^module xyz$/{found=1} found && /^ +madguy-xyz-/{print "123"} ' Input_file
Once you are happy with the results of the above command, to save the output into Input_file itself, try the following:
awk '1;/^endmodule$/{found=""} /^module xyz$/{found=1} found && /^ +madguy-xyz-/{print "123"} ' Input_file > temp && mv temp Input_file
Explanation: Adding detailed explanation for above.
awk '                        ##Starting awk program from here.
1;                           ##Printing current line here.
/^endmodule$/{ found="" }    ##If line is endmodule, nullifying found here.
/^module xyz$/{              ##Checking condition if line contains module xyz then do following.
  found=1                    ##Setting found to 1 here.
}
found && /^ +madguy-xyz-/{   ##Checking if found is SET and line contains madguy-xyz- then do following.
  print "123"                ##Printing 123 here.
}
' Input_file                 ##Mentioning Input_file name here.
NOTE: In case your line contains exactly the value module xyz, then change the /module xyz/ condition above to $0=="module xyz".
With GNU sed I suggest:
sed -i -e "/module xyz/,/^endmodule/{/madguy-xyz-/a 123" -e "}" file
Using any POSIX awk in any shell on every Unix box, the following will work for the sunny day case in your question and all rainy day cases such as the ones I mentioned in my comment and more:
$ cat tst.awk
{ print }
$1 == "endmodule" {
inMod = 0
}
inMod && (index($1,old) == 1) {
sub(/[^[:space:]].*/,"")
print $0 new
}
($1 == "module") && ($2 == mod) {
inMod = 1
}
$ awk -v mod='xyz' -v old='madguy-xyz-' -v new='123' -f tst.awk file
module abc
njkenjkfvsfd
madguy-xyz-mafdvnskjfvn
enfvjkesn
endmodule
module xyz
njkenjkfvsfd
madguy-xyz-mafdvnskjfvn
123
enfvjkesn
endmodule
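For completeness, here is a portable sketch of the OP's original approach (my own variant, not taken from the answers above): in the POSIX form of the a command the appended text goes on its own line, which also sidesteps the unmatched { error:

```shell
# recreate the question's sample input
cat > file <<'EOF'
module abc
njkenjkfvsfd
madguy-xyz-mafdvnskjfvn
enfvjkesn
endmodule
module xyz
njkenjkfvsfd
madguy-xyz-mafdvnskjfvn
enfvjkesn
endmodule
EOF

# append 123 after madguy-xyz- lines, but only between "module xyz" and "endmodule"
sed '/module xyz/,/^endmodule/{/madguy-xyz-/a\
123
}' file
```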

How can I extract using sed or awk between newlines after a specific pattern?

I'd like to check whether there are other alternatives where I can print, using other bash commands, the range of IPs under #Hiko, other than the sed, tail and head pipeline below, which I figured out to get what I needed from my hosts file.
I'm just curious and keen to learn more about bash; I hope I can gain more knowledge from the community.
:D
$ sed -n '/#Hiko/,/#Pico/p' /etc/hosts | tail -n +3 | head -n -2
/etc/hosts
#Tito

192.168.1.21
192.168.1.119

#Hiko

192.168.1.243
192.168.1.125
192.168.1.94
192.168.1.24
192.168.1.242

#Pico

192.168.1.23
192.168.1.93
192.168.1.121
1st solution: With your shown samples, could you please try the following. Written and tested in GNU awk.
awk -v RS= '/#Pico/{exit} /#Hiko/{found=1;next} found' Input_file
Explanation:
awk -v RS= ' ##Starting awk program from here.
/#Pico/{ ##Checking condition if line has #Pico then do following.
exit ##exiting from program.
}
/#Hiko/{ ##Checking condition if line has #Hiko is present in line.
found=1 ##Setting found to 1 here.
next ##next will skip all further statements from here.
}
found ##Checking condition if found is SET then print the line.
' Input_file ##mentioning Input_file name here.
2nd solution: Without using paragraph mode (RS=), try the following.
awk '/#Pico/{exit} /#Hiko/{found=1;next} NF && found' Input_file
3rd solution: You could look for the record #Hiko and then print its next record and exit, as per the shown samples.
awk -v RS= '/#Hiko/{found=1;next} found{print;exit}' Input_file
NOTE: All of the above solutions check whether the string #Hiko or #Pico is present anywhere in the record; in case you want to match the exact string, change the /#Hiko/ and /#Pico/ parts to /^#Hiko$/ and /^#Pico$/ respectively.
With sed (checked with GNU sed, syntax might differ for other implementations)
$ sed -n '/#Hiko/{n; :a n; /^$/q; p; ba}' /etc/hosts
192.168.1.243
192.168.1.125
192.168.1.94
192.168.1.24
192.168.1.242
-n turn off automatic printing of pattern space
/#Hiko/ if line contains #Hiko
n get next line (assuming there's always an empty line)
:a label a
n get next line (using n will overwrite any previous content in the pattern space, so only single line content is present in this case)
/^$/q if the current line is empty, quit
p print the current line
ba branch to label a
You can use
awk -v RS= '/^#Hiko$/{getline;print;exit}' file
awk -v RS= '$0 == "#Hiko"{getline;print;exit}' file
Which means:
RS= - make awk read the file paragraph by paragraph
/^#Hiko$/ or $0 == "#Hiko" - finds a paragraph that is equal to #Hiko
{getline;print;exit} - gets the next paragraph, prints it and exits.
You may use:
awk -v RS= 'p && NR == p + 1; $1 == "#Hiko" {p = NR}' /etc/hosts
192.168.1.243
192.168.1.125
192.168.1.94
192.168.1.24
192.168.1.242
This might work for you (GNU sed):
sed -n '/^#/h;G;/^[0-9].*\n#Hiko/P' file
Copy the header to the hold buffer.
Append the hold buffer to each line.
If the line begins with a digit and contains the required header, print the first line in the pattern space.
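One more line-oriented sketch of my own (assuming every section header starts with #): set a flag at #Hiko, skip blank lines, and stop at the next header:

```shell
# hypothetical hosts extract with blank-line separated sections
cat > hosts <<'EOF'
#Hiko

192.168.1.243
192.168.1.125

#Pico

192.168.1.23
EOF

awk 'f && /^#/{exit} /^#Hiko/{f=1; next} f && NF' hosts
```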

awk command to read a key value pair from a file

I have a file input.txt which stores information in KEY:VALUE form. I'm trying to read GOOGLE_URL from this input.txt, but it prints only https because the separator is :. What is the problem with my grep command, and how should I print the entire URL?
SCRIPT
$> cat script.sh
#!/bin/bash
URL=`grep -e '\bGOOGLE_URL\b' input.txt | awk -F: '{print $2}'`
printf " $URL \n"
INPUT_FILE
$> cat input.txt
GOOGLE_URL:https://www.google.com/
OUTPUT
https
DESIRED_OUTPUT
https://www.google.com/
Since there are multiple : characters in your input, getting $2 will not work in awk because it will just give you the 2nd field. You actually need an equivalent of cut -d: -f2-, but you also need to check the key name that comes before the first :.
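That cut equivalent can also be used directly; a minimal sketch, assuming the key only ever appears at the start of a line:

```shell
printf 'GOOGLE_URL:https://www.google.com/\n' > input.txt

# -f2- keeps everything from the second field onward, rejoined with ":"
grep '^GOOGLE_URL:' input.txt | cut -d: -f2-
# → https://www.google.com/
```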
This awk should work for you:
awk -F: '$1 == "GOOGLE_URL" {sub(/^[^:]+:/, ""); print}' input.txt
https://www.google.com/
Or this non-regex awk approach that allows you to pass key name from command line:
awk -F: -v k='GOOGLE_URL' '$1==k{print substr($0, length(k FS)+1)}' input.txt
Or using gnu-grep:
grep -oP '^GOOGLE_URL:\K.+' input.txt
https://www.google.com/
Could you please try the following, written and tested with the shown samples in GNU awk. It looks for the string GOOGLE_URL and catches the following http or https value of the URL; in case you need only https, then change http[s]? to https in the solution below.
awk '/^GOOGLE_URL:/{match($0,/http[s]?:\/\/.*/);print substr($0,RSTART,RLENGTH)}' Input_file
Explanation: Adding detailed explanation for above.
awk ' ##Starting awk program from here.
/^GOOGLE_URL:/{ ##Checking condition if line starts from GOOGLE_URL: then do following.
match($0,/http[s]?:\/\/.*/) ##Using match function to match http[s](s optional) : till last of line here.
print substr($0,RSTART,RLENGTH) ##Printing sub string of matched value from above function.
}
' Input_file ##Mentioning Input_file name here.
2nd solution: In case you need everything coming after the first :, then try the following.
awk '/^GOOGLE_URL:/{match($0,/:.*/);print substr($0,RSTART+1,RLENGTH-1)}' Input_file
Take your pick:
$ sed -n 's/^GOOGLE_URL://p' file
https://www.google.com/
$ awk 'sub(/^GOOGLE_URL:/,"")' file
https://www.google.com/
The above will work using any sed or awk in any shell on every UNIX box.
I would use GNU AWK following way for that task:
Let file.txt content be:
EXAMPLE_URL:http://www.example.com/
GOOGLE_URL:https://www.google.com/
KEY:GOOGLE_URL:
Then:
awk 'BEGIN{FS="^GOOGLE_URL:"}{if(NF==2){print $2}}' file.txt
will output:
https://www.google.com/
Explanation: In GNU AWK, FS may be a pattern, so I set it to GOOGLE_URL: anchored (^) to the beginning of the line, so that a GOOGLE_URL: in the middle or at the end will not act as a separator (consider the 3rd line of input). With this FS there may be either 1 or 2 fields in each line; the latter is the case only if the line starts with GOOGLE_URL:, so I check the number of fields (NF) and, in the second case, print the 2nd field ($2), as the first field is then empty.
(tested in gawk 4.2.1)
Yet another awk alternative:
gawk -F'(^[^:]*:)' '/^GOOGLE_URL:/{ print $2 }' infile
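Since the OP's script is already bash, a shell-only sketch (my own addition) is to let read split on the first : — the last variable receives the remainder, colons included:

```shell
printf 'EXAMPLE_URL:http://www.example.com/\nGOOGLE_URL:https://www.google.com/\n' > input.txt

while IFS=: read -r key value; do
  # "value" holds everything after the first colon
  [ "$key" = "GOOGLE_URL" ] && printf '%s\n' "$value"
done < input.txt
# → https://www.google.com/
```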

Delete third-to-last line of file using sed or awk

I have several text files with different numbers of rows, and I have to delete the third-to-last line in all of them. Here is a sample file:
bear
horse
window
potato
berry
cup
Expected result for this file:
bear
horse
window
berry
cup
Can we delete the third-to-last line of a file:
a. not based on any string/pattern,
b. based only on the condition that it has to be the third-to-last line?
My problem is how to index the file starting from the last line. I have tried this, from another SO question, for the second-to-last line:
> sed -i 'N;$!P;D' output1.txt
With a tac + awk solution, could you please try the following. Just set the awk variable line to the line (counted from the bottom) that you want to skip.
tac Input_file | awk -v line="3" 'line==FNR{next} 1' | tac
Explanation: tac reads the Input_file in reverse (from the bottom line to the first line) and passes its output to awk, which checks whether FNR equals line (the line we want to skip); if so, that line is not printed. 1 prints all other lines.
2nd solution: With an awk + wc solution, kindly try the following.
awk -v lines="$(wc -l < Input_file)" -v skipLine="3" 'FNR!=(lines-skipLine+1)' Input_file
Explanation: The awk variable lines holds the total number of lines present in Input_file, and skipLine holds the line number (counted from the bottom) that we want to skip. The main program then prints every line whose number is NOT equal to lines-skipLine+1.
3rd solution: Adding solution as per Ed sir's comment here.
awk -v line=3 '{a[NR]=$0} END{for (i=1;i<=NR;i++) if (i != (NR-line+1)) print a[i]}' Input_file
Explanation: Adding detailed explanation for 3rd solution.
awk -v line=3 '            ##Starting awk program from here, setting awk variable line to 3 (the line which OP wants to skip from the bottom).
{
  a[NR]=$0                 ##Creating array a with index of NR and value of current line.
}
END{                       ##Starting END block of this program from here.
  for(i=1;i<=NR;i++){      ##Starting for loop till value of NR here.
    if(i != (NR-line+1)){  ##Checking condition if i is NOT equal to NR-line+1 (the third-to-last line) then do following.
      print a[i]           ##Printing a with index i here.
    }
  }
}
' Input_file               ##Mentioning Input_file name here.
With ed
ed -s ip.txt <<< $'$-2d\nw'
# thanks Shawn for a more portable solution
printf '%s\n' '$-2d' w | ed -s ip.txt
This will do in-place editing. $ refers to the last line, and you can specify a negative relative offset, so $-2 refers to the third-to-last line. The w command then writes the changes.
See ed: Line addressing for more details.
This might work for you (GNU sed):
sed '1N;N;$!P;D' file
Open a window of 3 lines in the file then print/delete the first line of the window until the end of the file.
At the end of the file, do not print the first line in the window i.e. the 3rd line from the end of the file. Instead, delete it, and repeat the sed cycle. This will try to append a line after the end of file, which will cause sed to bail out, printing the remaining lines in the window.
A generic solution for n lines back (where n is 2 or more lines from the end of the file; here n=3) is:
sed ':a;N;s/[^\n]*/&/3;Ta;$!P;D' file
Of course you could use:
tac file | sed 3d | tac
But then you would be reading the file 3 times.
To delete the 3rd-to-last line of a file, you can use head and tail:
{ head -n -3 file; tail -2 file; }
In case of a large input file, when performance matters, this is very fast because it doesn't read and write line by line. Also, do not modify the semicolons or the spaces next to the braces; see command grouping.
Or use sed with tac:
tac file | sed '3d' | tac
Or use awk with tac:
tac file | awk 'NR!=3' | tac
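Yet another sketch of my own: a two-pass awk that needs neither tac nor a separate wc — the first pass only counts the lines, the second skips line n-2 (the third-to-last):

```shell
# recreate the question's sample file
printf 'bear\nhorse\nwindow\npotato\nberry\ncup\n' > file

# pass 1 (NR==FNR) stores the line count in n; pass 2 prints all but line n-2
awk 'NR==FNR{n=FNR; next} FNR != n-2' file file
```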

Encapsulate output with ' '

Current Output:
'31123456787
31123456788
31123456789
Required Output:
'31123456787'
'31123456788'
'31123456789'
Current code being used:
variable=`awk -F, '{OFS=",";print $1,$2}' /app/isc/Test/archive/data/Final_Account_Status_Check.csv | tr -d ',' `
echo "'$variable'" >> /app/isc/Test/archive/data/Final_Account_Status_Check.txt
EDIT (improving OP's attempt): After seeing the OP's code, here is the same task done in a single awk command by improving the OP's code.
awk -v s1="'" 'BEGIN{FS=","} {print s1 $1,$2 s1}' /app/isc/Test/archive/data/Final_Account_Status_Check.csv > /app/isc/Test/archive/data/Final_Account_Status_Check.txt
Improvements made to the OP's code:
Removed the OFS="," and tr parts from the OP's first command, since the OP was adding commas with OFS only to remove them again with tr, so it doesn't make sense to have them.
Declare an awk variable s1 whose value is ' which we will add to output later.
Added s1 before $1 and after $2 to get output in 'bla bla bla' form as per OP's requirement.
Code as per the OP's question:
Could you please try the following (this solution assumes that we need to read an Input_file to get the OP's expected output).
awk -v s1="'" '{print s1 $0 s1}' Input_file
Since you have not shown the complete requirement or samples of input and expected output, judging by your attempts it seems you may be printing a variable's value; in that case, pass it to awk like this:
echo "$your_variable" | awk -v s1="'" '{print s1 $0 s1}'
So you want to add ' to the beginning and end of each line.
echo "$variable" | sed "s/^/'/; s/$/'/"
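Equivalently, a single substitution using & (sed's placeholder for the whole match) wraps each line in quotes; a sketch assuming one value per line:

```shell
printf '31123456787\n31123456788\n31123456789\n' | sed "s/.*/'&'/"
# → '31123456787'
# → '31123456788'
# → '31123456789'
```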