I want to append a string variable to a file.
For example:
WINDOW=<WINDOW winName="fp_w_RetrocessionTrigger" winTitle="Retrocession Trigger" comp="FPGUI" funcId="14316" domainId="bancs" preLoad="true">
This is my string variable (WINDOW). I want to insert the contents of this variable into another file using a sed or awk command.
I have tried a command like sed -i ''$n'i "'$WINDOW'"' test.xml, but it only inserts a space there. Please help me.
Your question is not very clear; however, if you want to add text at the beginning and/or at the end of each line of a file, you can try the below:
$ cat 1.txt
1
2
3
4
5
Add text at the beginning of each line:
$ sed -e 's/^/start_of_the_line/' -i 1.txt
Add text at the end of each line:
$ sed -e 's/$/end_of_the_line/' -i 1.txt
Output
$ cat 1.txt
start_of_the_line1end_of_the_line
start_of_the_line2end_of_the_line
start_of_the_line3end_of_the_line
start_of_the_line4end_of_the_line
start_of_the_line5end_of_the_line
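As for the original question of inserting the contents of $WINDOW at line $n of another file: the usual culprit is quoting, since with the quotes arranged as in the question sed ends up receiving an empty insert text. A minimal sketch, assuming GNU sed, that $n holds a valid line number (5 here is just an example), and that $WINDOW contains no backslashes or embedded newlines:
$ WINDOW='<WINDOW winName="fp_w_RetrocessionTrigger" winTitle="Retrocession Trigger" comp="FPGUI" funcId="14316" domainId="bancs" preLoad="true">'
$ n=5
$ sed -i "${n}i ${WINDOW}" test.xml    # GNU sed: "Ni text" inserts text before line N
An awk alternative that avoids sed's escaping rules (same assumptions) would be:
$ awk -v n="$n" -v text="$WINDOW" 'NR==n{print text} 1' test.xml > tmp && mv tmp test.xml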
Related
If I use cp inside a bash script, the copied file ends up with weird characters around the destination filename.
The destination name comes from the result of an operation; it's put inside a variable, and echoing the variable shows normal output.
The objective is to name a file after a string.
#!/bin/bash
newname=`cat outputfile | grep 'hostname ' | sed 's/hostname //g'`
newecho=`echo $newname`
echo $newecho
cp outputfile "$newecho"
If I launch the script the echo looks ok
$ ./rename.sh
mo-swc-56001
However the file is named differently
~$ ls
'mo-swc-56001'$'\r'
As you can see, the file name contains extra characters which the echo does not show.
Edit: the file's line endings look like this:
# file outputfile
outputfile: ASCII text, with CRLF, CR line terminators
I have tried every possible way to get rid of the ^M character, but this is an example of the hundreds of attempts:
# cat outputfile | grep 'hostname ' | sed 's/hostname //g' | cat -v
mo-swc-56001^M
# cat outputfile | grep 'hostname ' | sed 's/hostname //g' | cat -v | sed 's/\r//g' | cat -v
mo-swc-56001^M
This newline will stay there. Any ideas?
Edit: crazy, the only way is to perform a dos2unix on the output...
Looks like your outputfile has \r characters in it, so you could add logic to remove them first and then give it a try:
#!/bin/bash
## Remove the control-M (\r) characters from outputfile first, using tr.
tr -d '\r' < outputfile > temp && mv temp outputfile
newname=$(grep 'hostname ' outputfile | sed 's/hostname //g')
newecho=`echo $newname`
echo $newecho
cp outputfile "$newecho"
The only way was to use dos2unix
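(For reference, if the dos2unix utility is installed, that normalization is simply:
$ dos2unix outputfile
which rewrites the file in place with Unix line endings.)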
I have a similar problem. I need to move a line in /etc/sudoers to the end of the file.
The line I am wanting to move:
#includedir /etc/sudoers.d
I have tried with a variable
#creates variable value
templine=$(cat /etc/sudoers | grep "#includedir /etc/sudoers.d")
#delete value
sed '/"${templine}"/d' /etc/sudoers
#write value to the bottom of the file
cat ${templine} >> /etc/sudoers
Not getting any errors nor the result I am looking for.
Any suggestions?
With awk:
awk '$0=="#includedir /etc/sudoers.d"{lastline=$0;next}{print $0}END{print lastline}' /etc/sudoers
That says:
If the line ($0) is "#includedir /etc/sudoers.d", set the variable lastline to this line's value ($0) and skip to the next line (next).
If we are still here, print the line ({print $0}).
Once every line in the file has been processed, print whatever is in the lastline variable (the END block).
Example:
$ cat test.txt
hi
this
is
#includedir /etc/sudoers.d
a
test
$ awk '$0=="#includedir /etc/sudoers.d"{lastline=$0;next}{print $0}END{print lastline}' test.txt
hi
this
is
a
test
#includedir /etc/sudoers.d
You could do the whole thing with sed:
sed -e '/#includedir .etc.sudoers.d/ { h; $p; d; }' -e '$G' /etc/sudoers
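A commented breakdown of the same command (the dots in the address simply match any character, which avoids escaping the slashes in the path):
sed -e '/#includedir .etc.sudoers.d/ { h; $p; d; }' -e '$G' /etc/sudoers
# h  : copy the matched line into the hold space
# $p : if the matched line happens to be the last line, print it so it is not lost
# d  : delete it from normal output and start the next cycle
# $G : on the last line, append the held line after it
Note that if the pattern never matches, the final $G still appends the (empty) hold space, which adds a blank line at the end of the output.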
This might work for you (GNU sed):
sed -n '/regexp/H;//!p;$x;$s/.//p' file
This removes line(s) containing a specified regexp and appends them to the end of the file.
To only move the first line that matches the regexp, use:
sed -n '/regexp/{h;$p;$b;:a;n;p;$!ba;x};p' file
This uses a loop to read/print the remainder of the file and then append the matched line.
If you have multiple entries which you want to move to the end of the file, you can do the following:
awk '/regex/{a[++c]=$0;next}1;END{for(i=1;i<=c;++i) print a[i]}' file
or
sed -n '/regex/!{p;ba};H;:a;${x;s/.//;p}' file
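The awk one-liner above, spelled out with comments (same logic, just multi-line for readability):
awk '
  /regex/ { a[++c] = $0; next }                      # buffer matching lines, in order
          { print }                                  # print every other line as it comes (the bare 1 in the one-liner)
  END     { for (i = 1; i <= c; ++i) print a[i] }    # flush the buffered lines at the end
' file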
I have a csv file in which every other line is blank. I have tried everything; nothing removes the blank lines. What should make it easier is that the digits 44 appear in each valid line. Things I have tried:
grep -ir 44 file.csv
sed '/^$/d' <file.csv
cat -A file.csv
sed 's/^ *//; s/ *$//; /^$/d' <file.csv
egrep -v "^$" file.csv
awk 'NF' file.csv
grep '\S' file.csv
sed 's/^ *//; s/ *$//; /^$/d; /^\s*$/d' <file.csv
cat file.csv | tr -s \n
I decided I was imagining the blank lines, but import the file into Google Sheets and there they are, still! I'm starting to question my sanity! Can anyone help?
sed -n -i '/44/p' file
-n means skip printing by default
-i means edit in place (overwrite the same file)
/44/p prints only the lines where '44' is present
If you can't rely on '44' being present in every valid line:
sed -i '/^\s*$/d' file
\s matches whitespace, ^ is start of line, $ is end of line, and d deletes the line.
Use the -i option to replace the original file with the edited one.
sed -i '/^[ \t]*$/d' file.csv
Alternatively, output to another file and rename it, which does exactly what -i does.
sed '/^[[:blank:]]*$/d' file.csv > file.csv.out && mv file.csv.out file.csv
Given:
$ cat bl.txt
Line 1 (next line has a tab)
Line 2 (next has several space)
Line 3
You can remove blank lines with Perl:
$ perl -lne 'print unless /^\s*$/' bl.txt
Line 1 (next line has a tab)
Line 2 (next has several space)
Line 3
awk:
$ awk 'NF>0' bl.txt
Line 1 (next line has a tab)
Line 2 (next has several space)
Line 3
sed + tr:
$ cat bl.txt | tr '\t' ' ' | sed '/^ *$/d'
Line 1 (next line has a tab)
Line 2 (next has several space)
Line 3
Just sed:
$ sed '/^[[:space:]]*$/d' bl.txt
Line 1 (next line has a tab)
Line 2 (next has several space)
Line 3
Aside from the fact that your commands do not show that you capture their output in a new file to be used in place of the original, there's nothing wrong with them, EXCEPT that:
cat file.csv | tr -s \n
should be:
cat file.csv | tr -s '\n' # more efficient alternative: tr -s '\n' < file.csv
Otherwise, the shell eats the \ and all that tr sees is n.
Note, however, that the above eliminates only truly empty lines, whereas some of your other commands also eliminate blank lines (empty or all-whitespace).
Also, the -i (case-insensitive matching) in grep -ir 44 file.csv is pointless, and while using -r (recursive search) will not change the fact that only file.csv is searched, it will prepend the filename followed by : to each matching line.
If you have indeed captured the output in a new file and that file truly still has blank lines, the cat -A (cat -et on BSD-like platforms) you already mention in your question should show you if any unusual characters are present in the file, in the form of ^<char> sequences, such as ^M for \r chars.
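If ^M (\r) does show up, one way to deal with it, as a sketch, is to strip the carriage returns and any whitespace-only lines in a single pass (assuming GNU sed, for -i and \r):
sed -i -e 's/\r$//' -e '/^[[:space:]]*$/d' file.csv    # drop trailing CRs, then delete empty/whitespace-only lines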
If you like awk, this should do:
awk '/44/' file
It will only print lines that contain 44.
I have following variables:
STR1="CD45RA_naive"
STR2="CD45RO_mem"
STR3="CD127_Treg"
STR4="IL17_Th_stim_MACS"
STR5="IL17_Th17_stim"
names=($STR1 $STR2 $STR3 $STR4 $STR5)
and files with the same names as the variables, each with a ".bed" extension.
In each file I need to add the following header:
track name='enhancer region' description='${names[$i]}'
My idea was the following:
for ((i=0; i<${#names[@]}; i++))
do
echo "output/annotation/"${names[$i]}".bed"
sed -i '' "1 i \track name='enhancer region' description='"${names[$i]}"'" "output/annotation/"${names[$i]}".bed"
done
However I get an error:
sed: 1: "1 i \track name='enhanc ...": extra characters after \ at the end of i command
What is the proper way to add the corresponding header to each file (gawk, sed)?
You typically need the line that is inserted for the i command to be on its own line. In other words, you need to add a newline after the backslash. For example:
$ cat input
line 1
$ sed -e '1i\
foo' input
foo
line 1
$ sed -e '1i\ foo' input
sed: 1: "1i\ foo
": extra characters after \ at the end of i command
I did a search and found how to replace each occurrence of a string in files. In addition to that, I want to add one line to the file, but only at the first occurrence of the string.
I know this
grep -rl 'windows' ./ | xargs sed -i 's/windows/linux/g'
will replace each occurrence of the string. So how do I add a line to that file at the first match of the string? Does anyone have an idea how to do that? I appreciate your time.
Edited:
Example: replace xxx with TTT in the files, and add a line at the start of the file for the first match only.
Input: file1, file2.
file1
abc xxx pp
xxxy rrr
aaaaaaaaaaaddddd
file2
aaaaaaaaaaaddddd
Output
file1
#ADD LINE HERE FOR FIRST MATCH DONT ADD FOR REST OF MATCHES
abc TTT pp
TTTy rrr
aaaaaaaaaaaddddd
file2
aaaaaaaaaaaddddd
Cribbing from the answers to this question.
Something like this would seem to work:
sed -e '0,/windows/{s/windows/linux/; p; T e; a \new line
;:e;d}; s/windows/linux/g'
From start of the file to the first match of /windows/ do:
replace windows with linux
print the line
if s/windows/linux/ did not replace anything jump to label e
add the line new line
create label e
delete the current pattern space, read the next line and start processing again
Alternatively:
awk '{s=$0; gsub(/windows/, "linux")} 7; (s ~ /windows/) && !w {w=1; print "new line"}' file
save the line in s
replace windows with linux
print the line (7 is true and any true pattern runs the default action of {print})
if the original line contained windows and w is false (variables are empty strings by default and empty strings are false-y in awk)
set w to 1 (truth-y value)
add the new line
If I understand you correctly, all you need is:
find . -type f -print |
while IFS= read -r file; do
awk 'gsub(/windows/,"unix"){if (!f) $0 = $0 ORS "an added line"; f=1} 1' "$file" > tmp &&
mv tmp "$file"
done
Note that the above, like sed and grep, works with regular expressions, not strings. Using literal strings would require index() and substr() in awk (see the sketch below), is not possible with sed, and with grep requires an extra flag.
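For completeness, a sketch of what a literal-string replacement looks like in awk with index() and substr() (the strings windows/unix are the ones used above; file is a placeholder):
awk -v old="windows" -v new="unix" '
{
    line = $0
    out  = ""
    # replace every literal occurrence of old; index() does no regex matching
    while ((i = index(line, old)) > 0) {
        out  = out substr(line, 1, i - 1) new
        line = substr(line, i + length(old))
    }
    print out line
}' file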
To add a leading line to the file if a change is made, using GNU awk for multi-char RS (and we may as well do sed-like in-place editing since we're using gawk):
find . -type f -print |
while IFS= read -r file; do
gawk -i inplace -v RS='^$' -v ORS= 'gsub(/windows/,"unix"){print "an added line"} 1' "$file"
done