I'm working on a shell script and I need to change some strings on different lines of a file inside a while read loop. The structure needs to be like this, because String_Search and String_Result will be calculated for each line.
while read line
do
varA="String_Search"
resA="String_Result"
line=`echo $line | sed -e "s/$varA/$resA"`
echo $line >> outputFile.txt
done < "inputFile.txt"
The script doesn't work and it shows me this error message:
sed: -e expression #1, char 31: unterminated `s' command
Can anyone help me?
Thanks to All
You need to terminate the substitution pattern with a slash /:
line=`echo $line | sed -e "s/$varA/$resA/"`
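For completeness, a minimal sketch of the corrected loop (the quoting, read -r, and $(...) are assumed cleanups, not part of the original question):

while read -r line
do
    varA="String_Search"
    resA="String_Result"
    line=$(echo "$line" | sed -e "s/$varA/$resA/")   # note the terminating slash
    echo "$line" >> outputFile.txt
done < "inputFile.txt"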
Related
I am trying to append to the first line only of a file, using a sed command that includes a variable.
This is the command that I am trying:
data=foo
sed -i "1 s/$/$data/" myfile.csv
This is the result I am getting:
sed: -e expression #1, char 8: unknown option to `s'
(I also would like to add a comma along with the data since it is a csv..)
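A likely cause (not stated in the question) is that the real value of $data contains a slash, which terminates the s/// expression early. Assuming that, one sketch of a fix is to use a different delimiter and add the comma at the same time:

data=foo
sed -i "1 s|$|,$data|" myfile.csv   # '|' as the delimiter avoids clashing with any '/' in $data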
I am new to shell scripting. I am sourcing a file, which was created in Windows and has carriage returns, using the source command. After sourcing, when I append some characters to the variable, the output always wraps back to the start of the line.
test.dat (which has carriage return at end):
testVar=value123
testScript.sh (sources above file):
source test.dat
echo $testVar got it
The output I get is
got it23
How can I remove the '\r' from the variable?
Yet another solution uses tr:
echo $testVar | tr -d '\r'
cat myscript | tr -d '\r'
The option -d stands for delete.
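To actually update the variable with the cleaned value (an assumed usage, not shown above):

testVar=$(echo "$testVar" | tr -d '\r')
echo "$testVar got it"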
You can use sed as follows:
MY_NEW_VAR=$(echo $testVar | sed -e 's/\r//g')
echo ${MY_NEW_VAR} got it
By the way, try running dos2unix on your data file.
Because the file you source ends lines with carriage returns, the contents of $testVar are likely to look like this:
$ printf '%q\n' "$testVar"
$'value123\r'
(The first line's $ is the shell prompt; the second line's $ is from the %q formatting string, indicating $'' quoting.)
To get rid of the carriage return, you can use shell parameter expansion and ANSI-C quoting (requires Bash):
testVar=${testVar//$'\r'}
Which should result in
$ printf '%q\n' "$testVar"
value123
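Applied to the original testScript.sh, a sketch might look like this (Bash assumed):

source test.dat
testVar=${testVar//$'\r'}   # strip any carriage returns picked up from the Windows-edited file
echo "$testVar got it"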
Use this command on your script file after copying it to Linux/Unix:
perl -pi -e 's/\r//' scriptfilename
Pipe to sed -e 's/[\r\n]//g' to remove both Carriage Returns (\r) and Line Feeds (\n) from each text line.
For a pure shell solution without calling an external program:
CR=$'\r' # define a variable holding a carriage return
testVar=${testVar%$CR} # removes a trailing CR from the string
I have the following variables:
STR1="CD45RA_naive"
STR2="CD45RO_mem"
STR3="CD127_Treg"
STR4="IL17_Th_stim_MACS"
STR5="IL17_Th17_stim"
names=($STR1 $STR2 $STR3 $STR4 $STR5)
and files with the same names as the variables, with a ".bed" extension.
In each file I need to add the following header:
track name='enhancer region' description='${names[$i]}'
My idea was the following:
for ((i=0; i<${#names[@]}; i++))
do
echo "output/annotation/"${names[$i]}".bed"
sed -i '' "1 i \track name='enhancer region' description='"${names[$i]}"'" "output/annotation/"${names[$i]}".bed"
done
However I get an error:
sed: 1: "1 i \track name='enhanc ...": extra characters after \ at the end of i command
What is the proper way to add a corresponding header to each file (gawk, sed)?
You typically need the line that is inserted for the i command to be on its own line. In other words, you need to add a newline after the backslash. For example:
$ cat input
line 1
$ sed -e '1i\
foo' input
foo
line 1
$ sed -e '1i\ foo' input
sed: 1: "1i\ foo
": extra characters after \ at the end of i command
I have an SQL file from which I need to remove all the comments:
-- Sql comment line
How can I achieve this in Linux using grep or another tool?
Best Regards,
The grep tool has a -v option which reverses the sense of the filter. For example:
grep -v pax people
will give you all lines in the people file that don't contain pax.
An example is:
grep -v '^ *-- ' oldfile >newfile
which gets rid of lines with only white space preceding a comment marker. It won't however change lines like:
select blah blah -- comment here.
If you wanted to do that, you would use something like sed:
sed -e 's/ --.*$//' oldfile >newfile
which edits each line removing any characters from " --" to the end of the line.
Keep in mind you need to be careful with finding the string " --" in real SQL statements like (the contrived):
select ' -- ' || colm from blah blah blah
If you have these, you're better off creating/using an SQL parser rather than a simple text modification tool.
A transcript of the sed in operation:
pax$ echo '
...> this is a line with -- on it.
...> this is not
...> and -- this is again' | sed -e 's/ --.*$//'
this is a line with
this is not
and
For the grep:
pax$ echo '
-- this line starts with it.
this line does not
and -- this one is not at the start' | grep -v '^ *-- '
this line does not
and -- this one is not at the start
You can use the sed command like this: sed -i '/--/d' <filename>
Try using sed from the shell:
sed -e "s/--.*//" sql.filename
I'm writing a shell script and I need to strip FIND ME out of something like this:
* *[**FIND ME**](find me)*
and assign it to an array. I had the code working flawlessly... until I moved the script to a non-global zone in Solaris. Here is the code I used before:
objectArray[$i]=`echo $line | awk -F '*[**|**]' '{print $2}'`
This now prints:
awk: syntax error near line 1
awk: bailing out near line 1
It was suggested that I try the same command with nawk, but I receive this error now instead:
nawk: illegal primary in regular expression `* *[**|**]` at `*[**|**]`
input record number 1
source line number 1
Also, /usr/xpg4/bin/awk does not exist.
I think you need to be clearer about what you want to get. For me, your awk line doesn't 'strip FIND ME out':
echo "* *[**FIND ME**](find me)*" | nawk -F '* *[**|**]' '{print $2}'
[
So it would help if you gave some examples of the input/output you are expecting. Maybe there's a way to do what you want with sed?
EDIT:
From the comments, you actually want to select "FIND ME" from the line, not strip it out.
I guess the dialect of regular expressions accepted by this nawk is different from gawk's, so maybe a tool better suited to the job is in order.
echo "* *[**FIND ME**](find me)*" | sed -e"s/.*\* \*\[\*\*\(.[^*]*\)\*\*\].*/\1/"
FIND ME
Quote your $line variable like this: "$line". If it still doesn't work, you can do it another way with nawk; since you only want to find one instance of FIND ME:
$ echo "$line" | nawk '{gsub(/.*\*\[\*\*|\*\*\].*/,"");print}'
FIND ME
or if you are using bash/ksh on Solaris,
$ line="${line#*\[\*\*}"
$ echo "${line%%\*\*\]*}"
FIND ME
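Since the question mentions assigning the result to an array inside a loop, here is a sketch of the pure-shell variant in that context (the input file name is a placeholder; bash/ksh assumed):

i=0
while read -r line
do
    tmp="${line#*\[\*\*}"               # drop everything up to and including '[**'
    objectArray[$i]="${tmp%%\*\*\]*}"   # drop everything from '**]' onwards
    i=$((i+1))
done < "inputFile.txt"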