Variable is not recognized when called using sed in Gnuplot

I'm having a problem when I call a variable defined in gnuplot while using sed:
pi.plt
N= 10000
set term gif animate delay 80
set output "pi.gif"
j = 1
load 'pi2.plt'
pi2.plt
k = ` sed -n "$j p" pi.dat | cut -f3 -d ' ' `
set label 1 sprintf('Pi = %f', k) at graph 0.85, 0.85
set parametric
plot fx(t), fy(t), "pi.dat" every ::::j using 1:2 with points
j = j + 100
if (j < N+1) reread
The variable j, although it is defined in gnuplot, is not recognized by sed, and I keep getting the error "invalid command".
Can anyone help me solve this issue? Thanks in advance!

Try:
k = real(system(sprintf('sed -n "%d p" pi.dat | cut -f3 -d " "', j)))
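The backquote form fails because gnuplot does not expand its own variables inside backquotes: the string is handed to the shell verbatim, where $j is an undefined shell variable, so sed receives an incomplete address and the whole k = ... line becomes invalid. Building the command with sprintf() interpolates j before system() runs it. A minimal sketch of pi2.plt with only that first line changed (fx, fy and N are assumed to be defined in pi.plt, as in the question):
k = real(system(sprintf('sed -n "%d p" pi.dat | cut -f3 -d " "', j)))
set label 1 sprintf('Pi = %f', k) at graph 0.85, 0.85
set parametric
plot fx(t), fy(t), "pi.dat" every ::::j using 1:2 with points
j = j + 100
if (j < N+1) reread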


Find and replace and move a line that contains a specific string

Assuming I have the following text file:
a b c d 1 2 3
e f g h 1 2 3
i j k l 1 2 3
m n o p 1 2 3
How do I replace '1 2 3' with '4 5 6' in the line that contains the letter (e) and move it after the line that contains the letter (k)?
N.B. the line that contains the letter (k) may be at any location in the file; the lines are not assumed to be in any order.
My approach is:
Remove the line I want to replace
Find the lines before the line I want to move it after
Find the lines after the line I want to move it after
Append the output to a file
grep -v 'e' $original > $file
grep -B999 'k' $file > $output
grep 'e' $original | sed 's/1 2 3/4 5 6/' >> $output
grep -A999 'k' $file | tail -n+2 >> $output
rm $file
mv $output $original
but there are a lot of issues with this solution:
a lot of grep commands that seem unnecessary
the arguments -A999 and -B999 assume the file does not contain more than 999 lines; it would be better to have another way to get the lines before and after the matched line
I am looking for a more efficient way to achieve this.
Using sed
$ sed '/e/{s/1 2 3/4 5 6/;h;d};/k/{G}' input_file
a b c d 1 2 3
i j k l 1 2 3
e f g h 4 5 6
m n o p 1 2 3
Here is a GNU awk solution:
awk '
/\<e\>/{
s=$0
sub("1 2 3", "4 5 6", s)
next
}
/\<k\>/ && s {
printf("%s\n%s\n",$0,s)
next
} 1
' file
Or POSIX awk:
awk '
function has(x) {
for(i=1; i<=NF; i++) if ($i==x) return 1
return 0
}
has("e") {
s=$0
sub("1 2 3", "4 5 6", s)
next
}
has("k") && s {
printf("%s\n%s\n",$0,s)
next
} 1
' file
Either prints:
a b c d 1 2 3
i j k l 1 2 3
e f g h 4 5 6
m n o p 1 2 3
This works regardless of the order of e and k in the file:
awk '
function has(x) {
for(i=1; i<=NF; i++) if ($i==x) return 1
return 0
}
has("e") {
s=$0
sub("1 2 3", "4 5 6", s)
next
}
FNR<NR && has("k") && s {
printf("%s\n%s\n",$0,s)
s=""
next
}
FNR<NR
' file file
This awk should work for you:
awk '
/(^| )e( |$)/ {
sub(/1 2 3/, "4 5 6")
p = $0
next
}
1
/(^| )k( |$)/ {
print p
p = ""
}' file
a b c d 1 2 3
i j k l 1 2 3
e f g h 4 5 6
m n o p 1 2 3
This might work for you (GNU sed):
sed -n '/e/{s/1 2 3/4 5 6/;s#.*#/e/d;/k/s/.*/\&\\n&/#p};' file | sed -f - file
The idea is to design a sed script on the fly: the file is passed twice, and the sed instructions generated by the first pass are applied to the file on the second.
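For the sample input above, the first pass emits roughly this script, which the second invocation then applies to the file (a sketch of the intermediate output):
/e/d;/k/s/.*/&\ne f g h 4 5 6/
That is: delete the original e line, and replace the line matching k with itself, a newline, and the edited line.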
Another solution is to use ed:
cat <<\! | ed file
/e/s/1 2 3/4 5 6/
/e/m/k/
wq
!
Or if you prefer:
<<<$'/e/s/1 2 3/4 5 6/\n.m/k/\nwq' ed -s file

syntax error in awk 'if-else if-else' with multiple operations

There's a weird thing with awk conditional statements:
when running an awk 'if-else if-else' with a single operation after each condition, it works fine, as below:
awk 'BEGIN {a=30; \
if (a==10) print "a = 10"; \
else if (a == 20) print "a = 20"; \
else print "a = 30"}'
output:
a = 30
However, when running an awk 'if-else if-else' with multiple operations (properly braced) after 'else if', a syntax error occurred:
awk 'BEGIN {a=30; \
if (a==10) print "a = 10"; \
else if (a == 20) {print "a = 20"; print "b = 20"}; \
else print "a = 30"}'
output:
awk: cmd. line:4: else print "a = 30"}
awk: cmd. line:4: ^ syntax error
Can anyone tell whether this is an awk issue that intrinsically doesn't allow multiple operations in such cases, or just a syntax error of mine that could be corrected?
P.S. I looked through all the relevant posts on awk 'if else' syntax errors, but none of them addresses this issue.
Remove the semicolon at the end of the third line, after the closing brace. In awk, as in C, the semicolon after the closing brace terminates the whole if/else-if statement, so the following else has nothing to attach to.
awk 'BEGIN {a=30; \
if (a==10) print "a = 10"; \
else if (a == 20) {print "a = 20"; print "b = 20"} \
else print "a = 30"}'
Output: a = 30

Show sort ordinal numbers as count in a list

I have the following input file
system info com1 command set
system info com2 command set
system command set1 information description test
system command 21-22 information description pass
system command T1-T2-T3 information description file
system command commonset information description info
and the following command
awk '/system command/&&/information description/ { gsub("\"","") ; print ++ OFS") Command = " $3}' inputfile.txt
gives me the following output
1) Command = set1
2) Command = 21-22
3) Command = T1-T2-T3
4) Command = commonset
Is there a way to have the count in my commands list use ordinal numbers (1st, 2nd, ...) instead of plain numbers,
so that the output looks like this:
1st) Command = set1
2nd) Command = 21-22
3rd) Command = T1-T2-T3
4th) Command = commonset
There is no built-in function to produce that ordinal suffix. We need to get our hands dirty and write this code:
awk 'function ordinal(i, mod, str) {mod = i%10; str=i; if (i~/1[1-3]$/) str=str "th"; else if (mod==1) str=str "st"; else if (mod==2) str=str "nd"; else if (mod==3) str=str "rd"; else str=str "th"; return str;} /system command/&&/information description/ { gsub(/"/,"") ; print ordinal(++i) ") Command = " $3}' file
1st) Command = set1
2nd) Command = 21-22
3rd) Command = T1-T2-T3
4th) Command = commonset
Expanded form:
awk '
function ordinal(i, mod, str) {
mod = i%10
str = i
if (i~/1[1-3]$/) # for numbers ending in 11, 12, 13
str = str "th"
else if (mod==1)
str = str "st"
else if (mod==2)
str = str "nd"
else if (mod==3)
str = str "rd"
else
str = str "th"
return str
}
/system command/&&/information description/ {
gsub(/"/,"")
print ordinal(++i) ") Command = " $3
}' file
An alternative way to implement the above function would be:
function ordinal(num, idx, sfxs) {
split("st nd rd th",sfxs," ")
idx = ( (num ~ /[123]$/) && (num !~ /1[123]$/) ? num % 10 : 4 )
return num sfxs[idx]
}
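As a sketch, that variant can be dropped into the same one-liner in place of the original function (same input file, same output as above):
awk 'function ordinal(num, idx, sfxs) {split("st nd rd th",sfxs," "); idx=((num ~ /[123]$/) && (num !~ /1[123]$/) ? num % 10 : 4); return num sfxs[idx]} /system command/&&/information description/ { gsub(/"/,"") ; print ordinal(++i) ") Command = " $3}' file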

How to print out lines starting with keyword and connected with backslash with sed or awk

For example, I'd like to print out the lines starting with set_2 and connected with \ like this. I'd like to know whether it's possible to do it with sed, awk, or any other text-processing command line.
< Before >
set_1 abc def
set_2 a b c d\
      e f g h\
      i j k l\
      m n o p
set_3 ghi jek
set_2 aaa bbb\
      ccc ddd\
      eee fff
set_4 1 2 3 4
< After text process >
set_2 a b c d\
      e f g h\
      i j k l\
      m n o p
set_2 aaa bbb\
      ccc ddd\
      eee fff
Try the following:
awk -v st="set_2" '/^set/ {set=$1} /\\$/ && set==st { prnt=1 } prnt==1 { print } !/\\$/ { prnt=0 }' file
Explanation:
awk -v st="set_2" ' # Pass the set to track as a variable st
/^set/ {
set=$1 # When the line begins with "set", track the set in the variable set
}
/\\$/ && set==st {
prnt=1 # When we are in the required set block and the line ends with "\", set a print marker (prnt) to 1
}
prnt==1 {
print # When the print marker is 1, print the line
}
!/\\$/ {
prnt=0 # If the line doesn't end with "\", set the print marker to 0
}' file
Would you try this sed solution:
sed -nE '
/^set_2/ { ;# if the line starts with "set_2" execute the block
:a ;# define a label "a"
/\\[[:space:]]*$/! {p; bb} ;# if the line does not end with "\", print the pattern space and exit the block
N ;# append the next line to the pattern space
ba ;# go to label "a"
} ;# end of the block
:b ;# define a label "b"
' file
Please note that [[:space:]]* is included only because the OP's posted example contains whitespace after the backslash.
[Alternative]
If perl is an option, the following will also work:
perl -ne 'print if /^set_2/..!/\\\s*$/' file
This simple awk command should do the job:
awk '!/^[[:blank:]]/ {p = ($1 == "set_2")} p' file
set_2 a b c d\
      e f g h\
      i j k l\
      m n o p
set_2 aaa bbb\
      ccc ddd\
      eee fff
And with this awk:
awk -F'[[:blank:]]*' '$1 == "set_2" || $NF ~ /\\$/ {print $0;f=1} f && $1 == ""' file
set_2 a b c d\
      e f g h\
      i j k l\
      m n o p
set_2 aaa bbb\
      ccc ddd\
      eee fff
This might work for you (GNU sed):
sed ':a;/set_2/{:b;n;/set_/ba;bb};d' file
If a line contains set_2, print it and keep printing until another line containing set_ appears, then repeat the first test.
Otherwise delete the line.
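Spelled out with comments, the same one-liner reads roughly like this (a sketch of the equivalent multi-line GNU sed script):
sed '
# label "a": test the current line
:a
# if the line contains "set_2" ...
/set_2/{
  # label "b": print the current line and fetch the next one
  :b
  n
  # a new "set_" line: go back and test it at "a"
  /set_/ba
  # any other line still belongs to the block: keep printing at "b"
  bb
}
# lines outside a set_2 block are deleted
d' file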

awk: printing lines side by side when the first field is the same in the records

I have a file containing lines like
a x1
b x1
q xq
c x1
b x2
c x2
n xn
c x3
I would like to test the first field in each line, and if there is a match I would like to append the matching lines to the first line. The output should look like
a x1
b x1 b x2
q xq
c x1 c x2 c x3
n xn
Any help will be greatly appreciated.
Using awk you can do this:
awk '{arr[$1]=arr[$1]?arr[$1] " " $0:$0} END {for (i in arr) print arr[i]}' file
n xn
a x1
b x1 b x2
c x1 c x2 c x3
q xq
To preserve input ordering:
$ awk '
{
if ($1 in vals) {
prev = vals[$1] " "
}
else {
prev = ""
keys[++k] = $1
}
vals[$1] = prev $0
}
END {
for (k=1;k in keys;k++)
print vals[keys[k]]
}
' file
a x1
b x1 b x2
q xq
c x1 c x2 c x3
n xn
What I ended up doing. (The answers by Ed Morton and Jonte are obviously more elegant.)
First I saved the 1st column of the input file in a separate file.
awk '{print $1}' input.file.txt > tmp0
Then I saved the input file with the lines that have duplicate values in the $1 field removed.
awk 'BEGIN { FS = "\t" }; !x[$1]++ { print $0}' input_file.txt > tmp1
Then I saved all the lines with a duplicate $1 field.
awk 'BEGIN { FS = "\t" }; x[$1]++ { print $0}' input_file.txt >tmp2
Then I saved the $1 fields of the non-duplicate file (tmp1).
awk '{ print $1}' tmp1 > tmp3
I used a for loop to pull lines from the duplicates file (tmp2) and the duplicates-removed file (tmp1) into an output file.
for i in $(cat tmp3)
do
if [ $(grep -w $i tmp0 | wc -l) = 1 ] #test for single instance in the 1st col of input file
then
echo "$(grep -w $i tmp1)" >> output.txt #if single then pull that record from no dupes
else
echo -e "$(grep -w $i tmp1) \t $(grep -w $i tmp2 | awk '{
printf $0"\t" }; END { printf "\n" }')" >> output.txt # if not single then pull that record from no_dupes first then all the records from dupes in a single line.
fi
done
Finally, remove the tmp files:
rm tmp* # remove all the tmp files