TidalCycles: parse error on input `let'

I'm doing some tests to learn Tidal by following a YouTube video, and the problem is that `let` is not recognized and I don't understand why.
Thanks a lot.
setcps 0.6
d0
let timeBetweenStutters = shift' 10 $ choose [0.0625, (0.0625/2), 0.125]
let stutterEffect = ( (|* lpf 0.7) . (|* gain 0.9) )
let initialTime = shift' 11 $ choose [(0.125*3), 0.125, 0.0625*3]
d1
$ stutWith 2 initialTime ( stutWith 20 timeBetweenStutters stutterEffect )
$ s "other:3"
# gain 1.2
# size 0.9
# lpf 10000
d2 $ s "other:5" # gain 1.2
hush


How to use bob.measure.load.split()

I'm a student focusing on machine learning, and I'm interested in authentication.
I am interested in your library because I want to calculate the EER.
Sorry for the basic question, but please tell me about bob.measure.load.split().
Am I right that in the file format this function expects, the first column is the true label and the second column is the model's predicted score?
like this:
# file.txt
| label | prob |
| -1    | 0.3  |
|  1    | 0.5  |
| -1    | 0.8  |
...
In addition, to actually calculate the EER, should I follow the procedure below?
neg, pos = bob.measure.load.split('file.txt')
eer = bob.measure.eer(neg, pos)
Sincerely.
You have two options for calculating the EER with bob.measure:
Use the Python API to calculate the EER using numpy arrays.
Use the command-line application to generate error rates (including the EER) and plots.
Using Python API
First, you need to load the scores into memory and split them into positive and negative scores.
For example:
import numpy as np
import bob.measure
positives = np.array([0.5, 0.5, 0.6, 0.7, 0.2])
negatives = np.array([0.0, 0.0, 0.6, 0.2, 0.2])
eer = bob.measure.eer(negatives, positives)
print(eer)
This will print 0.2. All you need to take care of is that your positive comparison scores are higher than your negative comparison scores; that is, your model should score higher for positive samples.
Using command line
bob.measure also comes with a suite of command-line tools that can help you get the error rates. To use the command line, you need to save the scores in a text file. This file is made of two columns separated by a space. For example, the score file for the same example would be:
$ cat scores.txt
1 0.5
1 0.5
1 0.6
1 0.7
1 0.2
-1 0.0
-1 0.0
-1 0.6
-1 0.2
-1 0.2
and then you would call
$ bob measure metrics scores.txt
[Min. criterion: EER ] Threshold on Development set `scores.txt`: 3.500000e-01
================================ =============
.. Development
================================ =============
False Positive Rate 20.0% (1/5)
False Negative Rate 20.0% (1/5)
Precision 0.8
Recall 0.8
F1-score 0.8
Area Under ROC Curve 0.8
Area Under ROC Curve (log scale) 0.7
================================ =============
OK, it didn't print the EER exactly, but EER = (FPR + FNR) / 2.
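Since the command reports FPR and FNR rather than the EER directly, you can check that identity with plain numpy on the same scores (a minimal sketch, no bob needed; the 0.35 threshold is the one the command reported above):

```python
import numpy as np

positives = np.array([0.5, 0.5, 0.6, 0.7, 0.2])
negatives = np.array([0.0, 0.0, 0.6, 0.2, 0.2])

threshold = 0.35  # the EER threshold reported by `bob measure metrics`
fpr = np.mean(negatives >= threshold)  # negatives wrongly accepted: 1/5
fnr = np.mean(positives < threshold)   # positives wrongly rejected: 1/5
eer = (fpr + fnr) / 2

print(fpr, fnr, eer)  # 0.2 0.2 0.2
```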
Using bob.bio.base command line
If your scores are the results of a biometrics experiment,
then you want to save your scores in the 4- or 5-column format of bob.bio.base.
See an example in https://gitlab.idiap.ch/bob/bob.bio.base/-/blob/3efccd3b637ee73ec68ed0ac5fde2667a943bd6e/bob/bio/base/test/data/dev-4col.txt and documentation in https://www.idiap.ch/software/bob/docs/bob/bob.bio.base/stable/experiments.html#evaluating-experiments
Then, you would call bob bio metrics scores-4-col.txt to get biometrics related metrics.

plotting lines between points in gnuplot

I've got the following script to plot dots from the file "puntos":
set title "recorrido vehiculos"
set term png
set output "rutasVehiculos.png"
plot "puntos" u 2:3:(sprintf("%d",$1)) with labels font ",7" point pt 7 offset char 0.5,0.5 notitle
The file "puntos" has the following format:
#i x y
1 2.1 3.2
2 0.2 0.3
3 2.9 0.3
In another file called "routes" I have the routes that join the points, for example:
2
1 22 33 20 18 14 8 27 1
1 13 2 17 31 1
Route 1 joins points 1, 22, 33, etc.
Route 2 joins points 1, 13, 2, etc.
Is there a way to do this with gnuplot?
PS: sorry for my English
Welcome to Stack Overflow. This is an interesting task. It's pretty clear what to do; however, in my opinion, it is not very obvious how to do it in gnuplot.
The following code seems to work, probably with room for improvement. Tested in gnuplot 5.2.5.
Tested with the files puntos.dat and routes.dat:
# puntos.dat
#i x y
1 2.1 3.2
2 0.2 0.3
3 2.9 0.3
4 1.3 4.5
5 3.1 2.3
6 1.9 0.7
7 3.6 1.7
8 2.3 1.5
9 1.0 2.0
and
# routes.dat
2
1 5 7 3 6 2 9
6 8 5 9 4
and the code:
### plot different routes
reset session
set title "recorrido vehiculos"
set term pngcairo
set output "rutasVehiculos.png"
POINTS = "puntos.dat"
ROUTES = "routes.dat"
# load routes file into datablock
set datafile separator "\n"
set table $Routes
plot ROUTES u (stringcolumn(1)) with table
unset table
# loop routes
set datafile separator whitespace
stats $Routes u 0 nooutput # get the number of routes
RoutesCount = STATS_records-1
set print $RoutesData
do for [i=1:RoutesCount] {
# get the points of a single route
set datafile separator "\n"
set table $Dummy
plot ROUTES u (SingleRoute = stringcolumn(1),$1) every ::i::i with table
unset table
# create a table of the coordinates of the points of a single route
set datafile separator whitespace
do for [j=1:words(SingleRoute)] {
set table $Dummy2
plot POINTS u (a=$2,$2):(b=$3,$3) every ::word(SingleRoute,j)-1::word(SingleRoute,j)-1 with table
print sprintf("%g %s %g %g", j, word(SingleRoute,j), a, b)
unset table
}
print "" # add empty line
}
set print
print sprintf("%g different Routes\n", RoutesCount)
print "RoutesData:"
print $RoutesData
set colorsequence classic
plot \
POINTS u 2:3:(sprintf("%d",$1)) with labels font ",7" point pt 7 offset char 0.5,0.5 notitle,\
for [i=1:RoutesCount] $RoutesData u 3:4 every :::i-1::i-1 w lp lt i title sprintf("Route %g",i)
set output
### end code
which results in something like:

Binding a scalar to a sigilless variable (Perl 6)

Let me start by saying that I understand that what I'm asking about in the title is dubious practice (as explained here), but my lack of understanding concerns the syntax involved.
When I first tried to bind a scalar to a sigilless symbol, I did this:
my \a = $(3);
thinking that $(...) would package the Int 3 in a Scalar (as seemingly suggested in the documentation), which would then be bound to symbol a. This doesn't seem to work though: the Scalar is nowhere to be found (a.VAR.WHAT returns (Int), not (Scalar)).
In the above-referenced post, raiph mentions that the desired binding can be performed using a different syntax:
my \a = $ = 3;
which works. Given the result, I suspect that the statement can be phrased equivalently, though less concisely, as: my \a = (my $ = 3), which I could then understand.
That leaves the question: why does the attempt with $(...) not work, and what does it do instead?
What $(…) does is turn a value into an item.
(A value in a scalar variable ($a) also gets marked as being an item)
say flat (1,2, (3,4) );
# (1 2 3 4)
say flat (1,2, $((3,4)) );
# (1 2 (3 4))
say flat (1,2, item((3,4)) );
# (1 2 (3 4))
Basically it is there to prevent a value from flattening. The reason for its existence is that Perl 6 does not flatten lists as much as most other languages, and sometimes you need a little more control over flattening.
The following only sort-of does what you want it to do
my \a = $ = 3;
A bare $ is an anonymous state variable.
my \a = (state $) = 3;
The problem shows up when you run that same bit of code more than once.
sub foo ( $init ) {
my \a = $ = $init; # my \a = (state $) = $init;
(^10).map: {
sleep 0.1;
++a
}
}
.say for await (start foo(0)), (start foo(42));
# (43 44 45 46 47 48 49 50 51 52)
# (53 54 55 56 57 58 59 60 61 62)
# If foo(42) beat out foo(0) instead it would result in:
# (1 2 3 4 5 6 7 8 9 10)
# (11 12 13 14 15 16 17 18 19 20)
Note that the variable is shared between calls.
The first Promise halts at the sleep call, and then the second sets the state variable before the first runs ++a.
If you use my $ instead, it now works properly.
sub foo ( $init ) {
my \a = my $ = $init;
(^10).map: {
sleep 0.1;
++a
}
}
.say for await (start foo(0)), (start foo(42));
# (1 2 3 4 5 6 7 8 9 10)
# (43 44 45 46 47 48 49 50 51 52)
The thing is that sigilless “variables” aren't really variables (they don't vary); they are more akin to lexically scoped (non)constants.
constant \foo = (1..10).pick; # only pick one value and never change it
say foo;
for ^5 {
my \foo = (1..10).pick; # pick a new one each time through
say foo;
}
Basically the whole point of them is to be as close as possible to referring to the value you assign to it. (Static Single Assignment)
# these work basically the same
-> \a {…}
-> \a is raw {…}
-> $a is raw {…}
# as do these
my \a = $i;
my \a := $i;
my $a := $i;
Note that above I wrote the following:
my \a = (state $) = 3;
Normally in the declaration of a state var, the assignment only happens the first time the code gets run. Bare $ doesn't have a declaration as such, so I had to prevent that behaviour by putting the declaration in parens.
# bare $
for (5 ... 1) {
my \a = $ = $_; # set each time through the loop
say a *= 2; # 15 12 9 6 3
}
# state in parens
for (5 ... 1) {
my \a = (state $) = $_; # set each time through the loop
say a *= 2; # 15 12 9 6 3
}
# normal state declaration
for (5 ... 1) {
my \a = state $ = $_; # set it only on the first time through the loop
say a *= 2; # 15 45 135 405 1215
}
Sigilless variables are not actually variables; they are more like aliases. That is, they are not containers but bind to the values they get on the right-hand side.
my \a = $(3);
say a.WHAT; # OUTPUT: «(Int)␤»
say a.VAR.WHAT; # OUTPUT: «(Int)␤»
Here, by doing $(3) you are actually putting in scalar context what is already in scalar context:
my \a = 3; say a.WHAT; say a.VAR.WHAT; # OUTPUT: «(Int)␤(Int)␤»
However, the second form in your question does something different. You're binding to an anonymous variable, which is a container:
my \a = $ = 3;
say a.WHAT; # OUTPUT: «(Int)␤»
say a.VAR.WHAT;# OUTPUT: «(Scalar)␤»
In the first case, a was an alias for 3 (or $(3), which is the same); in the second, a is an alias for $, which is a container, whose value is 3. This last case is equivalent to:
my $anon = 3; say $anon.WHAT; say $anon.VAR.WHAT; # OUTPUT: «(Int)␤(Scalar)␤»
(If you have some suggestion on how to improve the documentation, I'd be happy to follow up on it)

awk script to calculate delay from trace file ns-3

I want to measure time between transmitted and received packets in below trace file.
Input:
+ 0.01 /NodeList/1/DeviceList/1/$ns3::PointToPointNetDevice/TxQueue/Enqueue
- 0.01 /NodeList/1/DeviceList/1/$ns3::PointToPointNetDevice/TxQueue/Dequeue
r 0.0200001 /NodeList/0/DeviceList/2/$ns3::PointToPointNetDevice/MacRx
+ 0.11 /NodeList/1/DeviceList/1/$ns3::PointToPointNetDevice/TxQueue/Enqueue
- 0.11 /NodeList/1/DeviceList/1/$ns3::PointToPointNetDevice/TxQueue/Dequeue
r 0.12 /NodeList/0/DeviceList/2/$ns3::PointToPointNetDevice/MacRx
+ 0.12 /NodeList/0/DeviceList/3/$ns3::PointToPointNetDevice/TxQueue/Enqueue
- 0.12 /NodeList/0/DeviceList/3/$ns3::PointToPointNetDevice/TxQueue/Dequeue
r 0.120001 /NodeList/2/DeviceList/2/$ns3::PointToPointNetDevice/MacRx
Here + represents transmitted data and r represents received data. The second column in the trace file shows the time.
How can I measure time between r and + for the whole file using awk code?
The expected output can be as below:
Output:
0.0100001
0.01
0.000001
I'll be grateful if anyone helps.
I generated my own trace file, called trace, as follows:
+ 0.1 Stuff stuff and more stuff
- 0.2 Yet more stuff
r 0.4 Something new
+ 0.8 Something else, not so new
- 1.6 Jiggery
r 3.2 Pokery
+ 6.4 Higgledy Piggledy
Then, I would approach your question with awk as follows:
awk '/^\+/{tx=$2} /^r/{rx=$2; d=rx-tx; $1=$1 "(d=" d ")"} 1' trace
Sample Output
+ 0.1 Stuff stuff and more stuff
- 0.2 Yet more stuff
r(d=0.3) 0.4 Something new
+ 0.8 Something else, not so new
- 1.6 Jiggery
r(d=2.4) 3.2 Pokery
+ 6.4 Higgledy Piggledy
That says... "If you see a line starting with +, save the second field as variable tx. If you see a line starting with r, save the second field as variable rx. Calculate the difference between rx and tx and save it as d. Rebuild the first field of the line by appending (d=variable d) to the end of whatever it was. The 1 at the end tells awk to do its natural thing - i.e. print the line."
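Applied to the trace from the question, and stripped down to print only the delays, the same idea becomes a one-liner (the file name trace.txt is just for this self-contained demo):

```shell
# Recreate the trace from the question.
cat > trace.txt <<'EOF'
+ 0.01 /NodeList/1/DeviceList/1/$ns3::PointToPointNetDevice/TxQueue/Enqueue
- 0.01 /NodeList/1/DeviceList/1/$ns3::PointToPointNetDevice/TxQueue/Dequeue
r 0.0200001 /NodeList/0/DeviceList/2/$ns3::PointToPointNetDevice/MacRx
+ 0.11 /NodeList/1/DeviceList/1/$ns3::PointToPointNetDevice/TxQueue/Enqueue
- 0.11 /NodeList/1/DeviceList/1/$ns3::PointToPointNetDevice/TxQueue/Dequeue
r 0.12 /NodeList/0/DeviceList/2/$ns3::PointToPointNetDevice/MacRx
+ 0.12 /NodeList/0/DeviceList/3/$ns3::PointToPointNetDevice/TxQueue/Enqueue
- 0.12 /NodeList/0/DeviceList/3/$ns3::PointToPointNetDevice/TxQueue/Dequeue
r 0.120001 /NodeList/2/DeviceList/2/$ns3::PointToPointNetDevice/MacRx
EOF

# '+' starts a transmission: remember its time; 'r' is a reception: print the gap.
awk '/^\+/ {tx = $2} /^r/ {print $2 - tx}' trace.txt
```

This prints 0.0100001, 0.01 and 1e-06; the last value is the 0.000001 from the expected output, just in awk's default %.6g scientific notation (set OFMT if you prefer fixed-point).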

Converting Rows into Columns using awk or sed

I have a file with *.xvg format.
It contains six columns with 500 numbers each.
Except the time column (first column) all other columns contain floats.
I want to generate an output file in same format, in which these columns are converted into rows with each number separated by space.
I have written a program in C, which works fine for me but I am looking for an alternative way using awk or sed, which will allow me to do the same.
I am absolutely new to these scripting languages. I couldn't find any relevant answer for me in previously asked questions. So, If somebody can help me out with this task I will be grateful.
The input file looks like this:
# This file was created Thu Oct 1 17:18:10 2015
# by the following command:
# /home/durba/gmx455/bin/mdrun -np 1 -deffnm md0 -v
#
# title "dH/d\xl\f{}, \xD\f{}H"
# xaxis label "Time (ps)"
# yaxis label "(kJ/mol)"
#TYPE xy
# subtitle "T = 200 (K), \xl\f{} = 0"
# view 0.15, 0.15, 0.75, 0.85
# legend on
# legend box on
# legend loctype view
# legend 0.78, 0.8
# legend length 2
# s0 legend "dH/d\xl\f{} \xl\f{} 0"
# s1 legend "\xD\f{}H \xl\f{} 0.05"
0 19.3191 1.16531 1.8 -447.07 -47.07
2 -447.072 -17.6454 1.5 -17.633 -1.33
4 -17.633 -0.446508 1.3 -75.455 -5.45
6 -75.4555 -2.83981 1.4 -28.724 -28.4
8 -28.7246 -0.884639 1.5 -41.877 -14.87
10 -41.8779 -1.45569 2.8 -43.685 -3.685
12 -43.6851 -1.4797 -3.1 -91.651 -91.651
14 -91.6515 -3.52492 -3.5 -61.135 -1.135
16 -61.1356 -2.30129 -3.2 -48.847 -48.47
The output file should look like this:
# This file was created Thu Oct 1 17:18:10 2015
# by the following command:
# /home/durba/gmx455/bin/mdrun -np 1 -deffnm md0 -v
#
# title "dH/d\xl\f{}, \xD\f{}H"
# xaxis label "Time (ps)"
# yaxis label "(kJ/mol)"
#TYPE xy
# subtitle "T = 200 (K), \xl\f{} = 0"
# view 0.15, 0.15, 0.75, 0.85
# legend on
# legend box on
# legend loctype view
# legend 0.78, 0.8
# legend length 2
# s0 legend "dH/d\xl\f{} \xl\f{} 0"
# s1 legend "\xD\f{}H \xl\f{} 0.05"
0 2 4 6 8 10 12 14 16
19.3191 -447.072 -17.633 -75.4555 -28.7246 -41.8779 -43.6851 -91.6515 -61.1356
1.16531 -17.6454 -0.446508 -2.83981 -0.884639 -1.45569 -1.4797 -3.52492 -2.30129
1.8 1.5 1.3 1.4 1.5 2.8 -3.1 -3.5 -3.2
-447.07 -17.633 -75.455 -28.724 -41.877 -43.685 -91.651 -61.135 -48.847
-47.07 -1.33 -5.45 -28.4 -14.87 -3.685 -91.651 -1.135 -48.47
Please note that the lines starting with "#" should be the same in both files.
Answer for original question
Let's consider this test file:
$ cat file
123 1.2 1.3 1.4 1.5
124 2.2 2.3 2.4 2.5
125 3.2 3.3 3.4 3.5
To convert columns to row:
$ awk '{for (i=1;i<=NF;i++)a[i,NR]=$i} END{for (i=1;i<=NF;i++) for (j=1;j<=NR;j++) printf "%s%s",a[i,j],(j==NR?ORS:OFS)}' file
123 124 125
1.2 2.2 3.2
1.3 2.3 3.3
1.4 2.4 3.4
1.5 2.5 3.5
How it works
for (i=1;i<=NF;i++)a[i,NR]=$i
As we loop through each line, we save the values in array a.
END{for (i=1;i<=NF;i++) for (j=1;j<=NR;j++) printf "%s%s",a[i,j],(j==NR?ORS:OFS)}
After we reach the end of the file, we print each of the values followed by the output field separator (OFS) if we are in the midst of a line or the output record separator (ORS) if we are at the end of the line.
Multi-line version
If you like your code spread over several lines:
awk '
{
for (i=1;i<=NF;i++)
a[i,NR]=$i
}
END{
for (i=1;i<=NF;i++)
for (j=1;j<=NR;j++)
printf "%s%s",a[i,j],(j==NR?ORS:OFS)
}
' file
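To check the one-liner end to end, you can recreate the test file and run it (the file name file is the one used in this answer):

```shell
# Recreate the three-line test file from above.
cat > file <<'EOF'
123 1.2 1.3 1.4 1.5
124 2.2 2.3 2.4 2.5
125 3.2 3.3 3.4 3.5
EOF

# Transpose: save every field as a[column,row], then print column-major.
awk '{for (i=1;i<=NF;i++)a[i,NR]=$i} END{for (i=1;i<=NF;i++) for (j=1;j<=NR;j++) printf "%s%s",a[i,j],(j==NR?ORS:OFS)}' file
```

Note that NF and NR are still valid in the END block: they hold the field count of the last record and the total record count, which is exactly the shape of the transposed output.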
Answer for revised question
In the revised question, there are lines at the beginning of the file that start with # that are not to be changed. In this case:
$ awk '/^#/{print;next} {k++; for (i=1;i<=NF;i++)a[i,k]=$i} END{for (i=1;i<=NF;i++) for (j=1;j<=k;j++) printf "%s%s",a[i,j],(j==k?ORS:OFS)}' input
# This file was created Thu Oct 1 17:18:10 2015
# by the following command:
# /home/durba/gmx455/bin/mdrun -np 1 -deffnm md0 -v
#
# title "dH/d\xl\f{}, \xD\f{}H"
# xaxis label "Time (ps)"
# yaxis label "(kJ/mol)"
#TYPE xy
# subtitle "T = 200 (K), \xl\f{} = 0"
# view 0.15, 0.15, 0.75, 0.85
# legend on
# legend box on
# legend loctype view
# legend 0.78, 0.8
# legend length 2
# s0 legend "dH/d\xl\f{} \xl\f{} 0"
# s1 legend "\xD\f{}H \xl\f{} 0.05"
0 2 4 6 8 10 12 14 16
19.3191 -447.072 -17.633 -75.4555 -28.7246 -41.8779 -43.6851 -91.6515 -61.1356
1.16531 -17.6454 -0.446508 -2.83981 -0.884639 -1.45569 -1.4797 -3.52492 -2.30129
1.8 1.5 1.3 1.4 1.5 2.8 -3.1 -3.5 -3.2
-447.07 -17.633 -75.455 -28.724 -41.877 -43.685 -91.651 -61.135 -48.847
-47.07 -1.33 -5.45 -28.4 -14.87 -3.685 -91.651 -1.135 -48.47
This might work for you (GNU sed):
sed -r 'H;$!d;x;:a;h;s/\n(\S+)[^\n]*/\1 /g;s/ $//p;g;s/\n\S+ ?/\n/g;ta;d' file
Slurp the file into the hold space (HS), deleting the pattern space (PS), until the end-of-file condition is met. At end-of-file, swap the HS into the PS. Copy the PS to the HS, then, for each newline, replace the newline and the line that follows it with that line's first field followed by a space, globally. Remove the trailing space and print the line. Then recall the copy of the lines from the HS and do the inverse: remove the first field after each newline. If any of the substitutions were successful, repeat the process until nothing but newlines remains, then delete the unwanted newlines.
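For example, on the same three-line test file as in the awk answer (GNU sed assumed, since the command uses -r and semicolon-separated labels):

```shell
# Recreate the test file used in the awk answer above.
cat > file <<'EOF'
123 1.2 1.3 1.4 1.5
124 2.2 2.3 2.4 2.5
125 3.2 3.3 3.4 3.5
EOF

# Each pass prints the current first column as a row, then strips that column.
sed -r 'H;$!d;x;:a;h;s/\n(\S+)[^\n]*/\1 /g;s/ $//p;g;s/\n\S+ ?/\n/g;ta;d' file
```

This produces the same transposed output as the awk one-liner.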
The original question changed after I first answered it. The new solution below caters for the new question using essentially the same method:
sed -r '/^[0-9]/{s/ +/ /g;H};//!p;$!d;x;:a;h;s/\n(\S+)[^\n]*/\1 /g;s/ $//p;g;s/\n\S+ ?/\n/g;ta;d' file