Csound ignores 0dbfs assignment and terminates immediately

I am working through the Csound FLOSS manual and am perplexed by the results I'm getting with one particular example demonstrating the use of RMS in Csound. The example can be found on page 28 of the PDF version, or under the 'RMS Measurement' heading in the HTML version.
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;example by Martin Neukom, adapted by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giSine ftgen 0, 0, 2^10, 10, 1 ;table with a sine wave
instr 1
a3 init 0
kamp linseg 0, 1.5, 0.2, 1.5, 0 ;envelope for initial input
asnd poscil kamp, 440, giSine ;initial input
if p4 == 1 then ;choose between two sines ...
adel1 poscil 0.0523, 0.023, giSine
adel2 poscil 0.073, 0.023, giSine,.5
else ;or a random movement for the delay lines
adel1 randi 0.05, 0.1, 2
adel2 randi 0.08, 0.2, 2
endif
a0 delayr 1 ;delay line of 1 second
a1 deltapi adel1 + 0.1 ;first reading
a2 deltapi adel2 + 0.1 ;second reading
krms rms a3 ;rms measurement
delayw asnd + exp(-krms) * a3 ;feedback depending on rms
a3 reson -(a1+a2), 3000, 7000, 2 ;calculate a3
aout linen a1/3, 1, p3, 1 ;apply fade in and fade out
outs aout, aout
endin
</CsInstruments>
<CsScore>
i 1 0 60 1 ;two sine movements of delay with feedback
i 1 61 . 2 ;two random movements of delay with feedback
</CsScore>
</CsoundSynthesizer>
When I run Csound with the file as input, using csound ex5.csd, I get the following output.
0dBFS level = 32768.0
--Csound version 6.10 (double samples) 2018-01-27
[commit: none]
libsndfile-1.0.28
UnifiedCSD: ex5cp.csd
STARTING FILE
Creating options
Creating orchestra
closing tag
Creating score
rtaudio: ALSA module enabled
rtmidi: ALSA Raw MIDI module enabled
csound command: Segmentation fault
end of score. overall amps: 0.0
overall samples out of range: 0
0 errors in performance
This all happens immediately, with no sound output to be heard. I am guessing this isn't the intention of the example, and that the 0dBFS level = 32768.0 log message has something to do with the problem. I am asking here because I get the same result even when I copy paste the program from the book, so I am sort of stumped.
Any insight into what's going on here?
Thanks

I don't think it has anything to do with the 0dbfs assignment. You can compare with any other .csd which works for you: you will find that line printed first in the console output every time.
I checked the example, and it works for me as expected (Csound 6.14 develop).
As you use Csound 6.10, my first recommendation is to update your Csound. The problem is that you get a segmentation fault, so there is no way to learn more about the issue without special tools.
By the way, there is a new version of the Csound FLOSS manual at https://flossmanual.csound.com/
The examples can now be played directly from the browser (for now, Chrome/Chromium works best).
Hope this helps -
joachim

Related

Understanding spacy textcat_multilabel scorer output

I'm trying to understand the output of my textcat_multilabel job. I have 4 text categories and I'm using spacy version 3.2.0 (The methodologies have changed a lot recently and I don't really understand the documentation).
E    #     LOSS TEXTCAT   CATS_SCORE   SCORE
0    0     1.00           51.86        0.52
0    200   122.15         52.90        0.53
This is what I have in my config file. (btw. What is v1?)
scorer = {"@scorers":"spacy.textcat_multilabel_scorer.v1"}
threshold = 0.5
In fact, everything in the standard config file is unchanged from the suggestions except the dropout which I increased to 0.5.
The final row of my job shows these values: 0 8400 2.59 87.29 0.87
I am very impressed with the results that I'm getting with this job. Just need to understand what I'm doing.
E is epochs
# is training iterations / batches (see here)
LOSS TEXTCAT is the loss of your textcat component. Loss normally fluctuates over the first few iterations and then trends downward. The exact values are meaningless.
CATS_SCORE is the score of your textcat component on your dev set; see the docs for some details on that.
SCORE is the overall score of your pipeline, a weighted average of any components you have. Since you only have a textcat, it's basically the same as that score.
v1 is just version 1, components are versioned in case they are updated later so you can use older versions with newer code.
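To make the last point concrete, here is a rough sketch of how that weighted average works (this is an assumption based on spaCy's score_weights mechanism, not something read out of your config):

# Hypothetical illustration: the overall SCORE is a weighted sum of component scores.
# With only a textcat component, its weight is effectively 1.0.
score_weights = {"cats_score": 1.0}      # assumed weights, analogous to [training.score_weights]
metrics = {"cats_score": 0.8729}         # CATS_SCORE from your final row, as a fraction
overall = sum(score_weights[k] * metrics[k] for k in score_weights)
print(round(overall, 2))                 # 0.87 -- matches the SCORE column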

Matplotlib.odeint giving wrong solution for simple formula of acc = Gm/r**2

I made a program to calculate the motion of any object (in this case the Moon) under the Earth's pull. With zero initial velocity, the Moon should just oscillate in a straight line back and forth through the Earth until it comes to rest exactly at the Earth's centre, so plotting time versus distance I should get a graph similar to a damped signal. Instead, I am getting an infinitely decreasing plot.
I have tried:
Giving different initial speeds to the Moon, but I got the same result. It seems like the way I am using odeint to solve the differential equation is wrong? Not sure, I'm very new to coding.
Assuming 1000 seconds is not enough for this to happen, I increased the time to 1e+5, 1e+10, 1e+20. It seems like odeint couldn't handle that, because it gave a different solution every time I ran the program with the exact same parameters, and I received the following warning:
ODEintWarning: Excess work done on this call (perhaps wrong Dfun type). Run with full_output = 1 to get quantitative information. warnings.warn(warning_msg, ODEintWarning)
Is there some other function I should use to solve this differential equation?
Reduced the masses to 10, 20 and G and r to 10, 10, and received the same warning as in case (2).
Any feedback helps.
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
G = 6.67408e-11 # N-m2/kg2
m1 = 5.972e+24 # kg , mass of earth
m2 = 7.348e+22 # kg , mass of moon
def dvdt(S, t):
# v = dr./dt, so a = dv/dt
r,v = S
return [v,
-G*m1 / r**2]
# initial values
r10 = 0 # position of earth in meters
r20 = 4e+8 # position of moon from earth in meters
v10 = 0 # velocity of earth m/s
v20 = 0 # velocity of moon relative to earth m/s
S0 = [r20, v20]
t = np.linspace(0,1000,100)
# solving the differential eqn
acc = odeint(dvdt,S0, t)
r,v = acc.T
# plotting
plt.plot(t,r)
plt.xlabel('Time')
plt.ylabel('Distance between earth and moon')
plt.show()
The scenario assumes that the two bodies are "shifted out of phase", so that they behave like dark matter to each other, no electro-weak or strong nuclear forces.
The effective gravity below the radius of Earth is determined by the mass inside the current radius, the influences of the outer shell add to zero.
The force vector always points to the center, for the one-dimensional motion the sign of the force has always to be opposite to the sign of the coordinate.
In total thus
R1 = 6.7e+6 # m
acc = -sign(r) * G*(m1*min(1,abs(r)/R1)**3) / abs(r)**2
= -G*m1 * r/max(R1,abs(r))**3
If this is implemented, one gets the expected oscillating plot; a sketch of this is shown below.
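For completeness, here is a minimal sketch of that corrected right-hand side dropped into the original script (the Earth radius R1 and the longer time span are choices made here for illustration, not values from the question):

import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

G = 6.67408e-11    # m^3 kg^-1 s^-2
m1 = 5.972e+24     # kg, mass of the Earth
R1 = 6.7e+6        # m, approximate radius of the Earth

def dvdt(S, t):
    r, v = S
    # -G*m1*r / max(R1, |r|)**3: linear pull inside the Earth,
    # inverse-square outside, always directed toward the centre.
    return [v, -G * m1 * r / max(R1, abs(r))**3]

S0 = [4e+8, 0.0]                  # start at the Moon's distance, at rest
t = np.linspace(0, 5e6, 20000)    # several weeks, enough to see a few oscillations

sol = odeint(dvdt, S0, t)
r, v = sol.T

plt.plot(t, r)
plt.xlabel('Time (s)')
plt.ylabel('Signed distance from the centre of the Earth (m)')
plt.show()

With this the distance swings back and forth through zero instead of decreasing forever, which matches the expected oscillating plot.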

Extra bytes on the end of YUV buffer - RaspberryPi

I've started editing the RaspiStillYUV.c code. I eventually want to process the image I receive, but for now, I'm just working to understand it. Why am I working with YUV instead of RGB? So I can learn something new. I've made minor changes to the function camera_buffer_callback. All I am doing is the following:
fprintf(stderr, "GREAT SUCCESS! %d\n", buffer->length);
The line this is replacing:
bytes_written = fwrite(buffer->data, 1, buffer->length, pData->file_handle);
Now, the dimensions should be 2592 x 1944 (w x h) as set in the code. Working off of Wikipedia (YUV420), I have come to the conclusion that the file size should be w * h * 1.5, since the Y component has 1 byte of data for each pixel and the U and V components each have 1 byte of data for every 4 pixels (1 + 1/4 + 1/4 = 1.5). Great. Doing the math in Python:
>>> 2592 * 1944 * 1.5
7558272.0
Unfortunately, this does not line up with the output of my program:
GREAT SUCCESS! 7589376
That leaves a difference of 31104 bytes.
I figure that the buffer is allocated in fixed size chunks (the output size is evenly divisible by 512). While I would like to understand that mystery, I'm fine with the fixed size chunk explanation.
My question is if I am missing something. Are the extra bytes beyond the expected size meaningful in this format? Should they be ignored? Are my calculations off?
The documentation at this location supports your theory on padding: http://www.raspberrypi.org/wp-content/uploads/2013/07/RaspiCam-Documentation.pdf
Specifically:
Note that the image buffers saved in raspistillyuv are padded to a
horizontal size divisible by 16 (so there may be unused bytes at the
end of each line to made the width divisible by 16). Buffers are also
padded vertically to be divisible by 16, and in the YUV mode, each
plane of Y,U,V is padded in this way.
So my interpretation of this is the following.
The width is 2592 (divisible by 16 so this is ok).
The height is 1944, which is 8 short of being divisible by 16, so an extra 8*2592 bytes are added (also multiplied by 1.5), giving your 31104 extra bytes.
Although this kind of helps with the size of the file, it doesn't explain the structure of the YUV output properly. I am having a look at this description to see if it provides a hint to start with: http://en.wikipedia.org/wiki/YUV#Y.27UV420p_.28and_Y.27V12_or_YV12.29_to_RGB888_conversion
From this I believe it is as follows:
Y Channel:
2592 * (1944+8) = 5059584
U Channel:
1296 * (972+4) = 1264896
V Channel:
1296 * (972+4) = 1264896
Giving a sum of :
5059584 + 2*1264896 = 7589376
This makes the numbers add up so only thing left is to confirm if this interpretation is correct.
I am also trying to do the YUV decode (for image comparisons), so if you can confirm that this actually corresponds to what you are reading in the YUV file, it would be much appreciated.
You have to read the manual carefully. Buffers are padded to multiples of 16, but colour data is half-size, so your image size needs to be in multiples of 32 to avoid problems with padding breaking external software.
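A quick numeric check of this padding interpretation (just a sketch; the 2592x1944 mode and the 16-pixel plane alignment come from the question and the quoted documentation):

# Round a dimension up to the next multiple of 16, as the RaspiCam docs describe.
def pad16(n):
    return (n + 15) // 16 * 16

width, height = 2592, 1944

y_plane  = pad16(width) * pad16(height)            # full-resolution luma plane
uv_plane = pad16(width // 2) * pad16(height // 2)  # each half-resolution chroma plane

total = y_plane + 2 * uv_plane
print(y_plane, uv_plane, total)   # 5059584 1264896 7589376 -- matches the reported buffer length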

Writing the right sax code

I wrote SAX (symbolic aggregate approximation) classification code using the synthetic control training and testing data, but it never gives me the right accuracy (I got 0.2 error while the expected error was 0.02), using window_size = 16 and alphabet = 10. I want to know what the right code to do this is.

Simple voice recognition when whispering

I'm trying to do simple voice-to-text mapping using pocketsphinx. The grammar is very simple, such as:
public <grammar> = (Matt, Anna, Tom, Christine)+ (One | Two | Three | Four | Five | Six | Seven | Eight | Nine | Zero)+ ;
e.g:
Tom Anna Three Three
yields
Tom Anna 33
I adapted the acoustic model (to take into account my foreign accent) and after that I received decent performance (~94% accuracy). I used a training dataset of ~3 minutes.
Right now I'm trying to do the same but by whispering into the microphone. The accuracy dropped significantly, to ~50% without training. With training for accent
I got ~60%. I tried other things, including denoising and boosting the volume. I read the whole docs, but was wondering if anyone could answer some questions so I can
better know in which direction I should go to improve performance.
1) In the tutorial you are adapting the hub4wsj_sc_8k acoustic model. I guess "8k" is a sampling parameter. When using sphinx_fe you use "-samprate 16000". Was it deliberate to adapt an 8k model using data with a 16k sampling rate? Why wasn't data with 8k sampling used? Does it have an influence on performance?
2) In Sphinx 4.1 (in comparison to pocketsphinx) there are different acoustic models, e.g. WSJ_8gau_13dCep_16k_40mel_130Hz_6800Hz.jar. Can those models be used with pocketsphinx? Will an acoustic model with 16k sampling typically have better performance with data having a 16k sampling rate?
3) When using data for training, should I use recordings in normal speaking mode (to adapt only for my accent) or in whispering mode (to adapt to whisper and my accent)? I think I tried both scenarios and didn't notice any difference to draw any conclusion, but I don't know pocketsphinx internals, so I might be doing something wrong.
4) I used the following script to record adaptation training and testing data from the tutorial:
for i in `seq 1 20`; do
fn=`printf arctic_%04d $i`;
read sent; echo $sent;
rec -r 16000 -e signed-integer -b 16 -c 1 $fn.wav 2>/dev/null;
done < arctic20.txt
I noticed that each time I hit Control-C, this keypress is distinct in the recorded audio, which led to errors. Trimming the audio sometimes helped to correct this, or led to other errors instead. Is there any requirement that each recording has a few seconds of quiet before and after speaking?
5) When accumulating observation counts, are there any settings I can tinker with to improve performance?
6) What's the difference between a semi-continuous and a continuous model? Can pocketsphinx use a continuous model?
7) I noticed that the 'mixture_weights' file from sphinx4 is much smaller compared to the one you get in pocketsphinx-extra. Does it make any difference?
8) I tried different combinations of removing white noise (using the 'sox' toolkit, e.g. sox noisy.wav filtered.wav noisered profile.nfo 0.1). Depending on the last parameter, sometimes it improved things a little bit (~3%) and sometimes it made them worse. Is it good to remove noise, or is that something pocketsphinx does as well? My environment is quiet; there is only white noise, which I guess can have more impact when the audio is recorded whispering.
9) I noticed that boosting the volume (gain) alone most of the time only made the performance a little bit worse, even though for humans it was easier to distinguish words. Should I avoid it?
10) Overall I tried different combinations, and the best result I got is ~65% when only removing noise, so only a slight (5%) improvement. Below are some stats:
//ORIGNAL UNPROCESSED TESTING FILES
TOTAL Words: 111 Correct: 72 Errors: 43
TOTAL Percent correct = 64.86% Error = 38.74% Accuracy = 61.26%
TOTAL Insertions: 4 Deletions: 13 Substitutions: 26
//DENOISED + VOLUME UP
TOTAL Words: 111 Correct: 76 Errors: 42
TOTAL Percent correct = 68.47% Error = 37.84% Accuracy = 62.16%
TOTAL Insertions: 7 Deletions: 4 Substitutions: 31
//VOLUME UP
TOTAL Words: 111 Correct: 69 Errors: 47
TOTAL Percent correct = 62.16% Error = 42.34% Accuracy = 57.66%
TOTAL Insertions: 5 Deletions: 12 Substitutions: 30
//DENOISE, threshold 0.1
TOTAL Words: 111 Correct: 77 Errors: 41
TOTAL Percent correct = 69.37% Error = 36.94% Accuracy = 63.06%
TOTAL Insertions: 7 Deletions: 3 Substitutions: 31
//DENOISE, threshold 0.21
TOTAL Words: 111 Correct: 80 Errors: 38
TOTAL Percent correct = 72.07% Error = 34.23% Accuracy = 65.77%
TOTAL Insertions: 7 Deletions: 3 Substitutions: 28
This processing I was doing only for the testing data. Should the training data be processed in the same way? I think I tried that, but there was barely any difference.
11) In all those tests I used the ARPA language model. When using JSGF, results were usually much worse (I have the latest pocketsphinx branch). Why is that?
12) Because in each sentence the maximum number would be '999' and there are no more than 3 names, I modified the JSGF and replaced the repetition sign '+' by repeating the content in the parentheses manually. This time the results were much closer to ARPA. Is there any way in a grammar to give a maximum number of repetitions, like in a regular expression?
13) When using the ARPA model, I generated it using all possible combinations (since the dictionary is fixed and really small: ~15 words), but when testing I was still sometimes receiving illegal results, e.g. Tom Anna (without any required number). Is there any way to enforce some structure using an ARPA model?
14) Should the dictionary be limited only to those ~15 words, or will the full dictionary only affect speed but not accuracy?
15) Is modifying the dictionary (phonemes) the way to go to improve recognition when whispering? (I'm not an expert, but when we whisper I guess some words might sound different?)
16) Any other tips on how to improve accuracy would be really helpful!
Regarding whispering: when you do so, the sound waves don't have meaningful periodic parts (the vibrations that result from your vocal cords resonating normally, but not when whispering). You can try this by putting your finger to your throat while loudly speaking 'aaaaaa', and then just whispering it.
AFAIR acoustic modeling relies a lot on taking the frequency spectrum of the sound to detect peaks (formants) and relate them to phones (like vowels).
Educated guess: when whispering, the spectrum is mostly white noise, slightly shaped by the oral position (tongue, openness of the mouth, etc.), which is enough for humans, but far from enough to make the peaks distinguishable by a computer.
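If you want to see this effect yourself, here is a minimal sketch that overlays the spectra of a voiced and a whispered recording (the filenames are hypothetical placeholders; any mono WAV files will do):

import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile

for name in ("voiced.wav", "whispered.wav"):
    rate, samples = wavfile.read(name)            # assumes mono PCM input
    samples = samples.astype(float)
    windowed = samples * np.hanning(len(samples))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    plt.plot(freqs, 20 * np.log10(spectrum + 1e-9), label=name)

plt.xlim(0, 4000)            # the region where formant peaks live
plt.xlabel("Frequency (Hz)")
plt.ylabel("Magnitude (dB)")
plt.legend()
plt.show()

In the voiced recording you should see clear harmonic peaks; in the whispered one the curve is much flatter and noise-like, which is roughly what the recognizer is struggling with.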