Envelope From Two Tables of Arbitrary Length in Csound

I'm trying to make a pitch envelope out of two function tables. One table holds the pitch values and the other holds the corresponding durations. The two tables will be of equal length, but that length can be any value. Does anyone know any good methods for creating a line with an arbitrary number of points? Or a way of joining two envelopes together, one after another? Thanks!

This would be one possibility:
instr test
  ;; pitch values and their corresponding durations (arrays of equal length)
  kPitches[] fillarray 60, 62, 61, 63
  kDurations[] fillarray 1, 2, 3, 1
  kTime init 0
  kIndx init 0
  ;; when the current segment has elapsed, step to the next pitch/duration pair
  if kTime <= 0 then
    kPitchLine = kPitches[kIndx]
    kTime = kDurations[kIndx]
    kIndx += 1
  endif
  ;; count down by one control period per k-cycle
  kTime -= 1/kr
  aTest poscil .2, mtof(kPitchLine)
  out aTest, aTest
endin
schedule("test", 0, 7)
You can use a function table instead of an array if you prefer, and you can wrap this into a UDO (see http://write.flossmanuals.net/csound/g-user-defined-opcodes/ for more information).
Perhaps also consider joining the Csound mailing list; you will get more suggestions there: https://listserv.heanet.ie/cgi-bin/wa?A0=CSOUND

Related

Understanding Pandas Series Data Structure

I am trying to get my head around the Pandas module and have started learning about the Series data structure.
I have created the following Series in Spyder:
songs = pd.Series(data=[145, 142, 38, 13], name="Count")
I can obtain information about the Series index using the code:
songs.index
The output of the above code is as follows:
RangeIndex(start=0, stop=4, step=1)
My question is: where it states start=0 and stop=4, what are these referring to?
I have interpreted start=0 as meaning the first element of the Series is in row 0.
But I am not sure what the stop value refers to, as there is no element in row 4 of the Series.
Can someone explain?
Thank you.
This concept, as already explained in the comments (the last valid index is one less than the count of items), is prevalent in many places.
For instance, take the list data structure:
z = songs.to_list()
z               # [145, 142, 38, 13]
len(z)          # 4 -- the length is four
# however, indexing stops at position i-1, where 'i' is the length/count of items in the list
z[4]            # this raises an IndexError
# valid positions start at index 0 and go up to index 3 (i.e. 4 items)
z[0], z[1], z[2], z[-1]  # notice how -1 can be used to access the last element directly
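As a quick sketch (plain pandas, reproducing the Series from the question), the stop value is exclusive, exactly like Python's built-in range:
import pandas as pd

songs = pd.Series(data=[145, 142, 38, 13], name="Count")
print(songs.index)        # RangeIndex(start=0, stop=4, step=1)
print(list(songs.index))  # [0, 1, 2, 3] -- stop=4 itself is excluded
print(list(range(0, 4)))  # [0, 1, 2, 3] -- the same half-open convention
So stop=4 simply means "one past the last row label", not that a row 4 exists.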

I want to replace outliers instead of completely removing them... Any suggestions?

I have a dataset in which one column has outliers, and these outliers somehow depend on another column which has 12 different categories.
So I want to replace these outliers with the mean of their categories.
For example:
column A has market_01, market_02, ..., market_12 and
column B has int values 984, 678, 1326, 887, ....., 710, .....
So here I want to replace the outlier value 1326 with its corresponding market_02.mean() rather than simply values.mean().
Try:
via mask() + groupby() + transform():
#Firstly find mean:
m=df.groupby('market')['values'].transform('mean').round(2)
#Finally replace outlier:
df['values']=df['values'].mask(df['values'].eq(1326),m)
OR
via np.where() with groupby() + transform():
m=df.groupby('market')['values'].transform('mean').round(2)
df['values']=np.where(df['values'].eq(1326),m,df['values'])
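For reference, a minimal self-contained sketch of the mask() + groupby() + transform() idea above, using made-up toy data (the column names 'market' and 'values' are assumed from the question):
import pandas as pd

# hypothetical toy data: 1326 is the outlier and belongs to market_02
df = pd.DataFrame({
    'market': ['market_01', 'market_02', 'market_02', 'market_03'],
    'values': [984, 678, 1326, 887],
})

# per-row mean of 'values' within each market (note: the outlier itself is included in the mean)
m = df.groupby('market')['values'].transform('mean').round(2)

# replace the outlier with its market's mean
df['values'] = df['values'].mask(df['values'].eq(1326), m)
print(df)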
Step 1: Calculate the mean of the values corresponding to the market.
mean_val = df[df['market'] == 'market_02']['values'].mean()
Step 2: Now replace all values greater than (or equal to) the value that you believe is an outlier with the above calculated mean.
df['values'] = df['values'].apply(lambda x: mean_val if x >= mean_val else x)
Thank you #Pratik and #Anurag for your answers.
I have solved this with the approach below; this way all my outliers are replaced:
t = df.groupby('market')['values']
df['values'] = (
    df['values'].where(
        t.transform('quantile', q=0.75) > df['values'],
        t.transform('median')))

Why can't I read all of the values in the matrix in Scilab?

I am trying to read a CSV file and my code is as follows:
param=csvRead("C:\Users\USER\Dropbox\VOA-BK code\assets\Iris.csv",",","%i",'double',[],[],[1 2 3 4]); //reads number of clusters and features
data=csvRead("C:\Users\USER\Dropbox\VOA-BK code\assets\Iris.csv",",","%f",'double',[],[],[3 1 19 4]); //reads the values
numft=param(1,1);//save number of features
numcl=param(2,1);//save number of clusters
data_pts=0;
data_pts = max(size(data, "r"));//number of rows
disp(data(numft-3:data_pts,:));//print all data points (I added -3, otherwise it displays only 15 rows)
disp(numft);//print number of features
disp(data_pts);//print number of data points
disp(param);
endfunction
below is the values that i am trying to read
features,4,,
clusters,3,,
5.1,3.5,1.4,0.2
4.9,3,1.4,0.2
4.7,3.2,1.3,0.2
4.6,3.1,1.5,0.2
5,3.6,1.4,0.2
7,3.2,4.7,1.4
6.4,3.2,4.5,1.5
6.9,3.1,4.9,1.5
5.5,2.3,4,1.3
6.5,2.8,4.6,1.5
5.7,2.8,4.5,1.3
6.3,3.3,6,2.5
5.8,2.7,5.1,1.9
7.1,3,5.9,2.1
6.3,2.9,5.6,1.8
6.5,3,5.8,2.2
7.6,3,6.6,2.1
I do not know why the code only displays 15 rows instead of 17. The only time it displays the correct matrix is when I put -3 after numft, but with that the number of columns would be 1. I am so confused. Is there a better way to read the values?
In the csvRead call in the first line of your script the boundaries of the region to read are incorrect. The last argument is the range [first_row first_col last_row last_col], and the two parameters sit in rows 1-2 of column 2, so it should be corrected like this:
param=csvRead("C:\Users\USER\Dropbox\VOA-BK code\assets\Iris.csv",",","%i",'double',[],[],[1 2 2 2]);

Boundary Value Analysis: why use two values inside the boundary?

I can't understand why to use two values inside the boundary when using Boundary Value Analysis.
For instance, the program has the requirement: 1) Values between 1 and 100 are true, otherwise false.
def calc(x):
    if x >= 1 and x <= 100:
        return True
    else:
        return False
A lot of books (Pressman, for instance) say you have to use the inputs 0, 1, 2, 99, 100 and 101 to test such a program.
So, my question is: why use the inputs '2' and '99'?
I have tried to make a program with a fault such that the test case set (0, 1, 2, 99, 100 and 101) exposes a failure while the test case set (0, 1, 100, 101) does not expose it.
I can't make such a program.
Could you make such a program?
If not, it is a waste of resources to create the redundant test cases '2' and '99'.
The basic requirement is to have ±1 of the boundary values. So, to test values for a range of 1-100:
One test case for the exact boundary values of the input domain: 1 and 100.
One test case just below the boundary values of the input domain: 0 and 99.
One test case just above the boundary values of the input domain: 2 and 101.
To answer your question, why use the inputs '2' and '99': because if you are following BVA, you are checking both limits (upper as well as lower) of the range to ensure that the software behaves correctly. However, there are no hard and fast rules. If the range is big enough, then you should have more test points. You can also test the middle values as part of BVA.
Also, you can use switch-case statements or multiple ifs to create such a program; a sketch of one possibility follows below.
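To make the point above concrete, here is one hypothetical sketch (plain Python, all names made up) of a faulty program that the reduced test set (0, 1, 100, 101) would pass but that the inputs 2 and 99 would catch:
def calc(x):
    # faulty version: the boundary values were hard-coded and the
    # inner range was accidentally narrowed (it should be 2 <= x <= 99)
    if x == 1 or x == 100:
        return True
    if 3 <= x <= 98:
        return True
    return False

for x in (0, 1, 2, 99, 100, 101):
    print(x, calc(x))  # 2 and 99 wrongly return False; 0, 1, 100 and 101 all look correct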

Dataframe non-null values differ from value_counts() values

There is an inconsistency with dataframes that I can't explain. In the following, I'm not looking for a workaround (I already found one) but an explanation of what is going on under the hood and how it explains the output.
One of my colleagues, whom I talked into using Python and pandas, has a dataframe "data" with 12,000 rows.
"data" has a column "length" that contains numbers from 0 to 20. She wants to divide the dataframe into groups by length range: 0 to 9 in group 1, 10 to 14 in group 2, 15 and more in group 3. Her solution was to add another column, "group", and fill it with the appropriate values. She wrote the following code:
data['group'] = np.nan
mask = data['length'] < 10;
data['group'][mask] = 1;
mask2 = (data['length'] > 9) & (data['length'] < 15);
data['group'][mask2] = 2;
mask3 = data['length'] > 14;
data['group'][mask3] = 3;
This code is not good, of course. The reason it is not good is that you don't know at run time whether data['group'][mask3], for example, will be a view and thus actually change the dataframe, or a copy, in which case the dataframe remains unchanged. It took me quite some time to explain this to her, since she argued, correctly, that she is doing an assignment, not a selection, so the operation should always return a view.
But that was not the strange part. The part that even I couldn't understand is this:
After performing this set of operations, we verified that the assignment took place in two different ways:
By typing data in the console and examining the dataframe summary. It told us we had a few thousand null values. The number of null values was the same as the size of mask3, so we assumed the last assignment was made on a copy and not on a view.
By typing data.group.value_counts(). That returned 3 values: 1, 2 and 3 (surprise). We then typed data.group.value_counts().sum() and it summed up to 12,000!
So by method 2 the group column contained no null values and had all the values we wanted it to have, but by method 1 it didn't!
Can anyone explain this?
See the pandas docs on returning a view versus a copy.
You don't want to set values this way, for exactly the reason you pointed out: since you don't know whether it's a view, you don't know whether you are actually changing the underlying data. Pandas 0.13 will raise/warn when you attempt to do this, but it is easiest/best to just access it like:
data.loc[mask3, 'group'] = 3
which guarantees an in-place setitem.
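For completeness, here is a small hedged sketch (toy data; the column name 'length' is taken from the question, the rest is made up) of the same grouping done with .loc, which avoids the view-versus-copy ambiguity entirely:
import numpy as np
import pandas as pd

data = pd.DataFrame({'length': np.random.randint(0, 21, size=12000)})

data['group'] = np.nan
data.loc[data['length'] < 10, 'group'] = 1
data.loc[(data['length'] > 9) & (data['length'] < 15), 'group'] = 2
data.loc[data['length'] > 14, 'group'] = 3

# both verification methods now agree: no nulls, and the counts sum to the row count
print(data['group'].isnull().sum())        # 0
print(data['group'].value_counts().sum())  # 12000
A cut-based one-liner such as pd.cut(data['length'], bins=[-1, 9, 14, 20], labels=[1, 2, 3]) would be an even more idiomatic way to build the same grouping.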