How to use org.openimaj.ml.gmm to construct speaker models (verification)

I would like to know how I can build a GMM speaker model using the OpenIMAJ library (org.openimaj.ml.gmm.GaussianMixtureModelEM). I have tried the following:
GaussianMixtureModelEM gmm = new GaussianMixtureModelEM(
        DEFAULT_NUMBER_COMPONENTS, GaussianMixtureModelEM.CovarianceType.Diagonal);
MixtureOfGaussians mixture = gmm.estimate(data);
boolean converged = gmm.hasConverged();
hasConverged() returns true, so the EM estimation has converged, but I am lost as to where to go from here. Any help or guidance would be appreciated.

Given your comment, mixture.estimateLogProbability(point) should do what you want (see http://www.openimaj.org/apidocs/org/openimaj/math/statistics/distribution/MixtureOfGaussians.html#estimateLogProbability(double[])).
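For anyone who wants to see the scoring step end to end, here is a rough Python sketch of the same idea using scikit-learn's GaussianMixture rather than OpenIMAJ; the component count, feature dimensionality and random data are made up, and score_samples plays the role that estimateLogProbability plays in MixtureOfGaussians.

# Not OpenIMAJ: a scikit-learn analogue of GMM speaker scoring, with made-up shapes.
import numpy as np
from sklearn.mixture import GaussianMixture

n_components = 16  # analogous to DEFAULT_NUMBER_COMPONENTS (assumed value)
train_frames = np.random.randn(1000, 13)   # stand-in for per-frame speaker features

speaker_gmm = GaussianMixture(n_components=n_components,
                              covariance_type='diag').fit(train_frames)

# Scoring an utterance against the speaker model: per-frame log-likelihoods,
# averaged into a single score.
utterance_frames = np.random.randn(200, 13)
score = speaker_gmm.score_samples(utterance_frames).mean()
print(score)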

Related

How to define non-linear vector constraints in Julia

I'm trying to minimize a function which takes a vector as input and is subject to some non-linear constraints. I'm very new to Julia. I'm trying to implement pseudospectral methods using Ipopt. My issue is that the optimizer I'm using needs gradients of the cost function and the constraints, and packages like ForwardDiff and ReverseDiff are not helping me find the gradient of my vector-valued function.
I found that a similar issue has been faced by #acauligi. So far I haven't found any solution.
using LinearAlgebra, DiffEqOperators, ForwardDiff, ApproxFun, FFTW, ToeplitzMatrices
using ModelingToolkit,DifferentialEquations,NLPModels,ADNLPModels,NLPModelsIpopt
using DCISolver,JSOSolvers
# Number of collocation points
N=31 # This number can go up to 200
function Dmatrix(N::Integer)
    h=2*pi/N;
    ns=range(1,N-1,step=1);
    col1(nps)=0.5*((-1)^nps)/sin(nps*h/2);
    col=[0,col1.(ns)...];
    row=[0,col[end:-1:2]...];
    D=Toeplitz(col,row)
end
Dmat=Dmatrix(N);
function dzdt(x,y,t,a)
    u=(1-(x^2)/4)-y^2;
    dx=-4*y+x*u+a*x;
    dy=x+y*u+a*y;
    [dx,dy]
end
# initial guess
tfinal=1.1*pi;
tpoints=collect(range(1,N,step=1))*tfinal/N;
xguess=sin.((2*pi/tfinal)*tpoints)*2.0
yguess=-sin.((2*pi/tfinal)*tpoints)*0.5
function dxlist(xs,ys,tf,a)
    nstates=2
    ts=collect(range(1,N,step=1))*tf/N;
    xytsZip=zip(xs,ys,ts);
    dxD0=[dzdt(x,y,t,a) for (x,y,t) in xytsZip];
    dxD=reduce(hcat, dxD0)';
    xlyl=reshape([xs;ys],N,nstates);
    dxF=(Dmat*xlyl)*(2.0*pi/tf);
    err=dxD-dxF;
    [vcat(err'...).-10^(-10);-vcat(err'...).+10^(-10)]
end
function cons(x)
    tf=x[end-1];
    a=x[end];
    xs1=x[1:N];
    ys1=x[N+1:2*N];
    dxlist(xs1,ys1,tf,a)
end
a0=10^-3;
x0=vcat([xguess;yguess;[tfinal,a0]]);
obj(x)=0.0;
xlower1=push!(-3*ones(2*N),pi);
xlower=push!(xlower1,-10^-3)
xupper1=push!(3*ones(2*N),1.5*pi);
xupper=push!(xupper,10^-3)
consLower=-ones(4*N)*Inf;
consUpper=zeros(4*N)
# println("constraints vector = ",cons(x0))
model=ADNLPModel(obj,x0,xlower,xupper,cons,consLower,consUpper; backend =
ADNLPModels.ReverseDiffAD)
output=ipopt(model)
xstar=output.solution
fstar=output.objective
I got the solution to this same problem in 3 minutes in MATLAB (the time period of the system is pi when a=0). I was hoping I could get the same result much faster in Julia. I have asked on the Julia Discourse, but so far I haven't got any suggestions. Any suggestion on how to fix this issue would be highly appreciated. Thank you all.
I think there were two issues with your code. First, the upper bound should be built by pushing onto xupper1:
xupper1=push!(3*ones(2*N),1.5*pi);
xupper=push!(xupper1,10^-3)
Second, for some reason the product of the Toeplitz matrix with another matrix gives an error with the automatic differentiation. However, the following works:
function dxlist(xs,ys,tf,a)
    nstates=2
    ts=collect(range(1,N,step=1))*tf/N;
    xytsZip=zip(xs,ys,ts);
    dxD0=[dzdt(x,y,t,a) for (x,y,t) in xytsZip];
    dxD=reduce(hcat, dxD0)';
    xlyl=reshape([xs;ys],N,nstates);
    dxF=vcat((Dmat*xlyl[:,1])*(2.0*pi/tf), (Dmat*xlyl[:,2])*(2.0*pi/tf));
    err=vcat(dxD...) - dxF;
    [err.-10^(-10);-err.+10^(-10)]
end
In the end, Ipopt returns the right result:
model=ADNLPModel(obj,x0,xlower,xupper,cons,consLower,consUpper)
output=ipopt(model)
xstar=output.solution
fstar=output.objective
I also noticed that using Percival.jl is faster:
using Percival
output=percival(model, max_time = 300.0)
xstar=output.solution
fstar=output.objective
Note that ADNLPModels.jl is receiving some attention and will improve significantly.

How can I access a value in a sequence type?

There are the following attributes in client_output:
weights_delta = attr.ib()
client_weight = attr.ib()
model_output = attr.ib()
client_loss = attr.ib()
After that, I made client_output into a sequence through a = tff.federated_collect(client_output) and round_model_delta = tff.federated_map(selecting_fn, a), and I declared:
@tff.tf_computation()  # append
def selecting_fn(a):
    # TODO
    return round_model_delta
In the process of averaging on the server, I want to average weights_delta over a selection of the clients with a small loss value. So I tried to access it via a.weights_delta, but it doesn't work.
tff.federated_collect returns a tff.SequenceType placed at tff.SERVER, which you can manipulate the same way as, for example, a client dataset is usually handled in a method decorated by tff.tf_computation.
Note that you have to use the tff.federated_collect operator in the scope of a tff.federated_computation. What you probably want to do[*] is pass it into a tff.tf_computation, using the tff.federated_map operator. Once inside the tff.tf_computation, you can think of it as a tf.data.Dataset object and everything in the tf.data module is available.
[*] I am guessing. More detailed explanation of what you would like to achieve would be helpful.
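To make that concrete, here is a minimal, simplified sketch of such a tff.tf_computation. It is not your exact setup: weights_delta is modelled as a single float32 vector of length 2, client_loss as a scalar, and the 0.5 loss threshold is an arbitrary placeholder.

import collections
import tensorflow as tf
import tensorflow_federated as tff

# Hypothetical, simplified element type; in your code this would mirror
# the attrs class behind client_output.
ELEMENT_TYPE = tff.to_type(collections.OrderedDict(
    weights_delta=tff.TensorType(tf.float32, [2]),
    client_loss=tff.TensorType(tf.float32)))

@tff.tf_computation(tff.SequenceType(ELEMENT_TYPE))
def selecting_fn(client_outputs):
    # Inside the tf_computation the sequence behaves like a tf.data.Dataset.
    low_loss = client_outputs.filter(lambda c: c['client_loss'] < 0.5)
    total = low_loss.reduce(tf.zeros([2]),
                            lambda acc, c: acc + c['weights_delta'])
    count = low_loss.reduce(tf.constant(0.0), lambda acc, _: acc + 1.0)
    return total / tf.maximum(count, 1.0)

As in your snippet, round_model_delta = tff.federated_map(selecting_fn, a) inside the federated computation would then give the averaged delta placed at the server.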

Matplotlib - Draw H and V line by specifying X or Y value on a plot

I was wondering today how to find a specific value on a plot and draw the lines that go with it. I used to do that with an old chart library, and I suspect this functionality exists, but I don't know how to find it.
The result should look like this: https://miro.medium.com/max/1070/1*Ckhi9soE9Lx2lIf9tPVLMQ.png
To provide some context, I'm doing a PCA over my data, and I would like to point out some thresholds at 97.5, 99 and 99.5% of cumulative explained variance.
Have a great day!
EDIT: As solved by ImportanceOfBeingErnest (see the answer), here is the code:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

whole_pca = PCA().fit(np.array(inputs['Scale'].tolist()))
cumul = np.cumsum(np.round(whole_pca.explained_variance_ratio_, decimals=3)*100)
over_95 = np.argmax(cumul>95)
over_99 = np.argmax(cumul>99)
over_995 = np.argmax(cumul>99.5)
plt.plot(cumul)
plt.plot([0,over_95,over_95], [95,95,0])
plt.plot([0,over_99,over_99], [99,99,0])
plt.plot([0,over_995,over_995], [99.5,99.5,0])
plt.xlim(left=0)
plt.ylim(bottom=80)
plt.ylabel('% Variance Explained')
plt.xlabel('# of Features')
plt.title('PCA Analysis')
This results in the cumulative explained variance plot with the threshold lines. Thank you!
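For reference, the same threshold lines can also be drawn with Matplotlib's hlines/vlines helpers instead of three-point plt.plot calls; this sketch assumes cumul, over_95, over_99 and over_995 as computed above.

import matplotlib.pyplot as plt

plt.plot(cumul)
# For each threshold: a horizontal line from the y-axis to the crossing point,
# then a vertical drop from the crossing point down to the x-axis.
for threshold, crossing in [(95, over_95), (99, over_99), (99.5, over_995)]:
    plt.hlines(threshold, xmin=0, xmax=crossing, linestyles='dashed')
    plt.vlines(crossing, ymin=0, ymax=threshold, linestyles='dashed')
plt.xlim(left=0)
plt.ylim(bottom=80)
plt.show()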

Rally API Fetch All Features under specific Epic

Hi, I cannot find this anywhere and it seems like it should be easy.
I am trying to get all the Features under a specific Epic with Alteryx.
I am using these queries to get them all, but I want to make them more precise:
https://rally1.rallydev.com/slm/webservice/v2.0/portfolioitem/feature?pagesize=2000
https://rally1.rallydev.com/slm/webservice/v2.0/hierarchicalrequirement?workspace=https://rally1.rallydev.com/slm/webservice/v2.0/workspace/....
Can anyone help please?
I got another solution, but it is still not perfect.
First I am getting the SubEpics:
https://rally1.rallydev.com/slm/webservice/v2.0/portfolioitem/subepic?pagesize=2000&&start=1&query=(Parent.FormattedID = E101)
And then Features:
https://rally1.rallydev.com/slm/webservice/v2.0/portfolioitem/feature?pagesize=2000&query=(((Parent.FormattedID = SE104) OR (Parent.FormattedID = SE101)) OR ((Parent.FormattedID = 102) OR (Parent.FormattedID = 103)))
OK! The correct answer for Features is the following :) I got it with help from SAGI GABAY from CA:
https://rally1.rallydev.com/slm/webservice/v2.0/portfolioitem/feature?pagesize=2000&&start=1&query=(Parent.Parent.FormattedID = E101)
I have also asked about the User Story level: going further the same way (Parent.Parent.Parent.FormattedID) doesn't work anymore, but you can do this:
https://rally1.rallydev.com/slm/webservice/v2.0/hierarchicalrequirement?pagesize=2000&&start=1&query=(Feature.Parent.FormattedID = SE101)
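For anyone running the same query outside Alteryx, here is a rough Python sketch against the same endpoint; the ZSESSIONID header with an API key and the fetch fields are assumptions about your authentication setup and the attributes you need.

import requests

API_KEY = "_your_rally_api_key"  # hypothetical placeholder
URL = "https://rally1.rallydev.com/slm/webservice/v2.0/portfolioitem/feature"

params = {
    "pagesize": 2000,
    "start": 1,
    "query": "(Parent.Parent.FormattedID = E101)",
    "fetch": "FormattedID,Name",
}
response = requests.get(URL, params=params, headers={"ZSESSIONID": API_KEY})
response.raise_for_status()

# Rally WSAPI wraps results in a QueryResult object.
for feature in response.json()["QueryResult"]["Results"]:
    print(feature["FormattedID"], feature["Name"])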

How to make spaCy case-insensitive

How can I make spaCy case-insensitive when finding entity names?
Is there any code snippet that I should add, given that the questions could mention entities that are not capitalized?
def analyseQuestion(question):
    doc = nlp(question)
    entity = doc.ents
    return entity
print(analyseQuestion("what is the best seller of Nicholas Sparks "))
print(analyseQuestion("what is the best seller of nicholas sparks "))
which gives
(Nicholas Sparks,)
()
This is old, but hopefully it will help anyone looking at similar problems.
You can use a truecaser to improve your results.
https://pypi.org/project/truecase/
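A minimal sketch of that idea, assuming the truecase package's get_true_case helper and the en_core_web_sm model as placeholders; the casing is restored before spaCy ever sees the text.

import spacy
import truecase

nlp = spacy.load("en_core_web_sm")

def analyseQuestion(question):
    # Restore the most likely casing before running NER.
    return nlp(truecase.get_true_case(question)).ents

print(analyseQuestion("what is the best seller of nicholas sparks"))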
It is very easy. You just need to add a preprocessing step of question.lower() to your function:
def analyseQuestion(question):
    # Preprocess the question to make further analysis case-insensitive
    question = question.lower()
    doc = nlp(question)
    entity = doc.ents
    return entity
The solution is inspired by this code from the Rasa NLU library. However, for non-English (non-ASCII) text it might not work; in that case you can try:
question = question.decode('utf8').lower().encode('utf8')
However, the NER module in spaCy depends to some extent on the case of the tokens, and you might face some discrepancies since it is a statistically trained model. Refer to this link.
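If lowercasing hurts the statistical NER too much, another option is matching known entity names case-insensitively with spaCy's PhraseMatcher on the LOWER attribute; a sketch, with the model name and the entity list as placeholders.

import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.load("en_core_web_sm")

# Match on the lowercase form of the tokens, so casing does not matter.
matcher = PhraseMatcher(nlp.vocab, attr="LOWER")
matcher.add("PERSON", [nlp.make_doc("Nicholas Sparks")])

doc = nlp("what is the best seller of nicholas sparks")
for match_id, start, end in matcher(doc):
    print(doc[start:end])  # nicholas sparks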