How do I tell Wolfram Alpha that I want it to compute the terms of an integer sequence?

I want to compute a list of the terms of the sequence
(14747-40*n)/(2621440*(41-n)), n from 1 to 40
I tried entering the above in Wolfram Alpha, and it plots me a graph of the function. This isn't what I want. I've tried variations on this command, as well as guessing at various keywords to stick before it, and I either get back the same thing, or something unhelpful.
The help page on sequences suggests various things you might do with sequences, but doesn't say how to do something simple like this.

The following works:
Table[(14747-40*n)/(2621440*(41-n))*n, {n, 1, 40}]
Clicking on "approximate form" then on "copy plaintext" gives the following:
{0.000140257, 0.000286924, 0.000440507, 0.000601567, 0.000770728,
0.000948683, 0.00113621, 0.00133417, 0.00154356, 0.00176547,
0.00200115, 0.00225204, 0.00251976, 0.00280618, 0.00311345,
0.00344409, 0.00380101, 0.00418764, 0.00460803, 0.00506701,
0.00557035, 0.00612508, 0.00673974, 0.00742493, 0.00819385,
0.00906326, 0.0100547, 0.0111963, 0.0125257, 0.0140939, 0.0159728,
0.0182658, 0.0211282, 0.0248041, 0.0297003, 0.0365488, 0.0468139,
0.0639122, 0.0980936, 0.200607}
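If you want to sanity-check those numbers outside Wolfram Alpha, here is a minimal sketch in Python (standard library only) that reproduces the Table expression above, including its trailing factor of n:
from fractions import Fraction

# Exact terms of (14747 - 40 n) / (2621440 (41 - n)) * n for n = 1..40
terms = [Fraction((14747 - 40 * n) * n, 2621440 * (41 - n)) for n in range(1, 41)]
print([round(float(t), 9) for t in terms[:3]])
# [0.000140257, 0.000286924, 0.000440507]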

How to define nonlinear vector constraints in Julia

I'm trying to minimize a function which takes a vector as input and is subject to some nonlinear constraints. I'm very new to Julia. I'm trying to implement pseudospectral methods using Ipopt. My issue is that the optimizer I'm using needs the gradient of the cost function and constraints, and packages like ForwardDiff and ReverseDiff are not helping me find the gradient of my vector function.
I found that a similar issue has been faced by #acauligi. So far I haven't found any solution.
using LinearAlgebra, DiffEqOperators, ForwardDiff, ApproxFun, FFTW, ToeplitzMatrices
using ModelingToolkit, DifferentialEquations, NLPModels, ADNLPModels, NLPModelsIpopt
using DCISolver, JSOSolvers
# Number of collocation points
N = 31 # This number can go up to 200
# Fourier spectral differentiation matrix on N periodic collocation points
function Dmatrix(N::Integer)
    h = 2 * pi / N
    ns = range(1, N - 1, step=1)
    col1(nps) = 0.5 * ((-1)^nps) / sin(nps * h / 2)
    col = [0, col1.(ns)...]
    row = [0, col[end:-1:2]...]
    D = Toeplitz(col, row)
end
Dmat = Dmatrix(N)
# Vector field of the system; a is an additional unknown parameter
function dzdt(x, y, t, a)
    u = (1 - (x^2) / 4) - y^2
    dx = -4 * y + x * u + a * x
    dy = x + y * u + a * y
    [dx, dy]
end
# initial guess
tfinal=1.1*pi;
tpoints=collect(range(1,N,step=1))*tfinal/N;
xguess=sin.((2*pi/tfinal)*tpoints)*2.0
yguess=-sin.((2*pi/tfinal)*tpoints)*0.5
function dxlist(xs, ys, tf, a)
    nstates = 2
    ts = collect(range(1, N, step=1)) * tf / N
    xytsZip = zip(xs, ys, ts)
    # Dynamics evaluated at the collocation points
    dxD0 = [dzdt(x, y, t, a) for (x, y, t) in xytsZip]
    dxD = reduce(hcat, dxD0)'
    xlyl = reshape([xs; ys], N, nstates)
    # Spectral estimate of the derivatives
    dxF = (Dmat * xlyl) * (2.0 * pi / tf)
    err = dxD - dxF
    # Two-sided bound |err| <= 1e-10, written as c(x) <= 0
    [vcat(err'...) .- 10^(-10); -vcat(err'...) .+ 10^(-10)]
end
function cons(x)
    # Unpack the decision vector: states, then final time tf and parameter a
    tf = x[end-1]
    a = x[end]
    xs1 = x[1:N]
    ys1 = x[N+1:2*N]
    dxlist(xs1, ys1, tf, a)
end
a0=10^-3;
x0=vcat([xguess;yguess;[tfinal,a0]]);
obj(x)=0.0;
xlower1=push!(-3*ones(2*N),pi);
xlower=push!(xlower1,-10^-3)
xupper1=push!(3*ones(2*N),1.5*pi);
xupper=push!(xupper,10^-3)
consLower=-ones(4*N)*Inf;
consUpper=zeros(4*N)
# println("constraints vector = ",cons(x0))
model = ADNLPModel(obj, x0, xlower, xupper, cons, consLower, consUpper;
                   backend = ADNLPModels.ReverseDiffAD)
output=ipopt(model)
xstar=output.solution
fstar=output.objective
I got the solution for this same problem in 3 minutes in MATLAB (the time period of the system is pi when a = 0). I was hoping I could get the same result much faster in Julia. I have asked on the Julia Discourse, but so far I haven't got any suggestions. Any suggestion on how to fix this issue is highly appreciated. Thank you all.
I think there were two issues with your code. First, the upper bound should be built from xupper1 (your code pushes onto xupper before it is defined):
xupper1=push!(3*ones(2*N),1.5*pi);
xupper=push!(xupper1,10^-3)
and second, for some reason the product of the Toeplitz matrix with another matrix gives an error with the automatic differentiation. However, the following works:
function dxlist(xs, ys, tf, a)
    nstates = 2
    ts = collect(range(1, N, step=1)) * tf / N
    xytsZip = zip(xs, ys, ts)
    dxD0 = [dzdt(x, y, t, a) for (x, y, t) in xytsZip]
    dxD = reduce(hcat, dxD0)'
    xlyl = reshape([xs; ys], N, nstates)
    # Multiply Dmat by one column at a time so automatic differentiation works
    dxF = vcat((Dmat * xlyl[:, 1]) * (2.0 * pi / tf), (Dmat * xlyl[:, 2]) * (2.0 * pi / tf))
    err = vcat(dxD...) - dxF
    [err .- 10^(-10); -err .+ 10^(-10)]
end
In the end, Ipopt returns the right results:
model=ADNLPModel(obj,x0,xlower,xupper,cons,consLower,consUpper)
output=ipopt(model)
xstar=output.solution
fstar=output.objective
I also noticed that using Percival.jl is faster:
using Percival
output=percival(model, max_time = 300.0)
xstar=output.solution
fstar=output.objective
Note that ADNLPModels.jl is receiving some attention and will improve significantly.

Create subgraph query in Gremlin around single node with outgoing and incoming edges

I have a large JanusGraph database and I'd like to create a subgraph centered around one node type, including incoming and outgoing nodes of specific types.
In Cypher, the query would look like this:
MATCH (a:Journal)<-[:PublishedIn]-(b:Paper{paperTitle:'My Paper Title'})<-[:AuthorOf]-(c:Author)
RETURN a,b,c
This is what I tried in Gremlin:
sg = g.V().outE('PublishedIn').subgraph('j_p_a').has('Paper','paperTitle', 'My Paper Title')
.inE('AuthorOf').subgraph('j_p_a')
.cap('j_p_a').next()
But I get a syntax error. 'AuthorOf' and 'PublishedIn' are not the only edge types ending at 'Paper' nodes.
Can someone show me how to correctly execute this query in Gremlin?
As written in your query, the outE step yields edges and the has step will check properties on those edges; after that, the query processor will expect an inV, not another inE. Without your data model it is hard to know exactly what you need, but looking at the Cypher, I think this is what you want:
sg = g.V().outE('PublishedIn').
     subgraph('j_p_a').
     inV().
     has('Paper','paperTitle', 'My Paper Title').
     inE('AuthorOf').
     subgraph('j_p_a').
     cap('j_p_a').
     next()
Edited to add:
As I do not have your data, I used my air-routes graph. I modeled this query on yours and used some select steps to limit the amount of data processed. This seems to work in my testing. Hopefully you can see the changes I made and can try them in your query.
sg = g.V().outE('route').as('a').
inV().
has('code','AUS').as('b').
select('a').
subgraph('sg').
select('b').
inE('contains').
subgraph('sg').
cap('sg').
next()

Sentence segmentation and dependency parser

I’m pretty new to python (using python 3) and spacy (and programming too). Please bear with me.
I have three questions, two of which are more or less the same; I just can't get them to work.
I took the “syntax specific search with spacy” (example) and tried to make different things work.
My program currently reads a txt file, and the normal extraction
if w.lower_ != 'music':
    return False
works.
My first question is: How can I get spacy to extract two words?
For example: “classical music”
With the previously mentioned snippet I can make it extract either 'classical' or 'music'. But if I only search for one of the words, I also get results I don't want, like:
Classical – period / era
Or, when I look for only music:
Music – baroque, modern
The second question is: How can I get the dependencies to work?
The example dependency with:
elif w.dep_ != 'nsubj':  # Is it the subject of a verb?
    return False
works fine. But everything else I tried does not really work.
For example, I want to extract sentences with the word “birthday” and the dependency ‘DATE’. (so the dependency is an entity)
I got
if d.ent_type_ != 'DATE':
    return False
to work.
So now it would look like:
def extract_information(w, d):
    if w.lower_ != 'birthday':
        return False
    elif d.ent_type_ != 'DATE':
        return False
    else:
        return True
Does something like this even work?
If it works, the third question would be how I can filter sentences, for example by a DATE: if the sentence contains a certain word and a DATE, exclude it.
Last thing, maybe: I read somewhere that the dependencies are based on the "Stanford typed dependencies manual". Is there a list of which of those dependencies work with spaCy?
Thank you for your patience and help :)
Before I get into offering some simple suggestions to your questions, have you tried using displaCy's visualiser on some of your sentences?
Using an example sentence 'John's birthday was yesterday', you'll find that within the parsed sentence, birthday and yesterday are not necessarily direct dependencies of one another. So searching based on the word birthday having a dependency of a DATE type entity might not yield the best results.
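For example, a minimal sketch for visualising that sentence (assuming spaCy v2+, where displaCy ships with the library, and that the small English model is installed):
import spacy
from spacy import displacy

# Assumes: python -m spacy download en_core_web_sm
nlp = spacy.load('en_core_web_sm')
doc = nlp(u"John's birthday was yesterday")

# Serves the dependency tree on a local web page;
# use style='ent' to visualise entities instead
displacy.serve(doc, style='dep')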
Onto the first question:
A brute force method would be to look for matching subsequent words after you have parsed the sentence.
doc = nlp(u'Mary enjoys classical music.')
for (i, token) in enumerate(doc):
    if (token.lower_ == 'classical') and (i != len(doc) - 1):
        if doc[i+1].lower_ == 'music':
            print('Target Acquired!')
If you're unsure of what enumerate does, look it up. It's the pythonic way of using python.
To questions 2 and 3, one simple (but not elegant) way of solving this is to just identify in a parsed sentence if the word 'birthday' exists and if it contains an entity of type 'DATE'.
doc = nlp(u'John\'s birthday was yesterday.')
for token in doc:
    if token.lower_ == 'birthday':
        for entities in doc.ents:
            if entities.label_ == 'DATE':
                print('Found ya!')
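For the filtering part of question 3, here is a minimal sketch (assuming spaCy v2+ and reusing the nlp object from above) that drops sentences containing both the target word and a DATE entity:
doc = nlp(u"John's birthday was yesterday. He also enjoys classical music.")

kept = []
for sent in doc.sents:
    has_word = any(token.lower_ == 'birthday' for token in sent)
    has_date = any(ent.label_ == 'DATE' for ent in sent.ents)
    if not (has_word and has_date):
        kept.append(sent.text)
print(kept)  # only the second sentence survives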
As for the list of dependencies, I presume you're referring to the Part-Of-Speech tags. Check out the documentation on this page.
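If it helps, a tiny sketch (assuming spaCy v2+) for looking up what a given label means programmatically:
import spacy

# spacy.explain returns a short gloss for dependency, POS, and entity labels
print(spacy.explain('nsubj'))  # nominal subject
print(spacy.explain('DATE'))   # Absolute or relative dates or periods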
Good luck! Hope that helped.

options for questions in Watson conversation api

How do I get the available options for a certain question in the Watson Conversation API?
For example, I have a conversation app and in some cases I need to give the users a list to select an option from.
So I am searching for a way to get the available reply options for a certain question.
I can't answer the NPM part, but you can get a list of the top 10 possible answers by setting alternate_intents to true. For example:
{
  "context": {
    "conversation_id": "cbbea7b5-6971-4437-99e0-a82927607079",
    "system": {
      "dialog_stack": ["root"],
      "dialog_turn_counter": 1,
      "dialog_request_counter": 1
    }
  },
  "alternate_intents": true,
  "input": {
    "text": "Is it hot outside?"
  }
}
This will return at most the top ten answers. If there is a limited number of intents it will only show them.
Part of your JSON response will have something like this:
"intents":[{
"intent":"temperature",
"confidence":0.9822100598134365
},
{
"intent":"conditions",
"confidence":0.017789940186563623
}
This won't get you the output text from the node, though, so you will need to have your answers stored elsewhere to cross-reference.
Also be aware that just because it is in the list, doesn't mean it's a valid answer to give the end user. The confidence level needs to be taken into account.
The confidence level also does not work like a normal confidence. You need to determine your upper and lower bounds. I detail this briefly here.
Unlike earlier versions of WEA, the confidence is relative to the number of intents you have. So the quickest way to find the lowest confidence is to send a really ambiguous word.
These are the results I get for determining temperature or conditions.
treehouse = conditions / 0.5940327076534431
goldfish = conditions / 0.5940327076534431
music = conditions / 0.5940327076534431
See a pattern?🙂 So I will set the low confidence level at 0.6. Next is to determine the higher confidence range. You can do this by mixing intents within the same question text. It may take a few goes to get a reasonable result.
These are results from trying this (C = Conditions, T = Temperature).
hot rain = T/0.7710267712183176, C/0.22897322878168241
windy desert = C/0.8597747113239446, T/0.14022528867605547
ice wind = C/0.5940327076534431, T/0.405967292346557
I purposely left out high confidence ones. In this case I am going to go with 0.8 as the high confidence level.
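As a minimal sketch of how those hand-tuned bounds might be applied to the intents array above (the 0.6/0.8 values come from this answer, and the helper name is just illustrative, not part of the API):
LOW_CONFIDENCE = 0.6    # below this: treat as "no real answer"
HIGH_CONFIDENCE = 0.8   # above this: safe to answer directly

def triage(response):
    """Bucket the top intent using the bounds determined above."""
    top = response['intents'][0]
    if top['confidence'] >= HIGH_CONFIDENCE:
        return 'answer', top['intent']
    if top['confidence'] >= LOW_CONFIDENCE:
        return 'clarify', top['intent']   # maybe offer options to the user
    return 'fallback', None

response = {"intents": [
    {"intent": "temperature", "confidence": 0.9822100598134365},
    {"intent": "conditions", "confidence": 0.017789940186563623},
]}
print(triage(response))   # ('answer', 'temperature')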

pseudo randomization in loop PsychoPy

I know other people have asked similar questions in the past, but I am still stuck on how to solve the problem and was hoping someone could offer some help. Using PsychoPy, I would like to present different images: specifically, 16 emotional trials, 16 neutral trials and 16 face trials. I would like to pseudo-randomize the loop such that there would not be more than 2 consecutive emotional trials. I created the experiment in Builder but compiled a script after reading through previous posts on pseudo-randomization.
I have read the previous posts that suggest creating randomized Excel files and using those, but considering how many trials I have, I think that would be too many, and I was hoping for some help with coding. I have tried to implement and tweak some of the code that has been posted for my experiment, but to no avail.
Does anyone have any advice for my situation?
Thank you,
Rae
Here's an approach that will always converge very quickly, given that you have 16 of each type and only reject runs of more than two emotion trials. #brittUWaterloo's suggestion to generate trials offline is very good; this is what I do myself, typically. (I like to have a small number of random orders, run them forward for some subjects and backwards for others, and prescreen them to make sure there are no weird or unintended juxtapositions.) But the algorithm below is certainly safe enough to run within an experiment if you prefer.
This first example assumes that you can represent a given trial using a string, such as 'e' for an emotion trial, 'n' for neutral, 'f' for face. This would work with 'emo', 'neut', 'face' as well, not just single letters; just change 'eee' to 'emoemoemo' in the code:
import random
trials = ['e'] * 16 + ['n'] * 16 + ['f'] * 16
while 'eee' in ''.join(trials):
    random.shuffle(trials)
print(trials)
Here's a more general way of doing it, where the trial codes are not restricted to be strings (although they are strings here for illustration):
import random

def run_of_3(trials, obj):
    # detect if there's a run of at least 3 objects 'obj'
    for i in range(2, len(trials)):
        if trials[i-2: i+1] == [obj] * 3:
            return True
    return False

tr = ['e'] * 16 + ['n'] * 16 + ['f'] * 16
while run_of_3(tr, 'e'):
    random.shuffle(tr)
print(tr)
Edit: To create a PsychoPy-style conditions file from the trial list, just write the values into a file like this:
with open('emo_neu_face.csv', 'w') as f:  # use 'w' (text mode) in Python 3
    f.write('stim\n')        # this is a 'header' row
    f.write('\n'.join(tr))   # these are the values
Then you can use that as a conditions file in a Builder loop in the regular way. You could also open this in Excel, and so on.
This is not quite right, but hopefully it will give you some ideas. I think you could occasionally get caught in an infinite cycle in the elif statement if the last three items ended up the same, but you could add some sort of a counter there (see the sketch at the end of this answer). In any case, this shows a strategy you could adapt. Rather than put this in the experimental code, I would generate the trial sequence separately at the command line, and then save a successful output as a list in the experimental code to show to all participants, so I'd know things wouldn't crash during an actual run.
import random as r

# making some dummy data
abc = ['f'] * 10 + ['e'] * 10 + ['d'] * 10

def f(l1, l2):
    # just looking at the output to see how it works; can delete
    print("l1 = " + str(l1))
    print(l2)
    if not l2:
        # checks if second list is empty; if so, we are done
        out = list(l1)
    elif l1[-1] == l1[-2] and l1[-1] == l2[0]:
        # shuffling changes the list in place; have to copy it to use it
        r.shuffle(l2)
        t = list(l2)
        f(l1, t)
    else:
        print("i am here")
        l1.append(l2.pop(0))
        f(l1, l2)
    return l1
You would then run it with something like newlist = f(abc[0:2], abc[2:]).
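As a minimal sketch of the counter idea mentioned above (the function name and retry cap are just illustrative), here is a reshuffle-with-retries variant that rejects a run of three of any trial type and gives up cleanly instead of looping forever:
import random

def pseudo_randomize(trials, max_tries=10000):
    # Reshuffle until no three consecutive trials share a code,
    # or give up after max_tries attempts
    trials = list(trials)
    for _ in range(max_tries):
        random.shuffle(trials)
        if not any(trials[i] == trials[i+1] == trials[i+2]
                   for i in range(len(trials) - 2)):
            return trials
    raise RuntimeError('no valid order found; is the constraint too strict?')

print(pseudo_randomize(['e'] * 16 + ['n'] * 16 + ['f'] * 16))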