CNTK Runtime Error related to C.gather - cntk

I get an error while trying to test CNTK.
I'm trying to slice a parameter using an input_variable as the index. Using C.gather for the slicing causes a memory error during the backprop pass.
The error occurs in every CNTK 2 environment I have tried: CPU, GPU, Docker, and local installation.
Error Msg and Callstack
RuntimeError: CUBLAS failure 11: CUBLAS_STATUS_MAPPING_ERROR ; GPU=0 ;
hostname=.... ; expr=cublasGetMatrix((int) numRows, (int)
numCols, sizeof(ElemType), Data(), (int) GetNumRows(), dst, (int)
colStride)
[CALL STACK]
Microsoft::MSR::CNTK::CudaTimer:: Stop
- Microsoft::MSR::CNTK::Matrix:: CopySection
- Microsoft::MSR::CNTK::Matrix:: AssignValuesOf
- CNTK::NDArrayView:: CopyFrom
- CNTK::NDArrayView::NDArrayView
- CNTK::TrainingParameterSchedule:: Serialize
- CNTK::DictionaryValue:: Save
- CNTK::Trainer:: SummarizeTrainingProgress
- PyInit__cntk_py
- PyCFunction_Call
- PyEval_GetFuncDesc
- PyEval_EvalFrameEx
- PyEval_GetFuncDesc (x2)
- PyEval_EvalFrameEx (x2)
Code
x = input_val[:-2]
p1 = input_val[-2]
p2 = input_val[-1]
activator = relu
W1 = C.Parameter((slices,input_dim,hidden_layers_dim), init=C.glorot_normal(), name='W1')
b1 = C.Parameter((slices,hidden_layers_dim), init=0, name='b1')
W2 = C.Parameter((slices,hidden_layers_dim,hidden_layers_dim), init=C.glorot_normal(), name='W2')
b2 = C.Parameter((slices,hidden_layers_dim), init=0, name='b2')
W3 = C.Parameter((slices,hidden_layers_dim,output_dim), init=C.glorot_normal(), name='W3')
b3 = C.Parameter((slices,output_dim), init=0, name='b3')
W11 = C.gather(W1, p1)
b11 = C.gather(b1, p1)
W1x = C.reshape(W11, (input_dim,hidden_layers_dim))
b1x = C.reshape(b11, (hidden_layers_dim,))
W21 = C.gather(W2, p1)
b21 = C.gather(b2, p1)
W2x = C.reshape(W21, (hidden_layers_dim,hidden_layers_dim))
b2x = C.reshape(b21, (hidden_layers_dim,))
W31 = C.gather(W3, p1)
b31 = C.gather(b3, p1)
W3x = C.reshape(W31, (hidden_layers_dim,output_dim))
b3x = C.reshape(b31, (output_dim,))
x = activator(C.times(x, W1x) + b1x)
x = activator(C.times(x, W2x) + b2x)
x = C.times(x, W3x) + b3x

I can't reproduce this with the latest master. This is very likely a bug that was fixed right after the CNTK 2.1 release. The next release (2.2) will be out around Sept. 15, 2017. If the problem persists after you have upgraded to 2.2, opening a GitHub issue might be the right way to address this.

Related

Trouble writing OptimizationFunction for automatic forward differentiation during Parameter Estimation of an ODEProblem

I am trying to learn Julia for its potential use in parameter estimation. I am interested in estimating kinetic parameters of chemical reactions, which usually involves optimizing reaction parameters with multiple independent batches of experiments. I have successfully optimized a single batch, but need to expand the problem to use many different batches. In developing a sample problem, I am trying to optimize using two toy batches. I know there are probably smarter ways to do this (subject of a future question), but my current workflow involves calling an ODEProblem for each batch, calculating its loss against the data, and minimizing the sum of the residuals for the two batches. Unfortunately, I get an error when initiating the optimization with Optimization.jl. The current code and error are shown below:
using DifferentialEquations, Plots, DiffEqParamEstim
using Optimization, ForwardDiff, OptimizationOptimJL, OptimizationNLopt
using Ipopt, OptimizationGCMAES, Optimisers
using Random
#Experimental data, species B is NOT observed in the data
times = [0.0, 0.071875, 0.143750, 0.215625, 0.287500, 0.359375, 0.431250,
0.503125, 0.575000, 0.646875, 0.718750, 0.790625, 0.862500,
0.934375, 1.006250, 1.078125, 1.150000]
A_obs = [1.0, 0.552208, 0.300598, 0.196879, 0.101175, 0.065684, 0.045096,
0.028880, 0.018433, 0.011509, 0.006215, 0.004278, 0.002698,
0.001944, 0.001116, 0.000732, 0.000426]
C_obs = [0.0, 0.187768, 0.262406, 0.350412, 0.325110, 0.367181, 0.348264,
0.325085, 0.355673, 0.361805, 0.363117, 0.327266, 0.330211,
0.385798, 0.358132, 0.380497, 0.383051]
P_obs = [0.0, 0.117684, 0.175074, 0.236679, 0.234442, 0.270303, 0.272637,
0.274075, 0.278981, 0.297151, 0.297797, 0.298722, 0.326645,
0.303198, 0.277822, 0.284194, 0.301471]
#Create additional data sets for a multi data set optimization
#Simple noise added to data for testing
times_2 = times[2:end] .+ rand(range(-0.05,0.05,100))
P_obs_2 = P_obs[2:end] .+ rand(range(-0.05,0.05,100))
A_obs_2 = A_obs[2:end].+ rand(range(-0.05,0.05,100))
C_obs_2 = C_obs[2:end].+ rand(range(-0.05,0.05,100))
#ki = [2.78E+00, 1.00E-09, 1.97E-01, 3.04E+00, 2.15E+00, 5.27E-01] #Target optimized parameters
ki = [0.1, 0.1, 0.1, 0.1, 0.1, 0.1] #Initial guess of parameters
IC = [1.0, 0.0, 0.0, 0.0] #Initial condition for each species
tspan1 = (minimum(times),maximum(times)) #tuple timespan of data set 1
tspan2 = (minimum(times_2),maximum(times_2)) #tuple timespan of data set 2
# data = VectorOfArray([A_obs,C_obs,P_obs])'
data = vcat(A_obs',C_obs',P_obs') #Make multidimensional array containing all observed data for dataset1, transpose to match shape of ODEProblem output
data2 = vcat(A_obs_2',C_obs_2',P_obs_2') #Make multidimensional array containing all observed data for dataset2, transpose to match shape of ODEProblem output
#make dictionary containing data, time, and initial conditions
keys1 = ["A","B"]
keys2 = ["time","obs","IC"]
entryA =[times,data,IC]
entryB = [times_2, data2,IC]
nest=[Dict(zip(keys2,entryA)),Dict(zip(keys2,entryB))]
exp_dict = Dict(zip(keys1,nest)) #data dictionary
#rate equations in power law form r = k [A][B]
function rxn(x, k)
    A = x[1]
    B = x[2]
    C = x[3]
    P = x[4]
    k1 = k[1]
    k2 = k[2]
    k3 = k[3]
    k4 = k[4]
    k5 = k[5]
    k6 = k[6]
    r1 = k1 * A
    r2 = k2 * A * B
    r3 = k3 * C * B
    r4 = k4 * A
    r5 = k5 * A
    r6 = k6 * A * B
    return [r1, r2, r3, r4, r5, r6] #returns reaction rate of each equation
end
#Mass balance differential equations
function mass_balances(di,x,args,t)
    k = args
    r = rxn(x, k)
    di[1] = - r[1] - r[2] - r[4] - r[5] - r[6] #Species A
    di[2] = + r[1] - r[2] - r[3] - r[6]        #Species B
    di[3] = + r[2] - r[3] + r[4]               #Species C
    di[4] = + r[3] + r[5] + r[6]               #Species P
end
function ODESols(time,uo,parms)
    time_init = (minimum(time),maximum(time))
    prob = ODEProblem(mass_balances,uo,time_init,parms)
    sol = solve(prob, Tsit5(), reltol=1e-8, abstol=1e-8, save_idxs=[1,3,4], saveat=time) #Integrate prob
    return sol
end
function cost_function(data_dict,parms)
    res_dict = Dict(zip(keys(data_dict),[0.0,0.0]))
    for key in keys(data_dict)
        pred = ODESols(data_dict[key]["time"],data_dict[key]["IC"],parms)
        loss = L2Loss(data_dict[key]["time"],data_dict[key]["obs"])
        err = loss(pred)
        res_dict[key] = err
    end
    residual = sum(res_dict[key] for key in keys(res_dict))
    #show typeof(residual)
    return residual
end
lb = [0.0,0.0,0.0,0.0,0.0,0.0] #parameter lower bounds
ub = [10.0,10.0,10.0,10.0,10.0,10.0] #parameter upper bounds
optfun = Optimization.OptimizationFunction(cost_function,Optimization.AutoForwardDiff())
optprob = Optimization.OptimizationProblem(optfun,exp_dict, ki,lb=lb,ub=ub,reltol=1E-8) #Set up optimization problem
optsol=solve(optprob, BFGS(),maxiters=10000) #Solve optimization problem
println(optsol.u) #print solution
When I run the solve call to obtain optsol, I get the error:
ERROR: MethodError: no method matching ForwardDiff.GradientConfig(::Optimization.var"#89#106"{OptimizationFunction{true, Optimization.AutoForwardDiff{nothing}, typeof(cost_function), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, Vector{Float64}}, ::Dict{String, Dict{String, Array{Float64}}}, ::ForwardDiff.Chunk{2})
Searching online suggests that the issue may be that my cost_function is not generic enough for ForwardDiff to handle, but I am not sure how to identify where the issue is in this function, or whether it is related to the functions (mass_balances and rxn) that are called within cost_function. Another potential issue is that I am not calling the functions appropriately when building the OptimizationFunction or the OptimizationProblem, but I cannot identify the issue there either.
Thank you for any suggestions and your help in troubleshooting this application!
res_dict = Dict(zip(keys(data_dict),[0.0,0.0]))
This dictionary is declared with the wrong element type: the literal [0.0, 0.0] hard-codes Float64.
zerotype = zero(parms[1])
res_dict = Dict(zip(keys(data_dict), [zerotype, zerotype]))
or
res_dict = Dict(zip(keys(data_dict), zeros(eltype(parms), 2)))
Either way, you want your intermediate calculations to match the element type of parms when using AutoForwardDiff(), because ForwardDiff evaluates your function with Dual numbers rather than Float64.
In addition to the variable type specification suggested by Chris, my model also had an issue with the order of the arguments of cost_function and how I passed the arguments to the problem in optprob. This solution was shown by Contradict here

I want to train a set of weights using PyTorch, but the weights do not change

I want to reproduce a method from a paper. The code in the paper was written in TensorFlow 1.0, and I want to rewrite it in PyTorch. Briefly, I want to learn a set of weights G that can be used to reweight the input data, but during training G does not change at all. This is the TensorFlow code:
n,p = X_input.shape
n_e, p_e = X_encoder_input.shape
display_step = 100
X = tf.placeholder("float", [None, p])
X_encoder = tf.placeholder("float", [None, p_e])
G = tf.Variable(tf.ones([n,1]))
loss_balancing = tf.constant(0, tf.float32)
for j in range(1,p+1):
    X_j = tf.slice(X_encoder, [j*n,0],[n,p_e])
    I = tf.slice(X, [0,j-1],[n,1])
    balancing_j = tf.divide(tf.matmul(tf.transpose(X_j),G*G*I),tf.maximum(tf.reduce_sum(G*G*I),tf.constant(0.1))) - tf.divide(tf.matmul(tf.transpose(X_j),G*G*(1-I)),tf.maximum(tf.reduce_sum(G*G*(1-I)),tf.constant(0.1)))
    loss_balancing += tf.norm(balancing_j,ord=2)
loss_regulizer = (tf.reduce_sum(G*G)-n)**2 + 10*(tf.reduce_sum(G*G-1))**2#
loss = loss_balancing + 0.0001*loss_regulizer
optimizer = tf.train.RMSPropOptimizer(learning_rate).minimize(loss)
saver = tf.train.Saver()
sess = tf.Session()
sess.run(tf.global_variables_initializer())
and this is my rewritten PyTorch code:
n, p = x_test.shape
loss_balancing = torch.tensor(0.0)
G = nn.Parameter(torch.ones([n,1]))
optimizer = torch.optim.RMSprop([G] , lr=0.001)
for i in range(num_steps):
    for j in range(1, p+1):
        x_j = x_all_encoder[j * n : j*n + n , :]
        I = x_test[0:n , j-1:j]
        balancing_j = torch.divide(torch.matmul(torch.transpose(x_j,0,1) , G * G * I) ,
                                   torch.maximum( (G * G * I).sum() ,
                                                  torch.tensor(0.1) -
                                                  torch.divide(torch.matmul(torch.transpose(x_j,0,1) ,G * G * (1-I)),
                                                               torch.maximum( (G*G*(1-I)).sum() , torch.tensor(0.1) )
                                                  )
                                   )
                       )
        loss_balancing += nn.Parameter(torch.norm(balancing_j))
    loss_regulizer = nn.Parameter(((G * G) - n).sum() ** 2 + 10 * ((G * G - 1).sum()) ** 2)
    loss = nn.Parameter( loss_balancing + 0.0001 * loss_regulizer )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if i % 100 == 0:
        print('Loss:{:.4f}'.format(loss.item()))
However, G.grad is None. I want to know how to make G update iteratively so that it minimizes the loss. Thanks.
Firstly, please provide a minimal reproducible example. It will be very helpful for people to answer your question.
Since G.grad has no value, it indicates that loss.backward() didn't work properly.
The computation of the gradient can be disturbed by many factors, but in this case I suspect the maximum operation in your code prevents the backward flow, since the maximum operation is not differentiable in general.
To check whether this hypothesis is correct, you could inspect the gradient of a tensor created right after the maximum operation, which I can't do myself because the provided code is not executable on my side.
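For reference, here is a minimal, self-contained sketch (a toy graph, not the poster's model) of how such a check could look: call retain_grad() on the tensor produced by the maximum operation and inspect its .grad after backward():
import torch

G = torch.nn.Parameter(torch.ones(5, 1))
inter = torch.maximum((G * G).sum(), torch.tensor(0.1))  # tensor created by the maximum op
inter.retain_grad()                                      # non-leaf tensors need retain_grad() to keep .grad
loss = inter ** 2
loss.backward()
print(inter.grad)  # non-None means the gradient reaches the tensor after maximum
print(G.grad)      # non-None means it also flows back to the parameter
If inter.grad is populated but G.grad is not, the flow is cut between the maximum and the parameter; if neither is populated, the cut is between the loss and the maximum (for example in how the loss is assembled).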

solve_ivp error: 'Required step size is less than spacing between numbers.'

I've been trying to solve the Newtonian two-body problem using RK45 from scipy, but I keep running into the error 'Required step size is less than spacing between numbers.' I've tried values of t_eval other than the one below, but nothing seems to work.
from scipy import optimize
from numpy import linalg as LA
import matplotlib.pyplot as plt
from scipy.optimize import fsolve
import numpy as np
from scipy.integrate import solve_ivp
AU=1.5e11
a=AU
e=0.5
mss=2E30
ms = 2E30
me = 5.98E24
mv=4.867E24
yr=3.15e7
h=100
mu1=ms*me/(ms+me)
mu2=ms*me/(ms+me)
G=6.67E11
step=24
vi=np.sqrt(G*ms*(2/(a*(1-e))-1/a))
#sun=sphere(pos=vec(0,0,0),radius=0.1*AU,color=color.yellow)
#earth=sphere(pos=vec(1*AU,0,0),radius=0.1*AU)
sunpos=np.array([-903482.12391302, -6896293.6960525, 0. ])
earthpos=np.array([a*(1-e),0,0])
earthv=np.array([0,vi,0])
sunv=np.array([0,0,0])
def accelerations2(t,pos):
    norme=sum( (pos[0:3]-pos[3:6])**2 )**0.5
    gravit = G*(pos[0:3]-pos[3:6])/norme**3
    sunaa = me*gravit
    earthaa = -ms*gravit
    tota=earthaa+sunaa
    return [*earthaa,*sunaa]
def ode45(f,t,y,h):
    """Calculate next step of an initial value problem (IVP) of an ODE with a RHS described
    by the RHS function with an order 4 approx. and an order 5 approx.
    Parameters:
        t: float. Current time.
        y: float. Current step (position).
        h: float. Step-length.
    Returns:
        q: float. Order 2 approx.
        w: float. Order 3 approx.
    """
    s1 = f(t, y[0],y[1])
    s2 = f(t + h/4.0, y[0] + h*s1[0]/4.0,y[1] + h*s1[1]/4.0)
    s3 = f(t + 3.0*h/8.0, y[0] + 3.0*h*s1[0]/32.0 + 9.0*h*s2[0]/32.0,y[1] + 3.0*h*s1[1]/32.0 + 9.0*h*s2[1]/32.0)
    s4 = f(t + 12.0*h/13.0, y[0] + 1932.0*h*s1[0]/2197.0 - 7200.0*h*s2[0]/2197.0 + 7296.0*h*s3[0]/2197.0,y[1] + 1932.0*h*s1[1]/2197.0 - 7200.0*h*s2[1]/2197.0 + 7296.0*h*s3[1]/2197.0)
    s5 = f(t + h, y[0] + 439.0*h*s1[0]/216.0 - 8.0*h*s2[0] + 3680.0*h*s3[0]/513.0 - 845.0*h*s4[0]/4104.0,y[1] + 439.0*h*s1[1]/216.0 - 8.0*h*s2[1] + 3680.0*h*s3[1]/513.0 - 845.0*h*s4[1]/4104.0)
    s6 = f(t + h/2.0, y[0] - 8.0*h*s1[0]/27.0 + 2*h*s2[0] - 3544.0*h*s3[0]/2565 + 1859.0*h*s4[0]/4104.0 - 11.0*h*s5[0]/40.0,y[1] - 8.0*h*s1[1]/27.0 + 2*h*s2[1] - 3544.0*h*s3[1]/2565 + 1859.0*h*s4[1]/4104.0 - 11.0*h*s5[1]/40.0)
    w1 = y[0] + h*(25.0*s1[0]/216.0 + 1408.0*s3[0]/2565.0 + 2197.0*s4[0]/4104.0 - s5[0]/5.0)
    w2 = y[1] + h*(25.0*s1[1]/216.0 + 1408.0*s3[1]/2565.0 + 2197.0*s4[1]/4104.0 - s5[1]/5.0)
    q1 = y[0] + h*(16.0*s1[0]/135.0 + 6656.0*s3[0]/12825.0 + 28561.0*s4[0]/56430.0 - 9.0*s5[0]/50.0 + 2.0*s6[0]/55.0)
    q2 = y[1] + h*(16.0*s1[1]/135.0 + 6656.0*s3[1]/12825.0 + 28561.0*s4[1]/56430.0 - 9.0*s5[1]/50.0 + 2.0*s6[1]/55.0)
    return w1,w2, q1,q2
t=0
T=10**5
poss=[-903482.12391302, -6896293.6960525, 0. ,a*(1-e),0,0 ]
sol = solve_ivp(accelerations2, [0, 10**5], poss,t_eval=np.linspace(0,10**5,1))
print(sol)
I'm not sure what the error even means, because I've tried many different t_eval values and nothing seems to work.
The default values in solve_ivp are tuned for a "normal" situation where the scales of the variables are not too far outside the range 0.1 to 100. You could achieve such scales by rescaling the problem so that all lengths and related constants are in AU and all times and related constants are in days.
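For illustration, here is a minimal sketch of what that rescaling could look like (the constants below are standard approximate values, not taken from the question's code):
AU = 1.5e11               # metres per astronomical unit
day = 86400.0             # seconds per day
G_SI = 6.674e-11          # gravitational constant in m^3 kg^-1 s^-2
G_scaled = G_SI * day**2 / AU**3   # same constant in AU^3 kg^-1 day^-2
# With the sun's mass of about 2e30 kg this gives G_scaled*ms ~ 3e-4 AU^3/day^2,
# so positions of order 1 AU and velocities of order 1e-2 AU/day stay in a range
# where solve_ivp's default tolerances behave well.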
Or you can try to set the absolute tolerance to something reasonable like 1e-4*AU.
It also helps to use the correct first-order system, as I told you recently in another question on this topic. In a mechanical system you usually get a second-order ODE x'' = a(x). The first-order system to pass to the ODE solver is then [x', v'] = [v, a(x)], which could be implemented as
def firstorder(t,state):
    pos, vel = state.reshape(2,-1)
    return [*vel, *accelerations2(t,pos)]
Next, it always helps to apply the acceleration of the Earth to the Earth and that of the Sun to the Sun. That is, fix an order of the objects: at the moment the initialization has the Sun first, while the acceleration computation treats the state as Earth first. Switch everything to Sun first:
def accelerations2(t,pos):
    pos = pos.reshape(-1,3)
    # pos[0] = sun, pos[1] = earth
    norme = sum( (pos[1]-pos[0])**2 )**0.5
    gravit = G*(pos[1]-pos[0])/norme**3
    sunacc = me*gravit
    earthacc = -ms*gravit
    totacc = earthacc + sunacc
    return [*sunacc,*earthacc]
And it never goes amiss to use correct values for the natural constants, like
G = 6.67E-11
Then the solver call and print formatting as
state0 = [*sunpos, *earthpos, *sunv, *earthv]
sol = solve_ivp(firstorder, [0, T], state0, first_step=1e+5, atol=1e-6*a)
print(sol.message)
for t, pos in zip(sol.t, sol.y[[0,1,3,4]].T):
    print("%.6e"%t, ", ".join("%8.4g"%x for x in pos))
gives the short table
The solver successfully reached the end of the integration interval.
t x_sun y_sun x_earth y_earth
0.000000e+00 -9.035e+05, -6.896e+06, 7.5e+10, 0
1.000000e+05 -9.031e+05, -6.896e+06, 7.488e+10, 5.163e+09
that is, for this step the solver only needs one internal step.

LoadError using approximate bayesian criteria

I am getting an error that is confusing me.
using DifferentialEquations
using RecursiveArrayTools # for VectorOfArray
using DiffEqBayes
f2 = @ode_def_nohes LotkaVolterraTest begin
    dx = x*(1 - x - A*y)
    dy = rho*y*(1 - B*x - y)
end A B rho
u0 = [1.0;1.0]
tspan = (0.0,10.0)
p = [0.2,0.5,0.3]
prob = ODEProblem(f2,u0,tspan,p)
sol = solve(prob,Tsit5())
t = collect(linspace(0,10,200))
randomized = VectorOfArray([(sol(t[i]) + .01randn(2)) for i in 1:length(t)])
data = convert(Array,randomized)
priors = [Uniform(0.0, 2.0), Uniform(0.0, 2.0), Uniform(0.0, 2.0)]
bayesian_result_abc = abc_inference(prob, Tsit5(), t, data,
priors;num_samples=500)
Returns the error
ERROR: LoadError: DimensionMismatch("first array has length 400 which does not match the length of the second, 398.")
while loading..., in expression starting on line 20.
I have not been able to locate any array of size 400 or 398.
Thanks for your help.
Take a look at https://github.com/JuliaDiffEq/DiffEqBayes.jl/issues/52; that was due to an error in passing the t. This has been fixed on master, so you can use that, or wait a little: we will have a new release soon with the 1.0 upgrades, which will have this fixed as well.
Thanks!

How to compute gradients in tf.py_func ops?

I have a Python program for calculating intersections between rectangles, like this:
def clip(subjectPolygon, clipPolygon):
    def inside(p):
        return (cp2[0]-cp1[0])*(p[1]-cp1[1]) > (cp2[1]-cp1[1])*(p[0]-cp1[0])

    def computeIntersection():
        dc = [ cp1[0] - cp2[0], cp1[1] - cp2[1] ]
        dp = [ s[0] - e[0], s[1] - e[1] ]
        n1 = cp1[0] * cp2[1] - cp1[1] * cp2[0]
        n2 = s[0] * e[1] - s[1] * e[0]
        n3 = 1.0 / (dc[0] * dp[1] - dc[1] * dp[0])
        return [(n1*dp[0] - n2*dc[0]) * n3, (n1*dp[1] - n2*dc[1]) * n3]

    outputList = subjectPolygon
    cp1 = clipPolygon[-1]
    for clipVertex in clipPolygon:
        cp2 = clipVertex
        inputList = outputList
        outputList = []
        s = inputList[-1]
        for subjectVertex in inputList:
            e = subjectVertex
            if inside(e):
                if not inside(s):
                    outputList.append(computeIntersection())
                outputList.append(e)
            elif inside(s):
                outputList.append(computeIntersection())
            s = e
        cp1 = cp2
    return outputList

def PolygonArea(corners):
    n = len(corners)  # of corners
    area = 0.0
    for i in range(n):
        j = (i + 1) % n
        area += corners[i][0] * corners[j][1]
        area -= corners[j][0] * corners[i][1]
    area = abs(area) / 2.0
    return area
I forward an image through the network and as output it gives a rectangle. From this predicted rectangle and the ground-truth rectangle (labeled data), I use the functions above, wrapped with tf.py_func, to calculate the intersection over union. I use the IoU to construct the loss function, which is then fed to the optimizer. But by doing this I get the error:
ValueError: No gradients provided for any variable, check your graph
for ops that do not support gradients, between variables ...
and it gives a list of all my variables.
Does anybody have an idea how to solve this problem, maybe by writing the functions in TF ops or by making the tf.py_func op compute gradients?
If it's of interest, this is the file, and from there you can navigate the repository.
You need to define the gradient for py_func yourself; see this for an example.
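For reference, the commonly used TF 1.x workaround looks roughly like the sketch below (the wrapper name py_func_with_grad is made up here; the mechanism is tf.RegisterGradient plus gradient_override_map on the 'PyFunc' op type):
import numpy as np
import tensorflow as tf  # TF 1.x graph mode

def py_func_with_grad(func, inp, Tout, grad, name=None):
    # Register the Python gradient under a unique name and map the PyFunc op to it.
    rnd_name = 'PyFuncGrad' + str(np.random.randint(0, 1 << 30))
    tf.RegisterGradient(rnd_name)(grad)
    g = tf.get_default_graph()
    with g.gradient_override_map({'PyFunc': rnd_name}):
        return tf.py_func(func, inp, Tout, stateful=True, name=name)

# Toy example: a square op with a hand-written gradient.
def _square(x):
    return np.square(x).astype(np.float32)

def _square_grad(op, grad_output):
    x = op.inputs[0]
    return grad_output * 2.0 * x

x = tf.placeholder(tf.float32, [None])
y = py_func_with_grad(_square, [x], [tf.float32], grad=_square_grad)[0]
dy_dx = tf.gradients(y, x)[0]  # defined now, instead of 'No gradients provided'
Note that you still have to supply the gradient of the IoU yourself (analytically or as another numpy computation); the polygon-clipping code as written has no gradient that TF can derive automatically.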