Defining a soft constraint in cvxpy - optimization

I am using cvxpy to do a simple portfolio optimization.
I implemented the following dummy code
from cvxpy import *
import numpy as np
np.random.seed(1)
n = 10
Sigma = np.random.randn(n, n)
Sigma = Sigma.T.dot(Sigma)
orig_weight = [0.15,0.25,0.15,0.05,0.20,0,0.1,0,0.1,0]
w = Variable(n)
mu = np.abs(np.random.randn(n, 1))
ret = mu.T*w
lambda_ = Parameter(sign='positive')
lambda_.value = 5
risk = quad_form(w, Sigma)
constraints = [sum_entries(w) == 1, w >= 0, sum_entries(abs(w-orig_weight)) <= 0.750]
prob = Problem(Maximize(ret - lambda_ * risk), constraints)
prob.solve()
print 'Solver Status : ',prob.status
print('Weights opt :', w.value)
I am constraining the portfolio to be fully invested and long only, and to have a turnover of <= 75%. However, I would like to treat turnover as a "soft" constraint, in the sense that the solver uses as little turnover as possible but as much as necessary; currently the solver almost fully maxes out the turnover budget.
I basically want something like this, in a form that is convex and doesn't violate the DCP rules:
sum_entries(abs(w-orig_weight)) >= 0.05
I would assume this should set a minimum threshold (5% here) and then use as much turnover as needed to find a feasible solution.
I tried rewriting my objective function to
prob = Problem(Maximize(lambda_ * ret - risk - penalty * max(sum_entries(abs(w-orig_weight))+0.9,0)) , constraints)
where penalty is e.g. 2 and my constraint object still looks like
constraints = [sum_entries(w) == 1, w >= 0, sum_entries(abs(w-orig_weight)) <= 0.9]
I have never used soft constraints, and any explanation would be highly appreciated.
EDIT: Intermediate solution
from cvxpy import *
import numpy as np
np.random.seed(1)
n = 10
Sigma = np.random.randn(n, n)
Sigma = Sigma.T.dot(Sigma)
w = Variable(n)
mu = np.abs(np.random.randn(n, 1))
ret = mu.T*w
risk = quad_form(w, Sigma)
orig_weight = [0.15,0.2,0.2,0.2,0.2,0.05,0.0,0.0,0.0,0.0]
min_weight = [0.35,0.0,0.0,0.0,0.0,0,0.0,0,0.0,0.0]
max_weight = [0.35,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0]
lambda_ret = Parameter(sign='positive')
lambda_ret.value = 5
lambda_risk = Parameter(sign='positive')
lambda_risk.value = 1
penalty = Parameter(sign='positive')
penalty.value = 100
penalized = True
if penalized == True:
    print '-------------- RELAXED ------------------'
    constraints = [sum_entries(w) == 1, w >= 0, w >= min_weight, w <= max_weight]
    prob = Problem(Maximize(lambda_ret * ret - lambda_risk * risk - penalty * max_entries(sum_entries(abs(w - orig_weight))) - 0.01), constraints)
else:
    print '-------------- HARD ------------------'
    constraints = [sum_entries(w) == 1, w >= 0, w >= min_weight, w <= max_weight, sum_entries(abs(w - orig_weight)) <= 0.40]
    prob = Problem(Maximize(lambda_ret * ret - lambda_risk * risk), constraints)
prob.solve()
print 'Solver Status : ',prob.status
print('Weights opt :', w.value)
all_in = []
for i in range(n):
    all_in.append(np.abs(w.value[i][0] - orig_weight[i]))
print 'Turnover : ', sum(all_in)
The above code forces a specific increase in the weight of item[0], here +20%. To maintain the sum() == 1 constraint, that increase has to be offset by a total decrease of -20%, so I know a minimum of 40% turnover is needed. If one runs the code with penalized = False, the <= 0.40 has to be hardcoded; anything smaller than that will fail. The penalized = True case finds the minimum required turnover of 40% and solves the optimization. What I haven't figured out yet is how to set a minimum threshold in the relaxed case, i.e. do at least 45% (or more if required).
I found some explanation of the problem in chapter 4.6, page 37, of the paper below.
Boyd paper
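For reference, here is a minimal sketch of one standard way to make the turnover limit soft: penalize only the amount by which turnover exceeds a budget, so the solver pays nothing for staying under it. It uses the current cvxpy API; the budget, penalty weight, and risk aversion below are illustrative values, not taken from the question.

import cvxpy as cp
import numpy as np

np.random.seed(1)
n = 10
Sigma = np.random.randn(n, n)
Sigma = Sigma.T.dot(Sigma)
mu = np.abs(np.random.randn(n))
orig_weight = np.array([0.15, 0.25, 0.15, 0.05, 0.20, 0, 0.1, 0, 0.1, 0])

w = cp.Variable(n)
turnover = cp.sum(cp.abs(w - orig_weight))
risk = cp.quad_form(w, Sigma)
lambda_risk = 5     # illustrative risk aversion
penalty = 100       # illustrative penalty weight
budget = 0.40       # soft turnover budget; only the excess is penalized

objective = cp.Maximize(mu @ w - lambda_risk * risk
                        - penalty * cp.pos(turnover - budget))
prob = cp.Problem(objective, [cp.sum(w) == 1, w >= 0])
prob.solve()
print('Turnover :', turnover.value)

A hard minimum-turnover requirement such as sum(abs(w - orig_weight)) >= 0.45 is a different matter: a lower bound on a convex function defines a non-convex set, so it cannot be written as a DCP constraint and would need an integer formulation or a heuristic instead.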

Related

A Simple Bayesian Network with a Coin-Flipping Problem

I am trying to implement a Bayesian network and solve a regression problem using PYMC3. In my model, I have a fair coin as the parent node. If the parent node is H, the child node selects the normal distribution N(5,0.2); if T, the child selects N(0,0.5). Here is an illustration of my network.
To simulate this network, I generated a sample dataset and tried doing Bayesian regression using the code below. Currently, the model does regression only for the child node as if the parent node does not exist. I would greatly appreciate it if anyone can let me know how to implement the conditional probability P(D|C). Ultimately, I am interested in finding the probability distribution for mu1 and mu2. Thank you!
import numpy as np
import theano
import pymc3 as pm
from scipy.stats import bernoulli
from numpy.random import normal
# Generate data for coin flip P(C) and store in c1
theta_real = 0.5 # unknown value in a real experiment
n_sample = 10
c1 = bernoulli.rvs(p=theta_real, size=n_sample)
# Generate data for normal distribution P(D|C) and store in d1
np.random.seed(123)
mu1 = 0
sigma1 = 0.5
mu2 = 5
sigma2 = 0.2
d1 = []
for index, item in enumerate(c1):
    if item == 0:
        d1.extend(normal(mu1, sigma1, 1))
    else:
        d1.extend(normal(mu2, sigma2, 1))
# I start building PYMC3 model here
c1_tensor = theano.shared(np.array(c1))
d1_tensor = theano.shared(np.array(d1))
with pm.Model() as model:
    # define prior for c1. I am not sure how to do this.
    #c1_present = pm.Categorical('c1',observed=c1_tensor)
    # how do I incorporate P(D | C)
    mu_prior = pm.Normal('mu', mu=2, sd=2, shape=1)
    sigma_prior = pm.HalfNormal('sigma', sd=2, shape=1)
    y_likelihood = pm.Normal('y', mu=mu_prior, sd=sigma_prior, observed=d1_tensor)
You could use the Dirichlet distribution as a prior for the coin toss and NormalMixture for the two Gaussians. In the following snippet I changed the fairness of the coin and increased the number of coin tosses, but you could adjust these in any way you want:
import numpy as np
import pymc3 as pm
from scipy.stats import bernoulli
# Generate data for coin flip P(C) and store in c1
theta_real = 0.2 # unknown value in a real experiment
n_sample = 2000
c1 = bernoulli.rvs(p=theta_real, size=n_sample)
# Generate data for normal distribution P(D|C) and store in d1
np.random.seed(123)
mu1 = 0
sigma1 = 0.5
mu2 = 5
sigma2 = 0.2
d1 = []
for index, item in enumerate(c1):
    if item == 0:
        d1.extend(np.random.normal(mu1, sigma1, 1))
    else:
        d1.extend(np.random.normal(mu2, sigma2, 1))
with pm.Model() as model:
    w = pm.Dirichlet('p', a=np.ones(2))
    mu = pm.Normal('mu', 0, 20, shape=2)
    sigma = np.array([0.5, 0.2])
    pm.NormalMixture('like', w=w, mu=mu, sigma=sigma, observed=np.array(d1))
    trace = pm.sample()
pm.summary(trace)
This will give you the following:
mean sd mc_error hpd_2.5 hpd_97.5 n_eff Rhat
mu__0 4.981222 0.023900 0.000491 4.935044 5.027420 2643.052184 0.999637
mu__1 -0.007660 0.004946 0.000095 -0.017388 0.001576 2481.146286 1.000312
p__0 0.213976 0.009393 0.000167 0.195602 0.231803 2245.905021 0.999302
p__1 0.786024 0.009393 0.000167 0.768197 0.804398 2245.905021 0.999302
The parameters are recovered nicely as you can also see from the traceplots:
The above implementation will give you the posterior of theta_real, mu1 and mu2 but I could not get convergence when I added sigma1 and sigma2 as parameters to be estimated by the data (even though the prior was quite narrow):
with pm.Model() as model:
    w = pm.Dirichlet('p', a=np.ones(2))
    mu = pm.Normal('mu', 0, 20, shape=2)
    sigma = pm.HalfNormal('sigma', sd=2, shape=2)
    pm.NormalMixture('like', w=w, mu=mu, sigma=sigma, observed=np.array(d1))
    trace = pm.sample()
print(pm.summary(trace))
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [sigma, mu, p]
Sampling 4 chains: 100%|██████████| 4000/4000 [00:10<00:00, 395.57draws/s]
The acceptance probability does not match the target. It is 0.883057127209148, but should be close to 0.8. Try to increase the number of tuning steps.
The gelman-rubin statistic is larger than 1.4 for some parameters. The sampler did not converge.
The estimated number of effective samples is smaller than 200 for some parameters.
mean sd mc_error ... hpd_97.5 n_eff Rhat
mu__0 1.244021 2.165433 0.216540 ... 5.005507 2.002049 212.596596
mu__1 3.743879 2.165122 0.216510 ... 5.012067 2.002040 235.750129
p__0 0.643069 0.248630 0.024846 ... 0.803369 2.004185 30.966189
p__1 0.356931 0.248630 0.024846 ... 0.798632 2.004185 30.966189
sigma__0 0.416207 0.125435 0.012517 ... 0.504110 2.009031 17.333177
sigma__1 0.271763 0.125539 0.012533 ... 0.497208 2.007779 19.217223
[6 rows x 7 columns]
Based on that you most likely will need to reparametrize if you also wanted to estimate the two standard deviations from this data.
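One common reparametrization for this kind of label-switching problem is to order the component means with PyMC3's ordered transform. The following is only a sketch of that standard pattern, not part of the original answer; it reuses the d1 data generated above.

with pm.Model() as model:
    w = pm.Dirichlet('p', a=np.ones(2))
    # constraining mu[0] < mu[1] removes the mode-swapping symmetry
    mu = pm.Normal('mu', 0, 20, shape=2,
                   transform=pm.distributions.transforms.ordered,
                   testval=np.array([0., 5.]))
    sigma = pm.HalfNormal('sigma', sd=2, shape=2)
    pm.NormalMixture('like', w=w, mu=mu, sigma=sigma, observed=np.array(d1))
    trace = pm.sample()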
This answer supplements @balleveryday's answer, which suggests the Gaussian mixture model; I had some trouble getting the symmetry breaking to work, though. Admittedly, the symmetry breaking in the official example is done in the context of Metropolis-Hastings sampling, whereas I think NUTS might be a little more sensitive to encountering impossible values (not sure). Here's what worked for me:
import numpy as np
import pymc3 as pm
from scipy.stats import bernoulli
import theano.tensor as tt
# everything should reproduce
np.random.seed(123)
n_sample = 2000
# Generate data for coin flip P(C) and store in c1
theta_real = 0.2 # unknown value in a real experiment
c1 = bernoulli.rvs(p=theta_real, size=n_sample)
# Generate data for normal distribution P(D|C) and store in d1
mu1, mu2 = 0, 5
sigma1, sigma2 = 0.5, 0.2
d1 = np.empty_like(c1, dtype=np.float64)
d1[c1 == 0] = np.random.normal(mu1, sigma1, np.sum(c1 == 0))
d1[c1 == 1] = np.random.normal(mu2, sigma2, np.sum(c1 == 1))
with pm.Model() as gmm_asym:
    # mixture vector
    w = pm.Dirichlet('p', a=np.ones(2))
    # Gaussian parameters (testval helps start off ordered)
    mu = pm.Normal('mu', 0, 20, shape=2, testval=[-10, 10])
    sigma = pm.HalfNormal('sigma', sd=2, shape=2)
    # break symmetry, forcing mu[0] < mu[1]
    order_means_potential = pm.Potential('order_means_potential',
                                         tt.switch(mu[1] - mu[0] < 0, -np.inf, 0))
    # observed
    pm.NormalMixture('like', w=w, mu=mu, sigma=sigma, observed=d1)
    # reproducible sampling
    tr_gmm_asym = pm.sample(tune=2000, target_accept=0.9, random_seed=20191121)
This produces samples with the statistics
mean sd mc_error hpd_2.5 hpd_97.5 n_eff Rhat
mu__0 0.004549 0.011975 0.000226 -0.017398 0.029375 2425.487301 0.999916
mu__1 5.007663 0.008993 0.000166 4.989247 5.024692 2181.134002 0.999563
p__0 0.789983 0.009091 0.000188 0.773059 0.808062 2417.356539 0.999788
p__1 0.210017 0.009091 0.000188 0.191938 0.226941 2417.356539 0.999788
sigma__0 0.497322 0.009103 0.000186 0.480394 0.515867 2227.397854 0.999358
sigma__1 0.191310 0.006633 0.000141 0.178924 0.204859 2286.817037 0.999614
and the traces

Efficient implementation of factorization machine with matrix operations?

Link is here : https://www.csie.ntu.edu.tw/~r01922136/slides/ffm.pdf (slides 5-6)
Given the following matrices:
X : n * d
W : d * k
Is there an efficient way to calculate the n x 1 matrix using only matrix operations (e.g. numpy, tensorflow), where the jth element is:
EDIT:
My current attempt is below, but it is obviously not very space-efficient, as it requires storing intermediate arrays of size n*d*d:
n = 1000
d = 256
k = 32
x = np.random.normal(size=[n,d])
w = np.random.normal(size=[d,k])
xxt = np.matmul(x.reshape([n,d,1]),x.reshape([n,1,d]))
wwt = np.matmul(w.reshape([1,d,k]),w.reshape([1,k,d]))
output = xxt*wwt
output = np.sum(output,(1,2))
Avoid large temporary arrays
Not all types of algorithms are that easy or obvious to vectorize. The np.sum(xxt*wwt) can be rewritten using np.einsum. This should be faster than your solution, but has some other limitations (e.g. no multithreading).
I would therefore suggest using a compiler like Numba.
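For completeness, here is roughly what the einsum version alluded to above might look like (a sketch, not code from the answer). Note that if the intent is the Gram matrix W·Wᵀ from the factorization-machine formula, that is w @ w.T; the reshape((k, d)) used in the snippets above reinterprets the memory layout rather than transposing.

import numpy as np

n, d, k = 1000, 256, 32
x = np.random.normal(size=[n, d])
w = np.random.normal(size=[d, k])

wwt = w @ w.T                                      # d x d Gram matrix
out_einsum = np.einsum('ij,il,jl->i', x, x, wwt)   # no n*d*d temporaries
# algebraically the same quantity, computed in O(n*d*k):
out_fast = np.square(x @ w).sum(axis=1)            # np.allclose(out_einsum, out_fast) holds

The compiled Numba version below takes yet another route, looping explicitly so that no large temporaries are materialized at all.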
Example
import numpy as np
import numba as nb
import time
@nb.njit(fastmath=True, parallel=True)
def factorization_nb(w, x):
    n = x.shape[0]
    d = x.shape[1]
    k = w.shape[1]
    output = np.empty(n, dtype=w.dtype)
    wwt = np.dot(w.reshape((d, k)), w.reshape((k, d)))
    for i in nb.prange(n):
        sum = 0.
        for j in range(d):
            for jj in range(d):
                sum += x[i, j] * x[i, jj] * wwt[j, jj]
        output[i] = sum
    return output
def factorization_orig(w, x):
    n = x.shape[0]
    d = x.shape[1]
    k = w.shape[1]
    xxt = np.matmul(x.reshape([n, d, 1]), x.reshape([n, 1, d]))
    wwt = np.matmul(w.reshape([1, d, k]), w.reshape([1, k, d]))
    output = xxt * wwt
    output = np.sum(output, (1, 2))
    return output
Measuring Performance
n = 1000
d = 256
k = 32
x = np.random.normal(size=[n,d])
w = np.random.normal(size=[d,k])
#first call has some compilation overhead
res_1=factorization_nb(w,x)
t1=time.time()
for i in range(100):
    res_1 = factorization_nb(w, x)
    #res_2 = factorization_orig(w, x)
print(time.time()-t1)
Timings
factorization_nb:   4.2 ms per iteration (roughly a 110x speedup)
factorization_orig: 460 ms per iteration
For an einsum implementation in PyTorch, it would be something like
V = torch.randn([50, 10])
x = torch.randn([50])
result = (torch.einsum('ik,jk,i,j->', V, V, x, x)-torch.einsum('ik,ik,i,i->', V, V, x, x))/2
where we subtract the contribution from the feature weight being dotted with itself.
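For a whole batch of samples, the same subtraction trick can be written without einsum at all. This is a NumPy sketch using the question's shapes (X is n x d, V is d x k); the variable names are illustrative.

import numpy as np

n, d, k = 1000, 256, 32
X = np.random.normal(size=[n, d])
V = np.random.normal(size=[d, k])

# 0.5 * [ (sum_j x_j v_j)^2 - sum_j (x_j v_j)^2 ], summed over the k factors
linear_sq = np.square(X @ V).sum(axis=1)
self_terms = (np.square(X) @ np.square(V)).sum(axis=1)
interactions = 0.5 * (linear_sq - self_terms)   # shape (n,)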

cardinality constraint in portfolio optimisation

I am using cvxpy to work on a simple portfolio optimisation problem. The only constraint I can't get my head around is the cardinality constraint for the number of non-zero portfolio holdings. I tried two approaches: a MIP approach and a traditional convex one.
Here is some dummy code for a working traditional example.
import numpy as np
import cvxpy as cvx
np.random.seed(12345)
n = 10
k = 6
mu = np.abs(np.random.randn(n, 1))
Sigma = np.random.randn(n, n)
Sigma = Sigma.T.dot(Sigma)
w = cvx.Variable(n)
ret = mu.T*w
risk = cvx.quad_form(w, Sigma)
objective = cvx.Maximize(ret - risk)
constraints = [cvx.sum_entries(w) == 1, w>= 0, cvx.sum_smallest(w, n-k) >= 0, cvx.sum_largest(w, k) <=1 ]
prob = cvx.Problem(objective, constraints)
prob.solve()
print prob.status
output = []
for i in range(len(w.value)):
    output.append(round(w[i].value,2))
print 'Number of non-zero elements : ',sum(1 for i in output if i > 0)
I had the idea to use sum_smallest and sum_largest (cvxpy manual); my thought was to constrain the smallest n-k entries to 0 and let my target range of k entries sum up to one. I know I can't change the direction of the inequality and stay convex, but maybe someone knows a clever way of constraining the problem while still keeping it simple.
My second idea was to make this a mixed integer problem, something along the lines of
import numpy as np
import cvxpy as cvx
np.random.seed(12345)
n = 10
k = 6
mu = np.abs(np.random.randn(n, 1))
Sigma = np.random.randn(n, n)
Sigma = Sigma.T.dot(Sigma)
w = cvx.Variable(n)
binary = cvx.Bool(n)
integer = cvx.Int(n)
ret = mu.T*w
risk = cvx.quad_form(w, Sigma)
objective = cvx.Maximize(ret - risk)
constraints = [cvx.sum_entries(w) == 1, w>= 0, cvx.sum_entries(binary) == k ]
prob = cvx.Problem(objective, constraints)
prob.solve()
print prob.status
output = []
for i in range(len(w.value)):
    output.append(round(w[i].value,2))
print sum(1 for i in output if i > 0)
for i in range(len(w.value)):
    print round(binary[i].value,2)
print output
Looking at my binary vector, it seems to be doing the right thing, but the sum_entries constraint doesn't work. Looking at the binary vector values, I noticed that 0 isn't exactly 0; it's very small, e.g. xxe^-20, and I assume this will mess things up. Can anyone give me any guidance on whether this is the right way to go? I can use the standard solvers, as well as Mosek if that helps. I would prefer a non-MIP implementation, as I understand this is a combinatorial problem and will get very slow for larger problems. Ultimately I would like to constrain either an exact number of target holdings or a range, e.g. 20-30.
Also, the documentation in cvxpy around MIP is very short. Thanks.
A bit chaotic, this question.
So first: this kind of cardinality constraint is NP-hard. This means you can't express it in cvxpy without using integer programming (or else it would imply P=NP)!
That being said, it would have been nicer if there were a pure version of the code without the attempt at formulating this constraint. I'll just assume it's the first code block without the sum_smallest and sum_largest constraints.
So let's tackle the MIP-approach:
Your code trying to do this makes no sense at all:
You introduce some binary variables, but they have no connection to any other variable at all (so a constraint on their sum is useless)!
You introduce some integer variables, but they don't have any use at all!
So here is a MIP-approach:
import numpy as np
import cvxpy as cvx
np.random.seed(12345)
n = 10
k = 6
mu = np.abs(np.random.randn(n, 1))
Sigma = np.random.randn(n, n)
Sigma = Sigma.T.dot(Sigma)
w = cvx.Variable(n)
ret = mu.T*w
risk = cvx.quad_form(w, Sigma)
objective = cvx.Maximize(ret - risk)
binary = cvx.Bool(n) # !!!
constraints = [cvx.sum_entries(w) == 1, w>= 0, w - binary <= 0., cvx.sum_entries(binary) == k] # !!!
prob = cvx.Problem(objective, constraints)
prob.solve(verbose=True)
print(prob.status)
output = []
for i in range(len(w.value)):
    output.append(round(w[i].value,2))
print('Number of non-zero elements : ',sum(1 for i in output if i > 0))
So we just added some binary variables and connected them to w to indicate whether w is nonzero or not.
If w is nonzero:
w will be > 0 because of the constraint w >= 0
binary needs to be 1, or else the constraint w - binary <= 0. is not fulfilled
So it's just a matter of introducing these binaries and this one indicator constraint.
Now cvx.sum_entries(binary) == k does what it should do.
Be careful with the implication direction used here. It might be relevant when changing the constraint on k (like <=).
Keep in mind that the default MIP solver is awful. I also fear that Mosek's interface (sub-optimal within cvxpy) won't solve this, but I might be wrong.
Edit: Your in-range requirement can easily be formulated using two more indicators for:
(k >= a) <= ind_0
(k <= b) <= ind_1
and adding a constraint which equals a logical_and:
ind_0 + ind_1 >= 2
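As a concrete illustration of keeping the number of holdings within a range, here is a sketch in the modern cvxpy API. It simply bounds the count of active binaries from both sides rather than reproducing the exact indicator construction above; the bounds a and b are made-up values.

import cvxpy as cvx
import numpy as np

np.random.seed(12345)
n, a, b = 10, 4, 6                      # between a and b holdings (illustrative)
mu = np.abs(np.random.randn(n))
Sigma = np.random.randn(n, n)
Sigma = Sigma.T.dot(Sigma)

w = cvx.Variable(n)
binary = cvx.Variable(n, boolean=True)
constraints = [cvx.sum(w) == 1, w >= 0,
               w - binary <= 0,         # w[i] > 0 forces binary[i] = 1
               cvx.sum(binary) >= a,    # caveat: binary[i] can be 1 while w[i] == 0,
               cvx.sum(binary) <= b]    # so only the upper bound b strictly limits nonzero holdings
objective = cvx.Maximize(mu @ w - cvx.quad_form(w, Sigma))
prob = cvx.Problem(objective, constraints)
prob.solve()                            # needs a mixed-integer-capable solver, e.g. ECOS_BB, MOSEK or GUROBI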
I've had a similar problem where my weights could be negative and did not need to sum to 1 (but still needed to be bounded), so I've modified sascha's example to accommodate relaxing these constraints using the cvxpy absolute value function. This should allow for a more general approach to tackling cardinality constraints with MIP.
import numpy as np
import cvxpy as cvx
np.random.seed(12345)
n = 10
k = 6
mu = np.abs(np.random.randn(n, 1))
Sigma = np.random.randn(n, n)
Sigma = Sigma.T.dot(Sigma)
w = cvx.Variable(n)
ret = mu.T*w
risk = cvx.quad_form(w, Sigma)
objective = cvx.Maximize(ret - risk)
binary = cvx.Variable(n,boolean=True) # !!!
maxabsw=2
constraints = [ w>= -maxabsw,w<=maxabsw, cvx.abs(w)/maxabsw - binary <= 0., cvx.sum(binary) == k] # !!!
prob = cvx.Problem(objective, constraints)
prob.solve(verbose=True)
print(prob.status)
output = []
for i in range(len(w.value)):
    output.append(round(w[i].value,2))
print('Number of non-zero elements : ',sum(1 for i in output if i > 0))

How do I solve equations using the Scharfetter-Gummel scheme in FiPy?

I'm trying to use FiPy to simulate solar cells but I'm struggling to get reasonable results even for simple test cases.
My test problem is an abrupt 1D p-n homojunction in the dark in equilibrium. The governing system of equations are the semiconductor equations with no additional generation or recombination.
Poisson's equation determines the electrostatic potential (φ) in a semiconductor with dielectric constant ε, given the densities of electrons (n), holes (p), donors (N_D), and acceptors (N_A), where the charge of an electron is q:
∇²φ = −q(p − n + N_D − N_A) / ε
Electrons and holes drift and diffuse with current densities J depending on their mobilities (μ) and diffusion constants (D):
J_n = q·μ_n·n·E + q·D_n·∇n
J_p = q·μ_p·p·E − q·D_p·∇p
The evolution of the charge in the system is accounted for with the electron and hole continuity equations:
∂n/∂t = (∇·J_n) / q
∂p/∂t = −(∇·J_p) / q
which can be expressed in FiPy canonical form as:
∂n/∂t = μ_n·∇·(−n∇φ) + D_n·∇²n
∂p/∂t = −(μ_p·∇·(−p∇φ) − D_p·∇²p)
To attempt to solve the problem in FiPy I first import modules and define the physical parameters.
from __future__ import print_function, division
import fipy
import numpy as np
import matplotlib.pyplot as plt
eps_0 = 8.8542e-12 # Permittivity of free space, F/m
q = 1.6022e-19 # Charge of an electron, C
k = 1.3807e-23 # Boltzmann constant, J/K
T = 300 # Temperature, K
Vth = (k*T)/q # Thermal voltage, V
N_ap = 1e22 # Acceptor density in p-type layer, m^-3
N_an = 0 # Acceptor density in n-type layer, m^-3
N_dp = 0 # Donor density in p-type layer, m^-3
N_dn = 1e22 # Donor density in n-type layer, m^-3
mu_n = 1400.0e-4 # Mobility of electrons, m^2/Vs
mu_p = 450.0e-4 # Mobility of holes, m^2/Vs
D_p = k*T*mu_p/q # Hole diffusion constant, m^2/s
D_n = k*T*mu_n/q # Electron diffusion constant, m^2/s
eps_r = 11.8 # Relative dielectric constant
n_i = (5.29e19 * (T/300)**2.54 * np.exp(-6726/T))*1e6
V_bias = 0
Then create the mesh, solution variables, and doping profile.
nx = 20000
dx = 0.1e-9
mesh = fipy.Grid1D(dx=dx, nx=nx)
Ln = Lp = (nx/2) * dx
phi = fipy.CellVariable(mesh=mesh, hasOld=True, name='phi')
n = fipy.CellVariable(mesh=mesh, hasOld=True, name='n')
p = fipy.CellVariable(mesh=mesh, hasOld=True, name='p')
Na = fipy.CellVariable(mesh=mesh, name='Na')
Nd = fipy.CellVariable(mesh=mesh, name='Nd')
Then I set some initial values on the cell centers and impose Dirichlet boundary conditions on all parameters.
x = mesh.cellCenters[0]
n0 = n_i**2 / N_ap
nL = N_dn
p0 = N_ap
pL = n_i**2 / N_dn
phi_min = -(Vth)*np.log(p0/n_i)
phi_max = (Vth)*np.log(nL/n_i) + V_bias
Na.setValue(N_an, where=(x >= Lp))
Na.setValue(N_ap, where=(x < Lp))
Nd.setValue(N_dn, where=(x >= Lp))
Nd.setValue(N_dp, where=(x < Lp))
n.setValue(N_dn, where=(x > Lp))
n.setValue(n_i**2 / N_ap, where=(x < Lp))
p.setValue(n_i**2 / N_dn, where=(x >= Lp))
p.setValue(N_ap, where=(x < Lp))
phi.setValue((phi_max - phi_min)*x/((Ln + Lp)) + phi_min)
phi.constrain(phi_min, mesh.facesLeft)
phi.constrain(phi_max, mesh.facesRight)
n.constrain(nL, mesh.facesRight)
n.constrain(n_i**2 / p0, mesh.facesLeft)
p.constrain(n_i**2 / nL, mesh.facesRight)
p.constrain(p0, mesh.facesLeft)
I express Poisson's equation as
eps = eps_0*eps_r
rho = q * (p - n + Nd - Na)
rho.name = 'rho'
poisson = fipy.ImplicitDiffusionTerm(coeff=eps, var=phi) == -rho
the continuity equations as
cont_eqn_n = (fipy.TransientTerm(var=n) ==
(fipy.ExponentialConvectionTerm(coeff=-phi.faceGrad*mu_n, var=n)
+ fipy.ImplicitDiffusionTerm(coeff=D_n, var=n)))
cont_eqn_p = (fipy.TransientTerm(var=p) ==
- (fipy.ExponentialConvectionTerm(coeff=-phi.faceGrad*mu_p, var=p)
- fipy.ImplicitDiffusionTerm(coeff=D_p, var=p)))
and solve by coupling the equations and sweeping:
eqn = poisson & cont_eqn_n & cont_eqn_p
dt = 1e-12
steps = 50
sweeps = 10
for step in range(steps):
    phi.updateOld()
    n.updateOld()
    p.updateOld()
    for sweep in range(sweeps):
        eqn.sweep(dt=dt)
I have played around with different values for the mesh size, time step, number of time steps, number of sweeps etc. I see some variation but haven't had any luck finding a set of conditions that give me a realistic solution. I think the problem probably lies in the expressions for the current terms.
Usually when solving these equations the current densities are approximated using the Scharfetter-Gummel (SG) discretization scheme, rather than the direct discretization. In the SG scheme the electron current density (J) through a cell face is approximated as a function of the values of the potential (φ) and carrier density (n) defined on the centres of the cells K and L on either side, as
J_n,KL = q·μ_n·V_T·[B(δφ/V_T)·n_L − B(−δφ/V_T)·n_K]
where q is the charge on an electron, μ_n is the electron mobility, V_T is the thermal voltage, δφ = φ_L − φ_K, and B(x) is the Bernoulli function x/(eˣ − 1).
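For reference, the flux expression above can be written directly in NumPy. This is only a standalone numeric sketch of the quoted formula (with a numerically safe Bernoulli function), not a FiPy implementation; the function names are illustrative, and the default parameter values are taken from the question's parameter list.

import numpy as np

def bernoulli_fn(x):
    # B(x) = x / (exp(x) - 1), with the limit B(0) = 1 handled explicitly
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.ones_like(x)
    nz = np.abs(x) > 1e-12
    out[nz] = x[nz] / np.expm1(x[nz])
    return out

def sg_flux(n_K, n_L, phi_K, phi_L, q=1.6022e-19, mu_n=1400.0e-4, V_T=0.02585):
    # J_n,KL = q * mu_n * V_T * [B(dphi/V_T) * n_L - B(-dphi/V_T) * n_K]
    dphi = phi_L - phi_K
    return q * mu_n * V_T * (bernoulli_fn(dphi / V_T) * n_L
                             - bernoulli_fn(-dphi / V_T) * n_K)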
It's not obvious to me how to implement the scheme in FiPy. I have seen there is a scharfetterGummelFaceVariable but I can't work out from the documentation whether it's suitable or intended for this problem. Looking at the code it seems to only calculate the Bernoulli function multiplied by a factor eφL. Is it possible to directly use the scharfetterGummelFaceVariable to solve this type of problem? If so, how? If not, is there an alternative approach that will allow me to simulate semiconductor devices using FiPy?

Checking the gradient when doing gradient descent

I'm trying to implement a feed-forward backpropagating autoencoder (trained with gradient descent) and wanted to verify that I'm calculating the gradient correctly. This tutorial suggests calculating the derivative of each parameter one at a time: grad_i(theta) = (J(theta_i+epsilon) - J(theta_i-epsilon)) / (2*epsilon). I've written a sample piece of code in Matlab to do just this, but without much luck -- the analytically calculated gradient and the numerically estimated one disagree long before 4 significant figures.
If anyone can offer any suggestions, I would greatly appreciate the help (either with my calculation of the gradient or with how I perform the check). Because I've simplified the code greatly to make it more readable, I haven't included biases, and am no longer tying the weight matrices.
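As a reference point, the central-difference check described above takes only a few lines in any language; here is a generic NumPy sketch, independent of the Matlab code below (the function and the names in the usage comment are illustrative).

import numpy as np

def numerical_gradient(J, theta, epsilon=1e-4):
    # central differences: grad_i = (J(theta + eps*e_i) - J(theta - eps*e_i)) / (2*eps)
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e.flat[i] = epsilon
        grad.flat[i] = (J(theta + e) - J(theta - e)) / (2 * epsilon)
    return grad

# usage: compare against an analytic gradient, element by element
# diff = np.abs(numerical_gradient(loss_fn, params) - analytic_grad)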
First, I initialize the variables:
numHidden = 200;
numVisible = 784;
low = -4*sqrt(6./(numHidden + numVisible));
high = 4*sqrt(6./(numHidden + numVisible));
encoder = low + (high-low)*rand(numVisible, numHidden);
decoder = low + (high-low)*rand(numHidden, numVisible);
Next, given some input image x, do feed-forward propagation:
a = sigmoid(x*encoder);
z = sigmoid(a*decoder); % (reconstruction of x)
The loss function I'm using is the standard sum(0.5*(z - x).^2):
% first calculate the error by finding the derivative of sum(0.5*(z-x).^2),
% which is (f(h)-x)*f'(h), where z = f(h), h = a*decoder, and
% f = sigmoid(x). However, since the derivative of the sigmoid is
% sigmoid*(1 - sigmoid), we get:
error_0 = (z - x).*z.*(1-z);
% The gradient \Delta w_{ji} = error_j*a_i
gDecoder = error_0'*a;
% not important, but included for completeness
% do back-propagation one layer down
error_1 = (error_0*encoder).*a.*(1-a);
gEncoder = error_1'*x;
And finally, check that the gradient is correct (in this case, just do it for the decoder):
epsilon = 10e-5;
check = gDecoder(:); % the values we obtained above
for i = 1:size(decoder(:), 1)
    % calculate J+
    theta = decoder(:); % unroll
    theta(i) = theta(i) + epsilon;
    decoderp = reshape(theta, size(decoder)); % re-roll
    a = sigmoid(x*encoder);
    z = sigmoid(a*decoderp);
    Jp = sum(0.5*(z - x).^2);
    % calculate J-
    theta = decoder(:);
    theta(i) = theta(i) - epsilon;
    decoderp = reshape(theta, size(decoder));
    a = sigmoid(x*encoder);
    z = sigmoid(a*decoderp);
    Jm = sum(0.5*(z - x).^2);
    grad_i = (Jp - Jm) / (2*epsilon);
    diff = abs(grad_i - check(i));
    fprintf('%d: %f <=> %f: %f\n', i, grad_i, check(i), diff);
end
Running this on the MNIST dataset (for the first entry) gives results such as:
2: 0.093885 <=> 0.028398: 0.065487
3: 0.066285 <=> 0.031096: 0.035189
5: 0.053074 <=> 0.019839: 0.033235
6: 0.108249 <=> 0.042407: 0.065843
7: 0.091576 <=> 0.009014: 0.082562
Do not apply the sigmoid to both a and z. Just use it on z.
a = x*encoder;
z = sigmoid(a*decoderp);