Problem with variable recognition in Fortran

I am a complete beginner at Fortran and am working on code that solves a kinetic mechanism by integrating a system of differential equations over successive time steps with an ODE solver.
Here is a link to download the zip file with the whole project:
http://www.filedropper.com/fortranmicropyrolyzersetup
The input variables for the differential solver are defined as follows in the code:
! Declaration of variables
implicit none
EXTERNAL :: FEXSB_AUTO, JEX_SB
integer :: neq,Mf,lrw,liw,iwork,itol,itask,istate,iopt !solver parameters
integer :: j,jjk,m !Counters
double precision :: ATOL,RTOL, RWORK !solver parameters
double precision :: T, TOUT !starting time (s), timestep time (s)
double precision :: Y, w !molar fraction of biomass (-), mass fraction of biomass (-)
double precision :: y_gas, w_gas, Conc !molar fraction in gas phase (-), concentration in gas phase (mol/m³), mass fraction in gas phase (-)
double precision :: n(speciescount) !number of moles (used temporarily to calculate initial molar fractions)
character*5 :: simulationNumber
!Setting the solver parameters
neq = SpeciesCount !number of equations
ITOL = 1 !RTOL and ATOL are scalars
RTOL = 1.0D-8 !Relative tolerance
ATOL = 1.0D-15 !Absolute tolerance
ITASK = 1
ISTATE = 1
LRW = 22 + 9*SpeciesCount + 2*SpeciesCount**2 !Array sizing (see VODE)
LIW = 30+SpeciesCount !Array sizing (see VODE)
MF = 22 !Use BDF with finite difference Jacobian
IOPT = 1 !Optional input specified
Iwork(6) = 7000 !Increase maximum number of steps from the default 500, otherwise the solver does not converge
The differential solver is then called in a do loop for each time step as follows:
! Solve reactor equations (see FEXSB) and advance time, until stop criterion is met
TimeStep = 0 ! dimensionless time step
!DO while(wm(1).lt.0.9999) -> previously used stop criterion
DO while((y_gas(1).lt.0.99999).OR.(TimeStep.lt.10)) !stop criterion: molar fraction of helium reaches 0.99999 AND at least 10 timesteps are done
TimeStep = TimeStep + 1
write(*,*)"Doing iteration for time step",TimeStep
call DVODE(FEXSB_AUTO,NEQ,Y,T,TOUT,ITOL,RTOL,ATOL,ITASK,ISTATE,IOPT,RWORK,LRW,IWORK,LIW,JEX_SB,MF) !solve reactor equations to Y
!CALL DVODE(FEXSB_AUTO,NEQ,Y,T,TOUT,ITOL,RTOL,ATOL,ITASK,ISTATE,IOPT,RWORK,LRW,IWORK,LIW,JEX_SB,MF,RPAR,IPAR)
! calculate w, y_gas, w_gas and Conc from Y
do j = 1, SpeciesCount
if (Y(j) .lt. 1.0D-10) then ! round to zero below 1.0D-10 to avoid negative numbers and numerical problems with small numbers
Y(j) = 0.0D0
endif
if (MolarFlowRate(j) .lt. 1.0D-10) then
MolarFlowRate(j) = 0.0D0
endif
w(j) = Y(j)*n0_tot*amms(j) / (mass_sample)
y_gas(j) = MolarFlowRate(j) / TotalMolarFlowRate
w_gas(j) = MolarFlowRate(j)*amms(j) / (1000*MassFlowRate) ! factor 1000 to put molar mass in kg/mol instead of g/mol
Conc(j) = MolarFlowRate(j) / VolumetricFlowRate
end do
The program does not successfully complete the do loop that begins with the line:
DO while((y_gas(1).lt.0.99999).OR.(TimeStep.lt.10))
The variables seem to be recognized for the first time step when the ODE Solver is called in the line:
call DVODE(FEXSB_AUTO,NEQ,Y,T,TOUT,ITOL,RTOL,ATOL,ITASK,ISTATE,IOPT,RWORK,LRW,IWORK,LIW,JEX_SB,MF)
And the solver successfully completes the first iteration. After the time step is increased once more, the call works again, but this time the variables are somehow not recognized. When I stopped the code to debug what was wrong after the first time step, I realized that the variables required for DVODE no longer have set values; somehow they get cleared after the first successful iteration.
What might be causing this problem?
*DECK DVODE
SUBROUTINE DVODE (F, NEQ, Y, T, TOUT, ITOL, RTOL, ATOL, ITASK,
1 ISTATE, IOPT, RWORK, LRW, IWORK, LIW, JAC, MF,
2 RPAR, IPAR)
EXTERNAL F, JAC
DOUBLE PRECISION Y, T, TOUT, RTOL, ATOL, RWORK, RPAR
INTEGER NEQ, ITOL, ITASK, ISTATE, IOPT, LRW, IWORK, LIW,
1 MF, IPAR
DIMENSION Y(*), RTOL(*), ATOL(*), RWORK(LRW), IWORK(LIW),
1 RPAR(*), IPAR(*)
!-----------------------------------------------------------------------
! DVODE: Variable-coefficient Ordinary Differential Equation solver,
! with fixed-leading-coefficient implementation.
! This version is in double precision.
!
! DVODE solves the initial value problem for stiff or nonstiff
! systems of first order ODEs,
! dy/dt = f(t,y) , or, in component form,
! dy(i)/dt = f(i) = f(i,t,y(1),y(2),...,y(NEQ)) (i = 1,...,NEQ).
! DVODE is a package based on the EPISODE and EPISODEB packages, and
! on the ODEPACK user interface standard, with minor modifications.
!-----------------------------------------------------------------------
! Authors:
! Peter N. Brown and Alan C. Hindmarsh
! Center for Applied Scientific Computing, L-561
! Lawrence Livermore National Laboratory
! Livermore, CA 94551
! and
! George D. Byrne
! Illinois Institute of Technology
! Chicago, IL 60616
!-----------------------------------------------------------------------
Any help would be greatly appreciated. Please note that I am new to Fortran. If I should supply any additional information to help you answer my question, please don't hesitate to let me know. Thanks in advance!

The assumption that the parameters should be untouched after the first iteration is wrong.
In Fortran, arguments are always passed by reference. Thus, a Fortran procedure never guarantees that your original set of arguments is the same after the call. It might be the intended (or poor) design of the routine to deliberately change its arguments. Only documentation can help you here, and what you provided in your question is not enough, which might very well also mean: maybe there is not enough documentation for this case/routine.
In contrast to C/C++, Fortran has no "const" modifier for variables that would guarantee what you currently have to assume (modern Fortran's intent(in) attribute plays a similar role, but only if the interface declares it).
To me it seems the only solution is to re-initialize all parameters right before every call to DVODE.

Related

Terminate Scipy solve_ivp on custom predicate

I have an ODE dy/dt = f(y,t), where y is a N dimensional vector, which I would like to solve using the scipy.integrate.solve_ivp function.
However, I would like to stop the integration if a certain predicate g(y,t) evaluates to True. The use case I have here is that I expect the value of y to converge towards some constant value y0 before the end of the integration duration t_end. I am interested in this constant value y0 and would like to save time by terminating the integration once convergence has happened.
I was hoping that I could create an array to store the values of y in the last 5 integration steps, and if they are very close, convergence is believed to have happened.
The event function of solve_ivp does not really help in my case: there is no root that I hope to find, and I am not interested in the t when convergence happens. I am surprised that this seemingly "common" use case of looking for a convergence cannot be done easily, and I can't find similar problems already on Stackoverflow.
If someone has some idea, I would love to hear it.
This is a good candidate for accessing the integrator classes that solve_ivp uses under the hood. Take the simple function dy/dt = -y with initial condition y(0) = 100, and suppose we want to terminate the integration when the solution has changed by less than 0.1 over 1 second of simulation, i.e. abs(y(t) - y(t-1)) < 0.1. For this ODE, this occurs at t = -ln(0.1 / (100(e-1))) ≈ 7.45. We can solve this using the RK45 integrator (see the RK45 docs) as follows:
import numpy as np
from scipy.integrate import RK45

def fun(t, y):
    return -y

y0 = [100]
t0 = 0

# max_step used to ensure that we take small enough time steps
rk45 = RK45(fun, t0, y0, t_bound=1000, max_step=0.1)

t = []
y = []
while rk45.status == "running":
    t.append(rk45.t)
    y.append(rk45.y[0])
    if rk45.t > 1.0 and np.abs(np.interp(rk45.t - 1, t, y) - rk45.y[0]) < 0.1:
        break
    rk45.step()

print(f"Final t: {t[-1]:.1f}")
# Because max_step=0.1, t[-11] will be 1 second behind t[-1]
print(f"Time period checked: {t[-1]-t[-11]:.1f}, delta_y: {y[-11]-y[-1]:.1f}")
yields
Final t: 7.5
Time period checked: 1.0, delta_y: 0.1

Market Neutral Optimisation using CVXPY

I have a model that generates alphas and sigma for a set of stocks. I have coded a long-only optimisation using CVXOPT by calling sol = solvers.qp(Q, p, G, h, A, b).
Now I would like to add two further optimisation problems to the script I already have, so that I can also get results for a Long-Short and a Market Neutral (sum of weights = 0) portfolio. To do that I would like to use/import CVXPY without adding too many lines of code, given I have already loaded the alphas, sigma and weight bounds.
Below you can find the data I am currently loading to optimise a long-only portfolio using CVXOPT. I would appreciate help on how to set up CVXPY to return an optimal Long-Short and Market Neutral portfolio using those data. I could also share the whole code.
### Parameters setup
import numpy as np
import cvxopt as opt
Alpha = np.array(-np.transpose(opt.matrix(np.loadtxt(r'C:\Alpha.txt'))))
Var_Cov = np.loadtxt(r'C:\VAR_COV.txt')
n = len(Var_Cov)
r_min = 0.03
maxW = np.loadtxt(r'C:\maxW.txt')
minW = np.loadtxt(r'C:\minW.txt')
### Solve
solution = optimize_portfolio(n, Alpha, Var_Cov, r_min)
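For the CVXPY side, here is a minimal sketch of the two requested variants (my own illustration, not from the thread; the function name is hypothetical, and it assumes Alpha and Var_Cov are the arrays loaded above, minW/maxW are per-asset bound vectors, and a unit risk-aversion weighting):

import cvxpy as cp

def optimize_ls(alpha, var_cov, min_w, max_w, market_neutral=False):
    n = len(alpha)
    w = cp.Variable(n)
    risk = cp.quad_form(w, var_cov)         # w' * Var_Cov * w
    constraints = [w >= min_w, w <= max_w]
    if market_neutral:
        constraints.append(cp.sum(w) == 0)  # market neutral: weights net to zero
    else:
        constraints.append(cp.sum(w) == 1)  # long-short, fully invested
    problem = cp.Problem(cp.Maximize(alpha @ w - risk), constraints)
    problem.solve()
    return w.value

# long_short     = optimize_ls(Alpha.flatten(), Var_Cov, minW, maxW)
# market_neutral = optimize_ls(Alpha.flatten(), Var_Cov, minW, maxW, market_neutral=True)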

GAMS - Unit step function

I need to use a step function in order to count the number of non-zero elements in a parameter. (The figure showing the step function I have in mind is not reproduced here.)
After searching on the internet for a solution, I realized we can create stepwise functions in GAMS, but I need a continuous function for x > 1.
I tried the following code to reproduce a step-like function:
round(1 / (1 + exp(-x)) - 0.01)
Unfortunately, this formula does not work in GAMS. When I try to run the code, I get this error:
Endogenous function argument(s) not allowed in linear models
I am working with a MIP (Mixed Integer Linear Program) model. Is there a way to use a step function in GAMS?
I assume that x is a variable in your code? Then you can try something like this (if x were a parameter, it would be easier):
Equation a, b;
Variable x;
Binary Variable y;
Scalar BigM / 1e3/
SmallM /1e-3/;
a.. y*BigM =g= x;
b.. y*SmallM =l= x;
So, if x=0, y will be 0 as well because of equation b, and if x>0, y will become 1 because of equation a. You should choose BigM as small as possible and as big as necessary (it should be the maximum value x can take), and SmallM the other way around. This of course assumes that there is something like a lower and an upper bound for x when it is not 0...
Hope that helps!
Lutz
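For illustration, the same big-M linking can be sketched outside GAMS, e.g. in Python with PuLP (my own rough translation, not part of the original answer; PuLP's bundled CBC solver is assumed):

from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, value

BIG_M, SMALL_M = 1e3, 1e-3            # same roles as BigM / SmallM above

prob = LpProblem("indicator_demo", LpMinimize)
x = LpVariable("x", lowBound=0, upBound=BIG_M)
y = LpVariable("y", cat=LpBinary)     # indicator: y = 1 iff x > 0

prob += y                             # objective: push y to 0 unless x forces it up
prob += BIG_M * y >= x                # x > 0  =>  y = 1
prob += SMALL_M * y <= x              # x = 0  =>  y = 0
prob += x == 0.5                      # pin x to a test value

prob.solve()
print(value(y))                       # prints 1.0; with x == 0 it would print 0.0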

Find global maximum in the least number of computations

Let's say I have a function f defined on the interval [0,1], which is smooth and increases up to some point a, after which it starts decreasing. I have a grid x[i] on this interval, e.g. with a constant step size of dx = 0.01, and I would like to find which of those points has the highest value, using the smallest number of evaluations of f in the worst-case scenario. I think I can do much better than exhaustive search by applying something inspired by gradient-like methods. Any ideas? I was thinking of something like a binary search perhaps, or parabolic methods.
This is a bisection-like method I coded:
from math import floor

def optimize(f, a, b, fa, fb, dx):
    if b - a <= dx:
        return a if fa > fb else b
    else:
        m1 = 0.5*(a + b)
        m1 = _round(m1, a, dx)
        fm1 = fa if m1 == a else f(m1)
        m2 = m1 + dx
        fm2 = fb if m2 == b else f(m2)
        if fm2 >= fm1:
            return optimize(f, m2, b, fm2, fb, dx)
        else:
            return optimize(f, a, m1, fa, fm1, dx)

def _round(x, a, dx, right=False):
    return a + dx*(floor((x - a)/dx) + right)
The idea is: find the middle of the interval and compute m1 and m2, the two adjacent grid points around it. If the function is increasing there, recurse on the right interval, otherwise recurse on the left. Whenever the interval is too small, just compare the values at its ends. However, this algorithm still does not use the strength of the derivatives at the points I computed.
Such a function is called unimodal.
Without computing the derivatives, you can work by:
- finding where the deltas f(x[i+1]) - f(x[i]) change sign, by dichotomy (the deltas are positive, then negative after the maximum); this takes Log2(n) comparisons; this approach is very close to what you describe;
- adapting the Golden-section method to the discrete case; it takes Logφ(n) comparisons (φ ≈ 1.618).
Apparently the Golden section is more costly, as φ < 2, but actually the dichotomic search takes two function evaluations at a time, hence 2·Log2(n) = Log√2(n).
One can show that this is optimal, i.e. you can't go faster than O(Log(n)) for an arbitrary unimodal function.
If your function is very regular, the deltas will vary smoothly. You can think of interpolation search, which tries to better predict the searched position by a linear interpolation rather than simple halving. In favorable conditions, it can reach O(Log(Log(n))) performance. I don't know of an adaptation of this principle to the Golden search.
Actually, linear interpolation on the deltas is very close to parabolic interpolation on the function values. The latter approach might be the best for you, but you need to be careful about the corner cases.
If derivatives are allowed, you can use any root solving method on the first derivative, knowing that there is an isolated zero in the given interval.
If only the first derivative is available, use regula falsi. If the second derivative is available as well, you may consider Newton, but prefer a safe bracketing method.
I guess that the benefits of these approaches (superlinear and quadratic convergence) are made a little useless by the fact that you are working on a grid.
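For concreteness, here is a minimal dichotomy-style sketch on an integer grid (my own illustration, assuming a strictly unimodal f; the Golden-section variant would additionally reuse one of the two probes per step):

def grid_max(f, lo, hi):
    # Strictly unimodal f on the integer grid lo..hi: shrink the bracket
    # with two probes per step, O(log n) evaluations overall.
    while hi - lo > 2:
        m1 = lo + (hi - lo) // 3
        m2 = hi - (hi - lo) // 3
        if f(m1) < f(m2):
            lo = m1 + 1   # the maximum lies strictly right of m1
        else:
            hi = m2 - 1   # the maximum lies at or left of m2 - 1
    return max(range(lo, hi + 1), key=f)

# Example: peak of -(i/100 - 0.3)**2 sampled on 0..100
print(grid_max(lambda i: -(i/100 - 0.3)**2, 0, 100))  # 30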
DISCLAIMER: I haven't tested the code. Take this as an "inspiration".
Let's say you have the following 11 points
x, f(x) = (0,3), (1,7), (2,9), (3,11), (4,13), (5,14), (6,16), (7,5), (8,3), (9,1), (10,-1)
You can do something inspired by the bisection method:
a = 0, f(a) = 3 | b = 10, f(b) = -1 | c = (0+10)/2 = 5, f(c) = 14
From here you can see that the function is increasing on [a,c], so there is no need to search that interval for the maximum; the maximum has to be in the interval [c,b]. So at the next iteration you set a = c:
a = 5, f(a) = 14 | b = 10, f(b) = -1 | c = (5+10)/2 = 7, f(c) = 5
This time the function is decreasing just before c, so the maximum has to be in [a,c] and b is moved to c.
You iterate the process until a = b = c.
Here is the code that implements this idea:
#include <stdio.h>

#define STEP (0.01)
#define SIZE (1/STEP)

int main(){
    double vals[(int)SIZE];
    for (int i = 0; i < SIZE; ++i) {
        double x = i*STEP;
        vals[i] = -(x*x*x*x - (0.6)*(x*x));
    }
    for (int i = 0; i < SIZE; ++i) {
        printf("%f ", vals[i]);
    }
    printf("\n");
    int a = 0, b = SIZE - 1, c;
    double fc;
    c = (a + b)/2;
    fc = vals[c];
    while (a != b && b != c && a != c) {
        printf("%i %i %i - %f %f %f\n", a, c, b, vals[a], vals[c], vals[b]);
        if (fc - vals[c-1] > 0) { /* increasing just before c: max is in [c,b] */
            a = c;
        } else {                  /* decreasing just before c: max is in [a,c] */
            b = c;
        }
        c = (a + b)/2;
        fc = vals[c];
    }
    printf("The maximum is at index %i (x=%f) with value %f\n", c, c*STEP, vals[c]);
}
Find the points where the derivative df/dx = 0.
For the derivative you could use a five-point stencil or a similar algorithm; this should be O(n).
Then fit those points (where the derivative is 0) with polynomial regression / least-squares regression; this should also be O(n), assuming all the points are neighbours.
Then find the top of that fitted curve; this shouldn't be more than O(m), where m is the resolution of trials for the fitted function.
While taking the derivative, you could leap by k-length steps until the derivative changes sign.
When the derivative changes sign, take the square root of k and continue in the reverse direction.
When the derivative changes sign again, take the square root of the new k and change direction again.
Example: leap by 100 elements, find the sign change, set leap = 10 and reverse direction, at the next change ==> leap = 3 ... then it can be fixed at 1 element per step to find the exact location.
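A minimal sketch of this leap-and-shrink idea (my own illustration, assuming f is unimodal over indices 0..n-1; not from the original answer, and redundant evaluations could be cached):

import math

def leap_search(f, n, k0=100):
    i, fi = 0, f(0)                        # current position and value
    k, direction = k0, 1                   # leap length and direction
    while True:
        j = min(max(i + direction * k, 0), n - 1)
        fj = f(j)
        if fj > fi:
            i, fi = j, fj                  # still climbing: keep leaping
        elif k > 1:
            direction = -direction         # overshot the peak: reverse...
            k = max(1, int(math.sqrt(k)))  # ...and shrink the leap
        elif i + 1 < n and f(i + 1) > fi:
            i, fi = i + 1, f(i + 1)        # k == 1: walk to the exact grid point
        elif i - 1 >= 0 and f(i - 1) > fi:
            i, fi = i - 1, f(i - 1)
        else:
            return i                       # both neighbours are lower: maximum found

# Example on the 11-point series from the other answer (peak at index 6):
vals = [3, 7, 9, 11, 13, 14, 16, 5, 3, 1, -1]
print(leap_search(lambda i: vals[i], len(vals), k0=4))  # 6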
I am assuming that the function evaluation is very costly.
In the special case that your function can be approximately fitted with a polynomial, you can easily calculate the extremum in the least number of function evaluations. And since you know that there is only one maximum, a polynomial of degree 2 (quadratic) might be ideal.
For example: if f(x) can be represented by a polynomial of some known degree, say 2, then you can evaluate your function at any 3 points and calculate the polynomial coefficients using Newton's divided differences or Lagrange interpolation.
Then it is simple to solve for the maximum of this polynomial. For degree 2 you can easily get a closed-form expression for the maximum.
To get the final answer you can then search in the vicinity of the solution.
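As a sketch of that closed form (my own illustration, assuming the function is nearly quadratic around its peak):

import numpy as np

def quadratic_peak(f, x1, x2, x3):
    # Fit y = c2*x^2 + c1*x + c0 exactly through three samples
    # (3 points, degree 2), then return the parabola's vertex.
    xs = np.array([x1, x2, x3])
    c2, c1, _ = np.polyfit(xs, [f(x) for x in xs], 2)
    return -c1 / (2 * c2)  # vertex: a maximum when c2 < 0

# Example: f(x) = -(x - 0.3)**2 peaks at x = 0.3
print(quadratic_peak(lambda x: -(x - 0.3)**2, 0.0, 0.5, 1.0))  # ~0.3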

R: FAST multivariate optimization packages?

I am looking to find a local minimum of a scalar function of 4 variables, and I have range constraints on the variables ("box constraints"). There's no closed form for the function's derivative, so methods needing an analytical derivative function are out of the question. I've tried several options and control parameters with the optim function, but all of them seem very slow. Specifically, they seem to spend a lot of time between calls to my (R-defined) objective function, so I know the bottleneck is not my objective function but the "thinking" between calls to it. I looked at the CRAN Task View for optimization and tried several of those options (DEoptim from RcppDE, etc.) but none of them seem any good. I would have liked to try the nloptr package (an R wrapper for the NLOPT library) but it seems to be unavailable for Windows.
I'm wondering, are there any good, fast optimization packages that people use that I may be missing? Ideally these would be in the form of thin wrappers around good C++/Fortran libraries, so there's minimal pure-R code. (Though this shouldn't be relevant, my optimization problem arose while trying to fit a 4-parameter distribution to a set of values, by minimizing a certain goodness-of-fit measure).
In the past I've found R's optimization libraries to be quite slow, and ended up writing a thin R wrapper calling a C++ API of a commercial optimization library. So are the best libraries necessarily commercial ones?
UPDATE. Here is a simplified example of the code I'm looking at:
###########
## given a set of values x and a cdf, calculate a measure of "misfit":
## smaller value is better fit
## x is assumed sorted in non-decreasing order
Misfit <- function(x, cdf) {
  nevals <<- nevals + 1
  thinkSecs <<- thinkSecs + (Sys.time() - snapTime)
  cat('S')
  if (nevals %% 20 == 0) cat('\n')
  L <- length(x)
  cdf_x <- pmax(0.0001, pmin(0.9999, cdf(x)))
  measure <- -L - (1/L) * sum((2*(1:L) - 1) * (log(cdf_x) + log(1 - rev(cdf_x))))
  snapTime <<- Sys.time()
  cat('E')
  return(measure)
}

## Given 3 parameters nu (degrees of freedom, or shape),
## sigma (dispersion), gamma (skewness),
## returns the corresponding 4-parameter student-T cdf parametrized by these params
## (we restrict the location parameter mu to be 0).
skewtGen <- function(p) {
  require(ghyp)
  pars <- student.t(nu = p[1], mu = 0, sigma = p[2], gamma = p[3])
  function(z) pghyp(z, pars)
}

## Fit using optim() and the L-BFGS-B method
fit_BFGS <- function(x, init = c()) {
  x <- sort(x)
  nevals <<- 0
  objFun <- function(par) Misfit(x, skewtGen(par))
  snapTime <<- Sys.time()  ## global time snapshot
  thinkSecs <<- 0          ## secs spent "thinking" between objFun calls
  tUser <- system.time(
    res <- optim(init, objFun,
                 lower = c(2.1, 0.1, -1), upper = c(15, 2, 1),
                 method = 'L-BFGS-B',
                 control = list(trace = 2, factr = 1e12, pgtol = .01)))[1]
  cat('Total time = ', tUser,
      ' secs, ObjFun Time Pct = ', 100*(1 - thinkSecs/tUser), '\n')
  cat('results:\n')
  print(res$par)
}

fit_DE <- function(x) {
  x <- sort(x)
  nevals <<- 0
  objFun <- function(par) Misfit(x, skewtGen(par))
  snapTime <<- Sys.time()  ## global time snapshot
  thinkSecs <<- 0          ## secs spent "thinking" between objFun calls
  require(RcppDE)
  tUser <- system.time(
    res <- DEoptim(objFun,
                   lower = c(2.1, 0.1, -1),
                   upper = c(15, 2, 1)))[1]
  cat('Total time = ', tUser,
      ' secs, ObjFun Time Pct = ', 100*(1 - thinkSecs/tUser), '\n')
  cat('results:\n')
  print(res$par)
}
Let's generate a random sample:
set.seed(1)
# generate 1000 standard-student-T points with nu = 4 (degrees of freedom)
x <- rt(1000,4)
First, fit using the fit.tuv (for "T UniVariate") function in the ghyp package -- this uses the maximum-likelihood Expectation-Maximization (E-M) method. This is wicked fast!
require(ghyp)
> system.time( print(unlist( pars <- coef( fit.tuv(x, silent = TRUE) ))[c(2,4,5,6)]))
nu mu sigma gamma
3.16658356 0.11008948 1.56794166 -0.04734128
user system elapsed
0.27 0.00 0.27
Now I am trying to fit the distribution a different way: by minimizing the "misfit" measure defined above, using the standard optim() function in base R. Note that the results will not in general be the same. My reason for doing this is to compare these two results for a whole class of situations. I pass in the above Max-Likelihood estimate as the starting point for this optimization.
> fit_BFGS( x, init = c(pars$nu, pars$sigma, pars$gamma) )
N = 3, M = 5 machine precision = 2.22045e-16
....................
....................
.........
iterations 5
function evaluations 7
segments explored during Cauchy searches 7
BFGS updates skipped 0
active bounds at final generalized Cauchy point 0
norm of the final projected gradient 0.0492174
final function value 0.368136
final value 0.368136
converged
Total time = 41.02 secs, ObjFun Time Pct = 99.77084
results:
[1] 3.2389296 1.5483393 0.1161706
I also tried to fit with DEoptim(), but it ran for too long and I had to kill it. As you can see from the output above, 99.8% of the time is attributable to the objective function! So Dirk and Mike were right in their comments below. I should have estimated the time spent in my objective function more carefully, and printing dots was not a good idea! Also, I suspect the MLE (E-M) method is very fast because it uses an analytical (closed-form) expression for the log-likelihood function.
A maximum likelihood estimator, when it exists for your problem, will always be faster than a global optimizer, in any language.
A global optimizer, no matter the algorithm, typically combines some random jumps with local minimization routines. Different algorithms may discuss this in terms of populations (genetic algorithms), annealing, migration, etc. but they are all conceptually similar.
In practice, this means that if you have a smooth function, some other optimization algorithm will likely be fastest. The characteristics of your problem function will dictate whether that is a quadratic, linear, conic, or some other type of optimization problem for which an exact (or near-exact) analytical solution exists, or whether you will need to apply a global optimizer that is necessarily slower.
By using ghyp, you're saying that your 4-variable function produces an output that may be fit to the generalized hyperbolic distribution, and you are using a maximum likelihood estimator to find the closest generalized hyperbolic distribution to the data you've provided. But if you are doing that, I'm afraid I don't understand how you could have a non-smooth surface requiring optimization.
In general, the optimizer you choose needs to be chosen based on your problem. There is no perfect 'optimal optimizer', in any programming language, and choice of optimization algorithm to fit your problem will likely make more of a difference than any minor inefficiencies of the implementation.