How to get list of basic variables from JuMP/Gurobi?

I'm solving an LP in Julia/JuMP using Gurobi as my solver. I want to know which variables in my solution are basic. How do I obtain this information?
Just checking which variables are non-zero is not sufficient, since we might be dealing with a degenerate solution (i.e., basic variables equal to zero).
I found two similar questions online, but the solutions suggested do not seem to apply to the current version of JuMP anymore (or am I missing something?):
https://discourse.julialang.org/t/how-to-obtain-a-basic-solution/1784
https://groups.google.com/g/julia-opt/c/vJ6QuFBfPbw?pli=1
There's a Gurobi attribute called VBasis (https://www.gurobi.com/documentation/9.0/refman/vbasis.html) that seems to be what I'm looking for, but I don't know how to access it.

This is poorly documented, but you can see which constraints are basic:
using JuMP, Gurobi

model = Model(Gurobi.Optimizer)
@variable(model, x >= 0)
@constraint(model, c, 2x >= 1)
@objective(model, Min, x)
optimize!(model)
julia> MOI.get(model, MOI.ConstraintBasisStatus(), c)
NONBASIC::BasisStatusCode = 1
julia> MOI.get(model, MOI.ConstraintBasisStatus(), LowerBoundRef(x))
BASIC::BasisStatusCode = 0
Note that since variables can have lower and upper bounds, we report which constraints are basic, rather than which variables are.
Documentation:
https://jump.dev/MathOptInterface.jl/stable/apireference/#MathOptInterface.ConstraintBasisStatus
https://jump.dev/MathOptInterface.jl/stable/apireference/#MathOptInterface.BasisStatusCode
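If you specifically want the VBasis codes, Gurobi.jl also exposes raw solver attributes. A minimal sketch, assuming Gurobi.jl's Gurobi.VariableAttribute wrapper and a direct-mode model (so the query reaches the solver without a caching layer in between):

using JuMP, Gurobi

model = direct_model(Gurobi.Optimizer())
@variable(model, x >= 0)
@constraint(model, c, 2x >= 1)
@objective(model, Min, x)
optimize!(model)

# Gurobi's VBasis codes: 0 = basic, -1 = nonbasic at lower bound,
# -2 = nonbasic at upper bound, -3 = superbasic.
vbasis = MOI.get(model, Gurobi.VariableAttribute("VBasis"), x)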

Related

Event Handling for ordinary differential equations for billiards

I am currently interested in billiards. However, I am interested in special billiards with a non-conventional reflection law and specific rules for the trajectories. I therefore need to calculate the trajectories using a differential-equation solver. Finding one is not a problem at all; what I still have trouble with is finding a suitable way to handle the reflection. Previously I was working in Mathematica, whose numerical ODE solver has a WhenEvent option:
Example:
NDSolve[{y''[t] == -9.81, y[0] == 5, y'[0] == 0, WhenEvent[y[t] == 0, y'[t] -> -0.95 y'[t]]}, y, {t, 0, 10}];
The solution that this line of code gives is a bouncing ball.
Basically, after each integration step it checks whether the condition is true and, if so, performs an action. (I suspect it checks whether y has switched sign. If, for example, one puts in
WhenEvent[y[t]^2 == 0, ...],
the quantity does not switch sign and this method fails.)
Now I would like to switch from Mathematica to something more openly available (C++- or Python-based), but I could not find anything that has this or a similar option. Basically, I am looking for a way to check a condition after each integration step and, if the condition is met, to perform an action on the solution. Does anyone have an idea what I could use instead? Any help is appreciated.
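For reference, the same mechanism exists in Julia's DifferentialEquations.jl as a ContinuousCallback: the condition is monitored by root-finding across each step, and affect! is applied at the crossing. A minimal sketch of the bouncing ball above (not C++ or Python, but Julia is used elsewhere on this page; the package and solver choice are my assumptions):

using DifferentialEquations

# u[1] = height, u[2] = velocity; free fall under gravity
fall!(du, u, p, t) = (du[1] = u[2]; du[2] = -9.81)

condition(u, t, integrator) = u[1]               # event fires when height crosses 0
affect!(integrator) = (integrator.u[2] *= -0.95) # reverse and damp the velocity
bounce = ContinuousCallback(condition, affect!)

prob = ODEProblem(fall!, [5.0, 0.0], (0.0, 10.0))
sol = solve(prob, Tsit5(), callback = bounce)

Because the event is located by a sign change of condition, it has the same limitation noted above: a condition like u[1]^2 that touches zero without changing sign will be missed.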

Root finding with a kinked function using NLsolve in Julia

I am currently trying to solve a complementarity problem with a function that features a downward discontinuity, using the mcpsolve() function of the NLsolve package in Julia. The function was plotted, for specific parameters, in a three-panel figure (not included here); the numbers below refer to those panels.
Unfortunately, the algorithm does not always return the interior solution, even though it exists:
In (1), when starting at 0, the algorithm stays at 0, thinking that the boundary constraint binds,
In (2), when starting at 0, the algorithm stops right before the downward jump, even though the solution lies to the right of this point.
These problems occur regardless of the method used - trust region or Newton's method. Ideally, the algorithm would look for potential solutions in the entire set before stopping.
I was wondering if some of you had worked with similar functions and had found a clever way to bypass these issues. Note that:
Starting to the right of the solution would not solve these problems, as they also occur for other parametrizations - see (3) this time;
It is not known a priori where in the parameter space these particular cases occur.
As an illustrative example, consider the following piece of code. Note that this function is smoother, and yet here as well the algorithm cannot find the solution.
using NLsolve

# Current NLsolve convention: the output vector comes first.
function f!(fvec, x)
    if x[1] <= 1.8
        fvec[1] = 0.1 * (sin(3 * x[1]) - 3)
    else
        fvec[1] = 0.1 * (x[1]^2 - 7)
    end
end

mcpsolve(f!, [0.0], [Inf], [0.0], reformulation = :smooth, autodiff = :forward)
Once more, setting the initial value to something other than 0 would only postpone the problem. Also, as pointed out by Halirutan, fzero from the Roots package would probably work, but I'd rather use mcpsolve() as the problem is initially a complementarity problem.
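For what it's worth, one blunt workaround is a multistart: run mcpsolve from a grid of starting points and keep the first converged, strictly interior solution. A sketch using the f! above (the grid, tolerance, and helper name are arbitrary choices of mine, not part of NLsolve):

using NLsolve

function multistart_mcp(f!, grid)
    for x0 in grid
        r = mcpsolve(f!, [0.0], [Inf], [x0],
                     reformulation = :smooth, autodiff = :forward)
        # Accept only solutions strictly above the lower bound.
        if converged(r) && r.zero[1] > 1e-8
            return r
        end
    end
    return nothing
end

multistart_mcp(f!, 0.0:0.5:5.0)  # should recover x ≈ √7 ≈ 2.6458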
Thank you very much in advance for your help.

Mathematica can't solve DSolve[{f[0] ==d,f'[0]==v0,f''[t] == -g*m2/f[t]^2}, f, t]?

In[11]:= $Version
Out[11]= 9.0 for Linux x86 (32-bit) (November 20, 2012)
In[12]:= DSolve[{f[0] == d, f'[0] == v0, f''[t] == g*m2/f[t]^2}, f, t]
DSolve::bvimp: General solution contains implicit solutions. In the boundary
value problem these solutions will be ignored, so some of the solutions will
be lost.
Out[12]= {}
The code above pretty much says it all. I get the same error if I replace g*m2 with 1.
This seems like a really simple ODE to solve. I'd like to tell DSolve to assume all variables are real and that d, g, and m2 are all greater than 0, but there's unfortunately no way to do that.
Thoughts?
You are trying for a symbolic solution, and unfortunately symbolic integration is hard (while symbolic differentiation is easy).
The way this integration works is to obtain the energy functional by integrating once
E = 1/2*f'[t]^2 + C/f[t]
and then to isolate f'[t]. The resulting integral is not easy to solve and leads to the mentioned implicit solutions.
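Spelled out (with C = g*m2), that first integration is the standard energy trick: multiply both sides by f'[t] and recognize a total derivative:

\[
f''(t) = \frac{C}{f(t)^2}
\;\Longrightarrow\;
f'(t)\,f''(t) = \frac{C\,f'(t)}{f(t)^2}
\;\Longrightarrow\;
\frac{d}{dt}\left[\tfrac{1}{2}f'(t)^2 + \frac{C}{f(t)}\right] = 0,
\]

so the bracketed quantity is the conserved E above.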
Did you really want to get the symbolic solution or only some function table to plot the solutions or compute other related quantities?
Since it was clarified that the requested quantity is the maximum of certain solutions: This can be computed by setting v=0 in the energy equation
C/x = E = 1/2*v0^2 + C/x0
or
x = C*x0/(C + 1/2*v0^2*x0)
One would have to analyze the time aspect to make sure that this extremum is reached before passing again at the initial point x0.

Circumventing R's `Error in if (nbins > .Machine$integer.max)`

This is a saga which began with the problem of how to do survey weighting. Now that I appear to be doing that correctly, I have hit a bit of a wall (see previous post for details on the import process and where the strata variable came from):
> require(foreign)
> ipums <- read.dta('/path/to/data.dta')
> require(survey)
> ipums.design <- svydesign(id=~serial, strata=~strata, data=ipums, weights=perwt)
Error in if (nbins > .Machine$integer.max) stop("attempt to make a table with >= 2^31 elements") :
missing value where TRUE/FALSE needed
In addition: Warning messages:
1: In pd * (as.integer(cat) - 1L) : NAs produced by integer overflow
2: In pd * nl : NAs produced by integer overflow
> traceback()
9: tabulate(bin, pd)
8: as.vector(data)
7: array(tabulate(bin, pd), dims, dimnames = dn)
6: table(ids[, 1], strata[, 1])
5: inherits(x, "data.frame")
4: is.data.frame(x)
3: rowSums(table(ids[, 1], strata[, 1]) > 0)
2: svydesign.default(id = ~serial, weights = ~perwt, strata = ~strata,
data = ipums)
1: svydesign(id = ~serial, weights = ~perwt, strata = ~strata, data = ipums)
This error seems to come from the tabulate function, which I hoped would be straightforward enough to circumvent, first by changing .Machine$integer.max:
> .Machine$integer.max <- 2^40
and, when that didn't work, by replacing the whole source of tabulate:
> tabulate <- function(bin, nbins = max(1L, bin, na.rm = TRUE))
{
    if (!is.numeric(bin) && !is.factor(bin))
        stop("'bin' must be numeric or a factor")
    # if (nbins > .Machine$integer.max)
    if (nbins > 2^40)  # replacement line
        stop("attempt to make a table with >= 2^31 elements")
    .C("R_tabulate",
       as.integer(bin),
       as.integer(length(bin)),
       as.integer(nbins),
       ans = integer(nbins),
       NAOK = TRUE,
       PACKAGE = "base")$ans
}
Neither circumvented the problem. Apparently this is one reason why the ff package was created, but what worries me is the extent to which this is a problem I cannot avoid in R. This post seems to indicate that even if I were to use a package that avoids this problem, I would only be able to access 2^31 elements at a time. My hope was to use SQL (either SQLite or PostgreSQL) to get around the memory problems, but I'm afraid I'll spend a while getting that to work, only to run into the same fundamental limit.
Attempting to switch back to Stata doesn't solve the problem either. Again see the previous post for how I use svyset, but the calculation I would like to run causes Stata to hang:
svy: mean age, over(strata)
Whether throwing more memory at it will solve the problem I don't know. I run R on my desktop which has 16 gigs, and I use Stata through a Windows server, currently setting memory allocation to 2000MB, but I could theoretically experiment with increasing that.
So in sum:
1. Is this a hard limit in R?
2. Would SQL solve my R problems?
3. If I split it up into many separate files, would that fix it (a lot of work...)?
4. Would throwing a lot of memory at Stata do it?
5. Am I seriously barking up the wrong tree somehow?
Yes, R uses 32-bit indexes for vectors, so they can contain no more than 2^31 - 1 entries, and you are trying to create something with 2^40. There is talk of introducing 64-bit indexes, but that is still some way off from appearing in R. Vectors have the stated hard limit, and that is it as far as base R is concerned.
I am unfamiliar with the details of what you are doing to offer any further advice on the other parts of your Q.
Why do you want to work with the full data set? Wouldn't a smaller sample that can fit in to the restrictions R places on you be just as useful? You could use SQL to store all the data and query it from R to return a random subset of more appropriate size.
Since this question was asked some time ago, I'd like to point that my answer here uses the version 3.3 of the survey package.
If you check the code of svydesign, you can see that the call causing all the trouble is part of a check that determines whether you should set the nest parameter to TRUE or not. This check can be disabled by setting the option check.strata=FALSE.
Of course, you shouldn't disable a check step unless you know what you are doing. In this case, you should be able to decide yourself whether you need to set the nest option to TRUE or FALSE. nest should be set to TRUE when the same PSU (cluster) id is recycled in different strata.
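If you want to run that check yourself, it boils down to the rowSums(table(...)) test visible in the traceback above; note that this is exactly the table() call that overflows on the full data, so run it on a subset or after remapping the ids to small integers (as suggested below):

# Does any PSU (serial) appear in more than one stratum?
psu_strata <- table(ipums$serial, ipums$strata) > 0
any(rowSums(psu_strata) > 1)  # TRUE -> you need nest = TRUE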
Concretely for the IPUMS dataset, since you are using the serial variable for cluster identification and serial is unique for each household in a given sample, you may want to set nest to FALSE.
So, your survey design line would be:
ipums.design <- svydesign(id=~serial, strata=~strata, data=ipums, weights=perwt, check.strata=FALSE, nest=FALSE)
Extra advice: even after circumventing this problem you will find that the code is pretty slow unless you remap strata to a range from 1 to length(unique(ipums$strata)):
ipums$strata <- match(ipums$strata,unique(ipums$strata))
Both @Gavin and @Martin deserve credit for this answer, or at least for leading me in the right direction. I'm mostly answering it separately to make it easier to read.
In the order I asked:
1. Yes, 2^31 - 1 is a hard limit in R, though it seems to matter what type it is. That is a bit strange, given that the stated problem is the length of the vector rather than the amount of memory (of which I have plenty). Do not convert strata or id variables to factors: that just fixes their length and nullifies the effects of subsetting, which is the way to get around this problem.
2. SQL could probably help, provided I learn how to use it correctly. I did the following test:
library(multicore) # make svy fast!
ri.ny <- subset(ipums, statefips_num %in% c(36, 44))
ri.ny.design <- svydesign(id=~serial, weights=~perwt, strata=~strata, data=ri.ny)
svyby(~incwage, ~strata, ri.ny.design, svymean, data=ri.ny, na.rm=TRUE, multicore=TRUE)
ri <- subset(ri.ny, statefips_num==44)
ri.design <- svydesign(id=~serial, weights=~perwt, strata=~strata, data=ri)
ri.mean <- svymean(~incwage, ri.design, data=ri, na.rm=TRUE)
ny <- subset(ri.ny, statefips_num==36)
ny.design <- svydesign(id=~serial, weights=~perwt, strata=~strata, data=ny)
ny.mean <- svymean(~incwage, ny.design, data=ny, na.rm=TRUE, multicore=TRUE)
And found the means to be the same, which seems like a reasonable test.
So: in theory, provided I can split up the calculation using either plyr or SQL, the results should still be fine.
3. See 2.
4. Throwing a lot of memory at Stata definitely helps, but now I'm running into annoying formatting issues. I seem to be able to perform most of the calculations I want (much more quickly and stably as well), but I can't figure out how to get the output into the form I want. Will probably ask a separate question on this. I think the short version here is that for big survey data, Stata is much better out of the box.
5. In many ways, yes. Trying to do analysis with data this big is not something I should have taken on lightly, and I'm far from figuring it out even now. I was using the svydesign function correctly, but I didn't really know what was going on. I have a (very slightly) better grasp now, and it's heartening to know I was generally correct about how to solve the problem. @Gavin's general suggestion of trying out small data with external results to compare against is invaluable, something I should have started ages ago. Many thanks to both @Gavin and @Martin.

Does SystemC support tri-state logic?

Does SystemC support tri-state logic? That is, bits that can take the values 0, 1, or X, where X means "unknown"?
If it does, does it also support vectors that can contain Xes, including logic and arithmetic operations?
Here is what you need:
http://www.asic-world.com/systemc/data_types2.html
http://en.wikipedia.org/wiki/SystemC#Data_types
It does not have tri-state variables, but rather quad-state (is that correct? :P) variables (0, 1, X, Z). More about them in the links above. It also supports vectors of those variables.
Hope I helped you a little bit :)
Yeah, you're looking for the sc_logic and sc_lv types, which are 4-state variables: 0, 1, X, and Z. Pay attention to how they interact when you resolve them together. There are nice tables on the asic-world.com site, taken directly from the SystemC User Manual.
Note, though, that this doesn't work like in Verilog, where X can also act as a wildcard. I had to build my own function to add that functionality.
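A minimal sketch of the two types in action (assuming a standard SystemC installation; the exact resolution results come from the tables mentioned above):

#include <systemc.h>

int sc_main(int argc, char* argv[]) {
    sc_logic a('1'), b('Z');   // four-state scalars: '0', '1', 'Z', 'X'
    sc_lv<4> v("01ZX");        // four-state vector

    std::cout << (a & b) << std::endl;  // '1' & 'Z' resolves to 'X'
    std::cout << v << std::endl;        // prints 01ZX
    return 0;
}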