Suppose you have a CNF formula with some variables marked special.
Is there a way to make a SAT Solver (say, minisat) find a solution maximizing the number of special variables assigned to true?
What you (I) want is called Partial MaxSAT. There is a solver called qmaxsat, which seems to work well enough.
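To hand the problem to a partial MaxSAT solver such as qmaxsat, the usual encoding is the DIMACS WCNF format: every original clause becomes a hard clause (with weight equal to the declared "top" weight), and each special variable gets a soft unit clause of weight 1, so the solver maximizes the number of special variables set to true. A minimal sketch with made-up clauses, where x2 and x3 play the role of the special variables:

c header: p wcnf <#variables> <#clauses> <top weight>
p wcnf 3 5 10
c hard clauses (weight = top = 10): the original CNF
10 1 2 0
10 -1 3 0
10 -2 -3 0
c soft unit clauses (weight 1): one per special variable
1 2 0
1 3 0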
I'm not sure whether all of these can handle marking special variables, but at least Wikipedia gives some direction for the search:
There are several solvers submitted to the last Max-SAT Evaluations:
Branch and Bound based: Clone, MaxSatz (based on Satz), IncMaxSatz, IUT_MaxSatz, WBO.
Satisfiability based: SAT4J, QMaxSat.
Unsatisfiability based: msuncore, WPM1, PM2.
Checking the description of each of them should be manageable.
You can use a PBC solver such as MiniSat+: http://minisat.se/MiniSat+.html
It solves regular CNF files extended with additional constraints, called pseudo-Boolean constraints. MiniSat+ also supports optimization over such constraints, and as I understand it, that solves your problem.
Let x1, ..., xn be the variables whose number of true assignments you want to maximize. Then you can define the objective
maximize +1 x1 ... +1 xn
and MiniSat+ will solve such optimization problems.
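For concreteness, here is a rough sketch of the OPB input format that MiniSat+ reads (the clauses and variable names are invented). The format states the objective as a minimization, so maximizing the number of true xi is written as minimizing their negated sum, and each CNF clause becomes a >= 1 constraint over its literals, with a negated literal ~x rewritten as 1 - x:

* #variable= 3 #constraint= 2
min: -1 x1 -1 x2 -1 x3 ;
* clause (x1 v x2)
+1 x1 +1 x2 >= 1 ;
* clause (~x1 v x3), i.e. (1 - x1) + x3 >= 1
-1 x1 +1 x3 >= 0 ;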
Related
Our use case could be described as a variant of 1D bin packing or sheet cutting.
Imagine a drywall with a beam framing.
We want to optimize the number and size of gypsum boards that would be needed to cover the wall.
Boards must start and end on a beam.
Boards must not overlap (hard constraint).
The fewer (i.e. bigger) boards, the better (soft constraint).
What we currently do:
Pre-generate all possible boards and pass them as problem facts.
Let the solver pick the best subset of those (nullable planning variable).
First Fit Decreasing + Simulated Annealing
Even relatively small walls (~6 m, fewer than 20 possible boards to pick from) sometimes take minutes, and while we mostly get a feasible solution, it's rarely optimal.
Is there a better way to model that?
EDIT
Our current domain model looks like the following. Please note that the planning entity only holds the selected/picked material and nothing else; i.e., currently our planning entities are all equal, which effectively prevents any optimization that depends on planning entity difficulty.
data class Assignment(
    @PlanningId
    private val id: Long? = null,

    @PlanningVariable(
        valueRangeProviderRefs = ["materials"],
        strengthComparatorClass = MaterialStrengthComparator::class,
        nullable = true
    )
    var material: Material? = null
)

data class Material(
    val start: Double,
    val stop: Double,
)
Activate (sub)pillar change and swap move selectors. See the OptaPlanner docs section about move selectors (move neighborhoods). The default moves (single swap and single change) are probably getting stuck in local optima (and even though SA helps them escape those, the escapes are probably not efficient enough).
That should help, and a custom move that swaps two subpillars of almost the same size might improve efficiency further.
Also, as you're using SA (Simulated Annealing), be aware that SA is parameter sensitive. Use optaplanner-benchmark to try multiple SA starting temperature parameters with different dataset sizes. Also compare it to plain LA (Late Acceptance) in the benchmarks. LA isn't fickle like SA can be. (By fickle I don't mean unstable; I mean potentially dataset-size-sensitive parameter tweaking.)
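A rough sketch of what that could look like in the solver config XML; the element names and values here are indicative only, so check the move selector and acceptor sections of the OptaPlanner docs for the exact options available in your version:

<localSearch>
  <unionMoveSelector>
    <changeMoveSelector/>
    <swapMoveSelector/>
    <pillarChangeMoveSelector/>
    <pillarSwapMoveSelector/>
  </unionMoveSelector>
  <acceptor>
    <!-- benchmark SA starting temperatures against plain Late Acceptance -->
    <lateAcceptanceSize>400</lateAcceptanceSize>
  </acceptor>
</localSearch>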
I am trying to understand how to set the MIP gap for Gurobi.Optimizer, because it takes too long to solve when the default gap threshold (1e-4) is used.
I couldn't find it anywhere (they might be referred to as attributes or parameters).
PS: I am also trying to find the right tutorial or instructions on how to use the solver in JuMP. I checked here: https://juliahub.com/docs/Gurobi/do9v6/0.7.7, but it doesn't explain the meanings of the different attributes and inputs. Please point me to one if you know of any.
You can set the MIP gap via the MIPGap parameter:
using JuMP, Gurobi
model = Model(Gurobi.Optimizer)
set_optimizer_attribute(model, "MIPGap", 0.1)
You can read more about JuMP here: https://github.com/jump-dev/Gurobi.jl
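Put together in runnable form (the toy model below is just an illustration; the only line that matters for the gap is the set_optimizer_attribute call, and other Gurobi parameters from Gurobi's parameter reference can be set the same way, by name):

using JuMP, Gurobi

model = Model(Gurobi.Optimizer)
set_optimizer_attribute(model, "MIPGap", 0.01)  # stop once the relative gap drops below 1 %

# toy knapsack-style MIP, purely for illustration
@variable(model, x[1:3], Bin)
@objective(model, Max, 3x[1] + 4x[2] + 2x[3])
@constraint(model, 2x[1] + 3x[2] + x[3] <= 4)

optimize!(model)
println(relative_gap(model))  # relative gap at termination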
I have a SCIP project to solve a binary problem with a nonlinear objective function. It generally works well, but for some instances I get a message saying "the best solution is not feasible", and there is some violation in the constraints (the violation is mostly very small).
To solve this issue, I added SCIP_CALL_EXC( SCIPsetRealParam(scip, "numerics/feastol", 1e-5) ) to change the default value of feastol. But I get a segmentation fault!
Following your helpful suggestion, the violation value is now much lower. My objective function is of the form Min: AX + L*sqrt(BX). In the previous version, I had used an auxiliary variable, say Q, such that Q^2 - L^2(BX) >= 0, and the objective function was expressed as Min: AX + Q. In the new version, I changed the inequality into an equality, and in combination with SCIPsetRealParam(scip, "numerics/feastol", 1e-8) the violation in the constraints is much lower. What more can I do to decrease the violation? Moreover, when I printed the value of feastol, I saw numerics/lpfeastol = 1e-8, but lpfeastol is different from feastol! So why is lpfeastol modified instead? I see the modified value of lpfeastol printed on screen several times while SCIP is solving the problem. I appreciate your help in advance.
Maybe try changing the define of UPGSCALE in src/scip/cons_soc.c to some larger value.
The output for the lpfeastol is unfortunate, but normal. Reducing feastol automatically leads to adjusting lpfeastol, too. I cannot reproduce "I printed the value of feastol, I seenumerics/lpfeastol=1e-8 but lpfeastol is different from feastol".
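For reference, a minimal sketch of where such a parameter call usually sits (illustrative only, not a diagnosis of the segmentation fault; note that SCIPsetRealParam must be called on a scip pointer that has already been created, otherwise it will crash):

/* create and initialise SCIP, then tighten the feasibility tolerance */
SCIP* scip = NULL;
SCIP_CALL( SCIPcreate(&scip) );
SCIP_CALL( SCIPincludeDefaultPlugins(scip) );
/* SCIP may adjust related tolerances such as numerics/lpfeastol on its own */
SCIP_CALL( SCIPsetRealParam(scip, "numerics/feastol", 1e-8) );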
I am trying to find out the parameters for the function below:
$$
\log L(\alpha,\beta,v) = \frac{v}{\beta}\left(e^{-\beta T} - 1\right) + \frac{\alpha}{\beta}\sum_{i=1}^{N}\left(e^{-\beta(T-t_i)} - 1\right) + \sum_{i=1}^{N}\log\left(v\,e^{-\beta t_i} + \alpha \sum_{j=1}^{j_{\max}(t_i)} e^{-\beta(t_i - t_j)}\right).
$$
However, conventional methods like fmin and fminsearch are not converging properly. Any suggestions for other methods or open libraries I could use?
I was trying CVXPY, but they don't support the division by a variable in the expression.
The problem may not be convex (I have not verified this but it could be why CVXPY refused it). We don't have the data so we cannot try things out, but I can give some general advice:
Provide exact gradients (and second derivatives if needed), or use a modeling system with automatic differentiation. The first derivatives in particular should be quite precise; with finite differences you may lose half the precision.
Provide a good starting point, possibly obtained with an alternative estimation method.
Some solvers can use bounds on the variables to restrict the feasible region where functions will be evaluated. This can be used to restrict the search to interesting areas only and also to protect operations like division and log functions.
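As a minimal sketch of that advice (the event times below are synthetic placeholders, and the bounds are only there to keep beta away from zero and the intensity positive so the division and the log stay well defined; for serious use you would also pass an exact gradient via the jac argument instead of relying on finite differences):

import numpy as np
from scipy.optimize import minimize

# synthetic event times t_1 < ... < t_N on [0, T]; replace with the real data
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 100.0, size=200))
T = 100.0

def neg_log_lik(params):
    alpha, beta, v = params
    # v/beta (e^{-beta T} - 1) + alpha/beta sum_i (e^{-beta (T - t_i)} - 1)
    compensator = v / beta * (np.exp(-beta * T) - 1.0) \
        + alpha / beta * np.sum(np.exp(-beta * (T - t)) - 1.0)
    # sum_i log( v e^{-beta t_i} + alpha sum_{j < i} e^{-beta (t_i - t_j)} )
    log_terms = 0.0
    for i in range(len(t)):
        excitation = np.sum(np.exp(-beta * (t[i] - t[:i])))
        log_terms += np.log(v * np.exp(-beta * t[i]) + alpha * excitation)
    return -(compensator + log_terms)

# L-BFGS-B supports bounds; keep all parameters strictly positive
res = minimize(neg_log_lik, x0=[0.5, 1.0, 1.0], method="L-BFGS-B",
               bounds=[(1e-6, None), (1e-6, None), (1e-6, None)])
print(res.x, res.success)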
I've been studying J for the last few weeks, and something that has really bugged me is the dyadic case of the # operator: the only way I've used it so far is similar to the following:
(1 p: a) # a
If the arguments were reversed, the parentheses could be omitted:
a #~ 1 p: a
Why was the reverse of the current argument order not chosen? Backwards compatibility with APL, or something I'm completely overlooking?
In general, J's primitives are designed to take "primary data" on the right, and "control data" on the left.
The distinction between "primary" and "control" data is not sharp, but in general one would expect "primary" data to vary more often than "control" data. That is, one would expect that the "control" data is less likely to be calculated than the "primary" data.
The reason for that design choice is exactly as you point out: if the data that is more likely to be calculated (as opposed to fixed in advance) appears on the right, then more J phrases can be expressed as simple trains or pipelines of verbs, without excessive parenthesization (given that J executes right-to-left).
Now, in the case of #, which data is more likely to be calculated? You're 100% right that the filter (or mask) is very likely to be calculated. However, the data to be filtered is almost certain to be calculated. Where did you get your a, for example?
QED.
PS: if your a can be calculated by some J verb, as in a=: ..., then your whole result, filter and all, can be expressed with primeAs =: 1&p: # ... .
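For instance (a small made-up illustration, with i. 20 standing in for whatever verb produces your a):

   a =: i. 20
   (1&p: # ]) a     NB. fork: (1 p: a) # a, i.e. keep only the primes
2 3 5 7 11 13 17 19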
PPS: Note the 1&p:, there. That's another example of "control" vs "primary": the 1 is control data - you can tell because it's forever bound to p: - and it's fixed. And so, uncoincidentally, p: was designed to take it as a left argument.
PPPS: This concept of "control data appears on the left" has been expressed many different ways. You can find one round-up of veteran Jers' explanations here: http://www.jsoftware.com/pipermail/general/2007-May/030079.html .