How to express a constraint in MOSEK - mosek

I have the following constraint to be implemented in Mosek, where the unknown variable is x.
I'm trying to follow the discussion here. I could write the constraint as the intersection of 15 exponential cones and one half-space. However, what is the best way to write the exponential cone in MOSEK, given that the cone arguments are linear combinations of the elements of the unknown x?

In the Fusion API you write the constraint t ≥ exp(u) as
M.constraint(Expr.hstack(t, Expr.constTerm(1.0), u), Domain.inPExpCone())
and u can be an expression constructed in a more complicated way, say
y1 = Expr.sub(Expr.dot(c1, x), b1)
u = Expr.mul(r, y1)
....
and so on.
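For reference, the Domain.inPExpCone() used above is MOSEK's primal exponential cone {(x1, x2, x3) : x1 ≥ x2·exp(x3/x2), x2 > 0}; pinning the middle entry to 1.0 reduces membership to exactly t ≥ exp(u). A minimal plain-Python membership check (not Fusion code; the function name and tolerance are my own) makes that concrete:

```python
import math

def in_pexp_cone(x1, x2, x3, tol=1e-9):
    """Membership test for the primal exponential cone
    {(x1, x2, x3) : x1 >= x2 * exp(x3 / x2), x2 > 0}
    (ignoring the closure boundary at x2 = 0)."""
    return x2 > 0 and x1 >= x2 * math.exp(x3 / x2) - tol

# With the middle entry pinned to 1.0, membership of (t, 1, u)
# is exactly the condition t >= exp(u).
u = 0.5
assert in_pexp_cone(math.exp(u), 1.0, u)            # t = exp(u): on the boundary
assert not in_pexp_cone(math.exp(u) - 0.1, 1.0, u)  # t too small: outside the cone
```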

Related

How to input constraints on parameters?

I am currently doing a materials-prediction project using PSO, and I was wondering if anyone can provide any expertise. I use PSO as my optimization method, but I am trying to handle a constraint.
For example: I have 17 input parameters for the algorithm to take references from and make predictions. However, these 17 elements should not exceed 100% in total. How do I input this constraint?
Apply the constraint after the particle position has been updated but before the objective function is evaluated. Say that after the velocity/position update your particle sits at [5,5] while your upper bound (Ub) is [4,3]: simply clamp the particle position to [4,3]. Other people use a more exotic method such as 'bouncing', like a ball hitting a wall. E.g., if the original particle position is [3,3] with velocity [4,2] (same Ub), the raw update lands at [7,5]; reflecting the overshoot off each bound gives [4-(7-4), 3-(5-3)] = [1,1].
Code example for the former (clamping) method:
% Fixing the boundary: clamp out-of-range coordinates to the nearest bound
bindex_up   = x(pop_iter,:) > ub;
bindex_down = x(pop_iter,:) < lb;
x(pop_iter,bindex_up)   = ub(bindex_up);
x(pop_iter,bindex_down) = lb(bindex_down);
Alternatively, do not change the particle position; instead, if the particle lies outside Ub or Lb, apply a penalty to the fitness/objective function.
Nature-Inspired Metaheuristic Algorithms has more details on this subject (constraint handling): https://dl.acm.org/doi/10.5555/1628847
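The two repair strategies above can be sketched in a few lines of NumPy (function names are my own; the reflection assumes the overshoot is at most the box width, i.e. a single bounce):

```python
import numpy as np

def clamp(pos, lb, ub):
    """Projection: move each out-of-bounds coordinate to the nearest bound."""
    return np.minimum(np.maximum(pos, lb), ub)

def bounce(pos, lb, ub):
    """Reflection: overshoot past a bound is mirrored back inside the box."""
    pos = np.where(pos > ub, 2 * ub - pos, pos)
    pos = np.where(pos < lb, 2 * lb - pos, pos)
    return pos

lb, ub = np.array([0.0, 0.0]), np.array([4.0, 3.0])
print(clamp(np.array([5.0, 5.0]), lb, ub))   # [4. 3.]  (the clamping example)
print(bounce(np.array([7.0, 5.0]), lb, ub))  # [1. 1.]  (the bouncing example)
```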

Relaxation of linear constraints?

When we need to optimize a function on the positive real half-line and we only have unconstrained optimization routines, we substitute y = exp(x) or y = x^2 to map to the whole real line, and optimize over the log or the (signed) square root of the variable.
Can we do something similar for linear constraints of the form Ax = b where, for x an n-dimensional vector, A is an (N, n)-shaped matrix and b is a vector of length N, defining the constraints?
While, as Erwin Kalvelagen says, this is not always a good idea, here is one way to do it.
Suppose we take the SVD of A, getting
A = U*S*V'
where, if A is n x m:
U is n x n orthogonal,
S is n x m, zero off the main diagonal,
V is m x m orthogonal.
Computing the SVD is not a trivial computation.
We first zero out the elements of S which we think are non-zero just due to noise -- which can be a slightly delicate thing to do.
Then we can find one solution x~ to
A*x = b
as
x~ = V*pinv(S)*U'*b
(where pinv(S) is the pseudo-inverse of S, i.e. transpose S and replace the non-zero diagonal elements by their multiplicative inverses)
Note that x~ is a least squares solution to the constraints, so we need to check that it is close enough to being a real solution, ie that Ax~ is close enough to b -- another somewhat delicate thing. If x~ doesn't satisfy the constraints closely enough you should give up: if the constraints have no solution neither does the optimisation.
Any other solution to the constraints can be written
x = x~ + sum c[i]*V[i]
where the V[i] are the columns of V corresponding to entries of S that are (now) zero, and the c[i] are arbitrary constants. So we can change variables to the c[] in the optimisation, and the constraints will be satisfied automatically. However, this change of variables can be somewhat irksome!
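Here is a NumPy sketch of the whole recipe on a made-up 2×3 system (the rule for zeroing "noise" singular values is one common choice, not the only one):

```python
import numpy as np

# Hypothetical small system: 2 constraints in 3 unknowns (underdetermined).
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([2.0, 3.0])

U, s, Vt = np.linalg.svd(A)                # A = U @ diag(s) @ Vt
tol = max(A.shape) * np.finfo(float).eps * s.max()
rank = int((s > tol).sum())                # zero out singular values below tol

# Particular (least-squares) solution x~ = V * pinv(S) * U' * b
S_pinv = np.zeros((A.shape[1], A.shape[0]))
S_pinv[:rank, :rank] = np.diag(1.0 / s[:rank])
x_tilde = Vt.T @ S_pinv @ U.T @ b

# The delicate check: is A @ x~ actually close to b?
assert np.allclose(A @ x_tilde, b)

# Columns of V past the rank span the null space, so
# A @ (x~ + Vn @ c) = b for any coefficient vector c.
Vn = Vt.T[:, rank:]
c = np.array([0.7])                        # arbitrary constants c[i]
assert np.allclose(A @ (x_tilde + Vn @ c), b)
```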

In AMPL, how to refer to part of the result, and use them in multiple places

I'm learning AMPL to speed up a model currently in an Excel spreadsheet with Excel Solver. It is basically based on the matrix product of a 1 x m vector of variables and an m x n matrix of parameters. I want to find the variables that maximize the minimum of certain values in the result, while keeping some other values in the same result satisfying a few constraints. How do I do this in AMPL?
Given: P= m x n parameters
Variable: X= 1 x m variable we tried to solve
Calculate: R= X x P , result of matrix multiplication of X and P
Maximize: min(R[1..3]), the minimum value of the first 3 values in the result
Subject to: R[2]<R[4]
min(R[6..8])>20
R[5]-20>R[7]
X are all integers
I read several tutorials and looked through the manual but couldn't find a solution to this seemingly straightforward problem. All I found was how to maximize a single value that is the calculation result, used only once and never appearing again in a constraint.
The usual approach for "maximize the minimum" problems in products like AMPL is to define an auxiliary variable and set linear constraints that effectively define it as the minimum, converting a nonlinear function (min) into linear rules.
For instance, suppose I have a bunch of decision variables x[i] with i ranging over an index set S, and I want to maximize the minimum over x[i]. AMPL syntax for that would be:
var x_min;
s.t. DefineMinimum{i in S}: x_min <= x[i];
maximize ObjectiveFunction: x_min;
The constraint only requires that x_min be less than or equal to the minimum of x[i]. However, since you're trying to maximize x_min and there are no other constraints on it, it should always end up exactly equal to that minimum (give or take machine-arithmetic epsilon considerations).
If you have parameters (i.e. values are known before you run the optimisation) and want to refer to their minimum, AMPL lets you do that more directly:
param p_min := min{j in IndexSet_P} p[j];
While AMPL also supports this syntax for variables, not all of the solvers used with AMPL are capable of accepting this type of constraint. For instance:
reset;
option solver gecode;
set S := {1,2,3};
var x{S} integer;
var x_min = min{s in S} x[s];
minimize OF: sum{s in S} x[s];
s.t. c1: x_min >= 5;
solve;
This will run and do what you'd expect it to do, because Gecode is programmed to recognise and deal with min-type constraints. However, if you switch the solver option to gurobi or cplex it will fail, since these only accept linear or quadratic constraints. To apply a minimum constraint with those solvers, you need to use something like the linearization trick I discussed above.
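Outside AMPL, the same linearization can be tried with any LP solver. Assuming SciPy is available, here is a toy max-min LP with made-up data: introduce an auxiliary t, constrain t ≤ x_i, and maximize t:

```python
from scipy.optimize import linprog

# Toy problem: maximize min(x1, x2) subject to x1 <= 3 and x1 + x2 <= 7.
# Epigraph trick: add t with t <= x1, t <= x2, then maximize t.
# Variable order: z = [x1, x2, t]; linprog minimizes, so the objective is -t.
c = [0.0, 0.0, -1.0]
A_ub = [[-1.0,  0.0, 1.0],   # t - x1 <= 0
        [ 0.0, -1.0, 1.0],   # t - x2 <= 0
        [ 1.0,  1.0, 0.0]]   # x1 + x2 <= 7
b_ub = [0.0, 0.0, 7.0]
bounds = [(0, 3), (0, None), (0, None)]  # the x1 <= 3 cap as a variable bound

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x[2])  # optimal t = 3: x1 is capped at 3, so min(x1, x2) cannot exceed 3
```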

Should I transform constraint optimization to unconstrained optimization?

I have a two part question based on the optimization problem,
max f(x) s.t. a <= x <= b
where f is a nonlinear function and a and b are finite.
(1) I have heard that, if possible, one should try to transform this constrained optimization problem into an unconstrained one (I am interested in avoiding local maxima, but this could also be to speed up the optimization). Is this true in general?
For the specific problem at hand, I am using the "optim" function in R with "Nelder-Mead", a derivative-free method.
(2) Is there a "best" transformation to use to transform the constrained to unconstrained problem?
I am using a + (b-a)*(sin(x)+1)/2 because it is onto and continuous (and so, by letting the search range over the entire interval, I am hoping to avoid getting stuck at local maxima).
See https://math.stackexchange.com/questions/75077/mapping-the-real-line-to-the-unit-interval for some transformations. The unconstrained problem is then,
max f(a +(b-a)*(sin(x)+1)/2)
Also in the case of a one-sided constraint a < x, I have seen people use the exponential function a + exp(x). Is this the best thing to do?
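As an illustration, here is the sine transform in Python with SciPy's Nelder-Mead (mirroring R's optim; the objective and the box [0, 5] are invented for the example):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical objective: maximize f(x) = -(x - 2)^2 on the box [a, b] = [0, 5].
a, b = 0.0, 5.0
f = lambda x: -(x - 2.0) ** 2

# The periodic map y -> a + (b - a) * (sin(y) + 1) / 2 is onto [a, b],
# so minimizing the negated objective in y is an unconstrained problem,
# solved here with Nelder-Mead as with R's optim(method = "Nelder-Mead").
to_box = lambda y: a + (b - a) * (np.sin(y) + 1.0) / 2.0
res = minimize(lambda y: -f(to_box(y)), x0=[0.0], method="Nelder-Mead")

x_opt = to_box(res.x[0])
print(x_opt)  # ~2.0: the interior maximizer, recovered through the transform
```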

Equality and inequality constraints in multi-objective optimisation?

This question has been posted on Mathematics Stack Exchange and I would like to post it here as well to get an answer.
The general form of a multi-objective optimisation problem is the following:

Maximise/minimise f_m(x), m = 1, 2, ..., M;
subject to g_j(x) ≥ 0, j = 1, 2, ..., J;
h_k(x) = 0, k = 1, 2, ..., K;
x_i^(L) ≤ x_i ≤ x_i^(U), i = 1, 2, ..., N;

where f(x): R^N → R^M, x = (x_1, x_2, ..., x_N) is the vector of the N parameters, M is the number of objective functions, and h_k and g_j are the equality and inequality constraints, respectively, with K and J the numbers of equality and inequality constraints that the solution must satisfy. The last set of constraints are the parameter bounds, restricting each parameter x_i to take a value between a lower bound x_i^(L) and an upper bound x_i^(U).
What are the equality and inequality constraints, and what do they do? And how can I know K and J?
I appreciate all the feedback
An optimization problem is a way of modeling a system; the variables, objectives and constraints all come out of that model. Consider a constraint on how much of a resource x_i is available: suppose you have 5 units of whatever x_i represents. Then an inequality constraint is -x_i + 5 ≥ 0. Equality constraints come from similar considerations: suppose you must assign exactly three units of x_i; then you have an equality constraint x_i = 3.
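To make the two constraint types concrete, here is a small single-objective sketch using SciPy (the objective and numbers are invented; solvers generally take inequalities in the ≥ 0 form and equalities in the = 0 form, exactly as in the general formulation):

```python
from scipy.optimize import minimize

# Invented single-objective example showing both constraint types:
#   minimize (x0 - 4)^2 + (x1 - 1)^2
#   s.t. -x0 + 5 >= 0   (inequality: at most 5 units of x0 available)
#        x1 - 3  = 0    (equality: exactly 3 units of x1 assigned)
objective = lambda x: (x[0] - 4.0) ** 2 + (x[1] - 1.0) ** 2
constraints = [
    {"type": "ineq", "fun": lambda x: -x[0] + 5.0},  # must stay >= 0
    {"type": "eq",   "fun": lambda x: x[1] - 3.0},   # must equal 0
]
res = minimize(objective, x0=[0.0, 0.0], constraints=constraints)
print(res.x)  # x0 settles at its unconstrained optimum 4 (inequality slack); x1 is forced to 3
```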