Assume a mathematical optimization problem with two positive continuous variables:
0 <= x <= 1
0 <= y <= 1000
I am seeking an efficient way to express the following nonlinear relationship in the form of linear constraints (possibly with the use of binary/integer variables and big M), so the problem can be solved with MILP solvers:
when 0 <= y < 200 then x = 0
when y = 200 then 0 <= x <= 1
when 200 < y <= 1000 then x = 1
The numbers 200 and 1000 are indicative.
Are there any direct suggestions or papers/books addressing similar problems?
I think this will work...
Here's how I think of this. You have 3 states that you need to be aware of, which are the 3 partitions on the domain of y. So, 2 binary variables can capture these 3 states. In order to keep things linear, you will need to work with non-strict inequalities. So define:
y_lb ∈ {0, 1} and let y_lb = 1 if y >= 200
y_ub ∈ {0, 1} and let y_ub = 1 if y <= 200
so now we have our partitions set up in terms of a truth table for y_lb and y_ub:
        | y < 200 | y = 200 | y > 200
   y_lb |    0    |    1    |    1
   y_ub |    1    |    1    |    0
Now we can easily link that truth table to constrain x:
x ∈ Reals
x <= y_lb
x >= 1 - y_ub
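To make this concrete as a MILP, the two binaries still have to be tied to y with big-M constraints built from y's bounds (0 and 1000). Below is a minimal sketch of the full formulation, written with PuLP purely for illustration (any modeling layer accepts the same four constraints):

from pulp import LpProblem, LpVariable, LpMaximize

prob = LpProblem("threshold_link", LpMaximize)
# add whatever objective your model actually needs

x = LpVariable("x", lowBound=0, upBound=1)        # continuous
y = LpVariable("y", lowBound=0, upBound=1000)     # continuous
y_lb = LpVariable("y_lb", cat="Binary")
y_ub = LpVariable("y_ub", cat="Binary")

# Tie the binaries to y (the big-M values 200 and 800 come from y's bounds):
prob += y >= 200 * y_lb               # y_lb = 1  =>  y >= 200
prob += y <= 200 + 800 * (1 - y_ub)   # y_ub = 1  =>  y <= 200

# Link x to the binaries exactly as above:
prob += x <= y_lb                     # y < 200 forces y_lb = 0, hence x = 0
prob += x >= 1 - y_ub                 # y > 200 forces y_ub = 0, hence x = 1

With these four constraints, the projection onto (x, y) is exactly the desired relationship: y < 200 forces x = 0, y > 200 forces x = 1, and at y = 200 any x in [0, 1] remains feasible.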
I have a dataframe in which I need to iterate through one of the columns and apply certain conditional statements to decide which of two sets of equations to use.
I've written the code below. However, I'm not getting the right result: the input_data variable is checked for positive values, but the negative-value branch never seems to take effect, and the equations for the positive case are always applied.
Thanks in advance for any advice on this.
import pandas as pd

x = [-1, 1]
y = [2, 3]
df = pd.DataFrame({'x': x, 'y': y})
print(df)

   x  y
0 -1  2
1  1  3

input_data = df['x']
for i in range(len(input_data)):
    if input_data[i] > 0:
        df['z'] = input_data[i] + 1
        df['z2'] = df['z'] + 1
        df['z3'] = 1
    else:
        df['z'] = input_data[i] - 1
        df['z2'] = df['z'] - 1
        df['z3'] = 0
print(df)

   x  y  z  z2  z3
0 -1  2  2   3   1
1  1  3  2   3   1
In pandas, explicit loops are generally replaced with apply():
df[['z','z2','z3']] = df.apply(
    lambda row: [row.x+1, row.x+2, 1] if row.x > 0 else [row.x-1, row.x-2, 0],
    result_type='expand',
    axis=1)
#    x  y    z    z2   z3
# 0 -1  2 -2.0 -3.0  0.0
# 1  1  3  2.0  3.0  1.0
Or you can use the vectorized np.where() (note the numpy import):
import numpy as np

df['z'] = np.where(df.x > 0, df.x + 1, df.x - 1)
df['z2'] = np.where(df.x > 0, df.z + 1, df.z - 1)
df['z3'] = df.x.gt(0).astype(int)
#    x  y  z  z2  z3
# 0 -1  2 -2  -3   0
# 1  1  3  2   3   1
As for the for loop implementation, the issue was due to the assignment statements.
For example, df['z3'] = 1 sets the whole z3 column to 1 (not just one particular row of z3 but the whole column). Similarly, df['z3'] = 0 sets the whole column to 0. The same applies to all of those assignment statements.
So then because the last x value is positive, the final iteration sets all the z columns to the positive result.
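If you do want to keep an explicit loop, assign to individual cells (for example with .loc) instead of to whole columns. A minimal sketch of that fix (much slower than the vectorized options above, but it gives per-row results):

for i in range(len(df)):
    if df.loc[i, 'x'] > 0:
        df.loc[i, 'z'] = df.loc[i, 'x'] + 1
        df.loc[i, 'z2'] = df.loc[i, 'z'] + 1
        df.loc[i, 'z3'] = 1
    else:
        df.loc[i, 'z'] = df.loc[i, 'x'] - 1
        df.loc[i, 'z2'] = df.loc[i, 'z'] - 1
        df.loc[i, 'z3'] = 0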
Assume I have a model that has A(t) and B(t) governed by the following equations:
A(t) = {
    WHEN B(t-1) < 10  : B(t-1)
    WHEN B(t-1) >= 10 : B(t-1) / 6
}
B(t) = A(t) * 2
The following table is provided as input.
SELECT * FROM model ORDER BY t;
| t | A | B |
|---|------|------|
| 0 | 0 | 9 |
| 1 | null | null |
| 2 | null | null |
| 3 | null | null |
| 4 | null | null |
I.e. we know the values of A(t=0) and B(t=0).
For each row, we want to calculate the value of A & B using the equations above.
The final table should be:
| t | A | B |
|---|---|----|
| 0 | 0 | 9 |
| 1 | 9 | 18 |
| 2 | 3 | 6 |
| 3 | 6 | 12 |
| 4 | 2 | 4 |
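(The table follows directly from applying the two equations row by row; a quick plain-Python sketch, for illustration only, that reproduces it:)

a, b = 0, 9
rows = [(0, a, b)]
for t in range(1, 5):
    a = b if b < 10 else b / 6   # A(t) from B(t-1)
    b = a * 2                    # B(t) = A(t) * 2
    rows.append((t, a, b))
print(rows)
# [(0, 0, 9), (1, 9, 18), (2, 3.0, 6.0), (3, 6.0, 12.0), (4, 2.0, 4.0)]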
We've tried using LAG, but because of the model's recursive-like nature, we end up only getting A & B at t=1:
CREATE TEMPORARY FUNCTION A_fn(b_prev FLOAT64) AS (
  CASE
    WHEN b_prev < 10 THEN b_prev
    ELSE b_prev / 6.0
  END
);

SELECT
  t,
  CASE WHEN t = 0 THEN A ELSE A_fn(LAG(B) OVER (ORDER BY t)) END AS A,
  CASE WHEN t = 0 THEN B ELSE A_fn(LAG(B) OVER (ORDER BY t)) * 2 END AS B
FROM model
ORDER BY t;
Produces:
| t | A | B |
|---|------|------|
| 0 | 0 | 9 |
| 1 | 9 | 18 |
| 2 | null | null |
| 3 | null | null |
| 4 | null | null |
Each row is dependent on the row above it. It seems it should be possible to compute a single row at a time, while iterating through the rows? Or does BigQuery not support this type of windowing?
If it is not possible, what do you recommend?
Round #1 - starting point
Below is for BigQuery Standard SQL and works (for me) with up to 3M rows
#standardSQL
CREATE TEMP FUNCTION x(v FLOAT64, t INT64)
RETURNS ARRAY<STRUCT<t INT64, v FLOAT64>>
LANGUAGE js AS """
  var i, result = [];
  for (i = 1; i <= t; i++) {
    if (v < 10) {v = 2 * v}
    else {v = v / 3};
    result.push({t:i, v});
  };
  return result
""";
SELECT 0 AS t, 0 AS A, 9 AS B UNION ALL
SELECT line.t, line.v / 2, line.v FROM UNNEST(x(9, 3000000)) line
Going above 3M rows produces Resources exceeded during query execution: UDF out of memory.
To overcome this, I think you should just implement it on the client, so that no JS UDF limits apply. I think this is a reasonable "workaround" because it looks like you have no real data in BQ anyway, just one starting value (9 in this example). But even if you do have other valuable columns in the table, you can then JOIN the produced result back to the table ON the t value, so it should be OK!
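For example, a minimal client-side sketch in plain Python (the function name generate is just illustrative; how you then get the rows into BigQuery, e.g. a CSV load job or streaming insert, is up to you):

# generate the whole series on the client; same recurrence as the JS UDF above
def generate(seed=9.0, n=3_000_000):
    v = seed
    rows = [(0, 0.0, seed)]               # (t, A, B)
    for t in range(1, n):
        v = 2 * v if v < 10 else v / 3    # B(t)
        rows.append((t, v / 2, v))        # A(t) = B(t) / 2
    return rows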
Round #2 - it could be billions ... so let's take care of scale and parallelization
Below is a little trick to avoid the JS UDF resource and/or memory errors.
So, I was able to run it for 2B rows in one shot!
#standardSQL
CREATE TEMP FUNCTION anchor(seed FLOAT64, len INT64, batch INT64)
RETURNS ARRAY<STRUCT<t INT64, v FLOAT64>> LANGUAGE js AS """
  var i, result = [], v = seed;
  for (i = 0; i <= len; i++) {
    if (v < 10) {v = 2 * v} else {v = v / 3};
    if (i % batch == 0) {result.push({t:i + 1, v})};
  };
  return result
""";
CREATE TEMP FUNCTION x(value FLOAT64, start INT64, len INT64)
RETURNS ARRAY<STRUCT<t INT64, v FLOAT64>>
LANGUAGE js AS """
  var i, result = []; result.push({t:0, v:value});
  for (i = 1; i < len; i++) {
    if (value < 10) {value = 2 * value} else {value = value / 3};
    result.push({t:i, v:value});
  };
  return result
""";
CREATE OR REPLACE TABLE `project.dataset.result` AS
WITH settings AS (SELECT 9 init, 2000000000 len, 1000 batch),
anchors AS (SELECT line.* FROM settings, UNNEST(anchor(init, len, batch)) line)
SELECT 0 AS t, 0 AS A, init AS B FROM settings UNION ALL
SELECT a.t + line.t, line.v / 2, line.v
FROM settings, anchors a, UNNEST(x(v, t, batch)) line
In the above query, you "control" the initial values in the line below:
WITH settings AS (SELECT 9 init, 2000000000 len, 1000 batch),
In the above example, 9 is the initial value, 2,000,000,000 is the number of rows to be calculated, and 1000 is the batch size to process with (this one is important for keeping the BQ engine from throwing resource and/or memory errors; you cannot make it too big or too small. I feel I have some sense of what it needs to be, but not enough to try to formulate it).
Some stats (settings - execution time):
1M: SELECT 9 init, 1000000 len, 1000 batch - 0 min 9 sec
10M: SELECT 9 init, 10000000 len, 1000 batch - 0 min 50 sec
100M: SELECT 9 init, 100000000 len, 600 batch - 3 min 4 sec
100M: SELECT 9 init, 100000000 len, 40 batch - 2 min 56 sec
1B: SELECT 9 init, 1000000000 len, 10000 batch - 29 min 39 sec
1B: SELECT 9 init, 1000000000 len, 1000 batch - 27 min 50 sec
2B: SELECT 9 init, 2000000000 len, 1000 batch - 48 min 27 sec
Round #3 - some thoughts and comments
Obviously, as I mentioned in #1 above, this type of calculation is better suited to being implemented on the client of your choice, so it is hard for me to judge the practical value of the above, but I really had fun playing with it! In reality, I had a few more cool ideas in mind and also implemented and played with them, but the above (in #2) was the most practical/scalable one.
Note: the most interesting part of the above solution is the anchors table. It is very cheap to generate and lets you set anchors at batch-size intervals, so with it you can, for example, calculate the value of row 2,000,035 or 1,123,456,789 without actually processing all previous rows, and this takes a fraction of a second. Or you can parallelize the calculation of all rows by starting several threads/calculations from the respective anchors, etc. Quite a number of opportunities; see the small sketch below.
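To make the anchor idea concrete, here is a tiny plain-Python sketch (row_from_anchor is a hypothetical helper, for illustration only) of how a single row can be reached from the nearest preceding anchor instead of from the seed:

# (t0, v0) is an anchor with t0 <= n; row n is then reachable in n - t0 steps
def row_from_anchor(t0, v0, n):
    v = v0
    for _ in range(n - t0):
        v = 2 * v if v < 10 else v / 3    # same recurrence as the UDFs
    return (n, v / 2, v)                  # (t, A, B)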
Finally, it really depends on your specific use case which way to go further, so I am leaving that up to you.
It seems it should be possible to compute a single row at a time, while iterating through the rows
Support for Scripting and Stored Procedures is now in beta (as of October 2019)
You can submit multiple statements separated by semicolons, and BigQuery is now able to run them.
So, conceptually, your process could look like the script below:
DECLARE b_prev FLOAT64 DEFAULT NULL;
DECLARE t INT64 DEFAULT 0;
DECLARE arr ARRAY<STRUCT<t INT64, a FLOAT64, b FLOAT64>> DEFAULT [STRUCT(0, 0.0, 9.0)];
SET b_prev = 9.0 / 2;
LOOP
  SET (t, b_prev) = (t + 1, 2 * b_prev);
  IF t >= 100 THEN LEAVE;
  ELSE
    SET b_prev = CASE WHEN b_prev < 10 THEN b_prev ELSE b_prev / 6.0 END;
    SET arr = (SELECT ARRAY_CONCAT(arr, [(t, b_prev, 2 * b_prev)]));
  END IF;
END LOOP;
SELECT * FROM UNNEST(arr);
Even though the above script is simpler, represents the logic more directly for non-technical personnel, and is easier to manage, it does not fit scenarios where you need to loop through 100 or more iterations. For example, the above script took close to 2 minutes, while my original solution for the same 100 rows took just 2 seconds.
But it is still great for simple / smaller cases.
I have the following LP problem:
Maximize
1000 x1 + 500 x2 - 500 x5 - 250 x6
Subject To
c1: x1 + x2 - x3 - x4 = 0
c2: - x3 + x5 = 0
c3: - x4 + x6 = 0
With these Bounds
0 <= x1 <= 10
0 <= x2 <= 15
0 <= x5 <= 15
0 <= x6 <= 5
Solving this problem with the CPLEX dual algorithm, I get an optimal objective value of 6250. But checking the reduced costs of the variables, I get the following results:
Variable   value   reduced cost
    1       10.0      500.0
    1        0.0       -0.0
    2        5.0       -0.0
    3        5.0       -0.0
    4        5.0       -0.0
    5        5.0      250.0
Is it possible to have a positive reduced cost on a positive-valued variable? Since the reduced cost indicates how much the objective function coefficient of the corresponding variable must be improved before that variable becomes positive in the optimal solution, what does a positive reduced cost mean on a positive-valued variable?
Variable 1 is listed twice in the solution?
Note that you need to distinguish between nonbasic at lower bound and nonbasic at upper bound. For a variable that is nonbasic at a bound, the reduced cost indicates how much the objective can change when that bound is changed by one unit. In your listing, for example, the variable with value 10.0 sits at its upper bound of 10 with a reduced cost of 500, so relaxing that upper bound by one unit would improve the objective by 500.
Note also that most textbooks focus on the special case x >= 0, while practical solvers support both lower and upper bounds: L <= x <= U.
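If you want to reproduce and inspect these quantities outside of CPLEX, below is a minimal sketch using PuLP with its bundled CBC solver (an assumption made purely for illustration; a degenerate LP may give different but equally valid reduced costs than CPLEX):

from pulp import LpProblem, LpVariable, LpMaximize, value

prob = LpProblem("example", LpMaximize)
x1 = LpVariable("x1", 0, 10)
x2 = LpVariable("x2", 0, 15)
x3 = LpVariable("x3", 0)
x4 = LpVariable("x4", 0)
x5 = LpVariable("x5", 0, 15)
x6 = LpVariable("x6", 0, 5)

prob += 1000*x1 + 500*x2 - 500*x5 - 250*x6   # objective
prob += x1 + x2 - x3 - x4 == 0               # c1
prob += -x3 + x5 == 0                        # c2
prob += -x4 + x6 == 0                        # c3

prob.solve()
print(value(prob.objective))                 # 6250.0
for v in [x1, x2, x3, x4, x5, x6]:
    print(v.name, v.value(), v.dj)           # value and reduced cost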
I have a table d with fields x, y, f (PK is x, y) and would like to implement convolution, where a new column, c, is defined as the 2D convolution of f with an arbitrary kernel. In a procedural language, this is easy to define (see below). I'm confident it can be defined in SQL using a JOIN, but I'm having trouble doing so.
In a procedural language, I would do:
def conv(x, y):
    c = 0
    # x_ and y_ are pronounced "x prime" and "y prime",
    # and take on *all* x and y values in the table;
    # that is, we iterate through *all* rows
    for x_, y_ in all_rows:        # all_rows: every (x, y) pair in the table
        c += f(x_, y_) * kernel(x_ - x, y_ - y)
    return c
kernel can be any arbitrary function. In my case, it's 1/k^(sqrt(x_dist^2 + y_dist^2)), with kernel(0,0) = 1.
For performance reasons, we don't need to look at every x_, y_. We can filter it where the distance < threshold.
I think this can be done using a Cartesian product join, followed by aggregate SQL SUM, along with a WHERE clause.
One additional challenge of doing this in SQL is NULLs. A naive implementation would treat them as zeroes. What I'd like to do is instead treat the kernel as a weighted average, and just leave out NULLs. That is, I'd use a function wkernel as my kernel, and modify the code above to be:
def conv(x, y):
    c = 0
    w = 0
    for x_, y_ in all_rows:        # as above, every (x, y) pair in the table
        c += f(x_, y_) * wkernel(x_ - x, y_ - y)
        w += wkernel(x_ - x, y_ - y)
    return c / w
That would make NULLs work great.
To clarify: You can't have a partial observation, where x=NULL and y=3. However, you can have a missing observation, e.g. there is no record where x=2 and y=3. I am referring to this as NULL, in the sense that the entire record is missing. My procedural code above will handle this fine.
I believe the above can be done in SQL (assuming wkernel is already implemented as a function), but I can't figure out how. I'm using Postgres 9.4.
Sample table:
Table d
x | y | f
0 | 0 | 1.4
1 | 0 | 2.3
0 | 1 | 1.7
1 | 1 | 1.2
Output (just showing one row):
x | y | c
0 | 0 | 1.4*1 + 2.3*1/k + 1.7*1/k + 1.2*1/k^1.414
Convolution https://en.wikipedia.org/wiki/Convolution is a standard algorithm used throughout image processing and signal processing, and I believe it can be done in SQL, which is very useful given the large data sets we're now using.
I assumed a function wkernel, for example:
create or replace function wkernel(k numeric, xdist numeric, ydist numeric)
returns numeric language sql as $$
select 1. / pow(k, sqrt(xdist*xdist + ydist*ydist))
$$;
The following query gives what you want but without restricting to close values:
select d1.x, d1.y, SUM(d2.f*wkernel(2, d2.x-d1.x, d2.y-d1.y)) AS c
from d d1 cross join d d2
group by d1.x, d1.y;
x | y | c
---+---+-------------------------
0 | 0 | 3.850257072695778143380
1 | 0 | 4.237864186319019036455
0 | 1 | 3.862992722666908108145
1 | 1 | 3.725299918145074500610
(4 rows)
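For a quick sanity check outside the database, a small plain-Python sketch (illustration only) reproduces the same numbers:

from math import sqrt

d = {(0, 0): 1.4, (1, 0): 2.3, (0, 1): 1.7, (1, 1): 1.2}

def wkernel(k, xdist, ydist):
    return 1.0 / k ** sqrt(xdist * xdist + ydist * ydist)

for (x, y) in d:
    c = sum(f * wkernel(2, x_ - x, y_ - y) for (x_, y_), f in d.items())
    print(x, y, round(c, 6))
# 0 0 3.850257
# 1 0 4.237864
# 0 1 3.862993
# 1 1 3.7253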
With some arbitrary restriction:
select d1.x, d1.y, SUM(d2.f*wkernel(2, d2.x-d1.x, d2.y-d1.y)) AS c
from d d1 cross join d d2
where abs(d2.x-d1.x)+abs(d2.y-d1.y) < 1.1
group by d1.x, d1.y;
x | y | c
---+---+-------------------------
0 | 0 | 3.400000000000000000000
1 | 0 | 3.600000000000000000000
0 | 1 | 3.000000000000000000000
1 | 1 | 3.200000000000000000000
(4 rows)
For the weighted average point:
select d1.x, d1.y, SUM(d2.f*wkernel(2, d2.x-d1.x, d2.y-d1.y)) / SUM(wkernel(2, d2.x-d1.x, d2.y-d1.y)) AS c
from d d1 cross join d d2
where abs(d2.x-d1.x)+abs(d2.y-d1.y) < 1.1
group by d1.x, d1.y;
Now onto the missing information thing. In the following code, please replace 2 by the maximum distance to be considered.
The idea is the following: We find the bounds of the considered image and we generate all the information that could be needed. With your example and with a maximum scope of 1, we need all the couples (x, y) such that (-1 <= x <= 2) and (-1 <= y <= 2).
Finding bounds and fixing scope=1 and k=2 (call this relation cfg):
SELECT MIN(x), MAX(x), MIN(y), MAX(y), 1, 2
FROM d;
min | max | min | max | ?column? | ?column?
-----+-----+-----+-----+----------+----------
0 | 1 | 0 | 1 | 1 | 2
Generating completed set of values (call this relation completed):
SELECT x.*, y.*, COALESCE(f, 0)
FROM cfg
CROSS JOIN generate_series(minx - scope, maxx + scope) x
CROSS JOIN generate_series(miny - scope, maxy + scope) y
LEFT JOIN d ON d.x = x.* AND d.y = y.*;
x | y | coalesce
----+----+----------
-1 | -1 | 0
-1 | 0 | 0
-1 | 1 | 0
-1 | 2 | 0
0 | -1 | 0
0 | 0 | 1.4
0 | 1 | 1.7
0 | 2 | 0
1 | -1 | 0
1 | 0 | 2.3
1 | 1 | 1.2
1 | 2 | 0
2 | -1 | 0
2 | 0 | 0
2 | 1 | 0
2 | 2 | 0
(16 rows)
Now we just have to compute the values with the query given before and the cfg and completed relations. Note that we do not compute convolution for the values on the borders:
SELECT d1.x, d1.y, SUM(d2.f*wkernel(k, d2.x-d1.x, d2.y-d1.y)) / SUM(wkernel(k, d2.x-d1.x, d2.y-d1.y)) AS c
FROM cfg cross join completed d1 cross join completed d2
WHERE d1.x BETWEEN minx AND maxx
AND d1.y BETWEEN miny AND maxy
AND abs(d2.x-d1.x)+abs(d2.y-d1.y) <= scope
GROUP BY d1.x, d1.y;
x | y | c
---+---+-------------------------
0 | 0 | 1.400000000000000000000
0 | 1 | 1.700000000000000000000
1 | 0 | 2.300000000000000000000
1 | 1 | 1.200000000000000000000
(4 rows)
All in one, this gives:
WITH cfg(minx, maxx, miny, maxy, scope, k) AS (
SELECT MIN(x), MAX(x), MIN(y), MAX(y), 1, 2
FROM d
), completed(x, y, f) AS (
SELECT x.*, y.*, COALESCE(f, 0)
FROM cfg
CROSS JOIN generate_series(minx - scope, maxx + scope) x
CROSS JOIN generate_series(miny - scope, maxy + scope) y
LEFT JOIN d ON d.x = x.* AND d.y = y.*
)
SELECT d1.x, d1.y, SUM(d2.f*wkernel(k, d2.x-d1.x, d2.y-d1.y)) / SUM(wkernel(k, d2.x-d1.x, d2.y-d1.y)) AS c
FROM cfg cross join completed d1 cross join completed d2
WHERE d1.x BETWEEN minx AND maxx
AND d1.y BETWEEN miny AND maxy
AND abs(d2.x-d1.x)+abs(d2.y-d1.y) <= scope
GROUP BY d1.x, d1.y;
I hope this helps :-)
I am new to Maxima, so I am really sorry if I ask a simple question. I have a differential equation,
(%i1) -(x-x/2*sinh(x/2)+'diff(y,x))*(1/y+'diff(y,x)*x/y^2)+(x-x^2/sinh(x/2)+x^2*cosh(x/2)/(4*(sinh(x/2))^2)+'diff(y,x)*x+'diff(y,x,2)*x^2)/y+y^2-1-0.9*(x-x^2/(2*sinh(x/2)))=0;
(%o1) (x^2*'diff(y,x,2) + x*'diff(y,x) - x^2/sinh(x/2) + x^2*cosh(x/2)/(4*sinh(x/2)^2) + x)/y
        + (x*sinh(x/2)/2 - 'diff(y,x) - x)*(x*'diff(y,x)/y^2 + 1/y) + y^2 - 0.9*(x - x^2/(2*sinh(x/2))) - 1 = 0
(%i2) ode2(%,y,x);
rat: replaced -0.9 by -9/10 = -0.9
(%o2) false
What should I do?
The equation you have is nonlinear. Maxima's ode2 can only solve a limited variety of differential equations, and it appears your equation doesn't fall into any of the categories it can handle.
I don't know if there is another symbolic diff eq solver in Maxima that you can try. If a numerical solution is enough, take a look at rk (a Runge-Kutta implementation).