I am trying to maximize the function $a_1x_1 + \cdots +a_nx_n$ subject to the constraints $b_1x_1 + \cdots + b_nx_n \leq c$ and $x_i \geq 0$ for all $i$. For the toy example below, I've chosen $a_i = b_i$, so the problem is to maximize $0x_1 + 25x_2 + 50x_3 + 75x_4 + 100x_5$ given $0x_1 + 25x_2 + 50x_3 + 75x_4 + 100x_5 \leq 100$. Trivially, the maximum value of the objective function should be 100, but when I run the code below I get a solution of 2.5e+31. What's going on?
library(lpSolve)
a <- seq.int(0, 100, 25)
b <- seq.int(0, 100, 25)
c <- 100
optimal_val <- lp(direction = "max",
                  objective.in = a,
                  const.mat = b,
                  const.dir = "<=",
                  const.rhs = c,
                  all.int = TRUE)
optimal_val
The problem is that b is not a matrix. Before the lp call, turn it into one:
b <- seq.int(0, 100, 25)
b <- matrix(b, nrow = 1)
That will give you an explicit 1 x 5 matrix:
> b
[,1] [,2] [,3] [,4] [,5]
[1,] 0 25 50 75 100
Now you will see:
> optimal_val
Success: the objective function is 100
Background: by default, R treats a plain vector as a column matrix (one column, n rows):
> matrix(c(1,2,3))
[,1]
[1,] 1
[2,] 2
[3,] 3
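Equivalently, rbind(seq.int(0, 100, 25)) also produces a 1 x 5 matrix that works as const.mat.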
I am reading a statistics book where they mention that the attached top plot has no correlation between adjacent residuals, whereas the bottommost one has correlation with $\rho = 0.9$. Can anybody please provide some direction as to how to analyze this? Thank you very much for your time.
Correlated errors here mean that adjacent residuals are correlated, i.e. $\operatorname{Cor}(Y_i, Y_{i-1}) \neq 0$. This can be modelled using $Y_i = \mu + \rho\,\epsilon_{i-1} + \epsilon_i$, where $\epsilon_i \sim N(0, 1)$ independently for all $i$. We can verify that adjacent data points are correlated: $\operatorname{Cov}(Y_i, Y_{i-1}) = \operatorname{Cov}(\rho\,\epsilon_{i-1} + \epsilon_i,\ \rho\,\epsilon_{i-2} + \epsilon_{i-1}) = \operatorname{Cov}(\rho\,\epsilon_{i-1}, \epsilon_{i-1}) = \rho\,\operatorname{Var}(\epsilon_{i-1}) = \rho$, and since $\operatorname{Var}(Y_i) = 1 + \rho^2$, the lag-1 correlation is $\rho / (1 + \rho^2)$. Code to demonstrate appears below:
set.seed(123)
epsilonX <- rnorm(100, 0, 1)
epsilonY <- rnorm(100, 0, 1)
epsilonZ <- rnorm(100, 0, 1)
X <- NULL
Y <- NULL
Z <- NULL
Y[1] <- epsilonY[1]
X[1] = epsilonX[1]
Z[1] = epsilonZ[1]
rhoX = 0
rhoY = 0.5
rhoZ = 0.9
for (i in 2:100) {
  Y[i] <- rhoY * epsilonY[i-1] + epsilonY[i]
  X[i] <- rhoX * epsilonX[i-1] + epsilonX[i]
  Z[i] <- rhoZ * epsilonZ[i-1] + epsilonZ[i]
}
param = par(no.readonly = TRUE)
par(mfrow=c(3,1))
plot(X, type='o', xlab='', ylab='Residual', main=expression(rho*"=0.0"))
abline(0, 0, lty=2)
plot(Y, type='o', xlab='', ylab='Residual', main=expression(rho*"=0.5"))
abline(0, 0, lty=2)
plot(Z, type='o', xlab='', ylab='Residual', main=expression(rho*"=0.9"))
abline(0, 0, lty=2)
#par(param)
acf(X)
acf(Y)
acf(Z)
Note from the acf plots that the lag-1 correlation is insignificant for $\rho = 0$, higher for the $\rho = 0.5$ data (~0.3), and higher still for the $\rho = 0.9$ data (~0.5).
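For this model the theoretical lag-1 autocorrelation is $\rho / (1 + \rho^2)$, i.e. 0.40 for $\rho = 0.5$ and about 0.50 for $\rho = 0.9$, consistent with what the acf plots show.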
As the title says, I want to enumerate all the ways of summing amounts from multiple schemes to a fixed constant. When I set this up as a constrained optimization model, however, I can't recover all of the basic schemes. One suggestion was to add a constraint each time I get a solution, but the added constraint leads to an incomplete set of solutions, while adding nothing leads to an endless loop.
Here is my problem description:
I have a list of benchmark data, detail_list. My goal is to take amounts from several of the entries in detail_list (but not from all of them) so that those amounts add up to the target number, plan_amount.
For example:
detail_list = [50, 100, 80, 40, 120, 25],
plan_amount = 20,
The feasible schemes are:
Taking 20 from a single entry such as detail_list[2] works; taking 10 from detail_list[1] and 10 from detail_list[3] gives plan_amount (20); taking 5 from detail_list[1] and 15 from detail_list[3] also works; and amounts taken from detail_list[1], detail_list[2] and detail_list[3] can add up to plan_amount (20) as well. But you cannot combine amounts from four entries, because number_max = 3 means at most three entries may be combined.
from pulp import *
num = 6 # the list max length
number_max = 3 # How many combinations can there be at most
plan_amount = 20
detail_list = [50, 100, 80, 40, 120, 25] # Basic data
plan_model = LpProblem("plan_model")
alpha = [LpVariable("alpha_{0}".format(i+1), cat="Binary") for i in range(num)]
upBound_num = [int(detail_list_money) for detail_list_money in detail_list]
num_channel = [
    LpVariable("fin_money_{0}".format(i+1), lowBound=0, upBound=upBound_num[i], cat="Integer")
    for i in range(num)]
plan_model += lpSum(num_channel) == plan_amount
plan_model += lpSum(alpha) <= number_max
for i in range(num):
    plan_model += num_channel[i] >= alpha[i] * 5
    plan_model += num_channel[i] <= alpha[i] * detail_list[i]
plan_model.writeLP("2222.lp")
test_dd = open("2222.txt", "w", encoding="utf-8")
i = 0
while True:
    plan_model.solve()
    if LpStatus[plan_model.status] == "Optimal":
        test_dd.write(str(i + 1) + "times result\n")
        for v in plan_model.variables():
            test_dd.write(v.name + "=" + str(v.varValue))
            test_dd.write("\n")
        test_dd.write("============================\n\n")
        alpha_0_num = 0
        alpha_1_num = 0
        for alpha_value in alpha:
            if value(alpha_value) == 0:
                alpha_0_num += 1
            if value(alpha_value) == 1:
                alpha_1_num += 1
        plan_model += (lpSum(
            alpha[k] for k in range(num) if value(alpha[k]) == 1)) <= alpha_1_num - 1
        plan_model.writeLP("2222.lp")
        i += 1
    else:
        break
test_dd.close()
I don't know how to change my constraints to achieve this goal. Can you help me?
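For reference, the exclusion constraint built inside the loop above only involves the variables that were 1, which also forbids later schemes that select a superset of that pattern; the standard "no-good" cut mentions the variables that were 0 as well, so that only the exact pattern just found is excluded. A minimal sketch, assuming the same PuLP model as above and a hypothetical helper name:
from pulp import lpSum, value

def add_no_good_cut(model, alphas):
    # Hypothetical helper: exclude the binary pattern found by the last solve.
    ones = [a for a in alphas if value(a) == 1]
    zeros = [a for a in alphas if value(a) == 0]
    # The excluded pattern scores len(ones) on the left-hand side; any other
    # pattern scores at most len(ones) - 1, so only this pattern is cut off.
    model += lpSum(ones) - lpSum(zeros) <= len(ones) - 1
Calling add_no_good_cut(plan_model, alpha) in place of the constraint added inside the loop would enumerate each feasible alpha pattern once (different num_channel splits of the same pattern would still be collapsed into one solution).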
I'm trying to implement the RGB to HSV conversion from OpenCV in pure numpy using the formula from here:
import cv2
import numpy as np

def rgb2hsv_opencv(img_rgb):
    img_hsv = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2HSV)
    return img_hsv

def rgb2hsv_np(img_rgb):
    assert img_rgb.dtype == np.float32
    height, width, c = img_rgb.shape
    r, g, b = img_rgb[:,:,0], img_rgb[:,:,1], img_rgb[:,:,2]
    t = np.min(img_rgb, axis=-1)
    v = np.max(img_rgb, axis=-1)
    s = (v - t) / (v + 1e-6)
    s[v==0] = 0
    # v==r
    hr = 60 * (g - b) / (v - t + 1e-6)
    # v==g
    hg = 120 + 60 * (b - r) / (v - t + 1e-6)
    # v==b
    hb = 240 + 60 * (r - g) / (v - t + 1e-6)
    h = np.zeros((height, width), np.float32)
    h = h.flatten()
    hr = hr.flatten()
    hg = hg.flatten()
    hb = hb.flatten()
    h[(v==r).flatten()] = hr[(v==r).flatten()]
    h[(v==g).flatten()] = hg[(v==g).flatten()]
    h[(v==b).flatten()] = hb[(v==b).flatten()]
    h[h<0] += 360
    h = h.reshape((height, width))
    img_hsv = np.stack([h, s, v], axis=-1)
    return img_hsv
img_bgr = cv2.imread('00000.png')
img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)
img_rgb = img_rgb / 255.0
img_rgb = img_rgb.astype(np.float32)
img_hsv1 = rgb2hsv_np(img_rgb)
img_hsv2 = rgb2hsv_opencv(img_rgb)
print('max diff:', np.max(np.fabs(img_hsv1 - img_hsv2)))
print('min diff:', np.min(np.fabs(img_hsv1 - img_hsv2)))
print('mean diff:', np.mean(np.fabs(img_hsv1 - img_hsv2)))
But I get a big diff:
max diff: 240.0
min diff: 0.0
mean diff: 0.18085355
Am I missing something?
Also, maybe it's possible to write the numpy code more efficiently, for example without flatten?
Also, I'm having a hard time finding the original C++ code for the cvtColor function; as I understand it, it should actually be the cvCvtColor function from the C code, but I can't find the actual source with the formula.
Since the max difference is exactly 240, I'm pretty sure that what's happening is that v==b is true at the same time as v==r or v==g; the v==b assignment gets executed last, so it overwrites the hue that should have been kept.
If you change the order from:
h[(v==r).flatten()] = hr[(v==r).flatten()]
h[(v==g).flatten()] = hg[(v==g).flatten()]
h[(v==b).flatten()] = hb[(v==b).flatten()]
To:
h[(v==r).flatten()] = hr[(v==r).flatten()]
h[(v==b).flatten()] = hb[(v==b).flatten()]
h[(v==g).flatten()] = hg[(v==g).flatten()]
The max difference will then show up as 120 instead, because of the 120 added in the v==g equation. So ideally you want to execute these three lines in the order b -> g -> r, so that the r case takes priority on ties. The difference should then be negligible (I still see a max difference of about 0.01, which I chalk up to round-off somewhere):
h[(v==b).flatten()] = hb[(v==b).flatten()]
h[(v==g).flatten()] = hg[(v==g).flatten()]
h[(v==r).flatten()] = hr[(v==r).flatten()]
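On the efficiency question: the flatten calls are not needed, since a 2-D boolean mask indexes a 2-D array directly. A minimal sketch of a hypothetical helper that combines the hue candidates computed in rgb2hsv_np, using the corrected b -> g -> r order:
import numpy as np

def combine_hues(hr, hg, hb, r, g, b, v):
    # Pick the hue candidate per pixel without flattening; r is written
    # last so it wins ties, matching the corrected order above.
    h = np.zeros(v.shape, np.float32)
    h[v == b] = hb[v == b]
    h[v == g] = hg[v == g]
    h[v == r] = hr[v == r]
    h[h < 0] += 360
    return h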
How to sample without replacement in TensorFlow? Like numpy.random.choice(n, size=k, replace=False) for some very large integer n (e.g. 100k-100M), and smaller k (e.g. 100-10k).
Also, I want it to be efficient and on the GPU, so other solutions like this with tf.py_func are not really an option for me. Anything which would use tf.range(n) or so is also not an option because n could be very large.
This is one way:
n = ...
sample_size = ...
idx = tf.random_shuffle(tf.range(n))[:sample_size]
EDIT:
I had posted the answer above but then read the last line of your post. I don't think there is a good way to do it if you absolutely cannot produce a tensor of size O(n) (numpy.random.choice with replace=False is also implemented as a slice of a permutation). You could resort to a tf.while_loop until you have unique indices:
n = ...
sample_size = ...
# Rejection sampling: redraw the whole index vector until all entries are unique
idx = tf.random_uniform([sample_size], maxval=n, dtype=tf.int64)
idx = tf.while_loop(
    lambda idx: tf.size(tf.unique(idx)[0]) < sample_size,
    lambda idx: tf.random_uniform([sample_size], maxval=n, dtype=tf.int64),
    [idx])
EDIT 2:
About the average number of iterations in the previous method. If we call n the number of possible values and k the length of the desired vector (with k ≤ n), the probability that an iteration is successful is:
$p = \prod_{i=1}^{k} \frac{n - i + 1}{n}$
Since each iteration can be considered a Bernoulli trial, the average number of trials until the first success is 1 / p (the mean of a geometric distribution). Here is a function that calculates the average number of trials in Python for some values of k and n:
def avg_iter(k, n):
    if k > n or n <= 0 or k < 0:
        raise ValueError()
    avg_it = 1.0
    for p in (float(n) / (n - i) for i in range(k)):
        avg_it *= p
    return avg_it
And here are some results:
+-------+------+----------+
| n | k | Avg iter |
+-------+------+----------+
| 10 | 5 | 3.3 |
| 100 | 10 | 1.6 |
| 1000 | 10 | 1.1 |
| 1000 | 100 | 167.8 |
| 10000 | 10 | 1.0 |
| 10000 | 100 | 1.6 |
| 10000 | 1000 | 2.9e+22 |
+-------+------+----------+
You can see it varies wildly depending on the parameters.
It is possible, though, to construct a vector in a fixed number of steps, although the only algorithm I can think of is O(k²). In pure Python it goes like this:
import random

def sample_wo_replacement(n, k):
    sample = [0] * k
    for i in range(k):
        # i values have already been drawn, so draw from a range of size n - i
        sample[i] = random.randint(0, n - 1 - i)
    # Map each draw back to the full range by undoing, in reverse order,
    # the shifts caused by the earlier draws
    for i, v in reversed(list(enumerate(sample))):
        for p in reversed(sample[:i]):
            if v >= p:
                v += 1
        sample[i] = v
    return sample
random.seed(100)
print(sample_wo_replacement(10, 5))
# [2, 8, 9, 7, 1]
print(sample_wo_replacement(10, 10))
# [6, 5, 8, 4, 0, 9, 1, 2, 7, 3]
EDIT 3:
This is a possible way to do it in TensorFlow (not sure if it is the best one):
import tensorflow as tf

def sample_wo_replacement_tf(n, k):
    # First loop
    sample = tf.constant([], dtype=tf.int64)
    i = 0
    sample, _ = tf.while_loop(
        lambda sample, i: i < k,
        # This is ugly but I did not want to define more functions
        lambda sample, i: (tf.concat([sample,
                                      tf.random_uniform([1], maxval=tf.cast(n - tf.shape(sample)[0], tf.int64),
                                                        dtype=tf.int64)],
                                     axis=0),
                           i + 1),
        [sample, i], shape_invariants=[tf.TensorShape((None,)), tf.TensorShape(())])
    # Second loop
    def inner_loop(sample, i):
        sample_size = tf.shape(sample)[0]
        v = sample[i]
        j = i - 1
        v, _ = tf.while_loop(
            lambda v, j: j >= 0,
            lambda v, j: (tf.cond(v >= sample[j], lambda: v + 1, lambda: v), j - 1),
            [v, j])
        return (tf.where(tf.equal(tf.range(sample_size), i), tf.tile([v], (sample_size,)), sample), i - 1)
    i = tf.shape(sample)[0] - 1
    sample, _ = tf.while_loop(lambda sample, i: i >= 0, inner_loop, [sample, i])
    return sample
And an example:
with tf.Graph().as_default(), tf.Session() as sess:
    tf.set_random_seed(100)
    sample = sample_wo_replacement_tf(10, 5)
    for i in range(10):
        print(sess.run(sample))
# [3 0 6 8 4]
# [5 4 8 9 3]
# [1 4 0 6 8]
# [8 9 5 6 7]
# [7 5 0 2 4]
# [8 4 5 3 7]
# [0 5 7 4 3]
# [2 0 3 8 6]
# [3 4 8 5 1]
# [5 7 0 2 9]
This is quite intensive on tf.while_loops, though, which are well known not to be particularly fast in TensorFlow, so I don't know how fast you can really get with this method without some kind of benchmarking.
EDIT 4:
One last possible method. You can divide the range of possible values (0 to n) in "chunks" of size c and pick a random amount of numbers from each chunk, then shuffle everything. The amount of memory that you use is limited by c, and you don't need nested loops. If n is divisible by c, then you should get about a perfect random distribution, otherwise values in the last "short" chunk would receive some extra probability (this may be negligible depending on the case). Here is a NumPy implementation. It is somewhat long to account for different corner cases and pitfalls, but if c ≥ k and n mod c = 0 several parts get simplified.
import numpy as np

def sample_chunked(n, k, chunk=None):
    chunk = chunk or n
    last_chunk = chunk
    parts = n // chunk
    # Distribute k among chunks
    max_p = min(float(chunk) / k, 1.0)
    max_p_last = max_p
    if n % chunk != 0:
        parts += 1
        last_chunk = n % chunk
        max_p_last = min(float(last_chunk) / k, 1.0)
    p = np.full(parts, 2)
    # Iterate until a valid distribution is found
    while not np.isclose(np.sum(p), 1) or np.any(p > max_p) or p[-1] > max_p_last:
        p = np.random.uniform(size=parts)
        p /= np.sum(p)
    dist = (k * p).astype(np.int64)
    sample_size = np.sum(dist)
    # Account for rounding errors
    while sample_size < k:
        i = np.random.randint(len(dist))
        while (dist[i] >= chunk) or (i == parts - 1 and dist[i] >= last_chunk):
            i = np.random.randint(len(dist))
        dist[i] += 1
        sample_size += 1
    while sample_size > k:
        i = np.random.randint(len(dist))
        while dist[i] == 0:
            i = np.random.randint(len(dist))
        dist[i] -= 1
        sample_size -= 1
    assert sample_size == k
    # Generate sample parts
    sample_parts = []
    for i, v in enumerate(np.nditer(dist)):
        if v <= 0:
            continue
        c = chunk if i < parts - 1 else last_chunk
        base = chunk * i
        sample_parts.append(base + np.random.choice(c, v, replace=False))
    sample = np.concatenate(sample_parts, axis=0)
    np.random.shuffle(sample)
    return sample
np.random.seed(100)
print(sample_chunked(15, 5, 4))
# [ 8 9 12 13 3]
A quick benchmark of sample_chunked(100000000, 100000, 100000) takes about 3.1 seconds on my computer, while I haven't been able to run the previous algorithm (the sample_wo_replacement function above) to completion with the same parameters. It should be possible to implement it in TensorFlow, maybe using tf.TensorArray, although it would require significant effort to get it exactly right.
Use the Gumbel-max trick here: https://github.com/tensorflow/tensorflow/issues/9260
z = -tf.log(-tf.log(tf.random_uniform(tf.shape(logits), 0, 1)))
_, indices = tf.nn.top_k(logits + z, K)
indices are what you want. This trick is so easy!
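For the uniform sampling asked about in the question, the logits can simply be a constant vector, although note that this still materializes an O(n) tensor, which the question wanted to avoid. A minimal sketch, assuming TF 1.x and example values for n and k:
import tensorflow as tf

n, k = 1000, 10
# Gumbel-max: add Gumbel noise to (here constant) logits and take the top k;
# the k indices are distinct, and with equal logits they are uniform.
logits = tf.zeros([n])
z = -tf.log(-tf.log(tf.random_uniform([n], 0, 1)))
_, indices = tf.nn.top_k(logits + z, k)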
The following works fairly fast on the GPU, and I did not encounter memory issues when using n~100M and k~10k (using NVIDIA GeForce GTX 1080 Ti):
def random_choice_without_replacement(n, k):
    """equivalent to 'numpy.random.choice(n, size=k, replace=False)'"""
    return tf.math.top_k(tf.random.uniform(shape=[n]), k, sorted=False).indices
I need to write a program to generate and display a piecewise quadratic Bezier curve that interpolates each set of data points (I have a txt file that contains the data points). The curve should have continuous tangent directions, the tangent direction at each data point being a convex combination of the two adjacent chord directions.
0.1 0,
0 0,
0 5,
0.25 5,
0.25 0,
5 0,
5 5,
10 5,
10 0,
9.5 0
The above are the data points I have. Does anyone know what formula I can use to calculate the control points?
You will need to go with a cubic Bezier to nicely handle multiple slope changes such as occur in your data set. With quadratic Beziers there is only one control point between data points, and so each curve segment must be all on one side of the connecting line segment.
Hard to explain, so here's a quick sketch of your data (black points) and quadratic control points (red) and the curve (blue). (Pretend the curve is smooth!)
Look into Cubic Hermite curves for a general solution.
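As a rough sketch of that route, assuming Catmull-Rom-style tangents (the tangent at each interior point is the average of the two adjacent chords): a cubic Hermite segment with endpoints P0, P1 and tangents m0, m1 is exactly the cubic Bezier with control points P0, P0 + m0/3, P1 - m1/3, P1.
import numpy as np

def catmull_rom_to_bezier(points):
    # Sketch: per-segment cubic Bezier control points from Catmull-Rom tangents.
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    # Tangent at each point: average of the adjacent chords (one-sided at the ends)
    tangents = np.zeros_like(pts)
    tangents[0] = pts[1] - pts[0]
    tangents[-1] = pts[-1] - pts[-2]
    tangents[1:-1] = (pts[2:] - pts[:-2]) / 2.0
    segments = []
    for i in range(n - 1):
        p0, p1 = pts[i], pts[i + 1]
        m0, m1 = tangents[i], tangents[i + 1]
        # Hermite (p0, p1, m0, m1) written as a cubic Bezier segment
        segments.append((p0, p0 + m0 / 3.0, p1 - m1 / 3.0, p1))
    return segments
Feeding the ten points from the question into catmull_rom_to_bezier gives one set of four control points per segment, with matching tangent directions where segments meet.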
From here: http://blog.mackerron.com/2011/01/01/javascript-cubic-splines/
To produce interpolated curves like these:
You can use this CoffeeScript class (which compiles to JavaScript):
class MonotonicCubicSpline
  # by George MacKerron, mackerron.com
  # adapted from:
  # http://sourceforge.net/mailarchive/forum.php?thread_name=
  # EC90C5C6-C982-4F49-8D46-A64F270C5247%40gmail.com&forum_name=matplotlib-users
  # (easier to read at http://old.nabble.com/%22Piecewise-Cubic-Hermite-Interpolating-
  # Polynomial%22-in-python-td25204843.html)
  # with help from:
  # F N Fritsch & R E Carlson (1980) 'Monotone Piecewise Cubic Interpolation',
  # SIAM Journal of Numerical Analysis 17(2), 238 - 246.
  # http://en.wikipedia.org/wiki/Monotone_cubic_interpolation
  # http://en.wikipedia.org/wiki/Cubic_Hermite_spline

  constructor: (x, y) ->
    n = x.length
    delta = []; m = []; alpha = []; beta = []; dist = []; tau = []
    for i in [0...(n - 1)]
      delta[i] = (y[i + 1] - y[i]) / (x[i + 1] - x[i])
      m[i] = (delta[i - 1] + delta[i]) / 2 if i > 0
    m[0] = delta[0]
    m[n - 1] = delta[n - 2]
    to_fix = []
    for i in [0...(n - 1)]
      to_fix.push(i) if delta[i] == 0
    for i in to_fix
      m[i] = m[i + 1] = 0
    for i in [0...(n - 1)]
      alpha[i] = m[i] / delta[i]
      beta[i] = m[i + 1] / delta[i]
      dist[i] = Math.pow(alpha[i], 2) + Math.pow(beta[i], 2)
      tau[i] = 3 / Math.sqrt(dist[i])
    to_fix = []
    for i in [0...(n - 1)]
      to_fix.push(i) if dist[i] > 9
    for i in to_fix
      m[i] = tau[i] * alpha[i] * delta[i]
      m[i + 1] = tau[i] * beta[i] * delta[i]
    @x = x[0...n]  # copy
    @y = y[0...n]  # copy
    @m = m

  interpolate: (x) ->
    for i in [(@x.length - 2)..0]
      break if @x[i] <= x
    h = @x[i + 1] - @x[i]
    t = (x - @x[i]) / h
    t2 = Math.pow(t, 2)
    t3 = Math.pow(t, 3)
    h00 = 2 * t3 - 3 * t2 + 1
    h10 = t3 - 2 * t2 + t
    h01 = -2 * t3 + 3 * t2
    h11 = t3 - t2
    y = h00 * @y[i] +
        h10 * h * @m[i] +
        h01 * @y[i + 1] +
        h11 * h * @m[i + 1]
    y
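Usage is along the lines of spline = new MonotonicCubicSpline(xs, ys) followed by spline.interpolate(x). Note that the class interpolates y as a function of x, so for 2-D point data like that in the question you would interpolate x and y separately against a common parameter, for example cumulative chord length.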