Optimizing specific numbers to reach value - optimization

I'm trying to make a program that, when given specific values (let's say 1, 4 and 10), will figure out how many of each value are needed to reach a certain amount, say 19.
It will always try to use as many high values as possible, so in this case, the result should be 10*1, 4*2, 1*1.
I tried thinking about it, but couldn't come up with an algorithm that works...
Any help or hints would be welcome!

Here is a Python solution that tries all the choices until one is found. If you pass the values it can use in descending order, the first solution found will be the one that uses as many high values as possible:
def solve(left, idx, nums, used):
    # left: amount still to reach; idx: index of the first value we may still use;
    # used: list of (value, count) pairs chosen so far.
    if left == 0:
        return True
    for i in range(idx, len(nums)):
        j = left // nums[i]          # largest count of nums[i] that still fits
        while j > 0:
            used.append((nums[i], j))
            if solve(left - j * nums[i], i + 1, nums, used):
                return True
            used.pop()               # backtrack and try a smaller count
            j -= 1
    return False

solution = []
solve(19, 0, [10, 4, 1], solution)
print(solution)  # will print [(10, 1), (4, 2), (1, 1)]

If anyone needs a simple algorithm, one way I found was the following (a rough code sketch is given below the steps):
sort the values in descending order
keep track of how many of each value you have used
for each value, do:
    if the sum is equal to the target, stop
    if it isn't the first value, remove one of the previously added values
    while the total sum of the values is smaller than the objective:
        add the current value once
Have a nice day!
(As juviant mentioned, this won't work if it skips larger numbers and only uses smaller ones! I'll try to improve it and post a new version when I get it to work.)
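In the meantime, here is a literal Python transcription of the steps above, mainly so the behaviour (and the limitation juviant pointed out) is easy to reproduce; the function name and the exact bookkeeping are my own reading of the steps:

def greedy_fill(values, target):
    values = sorted(values, reverse=True)   # sort the values in descending order
    used = []                               # keep track of the values used so far
    total = 0
    for i, v in enumerate(values):
        if total == target:                 # the sum equals the target: stop
            break
        if i > 0 and used:                  # not the first value: remove one previous value
            total -= used.pop()
        while total < target:               # while the sum is below the objective
            used.append(v)                  # add the current value once
            total += v
    return used

print(greedy_fill([1, 4, 10], 19))  # [10, 4, 4, 1] here; values (5, 4) with target 8 show the failure mode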

Constraint Programming, how to add x[i] <= (max(x[:i]) + 1)

I'm building a model using the or-tools CP tools. The values I want to find are placed in a vector X, and I want to add a constraint saying that each position of X cannot take a value bigger than the maximum found over X[:i], plus 1.
It would be something like this:
X[i] <= (max(X[:i]) + 1)
Of course, I cannot add this as a linear constraint with a max(). Creating one extra variable for each element of X as an upper bound seems excessive, and I would also need to minimize each one to make it the "max"; otherwise those are just upper bounds that could be huge and would not prune my search space (and I already have an objective function).
I know that one trick to add, for instance, a min-max problem (min(max(x[i]))) is to create another variable that is an upper bound of each x and minimize that one. It would be something like this:
model = cp_model.CpModel()
lb, ub = 0, 100                                    # example bounds
z = model.NewIntVar(lb, ub, 'z')
X = []
for i in range(n):                                 # n = number of variables
    X.append(model.NewIntVar(lb, ub, f'x_{i}'))
    model.Add(X[i] <= z)
model.Minimize(z)
In case you don't want to program this, you can use the built-in or-tools method:
model.AddMaxEquality(z, X)
Now I want to add a constraint that, for each position of X, sets an upper limit equal to the maximum value found among the previous positions, plus one. It would be something like this:
X[i] <= max(X[:i]) + 1
I was thinking of replicating the previous idea, but that would require creating a "z" for each x... I am not sure whether that is the best approach, or how much it would reduce my space of solutions. At the same time, I couldn't find a method in or-tools to do this.
Any suggestions, please?
PS: I already have as an objective function min(z) like it is in the example presented.
Example:
For instance, you can have as a result of the model:
[0, 1, 2, 0, 2, 3]
But you shouldn't have:
[0, 1, 1, 2, 4]
Since the maximum of the first four elements is 2, the upper bound of X[4] should be 2 + 1 = 3.
Thanks!
I have no specific hints except:
you need to experiment; one modeling trick may work on one kind of model and not on another
make sure to reuse the max variable at index i - 1. With X the array of variables and M the array of running maxima, i.e. M[i] = max(X[0], ..., X[i - 1]):
M[i] = max(M[i - 1], X[i - 1])
X[i] <= M[i] + 1
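A minimal CP-SAT sketch of that recurrence (the array length, the bounds, and pinning X[0] to 0 are my own assumptions for illustration):

from ortools.sat.python import cp_model

model = cp_model.CpModel()
n, ub = 6, 10                                         # assumed size and bounds
X = [model.NewIntVar(0, ub, f'x_{i}') for i in range(n)]
M = [model.NewIntVar(0, ub, f'm_{i}') for i in range(n)]

model.Add(X[0] == 0)                                  # assumption: the empty prefix has no max, so pin the start
model.Add(M[0] == 0)
for i in range(1, n):
    model.AddMaxEquality(M[i], [M[i - 1], X[i - 1]])  # M[i] = max(M[i - 1], X[i - 1])
    model.Add(X[i] <= M[i] + 1)                       # X[i] <= max(X[:i]) + 1

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print([solver.Value(x) for x in X])               # one feasible assignment; all zeros satisfies these constraints

This introduces one extra M variable per position, but each one is shared by every later constraint rather than rebuilt from scratch, which is the point of reusing M[i - 1].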

Numpy deleting all rows with condition

I have to edit a CSV file.
I can already import it and transform it into a 2D-Array
Now my job is to delete all rows where 0.0005 < array[i, 0] % 0.0025 < 0.9995.
(Basically, the first column contains steps at a 0.0025 interval, and I need to delete all rows where a step is accidentally bigger than it should be.)
I already tried the following:
length = len(data)
for i in range data:
    if 0.0005 < data[i,0]%0.0025 < 0.9995:
        np.delete(data, i, 0)
but it didn't work. Can anybody help me?
I see some issues with your approach - first, you should not iterate over an array while deleting elements from it mid-iteration. Second, np.delete returns a new array and does not operate in place, so your call to it does nothing. Also, there is a small syntax error in your range definition.
Perhaps use a multiple-condition index instead:
np.delete(data, (data[:, 0] % 0.0025 > 0.0005) & (data[:, 0] % 0.0025 < 0.9995), 0)
We can verify this on a similar problem: remove all rows where the first element x satisfies 1 < x < 4 (I removed the mod since it doesn't matter for the example):
data = np.array([[i, 2, 3] for i in range(1, 6)])
>>> [[1,2,3],[2,2,3],...,[5,2,3]]
data = np.delete(data, (data[:, 0] > 1) & (data[:, 0] < 4), 0)
>>> [[1,2,3],[4,2,3],[5,2,3]]
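For what it's worth, the same filtering can also be written with plain boolean indexing, which keeps the rows where the mask is false and avoids np.delete entirely (a sketch on the same toy data):

import numpy as np

data = np.array([[i, 2, 3] for i in range(1, 6)])
mask = (data[:, 0] > 1) & (data[:, 0] < 4)   # rows to drop
data = data[~mask]                           # keep every other row
print(data)                                  # the rows starting with 2 and 3 are gone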

Check a row for ascension in Numpy, but ignoring elements = 0

I have the code snippet below that searches each row/column in an array to see if all values are either ascending or descending. Ideally, this code would ignore zeros. For example, a row with (5, 0, 3, 1) would come up True for descending. The code below still looks at the zeros. If the masked technique is a dead end, maybe I could create a copy without zeros? I'm very new to Numpy so I would appreciate specific directions. Thanks!
np.ma.masked_equal(grid, 0)
for row in grid:
    if np.all(np.diff(row) <= 0) or np.all(np.diff(row) >= 0):
        monoScore += .5
for col in np.transpose(grid):
    if np.all(np.diff(col) <= 0) or np.all(np.diff(col) >= 0):
        monoScore += .5
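To make the intended behaviour concrete, here is one way the "copy without zeros" idea from the question could look; it is only a sketch of the desired check, not necessarily the masked-array route:

import numpy as np

def is_monotonic_ignoring_zeros(line):
    # Drop the zeros, then test for non-increasing or non-decreasing order.
    vals = line[line != 0]
    return np.all(np.diff(vals) <= 0) or np.all(np.diff(vals) >= 0)

print(is_monotonic_ignoring_zeros(np.array([5, 0, 3, 1])))  # True, as the question expects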

Minimum of empty Seq is infinite, Why?

I'm working on this week's PerlWChallenge.
You are given an array of integers @A. Write a script to create an array that represents the smaller element to the left of each corresponding index. If none found then use 0.
Here's my approach:
my @A = (7, 8, 3, 12, 10);
my $L = @A.elems - 1;
say gather for 1 .. $L -> $i { take @A[ 0..$i-1 ].grep( * < @A[$i] ).min };
Which kinda works and outputs:
(7 Inf 3 3)
The Infinity obviously comes from the empty grep. Checking:
> raku -e "().min.say"
Inf
But why is the minimum of an empty Seq Infinity? If anything it should be -Infinity. Or zero?
It's probably a good idea to test for the empty sequence anyway.
I ended up using
take .min with @A[ 0..$i-1 ].grep( * < @A[$i] ) or 0
or
take ( @A[ 0..$i-1 ].grep( * < @A[$i] ) or 0 ).min
Generally, Inf works out quite well in the face of further operations. For example, consider a case where we have a list of lists, and we want to find the minimum across all of them. We can do this:
my @a = [3,1,3], [], [-5,10];
say @a>>.min.min
And it will just work, since (1, Inf, -5).min comes out as -5. Were min to instead have -Inf as its value, then it'd get this wrong. It will also behave reasonably in comparisons, e.g. if @a.min > @b.min { }; by contrast, an undefined value will warn.
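The same point can be illustrated outside Raku. In Python, for instance, min raises on an empty sequence unless you pass a default, and choosing +inf as that default is exactly what keeps a "minimum of minimums" correct (an illustration only, not a statement about Raku internals):

import math

lists = [[3, 1, 3], [], [-5, 10]]
per_list = [min(xs, default=math.inf) for xs in lists]  # [1, inf, -5]
print(min(per_list))                                    # -5; the empty list does not skew the result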
TL;DR say min displays Inf.
min is, or at least behaves like, a reduction.
Per the doc for reduction of a List:
When the list contains no elements, an exception is thrown, unless &with is an operator with a known identity value (e.g., the identity value of infix:<+> is 0).
Per the doc for min:
a comparison Callable can be specified with the named argument :by
by is min's spelling of with.
To easily see the "identity value" of an operator/function, call it without any arguments:
say min # Inf
Imo the underlying issue here is one of many unsolved, broad challenges of documenting Raku. Perhaps comments here on this SO question about the doc would best focus on the narrow topic of solving the problem just for min (and maybe max and minmax).
I think the inspiration is the infimum (the greatest lower bound). Suppose we take the set of integers (or real numbers) and extend it with a greatest element Inf and a least element -Inf. Then the infimum of the empty set (as a subset of that extended set) is the greatest element Inf. (Every element is vacuously a lower bound of the empty set, and Inf is the greatest element that satisfies this.) The minimum and the infimum of any nonempty finite set of real numbers are equal.
Similarly, min in Raku works as an infimum for some Ranges:
1 ^.. 10
andthen .min; #1
but 1 is not in 1 ^.. 10, so 1 is not the minimum; it is the infimum of the range.
It is useful for some algorithms; see the answer by Jonathan Worthington, or:
q{3 1 3
-2
--
-5 10
}.lines
andthen .map: *.comb( /'-'?\d+/ )».Int # (3, 1, 3), (-2,), (), (-5, 10)
andthen .map: *.min # 1,-2,Inf,-5
andthen .produce: &[min]
andthen .fmt: '%2d',',' # 1,-2,-2,-5
This (from the docs) makes sense to me:
method min(Range:D:)
Returns the start point of the range.
say (1..5).min; # OUTPUT: «1␤»
say (1^..^5).min; # OUTPUT: «1␤»
and I think the infimum idea is quite a good mnemonic for the excluded-endpoint case, which could also be 5.1^.. , 5.0001^.. , etc.

Possible to break out of a reduce operator in presto?

Wondering if it's possible to break out of a reduce operator in presto. Example use case:
I have a table where one column is an array of bigints, and I want to return all rows where the magnitude of the array is less than, say, 1000. So I could write:
select
*
from table
where reduce(array_col, 0, (s,x) -> s + power(x,2), s -> if(s < power(1000,2), TRUE, FALSE))
but if there are a lot of rows and the arrays are big, this can take a while. I would like the operator to break and return FALSE as soon as the running sum exceeds power(1000, 2). Currently I have:
select
*
from table
where reduce(array_col, 0, (s, x) -> if(s >= power(1000, 2), power(1000, 2), s + power(x, 2)), s -> if(s < power(1000, 2), TRUE, FALSE))
which at least saves some computation once the sum exceeds the target value, but still has to iterate through each array element.
There is no support for "break" from array reduction.
Note: technically, you may try to hack this by generating a failure (e.g. 1/0) when you would want a break and catching it with try. I doubt it's worth it though.