What do gekko's sign3 and abs3 return?

I have a variable that I want to be either 1 or 0 based on the sign of an intermediate. I could use sign2 which returns 1 or -1 based on the sign and do operations to turn it to 0 or 1. However, when I was exploring, I noticed a sign3 and abs3 which included a binary variable with the signum/absolute value. Do these functions return a list like [signum/absolute value, 0/1]?

The GEKKO functions sign2 and sign3 return the same type of output, either -1 or 1 depending on the sign of the variable they are applied to. The difference between them is how the signum (sign) operation is accomplished. Both implementations are continuously differentiable and thus suitable for gradient-based optimization.
The sign2 function uses an MPCC formulation to generate the sign of an argument, and can be used with any of the solvers. The sign3 function uses a binary switching variable and the MINLP solver APOPT.
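For concreteness, here is a minimal sketch of how sign3 and abs3 calls typically look; the parameter value, the local-solve option, and the expected outputs are illustrative assumptions rather than part of the original answer. Each call returns a single GEKKO variable holding the result, not a [value, binary] pair, and turning the -1/1 sign into 0/1 is then just a linear transform such as (s+1)/2.
from gekko import GEKKO

m = GEKKO(remote=False)
x = m.Param(value=-2.5)   # fixed input, chosen for illustration
s = m.sign3(x)            # sign of x, computed with a binary switching variable
a = m.abs3(x)             # absolute value of x, same mechanism
m.options.SOLVER = 1      # APOPT (MINLP), required by the *3 functions
m.solve(disp=False)
print(s.value[0], a.value[0])   # expect -1.0 and 2.5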

Related

How to generate a lazy division?

I want to generate the sequence of
1, 1/2, 1/3, 1/4 ... *
using a functional programming approach in Raku. In my head it should look like:
(1,{1/$_} ...*)[0..5]
but the output is: 1,1,1,1,1
The idea is simple, but it seems powerful enough to use for generating other, more complex lists and working with them.
Another thing I tried was calling one lazy list from inside another lazy list; that doesn't work either, because the output is a repeating sequence: 1, 0.5, 1, 0.5 ...
my @list = 0 ... *;
(1, {1/@list[$_]} ...*)[0..5]
See @wamba's wonderful answer for solutions to the question in your title. They showcase a wide range of applicable Raku constructs.
This answer focuses on Raku's sequence operator (...), and the details in the body of your question, explaining what went wrong in your attempts, and explaining some working sequences.
TL;DR
The value of the Nth term is 1 / N.
# Generator ignoring prior terms, incrementing an N stored in the generator:
{ 1 / ++$ } ... * # idiomatic
{ state $N; $N++; 1 / $N } ... * # longhand
# Generator extracting denominator from prior term and adding 1 to get N:
1/1, 1/2, 1/3, 1/(*.denominator+1) ... * # idiomatic (@jjmerelo++)
1/1, 1/2, 1/3, {1/(.denominator+1)} ... * # longhand (@user0721090601++)
What's wrong with {1/$_}?
1, 1/2, 1/3, 1/4 ... *
What is the value of the Nth term? It's 1/N.
1, {1/$_} ...*
What is the value of the Nth term? It's 1/$_.
$_ is a generic parameter/argument/operand analogous to the English pronoun "it".
Is it set to N?
No.
So your generator (lambda/function) doesn't encode the sequence you're trying to reproduce.
What is $_ set to?
Within a function, $_ is bound either to (Any), or to an argument passed to the function.
If a function explicitly specifies its parameters (a "parameter" specifies an argument that a function expects to receive; this is distinct from the argument that a function actually ends up getting for any given call), then $_ is bound, or not bound, per that specification.
If a function does not explicitly specify its parameters -- and yours doesn't -- then $_ is bound to the argument, if any, that is passed as part of the call of the function.
For a generator function, any value(s) passed as arguments are values of preceding terms in the sequence.
Given that your generator doesn't explicitly specify its parameters, the immediately prior term, if any, is passed and bound to $_.
In the first call of your generator, when 1/$_ gets evaluated, the $_ is bound to the 1 from the first term. So the second term is 1/1, i.e. 1.
Thus the second call, producing the third term, has the same result. So you get an infinite sequence of 1s.
What's wrong with {1/@list[$_+1]}?
For your last example you presumably meant:
my @list = 0 ... *;
(1, {1/@list[$_+1]} ...*)[0..5]
In this case the first call of the generator returns 1/@list[1+1] which is 1/2 (0.5).
So the second call is 1/@list[0.5+1]. This specifies a fractional index into @list, asking for the 1.5th element. Indexes into standard Positionals are rounded down to the nearest integer. So 1.5 is rounded down to 1. And @list[1] evaluates to 1. So the value returned by the second call of the generator is back to 1.
Thus the sequence alternates between 1 and 0.5.
What arguments are passed to a generator?
Raku passes the value of zero or more prior terms in the sequence as the arguments to the generator.
How many? Well, a generator is an ordinary Raku lambda/function. Raku uses the implicit or explicit declaration of parameters to determine how many arguments to pass.
For example, in:
{42} ... * # 42 42 42 ...
the lambda doesn't declare what parameters it has. For such functions Raku presumes a signature including $_?, and thus passes the prior term, if any. (The above lambda ignores it.)
Which arguments do you need/want your generator to be passed?
One could argue that, for the sequence you're aiming to generate, you don't need/want to pass any of the prior terms. Because, arguably, none of them really matter.
From this perspective all that matters is that the Nth term computes 1/N. That is, its value is independent of the values of prior terms and just dependent on counting the number of calls.
State solutions such as {1/++$}
One way to compute this is something like:
{ state $N; $N++; 1/$N } ... *
The lambda ignores the previous term. The net result is just the desired 1 1/2 1/3 ....
(Except that you'll have to fiddle with the stringification because by default it'll use gist which will turn the 1/3 into 0.333333 or similar.)
Or, more succinctly/idiomatically:
{ 1 / ++$ } ... *
(An anonymous $ in a statement/expression is a simultaneous declaration and use of an anonymous state scalar variable.)
Solutions using the prior term
As @user0721090601++ notes in a comment below, one can write a generator that makes use of the prior value:
1/1, 1/2, 1/3, {1/(.denominator+1)} ... *
For a generator that doesn't explicitly specify its parameters, Raku passes the value of the prior term in the sequence as the argument, binding it to the "it" argument $_.
And given that there's no explicit invocant for .denominator, Raku presumes you mean to call the method on $_.
As @jjmerelo++ notes, an idiomatic way to express many lambdas is to use the explicit pronoun "whatever" instead of "it" (implicit or explicit) to form a WhateverCode lambda:
1/1, 1/2, 1/3, 1/(*.denominator+1) ... *
You drop the braces for this form, which is one of its advantages. (You can also use multiple "whatevers" in a single expression rather than just one "it", another part of this construct's charm.)
This construct typically takes some getting used to; perhaps the biggest hurdle is that a * must be combined with a "WhateverCodeable" operator/function for it to form a WhateverCode lambda.
TIMTOWTDI
routine map
(1..*).map: 1/*
List repetition operator xx
1/++$ xx *
The cross metaoperator X or the zip metaoperator Z
1 X/ 1..*
1 xx * Z/ 1..*
Control flow gather/take
gather for 1..* { take 1/$_ }
The Seq method from-loop
Seq.from-loop: { 1/++$ }
The infix operator ...
1, 1/(1+1/*) ... *
{1/++$} ... *

Summation iterated over a variable length

I have written an optimization problem in pyomo and need a constraint, which contains a summation that has a variable length:
u_i_t[i, t]*T_min_run - sum (tnewnew in (t-T_min_run+1)..t-1) u_i_t[i,tnewnew] <= sum (tnew in t..(t+T_min_run-1)) u_i_t[i,tnew]
T is my actual timeline and N my machines
Usually I iterate over t, but I need to guarantee the machines are turned on for a certain amount of time.
def HP_on_rule(model, i, t):
    return model.u_i_t[i, t]*T_min_run - sum(model.u_i_t[i, tnewnew] for tnewnew in range((t-T_min_run+1), (t-1))) <= sum(model.u_i_t[i, tnew] for tnew in range(t, (t+T_min_run-1)))
model.HP_on_rule = Constraint(N, rule=HP_on_rule)
I hope you can provide me with the correct formulation in pyomo/python.
The problem is that t is a running variable and I do not know how to implement this in Python; tnew is only a helper variable. E.g. with t=6 (variable), T_min_run=3 (constant) and u_i_t binary [00001111100000...], I get:
1*3 - 1 <= 3
As I said, I do not know how to implement this in my code and the current version is not running.
TypeError: HP_on_rule() missing 1 required positional argument: 't'
It seems like you didn't provide all your arguments to the function rule.
Since t is a parameter of your function, I assume that it corresponds to an element of set T (your timeline).
Then, your last line of your code example should include not only the set N, but also the set T. Try this:
model.HP_on_rule = Constraint(N, T, rule=HP_on_rule)
Please note: when building a Constraint with a "for each" part, you must provide the Pyomo Sets that you want to iterate over at the beginning of the call for Constraint construction. As a rule of thumb, your constraint rule function should have one more argument than the number of Pyomo Sets specified in the Constraint initialization line.
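For reference, here is a minimal, self-contained sketch of that pattern. The set sizes, the T_min_run value, and the choice to simply skip indices that fall outside the horizon are assumptions made for illustration, not part of the question or the answer.
from pyomo.environ import ConcreteModel, Set, Var, Constraint, Binary

T_min_run = 3                                    # illustrative constant
model = ConcreteModel()
model.N = Set(initialize=[1, 2])                 # machines (illustrative)
model.T = Set(initialize=range(10))              # timeline (illustrative)
model.u_i_t = Var(model.N, model.T, domain=Binary)

def HP_on_rule(model, i, t):
    # Python's range() excludes its end point, so range(a, t) covers a..t-1
    # and range(t, t+T_min_run) covers t..t+T_min_run-1, matching the sums
    # as written mathematically. Out-of-horizon indices are skipped here.
    lhs = model.u_i_t[i, t] * T_min_run - sum(
        model.u_i_t[i, k] for k in range(t - T_min_run + 1, t) if k in model.T)
    rhs = sum(model.u_i_t[i, k] for k in range(t, t + T_min_run) if k in model.T)
    return lhs <= rhs

# One extra rule argument per index set, in the order the sets are listed.
model.HP_on_rule = Constraint(model.N, model.T, rule=HP_on_rule)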

NLopt with univariate optimization

Does anyone know if NLopt works with univariate optimization? I tried to run the following code:
using NLopt
function myfunc(x, grad)
    x.^2
end
opt = Opt(:LD_MMA, 1)
min_objective!(opt, myfunc)
(minf,minx,ret) = optimize(opt, [1.234])
println("got $minf at $minx (returned $ret)")
But I get the following error message:
> Error evaluating untitled
LoadError: BoundsError: attempt to access 1-element Array{Float64,1}:
1.234
at index [2]
in myfunc at untitled:8
in nlopt_callback_wrapper at /Users/davidzentlermunro/.julia/v0.4/NLopt/src/NLopt.jl:415
in optimize! at /Users/davidzentlermunro/.julia/v0.4/NLopt/src/NLopt.jl:514
in optimize at /Users/davidzentlermunro/.julia/v0.4/NLopt/src/NLopt.jl:520
in include_string at loading.jl:282
in include_string at /Users/davidzentlermunro/.julia/v0.4/CodeTools/src/eval.jl:32
in anonymous at /Users/davidzentlermunro/.julia/v0.4/Atom/src/eval.jl:84
in withpath at /Users/davidzentlermunro/.julia/v0.4/Requires/src/require.jl:37
in withpath at /Users/davidzentlermunro/.julia/v0.4/Atom/src/eval.jl:53
[inlined code] from /Users/davidzentlermunro/.julia/v0.4/Atom/src/eval.jl:83
in anonymous at task.jl:58
while loading untitled, in expression starting on line 13
If this isn't possible, does anyone know of a univariate optimizer where I can specify bounds and an initial condition?
There are a couple of things that you're missing here.
You need to specify the gradient (i.e. first derivative) of your function within the function itself. See the tutorial and examples on the GitHub page for NLopt. Not all optimization algorithms require this, but the one that you are using, LD_MMA, looks like it does. See here for a listing of the various algorithms and which ones require a gradient.
You should specify the tolerance conditions you need before you "declare victory" ¹ (i.e. decide that the function is sufficiently optimized). This is the xtol_rel!(opt,1e-4) in the example below. See also ftol_rel! for another way to specify a different tolerance condition. According to the documentation, for example, xtol_rel will "stop when an optimization step (or an estimate of the optimum) changes every parameter by less than tol multiplied by the absolute value of the parameter" and ftol_rel will "stop when an optimization step (or an estimate of the optimum) changes the objective function value by less than tol multiplied by the absolute value of the function value." See here under the "Stopping Criteria" section for more information on the various options.
The function that you are optimizing should have a unidimensional output. In your example, your output is a vector (albeit of length 1): x.^2 denotes a vector operation with a vector output. If your "objective function" doesn't ultimately output a single number, then it isn't clear what your optimization objective is (e.g. what does it mean to minimize a vector? You could minimize its norm, for instance, but minimizing a whole vector isn't well defined).
Below is a working example, based on your code. Note that I included the printing output from the example on the github page, which can be helpful for you in diagnosing problems.
using NLopt
count = 0 # keep track of # function evaluations
function myfunc(x::Vector, grad::Vector)
    if length(grad) > 0
        grad[1] = 2*x[1]
    end
    global count
    count::Int += 1
    println("f_$count($x)")
    x[1]^2
end
opt = Opt(:LD_MMA, 1)
xtol_rel!(opt,1e-4)
min_objective!(opt, myfunc)
(minf,minx,ret) = optimize(opt, [1.234])
println("got $minf at $minx (returned $ret)")
¹ (In the words of optimization great Yinyu Ye.)

Shamir's Secret Sharing using Bignum or Bigint or ....?

I've got a generic cryptographic implementation using OpenSSL's BIGNUM library in C. Standard decryption is working fine, but I would also like to implement Shamir's Secret Sharing (SSS).
The problem I've run across is that BIGNUM only supports whole numbers, and as part of the Lagrange interpolation for SSS I'll need to be multiplying by negative values.
Is there any way to do this? Otherwise, I can do my SSS in another language (Python?) so long as it is able to interact with the BIGNUMs produced by OpenSSL.
Any suggestions? TIA!
If you look at the BIGNUM structure in OpenSSL, you'll find a flag named neg. If the BIGNUM object represents a negative number, neg will be set to 1. Also, the BN_mul() function handles multiplication by negative numbers correctly. So you can implement SSS with OpenSSL, no problem!
Modular arithmetic (using groups) only provides positive results, so I presume you want to use non-modular arithmetic? In that case you could simply keep a separate variable indicating if the value is negative or not. The outcome of positive multiplication is the same except for the sign bit anyway.
It's not as clean a design as possible, but for a few methods it would probably not matter that much. You could create separate methods that mimic the BN methods except for an integer holding the value of the sign (-1, 0 or 1).
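Since the question floats doing the SSS step in Python, here is a purely illustrative sketch (not taken from either answer) of the usual finite-field route: Lagrange reconstruction at x = 0 modulo a prime p, where any would-be negative intermediate is simply reduced modulo p rather than tracked with a sign. The share format is an assumption, and pow(x, -1, p) needs Python 3.8+.
def reconstruct(shares, p):
    """Shamir reconstruction over GF(p): shares is a list of (x_i, y_i) pairs."""
    secret = 0
    for x_i, y_i in shares:
        num, den = 1, 1
        for x_j, _ in shares:
            if x_j != x_i:
                num = (num * -x_j) % p          # factor (0 - x_j); % p keeps it non-negative
                den = (den * (x_i - x_j)) % p   # may be "negative" before the reduction
        secret = (secret + y_i * num * pow(den, -1, p)) % p
    return secret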

What is protected division? (in reference to genetic programming and cryptography)

I am getting references in a paper on genetic programming to a "protected division" operation. When I google this, I get mostly papers on genetic programming and various results related to cryptography, but none that explain what it actually is. Does anybody know?
Protected division (often notated with %) checks to see if its second argument is 0. If so, % typically returns the value 1 (regardless of the value of the first argument).
http://en.wikipedia.org/wiki/Genetic_programming
In cryptography it doesn't seem to be well-defined, but the top Google hit is about protecting against side-channel attacks (in that case, via power use: you can guess what numbers are being used in the division by looking at the power consumption of the hardware doing the encryption): http://dl.acm.org/citation.cfm?id=1250996 http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.9.7298&rep=rep1&type=pdf
In GP, protected division is a modified division operator that does not signal a "division by zero" error when the denominator is 0 (zero). It typically returns 1 when the denominator is zero.
Another form divides by a threshold function of the argument instead of the argument itself:
Thres(x) = epsilon * Theta(x) if fabs(x) < epsilon (and Thres(x) = x otherwise),
where Theta() is a non-zero variant of the theta (step) function. Other threshold functions are possible; sometimes the denominator below the threshold is simply replaced by epsilon.
When evolving programs with Genetic Programming (GP), every generated program is tested to get its fitness value.
The protected division is required when the evolved programs are mathematical expressions. In cryptography, the mathematical expression might be used to model a decision-making process.
In the evaluation step, the program may perform a division by zero, which would cause a crash. To avoid this, the protected division is set to return a specific value if the denominator equals zero. I've seen three settings:
If the denominator equals zero, return 1
If the denominator equals zero, return 0
If the denominator equals zero, return the numerator
The setting should be specified somewhere in the paper.
If not, the safest bet is to assume that the protected division returns the numerator.
Given that 1 is the multiplicative identity and 0 the additive identity, these choices can introduce some bias into the programs generated during evolution, but they are still commonly used.
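As a purely illustrative sketch (the function name and the default convention are mine, not from the paper or the answers above), the three settings amount to something like:
def protected_div(numerator, denominator, on_zero="one"):
    """GP-style protected division: never raises on a zero denominator."""
    if denominator == 0:
        if on_zero == "one":         # setting 1: return 1
            return 1
        if on_zero == "zero":        # setting 2: return 0
            return 0
        return numerator             # setting 3: return the numerator
    return numerator / denominator

print(protected_div(7, 2))   # 3.5
print(protected_div(7, 0))   # 1 under the default convention chosen here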