Returning multiple solutions with CPLEX, 'bad suffix .npool' - ampl

I've tried generating multiple solutions with CPLEX using
option solver cplexamp;
option cplex_options 'poolstub=solfile populate=1 poolintensity=4';
...
for {k in K_mach_RESOURCES} {
    solve SUB1[k];
    for {l in 1..SUB1[k].npool} {
        solution ("solfile" & l & ".sol");
        display _varname, _var;
    }
}
This gives the error:
Bad suffix .npool for SUB1
context: for {l in >>> 1..SUB1[k].npool} <<< {
Possible suffix values for SUB1.suffix:
astatus exitcode message relax
result sstatus stage
The weird thing is that it is generating .sol files, but I don't know how to access the generated solutions! Possibly relevant info: there are multiple problems declared in the run file. Accessing Current.npool doesn't work either (in fact, Current refers to the most recently declared problem, not the most recently solved one). Any ideas?

It seems the problem arose because the subproblem was not defined as an integer program but as the LP relaxation of an integer program. CPLEX's populate procedure only applies to mixed-integer programs, so for a pure LP the solution pool, and with it the .npool suffix, is not available.
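A minimal sketch of the fix, assuming a hypothetical model: once the variables are declared integer, populate can fill the pool and the suffix is defined:
# hypothetical model fragment: integer variables make populate applicable
var x {1..n} integer >= 0;
...
solve SUB1[k];
display SUB1[k].npool;   # number of solutions CPLEX placed in the pool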

I think you forgot the "solve" command
ampl: solve;
and then you can display the results.

Optimization results from xpress do not follow the specified variable type

I found some problems where the optimization result from xpress does not follow the specified variable type. I set vartype=xp.binary when I create the xpress variables, but some of the results have values such as 0.13333, 0.36667, 0.5.
I found that this was caused by one of the constraints. When I disabled most of the constraints, the values were all binary. Then I enabled the constraints one by one and found one set of constraints that causes the values to be non-binary.
Has anyone observed this before? Any suggestion on how to enforce the variable value to be binary?
Thanks!
For the sake of completeness, expanding Erwin Kalvelagen's comment into an answer:
Fractional values for integer variables usually happen when the problem is infeasible, in which case there is no valid MIP solution to read back.
After solve() returns, you need to check attributes.mipstatus to make sure you actually have a solution available (see also the possible values for the MIP status here):
import xpress as xp

p = xp.problem()
# ... build the model: variables (with vartype=xp.binary), constraints, objective ...
p.solve()

print(p.getProbStatus(), p.getProbStatusString())

# Only read the solution if an integer-feasible point was actually found.
if p.attributes.mipstatus == xp.mip_solution or \
   p.attributes.mipstatus == xp.mip_optimal:
    print(p.getProbStatusString())
    print(p.getSolution())
else:
    print('No feasible solution:', p.getProbStatusString())
Another way to get the problem status is the function p.getProbStatus(), as shown above.

Unable to use pickAFile in TigerJython

In JES, I am able to use:
file = pickAFile()
In TigerJython, however, I get the following error
NameError: name 'pickAFile' is not defined
What am I doing wrong here?
You are not doing anything wrong at all. The thing is that pickAFile() is not a standard function in Python. It is a function that JES has added for convenience, and you probably will not find it in any other environment.
Since TigerJython and JES are both based on Jython, you can easily write a pickAFile() function of your own that uses Java's Swing. Here is a possible simple implementation (the pickAFile() found in JES might be a bit more complex, but this should get you started):
def pickAFile():
    # Swing's JFileChooser does the actual work of choosing a file
    from javax.swing import JFileChooser
    fc = JFileChooser()
    retVal = fc.showOpenDialog(None)
    if retVal == JFileChooser.APPROVE_OPTION:
        return fc.getSelectedFile()   # a java.io.File object
    else:
        return None
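For example (hypothetical usage; the function returns a java.io.File, or None if the dialog is cancelled):
f = pickAFile()
if f is not None:
    print(f.getAbsolutePath())   # full path of the chosen file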
Given that it is certainly a useful function, we might have to consider including it into our next update of TigerJython.
P.S. I would like to apologise for answering so late, I have just joined SO recently and was not aware of your question (I am one of the original authors of TigerJython).

Dollar and exclamation mark (bang) symbols in VTL

I've recently encountered these two variables in some Velocity code:
$!variable1
!$variable2
I was surprised by the similarity of the two, so I became suspicious about the correctness of the code and interested in the difference between them.
Is it possible that Velocity allows these two symbols in either order, or do they serve different purposes?
@Jr. Here is the guide I followed when doing VM R&D: http://velocity.apache.org/engine/1.7/user-guide.html
Velocity uses the !$ and $! notations for different things. !$ is basically the normal "!" (not) operator applied to a reference, while $! is the quiet reference notation: it checks whether the variable is null or undefined and, if so, renders it as an empty string. If your variable is empty or null and you don't use the $! notation, Velocity prints the literal variable name instead.
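A small template sketch of the difference (assuming a hypothetical variable $defined that is set and another, $undefined, that is not):
#set($defined = "bar")
$defined        ## renders: bar
$!defined       ## renders: bar
$undefined      ## renders the literal text $undefined
$!undefined     ## renders nothing (an empty string)
#if(!$defined) not shown #end   ## here ! is the boolean not-operator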
I googled and stackoverflowed a lot before I finally found the answer at people.apache.org.
According to that:
It is very easy to confuse the quiet reference notation with the
boolean not-Operator. Using the not-Operator, you use !${foo}, while
the quiet reference notation is $!{foo}. And yes, you will end up
sometimes with !$!{foo}...
Easy after all; a shame it didn't strike me immediately. Hope this helps someone.

@NLconstraint with vectorized constraint JuMP/Julia

I am trying to solve a problem involving the equating of sums of exponentials.
This is how I would do it hardcoded:
@NLconstraint(m, exp(x[25]) == exp(x[14]) + exp(x[18]))
This works fine with the rest of the code. However, when I try to do it for an arbitrary set of equations like the above I get an error. Here's my code:
@NLconstraint(m, [k = 1:length(LHSSum)],
    sum(exp.(LHSSum[k][i]) for i = 1:length(LHSSum[k])) ==
    sum(exp.(RHSSum[k][i]) for i = 1:length(RHSSum[k])))
where LHSSum and RHSSum are arrays containing arrays of the elements that need to be exponentiated and then summed over. That is LHSSum[1]=[x[1],x[2],x[3],...,x[n]]. Where x[i] are variables of type JuMP.Variable. Note that length(LHSSum)=length(RHSSum).
The error returned is:
LoadError: exp is not defined for type Variable. Are you trying to build a nonlinear problem? Make sure you use @NLconstraint/@NLobjective.
So a simple solution would be to do all the exponentiating and summing outside of the @NLconstraint macro, so that the input is a scalar. However, this too presents a problem, since exp(x) is not defined when x is of type JuMP.Variable; exp expects something of type Real. This is strange, since I am able to use exponentials just fine inside an @NLconstraint: when I write @NLconstraint(m, exp(x) == exp(z) + exp(y)) instead of the earlier line, no errors are thrown.
Another thing I thought of was a Taylor series expansion, but that is also a problem, since powers greater than 2 again require @NLconstraint, and then I am stuck with the same vectorization problem.
So I feel stuck. If JuMP allowed vectorized evaluation in @NLconstraint as it does in @constraint, this would not even be an issue; another fix would be for JuMP to implement its own exp that accepts the JuMP.Variable type. As it is, I don't see a way to solve this problem in general within the JuMP framework. Do any of you have any solutions or clever workarounds that I am missing?
I'm confused why i isn't used in the expressions you wrote. Do you mean:
@NLconstraint(m, [k = 1:length(LHSSum)],
    sum(exp(LHSSum[k][i]) for i in 1:length(LHSSum[k]))
    ==
    sum(exp(RHSSum[k][i]) for i in 1:length(RHSSum[k])))
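For reference, a minimal self-contained sketch of that pattern (hypothetical data; JuMP 0.18-era syntax to match the JuMP.Variable type in the question, with Ipopt as a stand-in solver). The key difference from the question's code is the plain exp(...) instead of the broadcast exp.(...) inside the macro:
using JuMP, Ipopt

m = Model(solver = IpoptSolver())
@variable(m, x[1:3])
LHSSum = [[x[1], x[2]]]   # hypothetical: one equation, two terms on the left
RHSSum = [[x[3]]]         # ... and one term on the right
@NLconstraint(m, [k = 1:length(LHSSum)],
    sum(exp(LHSSum[k][i]) for i in 1:length(LHSSum[k])) ==
    sum(exp(RHSSum[k][i]) for i in 1:length(RHSSum[k])))
solve(m)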

Does histogram_summary respect name_scope

I am getting a Duplicate tag error when I try to write out histogram summaries for a multi-layer network that I generate procedurally. I think that the problem might be related to naming. Imagine code like the following:
with tf.name_scope(some_unique_name):
    ...
    _ = tf.histogram_summary('weights', kernel_weights)
I'd naively assumed that 'weights' would be scoped to some_unique_name, but I suspect it is not. Are summary names independent of name_scope?
As Dave points out, the tag argument to tf.histogram_summary(tag, ...) is indeed independent of the current name scope. Part of the reason for this is that the tag may be a string Tensor (i.e. computed by part of your graph), whereas name scopes are a purely client-side construct (i.e. Python-only), so there's no good way to make the scoping work consistently across the two modes of use.
However, if you're using TensorFlow built from source (this should also be available in the next release, 0.8.0), you can use the following recipe to scope your tags, using Graph.unique_name(..., mark_as_used=False):
with tf.name_scope(some_unique_name):
    # ...
    tf.histogram_summary(
        tf.get_default_graph().unique_name('weights', mark_as_used=False),
        kernel_weights)
Alternatively, you can do the following in the current version:
with tf.name_scope(some_unique_name) as scope:
    # ...
    tf.histogram_summary(scope + 'weights', kernel_weights)
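For instance, in a procedurally generated network, each layer's scope then yields a unique tag (a hypothetical sketch using the pre-1.0 tf.histogram_summary API from the question):
import tensorflow as tf

for i in range(3):
    # each iteration gets its own scope: 'layer0/', 'layer1/', 'layer2/'
    with tf.name_scope('layer%d' % i) as scope:
        kernel_weights = tf.Variable(tf.zeros([4, 4]))
        # tags become 'layer0/weights', 'layer1/weights', ... so no duplicates
        tf.histogram_summary(scope + 'weights', kernel_weights)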
They are.
I'm with you in thinking this is a bug, but I haven't run it past the designers of the op yet. Go ahead and open an issue for it on GitHub!
(I've run into this also and found it terribly annoying -- it prevents reuse of the model without deliberately parameterizing the summary op invocations.)